Empirical research Empirical research is research using empirical evidence. It is also a way of gaining knowledge by means of direct and indirect observation or experience. Empiricism values such research more than other kinds. Empirical evidence (the record of one's direct observations or experiences) can be analyzed quantitatively or qualitatively. By quantifying the evidence or making sense of it in qualitative form, a researcher can answer empirical questions, which should be clearly defined and answerable with the evidence collected (usually called data). Research design varies by field and by the question being investigated. Many researchers combine qualitative and quantitative forms of analysis to better answer questions which cannot be studied in laboratory settings, particularly in the social sciences and in education. In some fields, quantitative research may begin with a research question (e.g., "Does listening to vocal music during the learning of a word list have an effect on later memory for these words?") which is tested through experimentation. Usually, the researcher has a certain theory regarding the topic under investigation. Based on this theory, statements or hypotheses will be proposed (e.g., "Listening to vocal music has a negative effect on learning a word list."). From these hypotheses, predictions about specific events are derived (e.g., "People who study a word list while listening to vocal music will remember fewer words on a later memory test than people who study a word list in silence."). These predictions can then be tested with a suitable experiment. Depending on the outcomes of the experiment, the theory on which the hypotheses and predictions were based will be supported or not, or may need to be modified and then subjected to further testing. The term empirical was originally used to refer to certain ancient Greek practitioners of medicine who rejected adherence to the dogmatic doctrines of the day, preferring instead to rely on the observation of phenomena as perceived in experience. Later, empiricism referred to a theory of knowledge in philosophy which adheres to the principle that knowledge arises from experience and evidence gathered specifically using the senses. In scientific use, the term empirical refers to the gathering of data using only evidence that is observable by the senses or, in some cases, using calibrated scientific instruments. What early philosophers described as empiricist and empirical research have in common is the dependence on observable data to formulate and test theories and come to conclusions. The researcher attempts to describe accurately the interaction between the instrument (or the human senses) and the entity being observed. If instrumentation is involved, the researcher is expected to calibrate the instrument by applying it to known standard objects and documenting the results before applying it to unknown objects. In other words, empirical research describes investigations that have not been carried out before, together with their results. In practice, the accumulation of evidence for or against any particular theory involves planned research designs for the collection of empirical data, and academic rigor plays a large part in judging the merits of research design. Several typologies for such designs have been suggested, one of the most popular of which comes from Campbell and Stanley.
They are responsible for popularizing the widely cited distinction among pre-experimental, experimental, and quasi-experimental designs and are staunch advocates of the central role of randomized experiments in educational research. Accurate analysis of data using standardized statistical methods in scientific studies is critical to determining the validity of empirical research. Statistical formulas such as regression, uncertainty coefficient, t-test, chi-square, and various types of ANOVA (analyses of variance) are fundamental to forming logical, valid conclusions. If empirical data reach significance under the appropriate statistical formula, the research hypothesis is supported. If not, the null hypothesis is supported (or, more accurately, not rejected), meaning no effect of the independent variable(s) was observed on the dependent variable(s). The outcome of empirical research using statistical hypothesis testing is never "proof". It can only "support" a hypothesis, "reject" it, or do neither. These methods yield only probabilities. Among scientific researchers, empirical "evidence" (as distinct from empirical "research") refers to objective evidence that appears the same regardless of the observer. For example, a thermometer will not display different temperatures for each individual who observes it. Temperature, as measured by an accurate, well calibrated thermometer, is empirical evidence. By contrast, non-empirical evidence is subjective, depending on the observer. Following the previous example, observer A might truthfully report that a room is warm, while observer B might truthfully report that the same room is cool, though both observe the same reading on the thermometer. The use of empirical evidence negates this effect of personal (i.e., subjective) experience or time. The dispute between empiricism and rationalism concerns the extent to which knowledge depends on sense experience. According to rationalism, there are several ways in which knowledge and concepts can be gained independently of sense experience. According to empiricism, sense experience is the main source of all knowledge and concepts. In general, rationalists develop their views in two ways. First, they argue that there are cases in which the content of our knowledge or concepts outstrips the information that sense experience can provide (Hjørland, 2010, 2). Second, they construct accounts of how reasoning provides additional knowledge about a specific or broader domain. Empiricists present complementary lines of thought. First, they develop accounts of how experience provides the information that rationalists cite, insofar as we have it in the first place. At times, empiricists opt for skepticism as an alternative to rationalism: if experience cannot supply the knowledge or concepts that rationalists cite, then we do not have them (Pearce, 2010, 35). Second, empiricists attack the rationalists' accounts of how reasoning can be a source of knowledge or concepts. The overall disagreement between empiricists and rationalists thus primarily concerns the sources of knowledge and concepts, that is, how knowledge is gained.
In some cases, disagreement about how knowledge is gained leads to conflicting answers on other issues as well; for instance, the two camps may disagree about the nature of warrant or about the limits of knowledge and thought. Empiricists share the view that there is no innate knowledge and that knowledge is instead derived from experience, whether reasoned through the mind or gathered through the five senses humans possess (Bernard, 2011, 5). Rationalists, on the other hand, share the view that innate knowledge exists, though they differ over which objects of innate knowledge they accept. To follow rationalism, one must adopt at least one of three claims associated with the theory: intuition or deduction, innate knowledge, or innate concepts. The further a concept is removed from mental operations that could be performed on experience, the more plausibly it can be claimed to be innate. Conversely, empiricism about a particular subject rejects the corresponding versions of the intuition or deduction and innate knowledge claims (Weiskopf, 2008, 16): insofar as concepts and knowledge within that subject area are acknowledged, that knowledge is held to depend chiefly on experience gained through the human senses. A.D. de Groot's empirical cycle comprises five stages: observation, induction, deduction, testing, and evaluation.
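To make the hypothesis-testing procedure described above concrete, the following is a minimal Python sketch of how the word-list experiment mentioned earlier might be analyzed with an independent-samples t-test. The recall scores, group sizes, and significance threshold are invented for illustration and are not drawn from any actual study.

```python
# Minimal sketch of the hypothesis-testing workflow described above,
# using the word-list experiment as an example.
# The recall scores below are invented for illustration only.
from scipy import stats

# Number of words recalled by participants who studied in silence
# versus while listening to vocal music (fabricated example data).
silence = [14, 12, 15, 13, 16, 14, 15, 13, 12, 14]
vocal_music = [11, 10, 12, 13, 9, 11, 12, 10, 11, 12]

# Independent-samples t-test: does mean recall differ between the groups?
t_stat, p_value = stats.ttest_ind(silence, vocal_music)

alpha = 0.05  # conventional significance threshold (assumed)
if p_value < alpha:
    # The data are unlikely under the null hypothesis of no difference,
    # so the research hypothesis is supported (not "proved").
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: reject the null hypothesis")
else:
    # Failing to reach significance means the null hypothesis is not rejected.
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: do not reject the null hypothesis")
```

As the text notes, the result of such a test only supports or fails to support the research hypothesis; it never proves it.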
https://en.wikipedia.org/wiki?curid=9545
Engineering statistics Engineering statistics combines engineering and statistics using scientific methods for analyzing data. Engineering statistics involves data concerning manufacturing processes such as: component dimensions, tolerances, type of material, and fabrication process control. There are many methods used in engineering analysis, and they are often displayed as histograms to give a visual representation of the data rather than presenting it as purely numerical. Engineering statistics dates back to 1000 B.C., when the abacus was developed as a means to calculate numerical data. In the 1600s, the development of information processing to systematically analyze and process data began. In 1654, the slide rule technique was developed by Robert Bissaker for advanced data calculations. In 1833, the British mathematician Charles Babbage proposed the idea of an automatic computer, which inspired developers at Harvard University and IBM to design the first mechanical automatic-sequence-controlled calculator, the MARK I. The integration of computers and calculators into industry brought about a more efficient means of analyzing data and marked the beginning of engineering statistics.
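As an illustration of the kind of analysis described above, the following minimal Python sketch summarizes a set of component-dimension measurements with a histogram and a basic out-of-tolerance check. The measurements, nominal dimension, and tolerance limits are all assumed values chosen for illustration.

```python
# Minimal sketch: summarizing a manufacturing measurement (a component
# dimension, in mm) with a histogram, as described above.
# The measurements and tolerance limits are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# Simulated diameters of 500 machined parts, nominal 10.00 mm (assumed).
diameters = rng.normal(loc=10.00, scale=0.02, size=500)

lower_tol, upper_tol = 9.95, 10.05  # assumed specification limits

plt.hist(diameters, bins=30, edgecolor="black")
plt.axvline(lower_tol, color="red", linestyle="--", label="tolerance limits")
plt.axvline(upper_tol, color="red", linestyle="--")
plt.xlabel("Component diameter (mm)")
plt.ylabel("Count")
plt.legend()
plt.show()

# Fraction of parts outside tolerance, a basic process check.
out_of_spec = np.mean((diameters < lower_tol) | (diameters > upper_tol))
print(f"Estimated fraction out of specification: {out_of_spec:.3%}")
```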
https://en.wikipedia.org/wiki?curid=9546
Edgar Allan Poe Edgar Allan Poe (born Edgar Poe; January 19, 1809 – October 7, 1849) was an American writer, poet, editor, and literary critic. Poe is best known for his poetry and short stories, particularly his tales of mystery and the macabre. He is widely regarded as a central figure of Romanticism in the United States and of American literature as a whole, and he was one of the country's earliest practitioners of the short story. He is also generally considered the inventor of the detective fiction genre and is further credited with contributing to the emerging genre of science fiction. Poe was the first well-known American writer to earn a living through writing alone, resulting in a financially difficult life and career. Poe was born in Boston, the second child of actors David and Elizabeth "Eliza" Poe. His father abandoned the family in 1810, and his mother died the following year. Thus orphaned, Poe was taken in by John and Frances Allan of Richmond, Virginia. They never formally adopted him, but he was with them well into young adulthood. Tension developed later as Poe and John Allan repeatedly clashed over Poe's debts, including those incurred by gambling, and the cost of Poe's education. Poe attended the University of Virginia but left after a year due to lack of money. He quarreled with Allan over the funds for his education and enlisted in the United States Army in 1827 under an assumed name. It was at this time that his publishing career began with the anonymous collection "Tamerlane and Other Poems" (1827), credited only to "a Bostonian". Poe and Allan reached a temporary rapprochement after the death of Allan's wife in 1829. Poe later failed as an officer cadet at West Point, declaring a firm wish to be a poet and writer, and he ultimately parted ways with Allan. Poe switched his focus to prose and spent the next several years working for literary journals and periodicals, becoming known for his own style of literary criticism. His work forced him to move among several cities, including Baltimore, Philadelphia, and New York City. He married his 13-year-old cousin, Virginia Clemm, in 1836, but Virginia died of tuberculosis in 1847. In January 1845, Poe published his poem "The Raven" to instant success. He planned for years to produce his own journal "The Penn" (later renamed "The Stylus"), but before it could be produced, he died in Baltimore on October 7, 1849, at age 40. The cause of his death is unknown and has been variously attributed to disease, alcoholism, substance abuse, suicide, and other causes. Poe and his works influenced literature around the world, as well as specialized fields such as cosmology and cryptography. He and his work appear throughout popular culture in literature, music, films, and television. A number of his homes are dedicated museums today. The Mystery Writers of America present an annual award known as the Edgar Award for distinguished work in the mystery genre. Edgar Poe was born in Boston, Massachusetts on January 19, 1809, the second child of English-born actress Elizabeth Arnold Hopkins Poe and actor David Poe Jr. He had an elder brother named William Henry Leonard Poe and a younger sister named Rosalie Poe. Their grandfather, David Poe Sr., emigrated from County Cavan, Ireland around 1750. Edgar may have been named after a character in William Shakespeare's "King Lear", which the couple were performing in 1809. His father abandoned the family in 1810, and his mother died a year later from consumption (pulmonary tuberculosis).
Poe was then taken into the home of John Allan, a successful merchant in Richmond, Virginia who dealt in a variety of goods, including cloth, wheat, tombstones, tobacco, and slaves. The Allans served as a foster family and gave him the name "Edgar Allan Poe", though they never formally adopted him. The Allan family had Poe baptized into the Episcopal Church in 1812. John Allan alternately spoiled and aggressively disciplined his foster son. The family sailed to the United Kingdom in 1815, and Poe attended the grammar school for a short period in Irvine, Ayrshire, Scotland (where Allan was born) before rejoining the family in London in 1816. There he studied at a boarding school in Chelsea until summer 1817. He was subsequently entered at the Reverend John Bransby's Manor House School at Stoke Newington, then a suburb north of London. Poe moved with the Allans back to Richmond in 1820. In 1824, he served as the lieutenant of the Richmond youth honor guard as the city celebrated the visit of the Marquis de Lafayette. In March 1825, Allan's uncle and business benefactor William Galt, who was said to be one of the wealthiest men in Richmond, died, leaving Allan several acres of real estate. The inheritance was estimated at $750,000. By summer 1825, Allan celebrated his expansive wealth by purchasing a two-story brick house called Moldavia. Poe may have become engaged to Sarah Elmira Royster before he registered at the University of Virginia in February 1826 to study ancient and modern languages. The university was in its infancy, established on the ideals of its founder Thomas Jefferson. It had strict rules against gambling, horses, guns, tobacco, and alcohol, but these rules were mostly ignored. Jefferson had enacted a system of student self-government, allowing students to choose their own studies, make their own arrangements for boarding, and report all wrongdoing to the faculty. The unique system was still in chaos, and there was a high dropout rate. During his time there, Poe lost touch with Royster and also became estranged from his foster father over gambling debts. He claimed that Allan had not given him sufficient money to register for classes, purchase texts, and procure and furnish a dormitory. Allan did send additional money and clothes, but Poe's debts increased. Poe gave up on the university after a year but did not feel welcome returning to Richmond, especially when he learned that his sweetheart Royster had married another man, Alexander Shelton. He traveled to Boston in April 1827, sustaining himself with odd jobs as a clerk and newspaper writer, and he started using the pseudonym Henri Le Rennet during this period. Poe was unable to support himself, so he enlisted in the United States Army as a private on May 27, 1827, using the name "Edgar A. Perry". He claimed that he was 22 even though he was 18. He first served at Fort Independence in Boston Harbor for five dollars a month. That same year, he released his first book, a 40-page collection of poetry titled "Tamerlane and Other Poems", attributed with the byline "by a Bostonian". Only 50 copies were printed, and the book received virtually no attention. Poe's regiment was posted to Fort Moultrie in Charleston, South Carolina and traveled by ship on the brig "Waltham" on November 8, 1827. Poe was promoted to "artificer", an enlisted tradesman who prepared shells for artillery, and had his monthly pay doubled.
He served for two years and attained the rank of Sergeant Major for Artillery (the highest rank that a non-commissioned officer could achieve); he then sought to end his five-year enlistment early. Poe revealed his real name and his circumstances to his commanding officer, Lieutenant Howard, who would only allow Poe to be discharged if he reconciled with Allan. Poe wrote a letter to Allan, who was unsympathetic and spent several months ignoring Poe's pleas; Allan may not have written to Poe even to make him aware of his foster mother's illness. Frances Allan died on February 28, 1829, and Poe visited the day after her burial. Perhaps softened by his wife's death, Allan agreed to support Poe's attempt to be discharged in order to receive an appointment to the United States Military Academy at West Point, New York. Poe was finally discharged on April 15, 1829, after securing a replacement to finish his enlisted term for him. Before entering West Point, he moved back to Baltimore for a time to stay with his widowed aunt Maria Clemm, her daughter Virginia Eliza Clemm (Poe's first cousin), his brother Henry, and his invalid grandmother Elizabeth Cairnes Poe. Meanwhile, Poe published his second book "Al Aaraaf, Tamerlane and Minor Poems" in Baltimore in 1829. Poe traveled to West Point and matriculated as a cadet on July 1, 1830. In October 1830, Allan married his second wife Louisa Patterson. The marriage and bitter quarrels with Poe over the children born to Allan out of extramarital affairs led to the foster father finally disowning Poe. Poe decided to leave West Point by purposely getting court-martialed. On February 8, 1831, he was tried for gross neglect of duty and disobedience of orders for refusing to attend formations, classes, or church. He tactically pleaded not guilty to induce dismissal, knowing that he would be found guilty. Poe left for New York in February 1831 and released a third volume of poems, simply titled "Poems." The book was financed with help from his fellow cadets at West Point, many of whom donated 75 cents to the cause, raising a total of $170. They may have been expecting verses similar to the satirical ones that Poe had been writing about commanding officers. It was printed by Elam Bliss of New York, labeled as "Second Edition," and including a page saying, "To the U.S. Corps of Cadets this volume is respectfully dedicated". The book once again reprinted the long poems "Tamerlane" and "Al Aaraaf" but also six previously unpublished poems, including early versions of "To Helen", "Israfel", and "The City in the Sea". Poe returned to Baltimore to his aunt, brother, and cousin in March 1831. His elder brother Henry had been in ill health, in part due to problems with alcoholism, and he died on August 1, 1831. After his brother's death, Poe began more earnest attempts to start his career as a writer, but he chose a difficult time in American publishing to do so. He was one of the first Americans to live by writing alone and was hampered by the lack of an international copyright law. American publishers often produced unauthorized copies of British works rather than paying for new work by Americans. The industry was also particularly hurt by the Panic of 1837. There was a booming growth in American periodicals around this time, fueled in part by new technology, but many did not last beyond a few issues. 
Publishers often refused to pay their writers or paid them much later than they promised, and Poe repeatedly resorted to humiliating pleas for money and other assistance. After his early attempts at poetry, Poe had turned his attention to prose. He placed a few stories with a Philadelphia publication and began work on his only drama "Politian". The "Baltimore Saturday Visiter" awarded him a prize in October 1833 for his short story "MS. Found in a Bottle". The story brought him to the attention of John P. Kennedy, a Baltimorean of considerable means who helped Poe place some of his stories and introduced him to Thomas W. White, editor of the "Southern Literary Messenger" in Richmond. Poe became assistant editor of the periodical in August 1835, but White discharged him within a few weeks for being drunk on the job. Poe returned to Baltimore where he obtained a license to marry his cousin Virginia on September 22, 1835, though it is unknown if they were married at that time. He was 26 and she was 13. Poe was reinstated by White after promising good behavior, and he went back to Richmond with Virginia and her mother. He remained at the "Messenger" until January 1837. During this period, Poe claimed that its circulation increased from 700 to 3,500. He published several poems, book reviews, critiques, and stories in the paper. On May 16, 1836, he and Virginia held a Presbyterian wedding ceremony at their Richmond boarding house, with a witness falsely attesting Clemm's age as 21. Poe's novel "The Narrative of Arthur Gordon Pym of Nantucket" was published and widely reviewed in 1838. In the summer of 1839, Poe became assistant editor of "Burton's Gentleman's Magazine". He published numerous articles, stories, and reviews, enhancing his reputation as a trenchant critic which he had established at the "Messenger". Also in 1839, the collection "Tales of the Grotesque and Arabesque" was published in two volumes, though he made little money from it and it received mixed reviews. Poe left "Burton's" after about a year and found a position as assistant at "Graham's Magazine". In June 1840, Poe published a prospectus announcing his intentions to start his own journal called "The Stylus", although he originally intended to call it "The Penn", as it would have been based in Philadelphia. He bought advertising space for his prospectus in the June 6, 1840 issue of Philadelphia's "Saturday Evening Post": ""Prospectus of the Penn Magazine, a Monthly Literary journal to be edited and published in the city of Philadelphia by Edgar A. Poe."" The journal was never produced before Poe's death. Around this time, Poe attempted to secure a position within the administration of President John Tyler, claiming that he was a member of the Whig Party. He hoped to be appointed to the United States Custom House in Philadelphia with help from President Tyler's son Robert, an acquaintance of Poe's friend Frederick Thomas. Poe failed to show up for a meeting with Thomas to discuss the appointment in mid-September 1842, claiming to have been sick, though Thomas believed that he had been drunk. Poe was promised an appointment, but all positions were filled by others. One evening in January 1842, Virginia showed the first signs of consumption, now known as tuberculosis, while singing and playing the piano, which Poe described as breaking a blood vessel in her throat. She only partially recovered, and Poe began to drink more heavily under the stress of her illness. 
He left "Graham's" and attempted to find a new position, for a time angling for a government post. He returned to New York where he worked briefly at the "Evening Mirror" before becoming editor of the "Broadway Journal", and later its owner. There Poe alienated himself from other writers by publicly accusing Henry Wadsworth Longfellow of plagiarism, though Longfellow never responded. On January 29, 1845, his poem "The Raven" appeared in the "Evening Mirror" and became a popular sensation. It made Poe a household name almost instantly, though he was paid only $9 for its publication. It was concurrently published in "" under the pseudonym "Quarles". The "Broadway Journal" failed in 1846, and Poe moved to a cottage in Fordham, New York, in what is now the Bronx. That home is now known as the Edgar Allan Poe Cottage, relocated to a park near the southeast corner of the Grand Concourse and Kingsbridge Road. Nearby, Poe befriended the Jesuits at St. John's College, now Fordham University. Virginia died at the cottage on January 30, 1847. Biographers and critics often suggest that Poe's frequent theme of the "death of a beautiful woman" stems from the repeated loss of women throughout his life, including his wife. Poe was increasingly unstable after his wife's death. He attempted to court poet Sarah Helen Whitman who lived in Providence, Rhode Island. Their engagement failed, purportedly because of Poe's drinking and erratic behavior. There is also strong evidence that Whitman's mother intervened and did much to derail their relationship. Poe then returned to Richmond and resumed a relationship with his childhood sweetheart Sarah Elmira Royster. On October 3, 1849, Poe was found delirious on the streets of Baltimore, "in great distress, and… in need of immediate assistance", according to Joseph W. Walker, who found him. He was taken to the Washington Medical College, where he died on Sunday, October 7, 1849, at 5:00 in the morning. Poe was not coherent long enough to explain how he came to be in his dire condition and, oddly, was wearing clothes that were not his own. He is said to have repeatedly called out the name "Reynolds" on the night before his death, though it is unclear to whom he was referring. Some sources say that Poe's final words were, "Lord help my poor soul". All medical records have been lost, including Poe's death certificate. Newspapers at the time reported Poe's death as "congestion of the brain" or "cerebral inflammation", common euphemisms for death from disreputable causes such as alcoholism. The actual cause of death remains a mystery. Speculation has included "delirium tremens", heart disease, epilepsy, syphilis, meningeal inflammation, cholera, and rabies. One theory dating from 1872 suggests that cooping was the cause of Poe's death, a form of electoral fraud in which citizens were forced to vote for a particular candidate, sometimes leading to violence and even murder. Immediately after Poe's death, his literary rival Rufus Wilmot Griswold wrote a slanted high-profile obituary under a pseudonym, filled with falsehoods that cast him as a lunatic and a madman, and which described him as a person who "walked the streets, in madness or melancholy, with lips moving in indistinct curses, or with eyes upturned in passionate prayers, (never for himself, for he felt, or professed to feel, that he was already damned)". The long obituary appeared in the "New York Tribune" signed "Ludwig" on the day that Poe was buried. It was soon further published throughout the country. 
The piece began, "Edgar Allan Poe is dead. He died in Baltimore the day before yesterday. This announcement will startle many, but few will be grieved by it." "Ludwig" was soon identified as Griswold, an editor, critic, and anthologist who had borne a grudge against Poe since 1842. Griswold somehow became Poe's literary executor and attempted to destroy his enemy's reputation after his death. Griswold wrote a biographical article of Poe called "Memoir of the Author", which he included in an 1850 volume of the collected works. There he depicted Poe as a depraved, drunken, drug-addled madman and included Poe's letters as evidence. Many of his claims were either lies or distorted half-truths. For example, it is seriously disputed that Poe really was a drug addict. Griswold's book was denounced by those who knew Poe well, but it became a popularly accepted biographical source. This occurred in part because it was the only full biography available and was widely reprinted, and in part because readers thrilled at the thought of reading works by an "evil" man. Letters that Griswold presented as proof were later revealed to be forgeries. After Poe's death, Griswold convinced Poe's mother-in-law to sign away the rights to his works. Griswold went on to publish the collected works attached with his own fabricated biography of Poe that invented stories of his drunkenness, immorality and instability. Poe's best known fiction works are Gothic, adhering to the genre's conventions to appeal to the public taste. His most recurring themes deal with questions of death, including its physical signs, the effects of decomposition, concerns of premature burial, the reanimation of the dead, and mourning. Many of his works are generally considered part of the dark romanticism genre, a literary reaction to transcendentalism which Poe strongly disliked. He referred to followers of the transcendental movement as "Frog-Pondians", after the pond on Boston Common, and ridiculed their writings as "metaphor—run mad," lapsing into "obscurity for obscurity's sake" or "mysticism for mysticism's sake". Poe once wrote in a letter to Thomas Holley Chivers that he did not dislike transcendentalists, "only the pretenders and sophists among them". Beyond horror, Poe also wrote satires, humor tales, and hoaxes. For comic effect, he used irony and ludicrous extravagance, often in an attempt to liberate the reader from cultural conformity. "Metzengerstein" is the first story that Poe is known to have published and his first foray into horror, but it was originally intended as a burlesque satirizing the popular genre. Poe also reinvented science fiction, responding in his writing to emerging technologies such as hot air balloons in "The Balloon-Hoax". Poe wrote much of his work using themes aimed specifically at mass-market tastes. To that end, his fiction often included elements of popular pseudosciences, such as phrenology and physiognomy. Poe's writing reflects his literary theories, which he presented in his criticism and also in essays such as "The Poetic Principle". He disliked didacticism and allegory, though he believed that meaning in literature should be an undercurrent just beneath the surface. Works with obvious meanings, he wrote, cease to be art. He believed that work of quality should be brief and focus on a specific single effect. To that end, he believed that the writer should carefully calculate every sentiment and idea. 
Poe describes his method in writing "The Raven" in the essay "The Philosophy of Composition", and he claims to have strictly followed this method. It has been questioned whether he really followed this system, however. T. S. Eliot said: "It is difficult for us to read that essay without reflecting that if Poe plotted out his poem with such calculation, he might have taken a little more pains over it: the result hardly does credit to the method." Biographer Joseph Wood Krutch described the essay as "a rather highly ingenious exercise in the art of rationalization". During his lifetime, Poe was mostly recognized as a literary critic. Fellow critic James Russell Lowell called him "the most discriminating, philosophical, and fearless critic upon imaginative works who has written in America", suggesting—rhetorically—that he occasionally used prussic acid instead of ink. Poe's caustic reviews earned him the reputation of being a "tomahawk man". A favorite target of Poe's criticism was Boston's acclaimed poet Henry Wadsworth Longfellow, who was often defended by his literary friends in what was later called "The Longfellow War". Poe accused Longfellow of "the heresy of the didactic", writing poetry that was preachy, derivative, and thematically plagiarized. Poe correctly predicted that Longfellow's reputation and style of poetry would decline, concluding, "We grant him high qualities, but deny him the Future". Poe was also known as a writer of fiction and became one of the first American authors of the 19th century to become more popular in Europe than in the United States. Poe is particularly respected in France, in part due to early translations by Charles Baudelaire. Baudelaire's translations became definitive renditions of Poe's work throughout Europe. Poe's early detective fiction tales featuring C. Auguste Dupin laid the groundwork for future detectives in literature. Sir Arthur Conan Doyle said, "Each [of Poe's detective stories] is a root from which a whole literature has developed... Where was the detective story until Poe breathed the breath of life into it?" The Mystery Writers of America have named their awards for excellence in the genre the "Edgars". Poe's work also influenced science fiction, notably Jules Verne, who wrote a sequel to Poe's novel "The Narrative of Arthur Gordon Pym of Nantucket" called "An Antarctic Mystery", also known as "The Sphinx of the Ice Fields". Science fiction author H. G. Wells noted, ""Pym" tells what a very intelligent mind could imagine about the south polar region a century ago". In 2013, "The Guardian" cited "Pym" as one of the greatest novels ever written in the English language, and noted its influence on later authors such as Doyle, Henry James, B. Traven, and David Morrell. Horror author and historian H. P. Lovecraft was heavily influenced by Poe’s horror tales, dedicating an entire section of his long essay, “Supernatural Horror in Literature”, to his influence on the genre. In his letters, Lovecraft stated, “When I write stories, Edgar Allan Poe is my model.” Alfred Hitchcock once said, "It's because I liked Edgar Allan Poe's stories so much that I began to make suspense films". Like many famous artists, Poe's works have spawned imitators. One trend among imitators of Poe has been claims by clairvoyants or psychics to be "channeling" poems from Poe's spirit. One of the most notable of these was Lizzie Doten, who published "Poems from the Inner Life" in 1863, in which she claimed to have "received" new compositions by Poe's spirit. 
The compositions were re-workings of famous Poe poems such as "The Bells", but reflected a new, positive outlook. Even so, Poe has received not only praise, but criticism as well. This is partly because of the negative perception of his personal character and its influence upon his reputation. William Butler Yeats was occasionally critical of Poe and once called him "vulgar". Transcendentalist Ralph Waldo Emerson reacted to "The Raven" by saying, "I see nothing in it", and derisively referred to Poe as "the jingle man". Aldous Huxley wrote that Poe's writing "falls into vulgarity" by being "too poetical"—the equivalent of wearing a diamond ring on every finger. It is believed that only twelve copies have survived of Poe's first book "Tamerlane and Other Poems". In December 2009, one copy sold at Christie's auctioneers in New York City for $662,500, a record price paid for a work of American literature. "Eureka", an essay written in 1848, included a cosmological theory that presaged the Big Bang theory by 80 years, as well as the first plausible solution to Olbers' paradox. Poe eschewed the scientific method in "Eureka" and instead wrote from pure intuition. For this reason, he considered it a work of art, not science, but insisted that it was still true and considered it to be his career masterpiece. Even so, "Eureka" is full of scientific errors. In particular, Poe's suggestions ignored Newtonian principles regarding the density and rotation of planets. Poe had a keen interest in cryptography. He had placed a notice of his abilities in the Philadelphia paper "Alexander's Weekly (Express) Messenger", inviting submissions of ciphers which he proceeded to solve. In July 1841, Poe had published an essay called "A Few Words on Secret Writing" in "Graham's Magazine". Capitalizing on public interest in the topic, he wrote "The Gold-Bug" incorporating ciphers as an essential part of the story. Poe's success with cryptography relied not so much on his deep knowledge of that field (his method was limited to the simple substitution cryptogram) as on his knowledge of the magazine and newspaper culture. His keen analytical abilities, which were so evident in his detective stories, allowed him to see that the general public was largely ignorant of the methods by which a simple substitution cryptogram can be solved, and he used this to his advantage. The sensation that Poe created with his cryptography stunts played a major role in popularizing cryptograms in newspapers and magazines. Two ciphers he published in 1841 under the name "W. B. Tyler" were not solved until 1992 and 2000 respectively. One was a quote from Joseph Addison's play "Cato"; the other is still unidentified. Poe had an influence on cryptography beyond increasing public interest during his lifetime. William Friedman, America's foremost cryptologist, was heavily influenced by Poe. Friedman's initial interest in cryptography came from reading "The Gold-Bug" as a child, an interest that he later put to use in deciphering Japan's PURPLE code during World War II. The historical Edgar Allan Poe has appeared as a fictionalized character, often representing the "mad genius" or "tormented artist" and exploiting his personal struggles. Many such depictions also blend in with characters from his stories, suggesting that Poe and his characters share identities. Often, fictional depictions of Poe use his mystery-solving skills in such novels as "The Poe Shadow" by Matthew Pearl.
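As an illustration of the simple substitution cryptogram discussed above, the following minimal Python sketch encrypts a message with an arbitrary keyed alphabet and counts letter frequencies, the basic observation that frequency analysis of such ciphers relies on. The key and message are invented examples and are not taken from Poe's published ciphers.

```python
# Minimal sketch of a simple substitution cryptogram, the type of cipher
# Poe solved and used in "The Gold-Bug". The key and message below are
# illustrative; Poe's actual ciphers and methods are not reproduced here.
import string
from collections import Counter

plain_alphabet = string.ascii_uppercase
# An arbitrary permutation of the alphabet serving as the key (assumed).
cipher_alphabet = "QWERTYUIOPASDFGHJKLZXCVBNM"
encrypt_table = str.maketrans(plain_alphabet, cipher_alphabet)
decrypt_table = str.maketrans(cipher_alphabet, plain_alphabet)

message = "THE GOLD BUG LIES BURIED BENEATH THE TULIP TREE"
ciphertext = message.translate(encrypt_table)
print("Ciphertext:", ciphertext)

# Frequency analysis: in English text, letters such as E and T are most
# common, so the most frequent cipher letters likely stand for them.
letter_counts = Counter(c for c in ciphertext if c.isalpha())
print("Most common cipher letters:", letter_counts.most_common(5))

# With the key known, decryption is simply the inverse mapping.
print("Decrypted:", ciphertext.translate(decrypt_table))
```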
No childhood home of Poe is still standing, including the Allan family's Moldavia estate. The oldest standing home in Richmond, the Old Stone House, is in use as the Edgar Allan Poe Museum, though Poe never lived there. The collection includes many items that Poe used during his time with the Allan family, and also features several rare first printings of Poe works. 13 West Range is the dorm room that Poe is believed to have used while studying at the University of Virginia in 1826; it is preserved and available for visits. Its upkeep is now overseen by a group of students and staff known as the Raven Society. The earliest surviving home in which Poe lived is in Baltimore, preserved as the Edgar Allan Poe House and Museum. Poe is believed to have lived in the home at the age of 23 when he first lived with Maria Clemm and Virginia (as well as his grandmother and possibly his brother William Henry Leonard Poe). It is open to the public and is also the home of the Edgar Allan Poe Society. Of the several homes that Poe, his wife Virginia, and his mother-in-law Maria rented in Philadelphia, only the last house has survived. The Spring Garden home, where the author lived in 1843–1844, is today preserved by the National Park Service as the Edgar Allan Poe National Historic Site. Poe's final home is preserved as the Edgar Allan Poe Cottage in the Bronx. In Boston, a commemorative plaque on Boylston Street is several blocks away from the actual location of Poe's birth. The house which was his birthplace at 62 Carver Street no longer exists; also, the street has since been renamed "Charles Street South". A "square" at the intersection of Broadway, Fayette, and Carver Streets had once been named in his honor, but it disappeared when the streets were rearranged. In 2009, the intersection of Charles and Boylston Streets (two blocks north of his birthplace) was designated "Edgar Allan Poe Square". In March 2014, fundraising was completed for construction of a permanent memorial sculpture, known as "Poe Returning to Boston," at this location. The winning design by Stefanie Rocknak depicts a life-sized Poe striding against the wind, accompanied by a flying raven; his suitcase lid has fallen open, leaving a "paper trail" of literary works embedded in the sidewalk behind him. The public unveiling on October 5, 2014 was attended by former U.S. poet laureate Robert Pinsky. Other Poe landmarks include a building on the Upper West Side where Poe temporarily lived when he first moved to New York. A plaque suggests that Poe wrote "The Raven" here. On Sullivan's Island in Charleston, South Carolina, the setting of Poe's tale "The Gold-Bug" and where Poe served in the Army in 1827 at Fort Moultrie, there is a restaurant called Poe's Tavern. In Fell's Point, Baltimore, a bar still stands where legend says that Poe was last seen drinking before his death. Now known as "The Horse You Came in On", local lore insists that a ghost whom they call "Edgar" haunts the rooms above. Early daguerreotypes of Poe continue to arouse great interest among literary historians. Notable among them are: Between 1949 and 2009, a bottle of cognac and three roses were left at Poe's original grave marker every January 19 by an unknown visitor affectionately referred to as the "Poe Toaster". Sam Porpora was a historian at the Westminster Church in Baltimore where Poe is buried, and he claimed on August 15, 2007 that he had started the tradition in 1949. 
Porpora said that the tradition began in order to raise money and enhance the profile of the church. His story has not been confirmed, and some details which he gave to the press are factually inaccurate. The Poe Toaster's last appearance was on January 19, 2009, the day of Poe's bicentennial.
https://en.wikipedia.org/wiki?curid=9549
Electricity Electricity is the set of physical phenomena associated with the presence and motion of matter that has a property of electric charge. Electricity is related to magnetism, both being part of the phenomenon of electromagnetism, as described by Maxwell's equations. Various common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others. The presence of an electric charge, which can be either positive or negative, produces an electric field. The movement of electric charges is an electric current and produces a magnetic field. When a charge is placed in a location with a non-zero electric field, a force will act on it. The magnitude of this force is given by Coulomb's law. Thus, if that charge were to move, the electric field would be doing work on the electric charge. Thus we can speak of electric potential at a certain point in space, which is equal to the work done by an external agent in carrying a unit of positive charge from an arbitrarily chosen reference point to that point without any acceleration and is typically measured in volts. Electricity is at the heart of many modern technologies. Electrical phenomena have been studied since antiquity, though progress in theoretical understanding remained slow until the seventeenth and eighteenth centuries. The theory of electromagnetism was developed in the 19th century, and by the end of that century electricity was being put to industrial and residential use by electrical engineers. The rapid expansion in electrical technology at this time transformed industry and society, becoming a driving force for the Second Industrial Revolution. Electricity's extraordinary versatility means it can be put to an almost limitless set of applications which include transport, heating, lighting, communications, and computation. Electrical power is now the backbone of modern industrial society. Long before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the "Thunderer of the Nile", and described them as the "protectors" of all other fish. Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by electric catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients suffering from ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Possibly the earliest and nearest approach to the discovery of the identity of lightning, and electricity from any other source, is to be attributed to the Arabs, who before the 15th century had the Arabic word for lightning "ra‘ad" applied to the electric ray. Ancient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity.
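As a concrete illustration of the quantities described above, the following minimal Python sketch evaluates Coulomb's law for two point charges and the electric potential of a point charge relative to a reference at infinity. The charge values and separation are arbitrary numbers chosen for illustration.

```python
# Minimal sketch: Coulomb force between two point charges and the electric
# potential of a point charge, illustrating the quantities described above.
# The charge values and separation are chosen arbitrarily for illustration.
import math

EPSILON_0 = 8.8541878128e-12          # vacuum permittivity, F/m
K_E = 1 / (4 * math.pi * EPSILON_0)   # Coulomb constant, ~8.99e9 N m^2 / C^2

q1 = 1e-6    # charge 1 in coulombs (1 microcoulomb, assumed)
q2 = -2e-6   # charge 2 in coulombs (assumed)
r = 0.05     # separation in metres (assumed)

# Coulomb's law: magnitude of the force between two point charges.
force = K_E * abs(q1 * q2) / r**2
print(f"Force magnitude: {force:.3f} N")

# Electric potential of charge q1 at distance r, relative to infinity:
# the work per unit charge needed to bring a test charge from the
# reference point to that location, measured in volts.
potential = K_E * q1 / r
print(f"Potential at r: {potential:.1f} V")
```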
According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature. Electricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote "De Magnete", in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the New Latin word "electricus" ("of amber" or "like amber", from ἤλεκτρον, "elektron", the Greek word for "amber") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words "electric" and "electricity", which made their first appearance in print in Thomas Browne's "Pseudodoxia Epidemica" of 1646. Further work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky.
https://en.wikipedia.org/wiki?curid=9550
Empedocles Empedocles ("Empedoklēs"; fl. 444–443 BC) was a Greek pre-Socratic philosopher and a native citizen of Akragas, a Greek city in Sicily. Empedocles' philosophy is best known for originating the cosmogonic theory of the four classical elements. He also proposed forces he called Love and Strife which would mix and separate the elements, respectively. Influenced by Pythagoras (died c. 495 BC) and the Pythagoreans, Empedocles challenged the practice of animal sacrifice and killing animals for food. He developed a distinctive doctrine of reincarnation. He is generally considered the last Greek philosopher to have recorded his ideas in verse. Some of his work survives, more than is the case for any other pre-Socratic philosopher. Empedocles' death was mythologized by ancient writers, and has been the subject of a number of literary treatments. Empedocles (Empedokles) was a native citizen of Akragas in Sicily. He came from a rich and noble family. Very little is known about his life. His grandfather, also called Empedokles, had won a victory in the horse-race at Olympia in the 71st Olympiad (496–495 BC). His father's name, according to the best accounts, was Meton. All we can be said to know about the dates of Empedocles is that his grandfather was still alive in 496 BC; that he himself was active at Akragas after 472 BC, the date of Theron's death; and that he died later than 444 BC. Empedocles "broke up the assembly of the Thousand, perhaps some oligarchical association or club." He is said to have been magnanimous in his support of the poor; severe in persecuting the overbearing conduct of the oligarchs; and he even declined the sovereignty of the city when it was offered to him. According to John Burnet: "there is another side to his public character ... He claimed to be a god, and to receive the homage of his fellow-citizens in that capacity. The truth is, Empedokles was not a mere statesman; he had a good deal of the 'medicine-man' about him. ... We can see what this means from the fragments of the "Purifications". Empedokles was a preacher of the new religion which sought to secure release from the 'wheel of birth' by purity and abstinence. Orphicism seems to have been strong at Akragas in the days of Theron, and there are even some verbal coincidences between the poems of Empedokles and the Orphicising Odes which Pindar addressed to that prince." His brilliant oratory, his penetrating knowledge of nature, and the reputation of his marvellous powers, including the curing of diseases, and averting epidemics, produced many myths and stories surrounding his name. In his poem "Purifications" he claimed miraculous powers, including the destruction of evil, the curing of old age, and the controlling of wind and rain. Empedocles was acquainted or connected by friendship with the physicians Pausanias (his eromenos) and Acron; with various Pythagoreans; and even, it is said, with Parmenides and Anaxagoras. The only pupil of Empedocles who is mentioned is the sophist and rhetorician Gorgias. Timaeus and Dicaearchus spoke of the journey of Empedocles to the Peloponnese, and of the admiration which was paid to him there; others mentioned his stay at Athens, and in the newly founded colony of Thurii, 446 BC; there are also fanciful reports of him travelling far to the east to the lands of the Magi. The contemporary "Life of Empedocles" by Xanthus has been lost.
According to Aristotle, he died at the age of sixty, even though other writers have him living up to the age of one hundred and nine. Likewise, there are myths concerning his death: a tradition, which is traced to Heraclides Ponticus, represented him as having been removed from the Earth; whereas others had him perishing in the flames of Mount Etna. According to Burnet: "We are told that Empedokles leapt into the crater of Etna that he might be deemed a god. This appears to be a malicious version of a tale set on foot by his adherents that he had been snatched up to heaven in the night. Both stories would easily get accepted; for there was no local tradition. Empedokles did not die in Sicily, but in the Peloponnese, or, perhaps, at Thourioi. It is not at all unlikely that he visited Athens. ... Timaios refuted the common stories [about Empedokles] at some length. (Diog. viii. 71 sqq.; Ritter and Preller [162].). He was quite positive that Empedokles never returned to Sicily after he went to Olympia to have his poem recited to the Hellenes. The plan for the colonisation of Thourioi would, of course, be discussed at Olympia, and we know that Greeks from the Peloponnese and elsewhere joined it. He may very well have gone to Athens in connexion with this." Empedocles is considered the last Greek philosopher to write in verse. There is a debate about whether the surviving fragments of his teaching should be attributed to two separate poems, "Purifications" and "On Nature", with different subject matter, or whether they may all derive from one poem with two titles, or whether one title refers to part of the whole poem. Some scholars argue that the title "Purifications" refers to the first part of a larger work called (as a whole) "On Nature". There is also a debate about which fragments should be attributed to each of the poems, if there are two poems, or if part of it is called "Purifications", because ancient writers rarely mentioned which poem they were quoting. Empedocles was undoubtedly acquainted with the didactic poems of Xenophanes and Parmenides—allusions to the latter can be found in the fragments—but he seems to have surpassed them in the animation and richness of his style, and in the clearness of his descriptions and diction. Aristotle called him the father of rhetoric, and, although he acknowledged only the meter as a point of comparison between the poems of Empedocles and the epics of Homer, he described Empedocles as Homeric and powerful in his diction. Lucretius speaks of him with enthusiasm, and evidently viewed him as his model. The two poems together comprised 5000 lines. About 550 lines of his poetry survive. In the old editions of Empedocles, only about 100 lines were typically ascribed to his "Purifications", which was taken to be a poem about ritual purification, or the poem that contained all his religious and ethical thought. Early editors supposed that it was a poem that offered a mythical account of the world which may, nevertheless, have been part of Empedocles' philosophical system. According to Diogenes Laërtius it began with the following verses: Friends who inhabit the mighty town by tawny Acragas which crowns the citadel, caring for good deeds, greetings; I, an immortal God, no longer mortal, wander among you, honoured by all, adorned with holy diadems and blooming garlands.
To whatever illustrious towns I go, I am praised by men and women, and accompanied by thousands, who thirst for deliverance, some ask for prophecies, and some entreat, for remedies against all kinds of disease. In the older editions, it is to this work that editors attributed the story about souls, where we are told that there were once spirits who lived in a state of bliss, but having committed a crime (the nature of which is unknown) they were punished by being forced to become mortal beings, reincarnated from body to body. Humans, animals, and even plants are such spirits. The moral conduct recommended in the poem may allow us to become like gods again. If, as is now widely held, this title "Purifications" refers to the poem "On Nature", or to a part of that poem, this story will have been at the beginning of the main work on nature and the cosmic cycle. The relevant verses are also sometimes attributed to the proem of "On Nature", even by those who think that there was a separate poem called "Purifications". There are about 450 lines of his poem "On Nature" extant, including 70 lines which have been reconstructed from some papyrus scraps known as the "Strasbourg Papyrus". The poem originally consisted of 2000 lines of hexameter verse, and was addressed to Pausanias. It was this poem which outlined his philosophical system. In it, Empedocles explains not only the nature and history of the universe, including his theory of the four classical elements, but he describes theories on causation, perception, and thought, as well as explanations of terrestrial phenomena and biological processes. Although acquainted with the theories of the Eleatics and the Pythagoreans, Empedocles did not belong to any one definite school. An eclectic in his thinking, he combined much that had been suggested by Parmenides, Pythagoras and the Ionian schools. He was a firm believer in Orphic mysteries, as well as a scientific thinker and a precursor of physics. Aristotle mentions Empedocles among the Ionic philosophers, and he places him in very close relation to the atomist philosophers and to Anaxagoras. According to House (1956) Empedocles, like the Ionian philosophers and the atomists, continued the tradition of tragic thought which tried to find the basis of the relationship of the one and many. Each of the various philosophers, following Parmenides, derived from the Eleatics, the conviction that an existence could not pass into non-existence, and vice versa. Yet, each one had his peculiar way of describing this relation of Divine and mortal thought and thus of the relation of the One and the Many. In order to account for change in the world, in accordance with the ontological requirements of the Eleatics, they viewed changes as the result of mixture and separation of unalterable fundamental realities. Empedocles held that the four elements (Water, Air, Earth, and Fire) were those unchangeable fundamental realities, which were themselves transfigured into successive worlds by the powers of Love and Strife (Heraclitus had explicated the Logos or the "unity of opposites"). Empedocles established four ultimate elements which make all the structures in the world—fire, air, water, earth. Empedocles called these four elements "roots", which he also identified with the mythical names of Zeus, Hera, Nestis, and Aidoneus (e.g., "Now hear the fourfold roots of everything: enlivening Hera, Hades, shining Zeus. And Nestis, moistening mortal springs with tears"). 
Empedocles never used the term "element" ("stoicheion"), which seems to have been first used by Plato. According to the different proportions in which these four indestructible and unchangeable elements are combined with each other, the difference of the structure is produced. It is in the aggregation and segregation of elements thus arising that Empedocles, like the atomists, found the real process which corresponds to what is popularly termed growth, increase or decrease. Nothing new comes or can come into being; the only change that can occur is a change in the juxtaposition of element with element. This theory of the four elements became the standard dogma for the next two thousand years. The four elements, however, are simple, eternal, and unalterable, and as change is the consequence of their mixture and separation, it was also necessary to suppose the existence of moving powers that bring about mixture and separation. The four elements are both eternally brought into union and parted from one another by two divine powers, Love and Strife ("Philotes" and "Neikos"). Love is responsible for the attraction of different forms of matter, and Strife is the cause of their separation. If the four elements make up the universe, then Love and Strife explain their variation and harmony. Love and Strife are attractive and repulsive forces, respectively, which are plainly observable in human behavior, but also pervade the universe. The two forces wax and wane in their dominance, but neither force ever wholly escapes the imposition of the other. According to Burnet: "Empedokles sometimes gave an efficient power to Love and Strife, and sometimes put them on a level with the other four. The fragments leave no room for doubt that they were thought of as spatial and corporeal. ... Love is said to be "equal in length and breadth" to the others, and Strife is described as equal to each of them in weight (fr. 17). These physical speculations were part of a history of the universe which also dealt with the origin and development of life." As the best and original state, there was a time when the pure elements and the two powers co-existed in a condition of rest and inertness in the form of a sphere. The elements existed together in their purity, without mixture and separation, and the uniting power of Love predominated in the sphere; the separating power of Strife guarded the extreme edges of the sphere. Since that time, Strife gained more sway, and the bond which kept the pure elementary substances together in the sphere was dissolved. The elements became the world of phenomena we see today, full of contrasts and oppositions, operated on by both Love and Strife. The sphere of Empedocles, being the embodiment of pure existence, is the embodiment or representative of God. Empedocles assumed a cyclical universe whereby the elements return and prepare the formation of the sphere for the next period of the universe. Empedocles attempted to explain the separation of elements, the formation of earth and sea, of Sun and Moon, of atmosphere. He also dealt with the first origin of plants and animals, and with the physiology of humans. As the elements entered into combinations, there appeared strange results—heads without necks, arms without shoulders. Then as these fragmentary structures met, there were seen horned heads on human bodies, bodies of oxen with human heads, and figures of double sex.
But most of these products of natural forces disappeared as suddenly as they arose; only in those rare cases where the parts were found to be adapted to each other did the complex structures last. Thus the organic universe sprang from spontaneous aggregations that suited each other as if this had been intended. Soon various influences reduced creatures of double sex to a male and a female, and the world was replenished with organic life. It is possible to see this theory as an anticipation of Charles Darwin's theory of natural selection, although Empedocles was not trying to explain evolution. Empedocles is credited with the first comprehensive theory of light and vision. He put forward the idea that we see objects because light streams out of our eyes and touches them. While flawed, this became the fundamental basis on which later Greek philosophers and mathematicians like Euclid would construct some of the most important theories of light, vision, and optics. Knowledge is explained by the principle that elements in the things outside us are perceived by the corresponding elements in ourselves. Like is known by like. The whole body is full of pores and hence respiration takes place over the whole frame. In the organs of sense these pores are specially adapted to receive the effluences which are continually rising from bodies around us; thus perception occurs. In vision, certain particles go forth from the eye to meet similar particles given forth from the object, and the resultant contact constitutes vision. Perception is not merely a passive reflection of external objects. Empedocles noted the limitation and narrowness of human perceptions. We see only a part but fancy that we have grasped the whole. But the senses cannot lead to truth; thought and reflection must look at the thing from every side. It is the business of a philosopher, while laying bare the fundamental difference of elements, to show the identity that exists between what seem unconnected parts of the universe. In a famous fragment, Empedocles attempted to explain the phenomena of respiration by means of an elaborate analogy with the clepsydra, an ancient device for conveying liquids from one vessel to another. This fragment has sometimes been connected to a passage in Aristotle's "Physics" where Aristotle refers to people who twisted wineskins and captured air in clepsydras to demonstrate that void does not exist. There is, however, no evidence that Empedocles performed any experiment with clepsydras. The fragment certainly implies that Empedocles knew about the corporeality of air, but he says nothing whatever about the void. The clepsydra was a common utensil, and everyone who used it must have known, in some sense, that the invisible air could resist liquid. Like Pythagoras, Empedocles believed in the transmigration of the soul (metempsychosis): souls can be reincarnated between humans, animals and even plants. For Empedocles, all living things were on the same spiritual plane; plants and animals are links in a chain of which humans are also a part. Empedocles was a vegetarian and advocated vegetarianism, since the bodies of animals are the dwelling places of punished souls. Wise people, who have learned the secret of life, are next to the divine, and their souls, free from the cycle of reincarnations, are able to rest in happiness for eternity. 
Diogenes Laërtius records the legend that Empedocles died by throwing himself into Mount Etna in Sicily, so that the people would believe his body had vanished and he had turned into an immortal god; the volcano, however, threw back one of his bronze sandals, revealing the deceit. Another legend maintains that he threw himself into the volcano to prove to his disciples that he was immortal; he believed he would come back as a god after being consumed by the fire. Horace also refers to the death of Empedocles in his work "Ars Poetica" and concedes to poets the right to destroy themselves. In "Icaro-Menippus", a comedic dialogue written by the second-century satirist Lucian of Samosata, Empedocles' final fate is re-evaluated. Rather than being incinerated in the fires of Mount Etna, he was carried up into the heavens by a volcanic eruption. Although a bit singed by the ordeal, Empedocles survives and continues his life on the Moon, feeding on dew. Empedocles' death has inspired two major modern literary treatments. It is the subject of Friedrich Hölderlin's play "Tod des Empedokles" ("The Death of Empedocles"), two versions of which were written between the years 1798 and 1800. A third version was made public in 1826. In Matthew Arnold's poem "Empedocles on Etna", a narrative of the philosopher's last hours before he jumps to his death in the crater, first published in 1852, Empedocles predicts: "To the elements it came from / Everything will return. / Our bodies to earth, / Our blood to water, / Heat to fire, / Breath to air." In his "History of Western Philosophy", Bertrand Russell humorously quotes an unnamed poet on the subject: "Great Empedocles, that ardent soul, Leapt into Etna, and was roasted whole." In "J R" by William Gaddis, Karl Marx's famous dictum ("From each according to his abilities, to each according to his needs") is misattributed to Empedocles. In 2006, a massive underwater volcano off the coast of Sicily was named Empedocles. In 2016, Scottish musician Momus wrote and sang the song "The Death of Empedokles" for his album "Scobberlotchers".
https://en.wikipedia.org/wiki?curid=9553
Ericaceae The Ericaceae are a family of flowering plants, commonly known as the heath or heather family, found most commonly in acid and infertile growing conditions. The family is large, with c. 4250 known species spread across 124 genera, making it the 14th most species-rich family of flowering plants. Many well-known and economically important members of the Ericaceae include the cranberry, blueberry, huckleberry, rhododendron (including azaleas), and various common heaths and heathers ("Erica", "Cassiope", "Daboecia", and "Calluna" for example). The Ericaceae contain a morphologically diverse range of taxa, including herbs, dwarf shrubs, shrubs, and trees. Their leaves are usually evergreen, alternate or whorled, simple and without stipules. Their flowers are hermaphrodite and show considerable variability. The petals are often fused (sympetalous) with shapes ranging from narrowly tubular to funnelform or widely urn-shaped. The corollas are usually radially symmetrical (actinomorphic) and urn-shaped, but many flowers of the genus "Rhododendron" are somewhat bilaterally symmetrical (zygomorphic). Anthers open by pores. Adanson used the term Vaccinia to describe a similar family, but Jussieu first used the term Ericaceae. The name comes from the type genus "Erica", which appears to be derived from the Greek word "ereike". The exact meaning is difficult to interpret, but some sources show it as meaning 'heather'. The name may have been used informally to refer to the plants before Linnaean times, and simply been formalised when Linnaeus described "Erica" in 1753, and then again when Jussieu described the Ericaceae in 1789. Historically, the Ericaceae included both subfamilies and tribes. In 1971, Stevens, who outlined the history from 1876 and in some instances 1839, recognised six subfamilies (Rhododendroideae, Ericoideae, Vaccinioideae, Pyroloideae, Monotropoideae, and Wittsteinioideae), and further subdivided four of the subfamilies into tribes, the Rhododendroideae having seven tribes (including Bejarieae, Rhodoreae, Cladothamneae, Epigaeae, Phyllodoceae, and Diplarcheae). Within tribe Rhodoreae, five genera were described ("Rhododendron" L. (including "Azalea" L. pro parte), "Therorhodion" Small, "Ledum" L., "Tsusiophyllum" Max., and "Menziesia" J. E. Smith), which were eventually transferred into "Rhododendron", along with "Diplarche" from the monogeneric tribe Diplarcheae. In 2002, systematic research resulted in the inclusion of the formerly recognised families Empetraceae, Epacridaceae, Monotropaceae, Prionotaceae, and Pyrolaceae into the Ericaceae based on a combination of molecular, morphological, anatomical, and embryological data, analysed within a phylogenetic framework. The move significantly increased the morphological and geographical range found within the group. One possible classification of the resulting family includes 9 subfamilies, 126 genera, and about 4000 species. The Ericaceae have a nearly worldwide distribution. They are absent from continental Antarctica, parts of the high Arctic, central Greenland, northern and central Australia, and much of the lowland tropics and neotropics. The family is largely composed of plants that can tolerate acidic, infertile conditions. Like other stress-tolerant plants, many Ericaceae have mycorrhizal fungi to assist with extracting nutrients from infertile soils, as well as evergreen foliage to conserve absorbed nutrients. This trait is not found in the Clethraceae and Cyrillaceae, the two families most closely related to the Ericaceae. 
Most Ericaceae (excluding the Monotropoideae, and some Styphelioideae) form a distinctive accumulation of mycorrhizae, in which fungi grow in and around the roots and provide the plant with nutrients. The Pyroloideae are mixotrophic and gain sugars from the mycorrhizae, as well as nutrients. In many parts of the world, a "heath" or "heathland" is an environment characterised by an open dwarf-shrub community found on low-quality acidic soils, generally dominated by plants in the Ericaceae. A common example is "Erica tetralix". This plant family is also typical of peat bogs and blanket bogs; examples include "Rhododendron groenlandicum" and "Kalmia polifolia". In eastern North America, members of this family often grow in association with an oak canopy, in a habitat known as an oak-heath forest. In heathland, plants in the family Ericaceae serve as host plants to the butterfly "Plebejus argus". Some evidence suggests eutrophic rainwater can convert ericoid heaths with species such as "Erica tetralix" to grasslands. Nitrogen is particularly suspect in this regard, and may be causing measurable changes to the distribution and abundance of some ericaceous species.
https://en.wikipedia.org/wiki?curid=9555
Electrical network An electrical network is an interconnection of electrical components (e.g., batteries, resistors, inductors, capacitors, switches, transistors) or a model of such an interconnection, consisting of electrical elements (e.g., voltage sources, current sources, resistances, inductances, capacitances). An electrical circuit is a network consisting of a closed loop, giving a return path for the current. Linear electrical networks, a special type consisting only of sources (voltage or current), linear lumped elements (resistors, capacitors, inductors), and linear distributed elements (transmission lines), have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency domain methods such as Laplace transforms, to determine DC response, AC response, and transient response. A resistive circuit is a circuit containing only resistors and ideal current and voltage sources. Analysis of resistive circuits is less complicated than analysis of circuits containing capacitors and inductors. If the sources are constant (DC) sources, the result is a DC circuit. The effective resistance and current distribution properties of arbitrary resistor networks can be modeled in terms of their graph measures and geometrical properties. A network that contains active electronic components is known as an "electronic circuit". Such networks are generally nonlinear and require more complex design and analysis tools. An active network contains at least one voltage source or current source that can supply energy to the network indefinitely. A passive network does not contain an active source. An active network contains one or more sources of electromotive force. Practical examples of such sources include a battery or a generator. Active elements can inject power into the circuit, provide power gain, and control the current flow within the circuit. Passive networks do not contain any sources of electromotive force. They consist of passive elements like resistors and capacitors. A network is linear if its signals obey the principle of superposition; otherwise it is non-linear. Passive networks are generally taken to be linear, but there are exceptions. For instance, an inductor with an iron core can be driven into saturation if driven with a large enough current. In this region, the behaviour of the inductor is very non-linear. Discrete passive components (resistors, capacitors and inductors) are called "lumped elements" because their resistance, capacitance or inductance is assumed to be located ("lumped") at one place. This design philosophy is called the lumped-element model and networks so designed are called "lumped-element circuits". This is the conventional approach to circuit design. At high enough frequencies the lumped assumption no longer holds because there is a significant fraction of a wavelength across the component dimensions. For such cases a new design model, called the distributed-element model, is needed. Networks designed to this model are called "distributed-element circuits". A distributed-element circuit that includes some lumped components is called a "semi-lumped" design. An example of a semi-lumped circuit is the combline filter. Sources can be classified as independent sources and dependent sources. An ideal independent source maintains the same voltage or current regardless of the other elements present in the circuit. Its value is either constant (DC) or sinusoidal (AC). 
The strength of voltage or current is not changed by any variation in the connected network. A dependent source, by contrast, delivers a power, voltage, or current whose value depends on some other voltage or current elsewhere in the circuit, according to the type of source it is. A number of electrical laws apply to all electrical networks, most notably Kirchhoff's current and voltage laws and Ohm's law. Other, more complex laws may be needed if the network contains nonlinear or reactive components. Non-linear self-regenerative heterodyning systems can be approximated. Applying these laws results in a set of simultaneous equations that can be solved either algebraically or numerically. To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Simple linear circuits can be analyzed by hand using complex number theory. In more complex cases the circuit may be analyzed with specialized computer programs or estimation techniques such as the piecewise-linear model. Circuit simulation software, such as HSPICE (an analog circuit simulator), and languages such as VHDL-AMS and Verilog-AMS allow engineers to design circuits without the time, cost and risk of error involved in building circuit prototypes. More complex circuits can be analyzed numerically with software such as SPICE or GNUCAP, or symbolically using software such as SapWin. When faced with a new circuit, the software first tries to find a steady state solution, that is, one where all nodes conform to Kirchhoff's current law "and" the voltages across and through each element of the circuit conform to the voltage/current equations governing that element. Once the steady state solution is found, the "operating points" of each element in the circuit are known. For a small signal analysis, every non-linear element can be linearized around its operating point to obtain the small-signal estimate of the voltages and currents; this is an application of Ohm's law. The resulting linear circuit matrix can be solved with Gaussian elimination. Software such as the PLECS interface to Simulink uses piecewise-linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time.
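The solution of a linear circuit matrix by Gaussian elimination, as described above, can be illustrated with a small numerical sketch. The two-node resistive circuit, its component values, and the variable names below are invented purely for illustration; numpy.linalg.solve stands in for the elimination step a circuit simulator would perform on the assembled conductance matrix.

```python
import numpy as np

# Hypothetical two-node resistive circuit (values chosen for illustration):
# a 1 mA current source feeds node 1; R1 ties node 1 to ground,
# R2 connects node 1 to node 2, and R3 ties node 2 to ground.
R1, R2, R3 = 1e3, 2e3, 3e3   # ohms
Is = 1e-3                    # amperes, injected into node 1

# Kirchhoff's current law at each non-reference node gives G @ V = I.
G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])
I = np.array([Is, 0.0])

V = np.linalg.solve(G, I)    # LU factorisation, i.e. Gaussian elimination
print(f"V1 = {V[0]:.3f} V, V2 = {V[1]:.3f} V")   # about 0.833 V and 0.500 V
```

The same pattern scales to larger linear networks: each additional node adds one row and column to the conductance matrix, and linearized or dependent elements contribute further entries.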
https://en.wikipedia.org/wiki?curid=9559
Empty set In mathematics, the empty set is the unique set having no elements; its size or cardinality (count of elements in a set) is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set; in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set. In some textbooks and popularizations, the empty set is referred to as the "null set". However, "null set" is a distinct notion within the context of measure theory: in that setting, it describes a set of measure zero, and such a set is not necessarily empty. The empty set may also be called the "void set". Common notations for the empty set include "{}" and two closely related circled-slash symbols, "∅" (coded in LaTeX as \varnothing) and the variant glyph produced by the LaTeX command \emptyset. The latter two symbols were introduced by the Bourbaki group (specifically André Weil) in 1939, inspired by the letter Ø in the Danish and Norwegian alphabets. In the past, "0" was occasionally used as a symbol for the empty set, but this is now considered to be an improper use of notation. The symbol ∅ is available at Unicode point U+2205 and can be coded in HTML as "&empty;" or "&#8709;". When writing in languages such as Danish and Norwegian, where the empty set character may be confused with the alphabetic letter Ø (as when using the symbol in linguistics), the Unicode character U+29B0 REVERSED EMPTY SET ⦰ may be used instead. In standard axiomatic set theory, by the principle of extensionality, two sets are equal if they have the same elements; therefore there can be only one set with no elements. Hence there is but one empty set, and we speak of "the empty set" rather than "an empty set". For any set "A": the empty set is a subset of "A"; the union of "A" with the empty set is "A"; the intersection of "A" with the empty set is the empty set; and the Cartesian product of "A" with the empty set is the empty set. The empty set itself has the following properties: its only subset is the empty set itself; its power set is the set containing only the empty set; and its number of elements (its cardinality) is zero. The connection between the empty set and zero goes further, however: in the standard set-theoretic definition of natural numbers, sets are used to model the natural numbers. In this context, zero is modelled by the empty set. For any property "P": every element of the empty set has the property "P" (vacuously), and equally there is no element of the empty set having the property "P". Conversely, if for some property "P" and some set "V" it holds both that every element of "V" has the property "P" and that no element of "V" has the property "P", then "V" = ∅. By the definition of subset, the empty set is a subset of any set "A". That is, "every" element "x" of ∅ belongs to "A". Indeed, if it were not true that every element of ∅ is in "A", then there would be at least one element of ∅ that is not present in "A". Since there are "no" elements of ∅ at all, there is no element of ∅ that is not in "A". Any statement that begins "for every element of ∅" is not making any substantive claim; it is a vacuous truth. This is often paraphrased as "everything is true of the elements of the empty set." When speaking of the sum of the elements of a finite set, one is inevitably led to the convention that the sum of the elements of the empty set is zero. The reason for this is that zero is the identity element for addition. Similarly, the product of the elements of the empty set should be considered to be one (see empty product), since one is the identity element for multiplication. A derangement is a permutation of a set without fixed points. The empty set can be considered a derangement of itself, because it has only one permutation (the empty permutation), and it is vacuously true that no element (of the empty set) can be found that retains its original position. 
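The conventions just described (the empty set as a subset of every set, vacuous truth, and the empty sum and empty product) can be checked directly in a few lines of Python. This is only an informal sketch using the built-in set type; the variable names are illustrative.

```python
import math

empty = set()      # the empty set, ∅
A = {1, 2, 3}      # an arbitrary example set

assert empty.issubset(A) and len(empty) == 0   # ∅ is a subset of A, and |∅| = 0
assert all(x != x for x in empty)              # vacuous truth: any property holds for all elements of ∅
assert sum(empty) == 0                         # empty sum is the additive identity
assert math.prod(empty) == 1                   # empty product is the multiplicative identity
assert A | empty == A and A & empty == empty   # union and intersection with ∅
```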
Since the empty set has no members, when it is considered as a subset of any ordered set, then every member of that set will be an upper bound and lower bound for the empty set. For example, when considered as a subset of the real numbers, with its usual ordering, represented by the real number line, every real number is both an upper and lower bound for the empty set. When considered as a subset of the extended reals formed by adding two "numbers" or "points" to the real numbers, namely negative infinity, denoted −∞, which is defined to be less than every other extended real number, and positive infinity, denoted +∞, which is defined to be greater than every other extended real number, then sup ∅ = −∞ and inf ∅ = +∞. That is, the least upper bound (sup or supremum) of the empty set is negative infinity, while the greatest lower bound (inf or infimum) is positive infinity. By analogy with the above, in the domain of the extended reals, negative infinity is the identity element for the maximum and supremum operators, while positive infinity is the identity element for minimum and infimum. In any topological space "X", the empty set is open by definition, as is "X". Since the complement of an open set is closed and the empty set and "X" are complements of each other, the empty set is also closed, making it a clopen set. Moreover, the empty set is compact by the fact that every finite set is compact. The closure of the empty set is empty. This is known as "preservation of nullary unions." If "A" is a set, then there exists precisely one function "f" from ∅ to "A", the empty function. As a result, the empty set is the unique initial object of the category of sets and functions. The empty set can be turned into a topological space, called the empty space, in just one way: by defining the empty set to be open. This empty topological space is the unique initial object in the category of topological spaces with continuous maps. In fact, it is a strict initial object: only the empty set has a function to the empty set. In the von Neumann construction of the ordinals, 0 is defined as the empty set, and the successor of an ordinal "n" is defined as "n" ∪ {"n"}. Thus, we have 1 = {∅}, 2 = {∅, {∅}}, 3 = {∅, {∅}, {∅, {∅}}}, and so on. The von Neumann construction, along with the axiom of infinity, which guarantees the existence of at least one infinite set, can be used to construct the set of natural numbers, ℕ, such that the Peano axioms of arithmetic are satisfied. In Zermelo set theory, the existence of the empty set is assured by the axiom of empty set, and its uniqueness follows from the axiom of extensionality. However, the axiom of empty set can be shown to be redundant: once the existence of at least one set "A" is granted (whether by another axiom, such as the axiom of infinity, or by the convention of first-order logic that the domain of discourse is non-empty), the axiom schema of separation yields the empty set as {"x" ∈ "A" : "x" ≠ "x"}. While the empty set is a standard and widely accepted mathematical concept, it remains an ontological curiosity, whose meaning and usefulness are debated by philosophers and logicians. The empty set is not the same thing as "nothing"; rather, it is a set with nothing "inside" it, and a set is always "something". This issue can be overcome by viewing a set as a bag: an empty bag undoubtedly still exists. Darling (2004) explains that the empty set is not nothing, but rather "the set of all triangles with four sides, the set of all numbers that are bigger than nine but smaller than eight, and the set of all opening moves in chess that involve a king." The popular syllogism "Nothing is better than eternal happiness; a ham sandwich is better than nothing; therefore, a ham sandwich is better than eternal happiness" is often used to demonstrate the philosophical relation between the concept of nothing and the empty set. 
Darling writes that the contrast can be seen by rewriting the statements "Nothing is better than eternal happiness" and "[A] ham sandwich is better than nothing" in a mathematical tone. According to Darling, the former is equivalent to "The set of all things that are better than eternal happiness is ∅" and the latter to "The set {ham sandwich} is better than the set ∅". The first compares elements of sets, while the second compares the sets themselves. Jonathan Lowe argues that while the empty set was undoubtedly an important landmark in the history of mathematics, we should not assume that its utility in calculation depends on its actually denoting some object; all we are ever told about it is that it is a set, that it has no members, and that it is unique among sets in having no members, and it remains unclear, on his view, how there can be such a thing. George Boolos argued that much of what has been heretofore obtained by set theory can just as easily be obtained by plural quantification over individuals, without reifying sets as singular entities having other entities as members.
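The von Neumann construction described earlier, in which 0 is the empty set and each successor is formed by adjoining an ordinal to itself, can also be sketched in Python. frozenset is used so that sets can serve as elements of other sets; the function and variable names are purely illustrative.

```python
def successor(n: frozenset) -> frozenset:
    """Return n ∪ {n}, the von Neumann successor of the ordinal n."""
    return n | frozenset({n})

zero = frozenset()        # 0 = ∅
one = successor(zero)     # 1 = {∅}
two = successor(one)      # 2 = {∅, {∅}}
three = successor(two)    # 3 = {∅, {∅}, {∅, {∅}}}

# Each ordinal built this way has exactly that many elements,
# and it contains every smaller ordinal as an element.
assert [len(n) for n in (zero, one, two, three)] == [0, 1, 2, 3]
assert zero in three and one in three and two in three
```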
https://en.wikipedia.org/wiki?curid=9566
Endomorphism In mathematics, an endomorphism is a morphism from a mathematical object to itself. An endomorphism that is also an isomorphism is an automorphism. For example, an endomorphism of a vector space "V" is a linear map "f": "V" → "V", and an endomorphism of a group "G" is a group homomorphism "f": "G" → "G". In general, we can talk about endomorphisms in any category. In the category of sets, endomorphisms are functions from a set "S" to itself. In any category, the composition of any two endomorphisms of an object "X" is again an endomorphism of "X". It follows that the set of all endomorphisms of "X" forms a monoid, the full transformation monoid, denoted End("X") (or End_"C"("X") to emphasize the category "C"). An invertible endomorphism of "X" is called an automorphism. The set of all automorphisms is a subset of End("X") with a group structure, called the automorphism group of "X" and denoted Aut("X"). Any two endomorphisms "f" and "g" of an abelian group "A" can be added together by the rule ("f" + "g")("a") = "f"("a") + "g"("a"). Under this addition, and with multiplication being defined as function composition, the endomorphisms of an abelian group form a ring (the endomorphism ring). For example, the set of endomorphisms of the abelian group ℤ^"n" is the ring of all "n" × "n" matrices with integer entries. The endomorphisms of a vector space or module also form a ring, as do the endomorphisms of any object in a preadditive category. The endomorphisms of a nonabelian group generate an algebraic structure known as a near-ring. Every ring with one is the endomorphism ring of its regular module, and so is a subring of an endomorphism ring of an abelian group; however, there are rings that are not the endomorphism ring of any abelian group. In any concrete category, especially for vector spaces, endomorphisms are maps from a set into itself, and may be interpreted as unary operators on that set, acting on the elements, and allowing one to define the notion of orbits of elements, etc. Depending on the additional structure defined for the category at hand (topology, metric, ...), such operators can have properties like continuity, boundedness, and so on. More details should be found in the article about operator theory. An endofunction is a function whose domain is equal to its codomain. A homomorphic endofunction is an endomorphism. Let "S" be an arbitrary set. Among endofunctions on "S" one finds permutations of "S" and constant functions associating to every "x" in "S" the same fixed element "c" in "S". Every permutation of "S" has codomain equal to its domain and is bijective and invertible. If "S" has more than one element, a constant function on "S" has an image that is a proper subset of its codomain, and thus is not bijective (and hence not invertible). The function associating to each natural number "n" the floor of "n"/2 has its image equal to its codomain and is not invertible. Finite endofunctions are equivalent to directed pseudoforests. For a set of size "n" there are "n"^"n" endofunctions on the set. Particular examples of bijective endofunctions are the involutions; i.e., the functions coinciding with their inverses.
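The counting of endofunctions and the monoid structure of End("S") described above can be made concrete for a small finite set. The following Python sketch is illustrative only: endofunctions are represented as dictionaries mapping every element of a hypothetical three-element set to an element of the same set.

```python
from itertools import product

S = [0, 1, 2]

def compose(f: dict, g: dict) -> dict:
    """Return the composite endofunction f after g on S."""
    return {x: f[g[x]] for x in S}

# There are n ** n endofunctions on a set of size n (27 for a 3-element set).
endofunctions = [dict(zip(S, images)) for images in product(S, repeat=len(S))]
assert len(endofunctions) == len(S) ** len(S)

# The bijective endofunctions are exactly the permutations (the automorphisms in Set).
permutations = [f for f in endofunctions if len(set(f.values())) == len(S)]
assert len(permutations) == 6   # 3! = 6

# Composition stays inside the set of endofunctions and has the identity as unit:
# this is the monoid structure End(S).
identity = {x: x for x in S}
for f in endofunctions:
    assert compose(identity, f) == f == compose(f, identity)
```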
https://en.wikipedia.org/wiki?curid=9569
Eric Hoffer Eric Hoffer (July 25, 1898 – May 21, 1983) was an American moral and social philosopher. He was the author of ten books and was awarded the Presidential Medal of Freedom in February 1983. His first book, "The True Believer" (1951), was widely recognized as a classic, receiving critical acclaim from both scholars and laymen, although Hoffer believed that "The Ordeal of Change" (1963) was his finest work. Hoffer was born in 1898 in The Bronx, New York, to Knut and Elsa (Goebel) Hoffer. His parents were immigrants from Alsace, then part of Imperial Germany. By age five, Hoffer could already read in both English and his parents' native German. When he was five, his mother fell down the stairs with him in her arms. He later recalled, "I lost my sight at the age of seven. Two years before, my mother and I fell down a flight of stairs. She did not recover and died in that second year after the fall. I lost my sight and, for a time, my memory." Hoffer spoke with a pronounced German accent all his life, and spoke the language fluently. He was raised by a live-in relative or servant, a German immigrant named Martha. His eyesight inexplicably returned when he was 15. Fearing he might lose it again, he seized on the opportunity to read as much as he could. His recovery proved permanent, but Hoffer never abandoned his reading habit. Hoffer was a young man when he also lost his father. The cabinetmaker's union paid for Knut Hoffer's funeral and gave Hoffer about $300 insurance money. He took a bus to Los Angeles and spent the next 10 years wandering, as he remembered, “up and down the land, dodging hunger and grieving over the world.” Hoffer eventually landed on Skid Row, reading, occasionally writing, and working at odd jobs. In 1931, he considered suicide by drinking a solution of oxalic acid, but he could not bring himself to do it. He left Skid Row and became a migrant worker, following the harvests in California. He acquired a library card where he worked, dividing his time "between the books and the brothels." He also prospected for gold in the mountains. Snowed in for the winter, he read the "Essays" by Michel de Montaigne. Montaigne impressed Hoffer deeply, and Hoffer often made reference to him. He also developed a respect for America's underclass, which he said was "lumpy with talent." He wrote a novel, "Four Years in Young Hank's Life," and a novella, "Chance and Mr. Kunze," both partly autobiographical. He also penned a long article based on his experiences in a federal work camp, "Tramps and Pioneers." It was never published, but a truncated version appeared in "Harper's Magazine" after he became well known. Hoffer tried to enlist in the US Army at age 40 during World War II, but he was rejected due to a hernia. Instead, he began work as a longshoreman on the docks of San Francisco in 1943. At the same time, he began to write seriously. Hoffer left the docks in 1964 and shortly afterwards became an adjunct professor at the University of California, Berkeley. He later retired from public life in 1970. “I'm going to crawl back into my hole where I started,” he said. “I don't want to be a public person or anybody's spokesman... Any man can ride a train. Only a wise man knows when to get off.” In 1970, he endowed the Lili Fabilli and Eric Hoffer Laconic Essay Prize for students, faculty, and staff at the University of California, Berkeley. Hoffer called himself an atheist but had sympathetic views of religion and described it as a positive force. 
He died at his home in San Francisco in 1983 at the age of 84. Hoffer was influenced by his modest roots and working-class surroundings, seeing in them vast human potential. He expressed this outlook in a letter to Margaret Anderson in 1941. He once remarked, "my writing grows out of my life just as a branch from a tree." When he was called an intellectual, he insisted that he simply was a longshoreman. Hoffer has been dubbed by some authors a "longshoreman philosopher." Hoffer, who was an only child, never married. He fathered a child with Lili Fabilli Osborne, named Eric Osborne, who was born in 1955 and raised by Lili Osborne and her husband, Selden Osborne. Lili Fabilli Osborne had become acquainted with Hoffer through her husband, a fellow longshoreman and acquaintance of Hoffer's. Despite the affair and Lili Osborne later cohabiting with Hoffer, Selden Osborne and Hoffer remained on good terms. Hoffer referred to Eric Osborne as his son or godson. Lili Fabilli Osborne died in 2010 at the age of 93. Prior to her death, Osborne was the executor of Hoffer's estate, and vigorously controlled the rights to his intellectual property. In his 2012 book "Eric Hoffer: The Longshoreman Philosopher," journalist Tom Bethell revealed doubts about Hoffer's account of his early life. Although Hoffer claimed his parents were from Alsace-Lorraine, Hoffer himself spoke with a pronounced Bavarian accent. He claimed to have been born and raised in the Bronx but had no Bronx accent. His lover and executor Lili Fabilli stated that she always thought Hoffer was an immigrant. Her son, Eric Fabilli, said that Hoffer's life may have been comparable to that of B. Traven and considered hiring a genealogist to investigate Hoffer's early life, to which Hoffer reportedly replied, "Are you "sure" you want to know?" Pescadero land-owner Joe Gladstone, a family friend of the Fabillis who also knew Hoffer, said of Hoffer's account of his early life: "I don't believe a word of it." To this day, no one has ever claimed to have known Hoffer in his youth, and no records apparently exist of his parents, nor indeed of Hoffer himself until he was about forty, when his name appeared in a census. Hoffer came to public attention with the 1951 publication of his first book, "The True Believer: Thoughts on the Nature of Mass Movements", which consists of a preface and 125 sections, which are divided into 18 chapters. Hoffer analyzes the phenomenon of "mass movements," a general term that he applies to revolutionary parties, nationalistic movements, and religious movements. He summarizes his thesis in §113: "A movement is pioneered by men of words, materialized by fanatics and consolidated by men of actions." Hoffer argues that fanatical and extremist cultural movements, whether religious, social, or national, arise when large numbers of frustrated people, believing their own individual lives to be worthless or spoiled, join a movement demanding radical change. But the real attraction for this population is an escape from the self, not a realization of individual hopes: "A mass movement attracts and holds a following not because it can satisfy the desire for self-advancement, but because it can satisfy the passion for self-renunciation." 
Hoffer consequently argues that the appeal of mass movements is interchangeable: in the Germany of the 1920s and the 1930s, for example, the Communists and National Socialists were ostensibly enemies, but sometimes enlisted each other's members, since they competed for the same kind of marginalized, angry, frustrated people. For the "true believer," Hoffer argues that particular beliefs are less important than escaping from the burden of the autonomous self. Harvard historian Arthur M. Schlesinger Jr. said of "The True Believer": "This brilliant and original inquiry into the nature of mass movements is a genuine contribution to our social thought." Subsequent to the publication of "The True Believer" (1951), Eric Hoffer touched upon Asia and American interventionism in several of his essays. In "The Awakening of Asia" (1954), published in "The Reporter" and later his book "The Ordeal of Change" (1963), Hoffer discusses the reasons for unrest on the continent. In particular, he argues that the root cause of social discontent in Asia was not government corruption, "communist agitation," or the legacy of European colonial "oppression and exploitation," but rather that a "craving for pride" was the central problem in Asia, suggesting a problem that could not be relieved through typical American intervention. For centuries, Hoffer notes, Asia had "submitted to one conqueror after another." Throughout these centuries, Asia had "been misruled, looted, and bled by both foreign and native oppressors without" so much as "a peep" from the general population. Though not without negative effect, corrupt governments and the legacy of European imperialism represented nothing new under the sun. Indeed, the European colonial authorities had been "fairly beneficent" in Asia. To be sure, Communism exerted an appeal of sorts. For the Asian "pseudo-intellectual," it promised elite status and the phony complexities of "doctrinaire double talk." For the ordinary Asian, it promised partnership with the seemingly emergent Soviet Union in a "tremendous, unprecedented undertaking" to build a better tomorrow. According to Hoffer, however, Communism in Asia was dwarfed by the desire for pride. To satisfy such desire, Asians would willingly and irrationally sacrifice their economic well-being and their lives as well. Unintentionally, the West had created this appetite, causing "revolutionary unrest" in Asia. The West had done so by eroding the traditional communal bonds that once had woven the individual to the patriarchal family, clan, tribe, "cohesive rural or urban unit," and "religious or political body." Without the security and spiritual meaning produced by such bonds, Asians had been liberated from tradition only to find themselves now atomized, isolated, exposed, and abandoned, "left orphaned and empty in a cold world." Certainly, Europe had undergone a similar destruction of tradition, but it had occurred centuries earlier at the end of the medieval period and produced better results thanks to different circumstances. For the Asians of the 1950s, the circumstances differed markedly. Most were illiterate and impoverished, living in a world that included no expansive physical or intellectual vistas. Dangerously, the "articulate minority" of the Asian population inevitably disconnected themselves from the ordinary people, thereby failing to acquire "a sense of usefulness and of worth" that came by "taking part in the world's work." 
As a result, they were "condemned to the life of chattering posturing pseudo-intellectuals" and coveted "the illusion of weight and importance." Most significantly, Hoffer asserts that the disruptive awakening of Asia came about as a result of an unbearable sense of weakness. Indeed, Hoffer discusses the problem of weakness, asserting that while "power corrupts the few... weakness corrupts the many." Hoffer notes that "the resentment of the weak does not spring from any injustice done them but from the sense of their inadequacy and impotence." In short, the weak "hate not wickedness" but themselves for being weak. Consequently, self-loathing produces explosive effects that cannot be mitigated through social engineering schemes, such as programs of wealth redistribution. In fact, American "generosity" is counterproductive, perceived in Asia simply as an example of Western "oppression." In the wake of the Korean War, Hoffer does not recommend exporting at gunpoint either American political institutions or mass democracy. In fact, Hoffer advances the possibility that winning over the multitudes of Asia may not even be desirable. If on the other hand, necessity truly dictates that for "survival" the United States must persuade the "weak" of Asia to "our side," Hoffer suggests the wisest course of action would be to master "the art or technique of sharing hope, pride, and as a last resort, hatred with others." During the Vietnam War, despite his objections to the antiwar movement and acceptance of the notion that the war was somehow necessary to prevent a third world war, Hoffer remained skeptical concerning American interventionism, specifically the intelligence with which the war was being conducted in Southeast Asia. After the United States became involved in the war, Hoffer wished to avoid defeat in Vietnam because of his fear that such a defeat would transform American society for ill, opening the door to those who would preach a stab-in-the-back myth and allow for the rise of an American version of Hitler. In "The Temper of Our Time" (1967), Hoffer implies that the United States as a rule should avoid interventions in the first place: "the better part of statesmanship might be to know clearly and precisely what not to do, and leave action to the improvisation of chance." In fact, Hoffer indicates that "it might be wise to wait for enemies to defeat themselves," as they might fall upon each other with the United States out of the picture. The view was somewhat borne out with the Cambodian-Vietnamese War and Chinese-Vietnamese War of the late 1970s. In May 1968, about a year after the Six-Day War, he wrote an article for the "Los Angeles Times" titled "Israel's Peculiar Position:" Hoffer asks why "everyone expects the Jews to be the only real Christians in this world" and why Israel should sue for peace after its victory. Hoffer believed that rapid change is not necessarily a positive thing for a society and that too rapid change can cause a regression in maturity for those who were brought up in a different society. He noted that in America in the 1960s, many young adults were still living in extended adolescence. Seeking to explain the attraction of the New Left protest movements, he characterized them as the result of widespread affluence, which "is robbing a modern society of whatever it has left of puberty rites to routinize the attainment of manhood." 
He saw the puberty rites as essential for self-esteem and noted that mass movements and juvenile mindsets tend to go together, to the point that anyone, no matter what age, who joins a mass movement immediately begins to exhibit juvenile behavior. Hoffer further noted that working-class Americans rarely joined protest movements and subcultures, since they had entry into meaningful labor as an effective rite of passage out of adolescence, while both the very poor who lived on welfare and the affluent were, in his words, "prevented from having a share in the world's work, and of proving their manhood by doing a man's work and getting a man's pay" and thus remained in a state of extended adolescence. Lacking in necessary self-esteem, they were prone to joining mass movements as a form of compensation. Hoffer suggested that the need for meaningful work as a rite of passage into adulthood could be fulfilled with a two-year civilian national service program (like programs during the Great Depression such as the Civilian Conservation Corps): "The routinization of the passage from boyhood to manhood would contribute to the solution of many of our pressing problems. I cannot think of any other undertaking that would dovetail so many of our present difficulties into opportunities for growth." Hoffer appeared on public television in 1964 and then in two one-hour conversations on CBS with Eric Sevareid in the late 1960s. Hoffer's papers, including 131 of the notebooks he carried in his pockets, were acquired in 2000 by the Hoover Institution Archives. The papers occupy a considerable length of shelf space. Because Hoffer cultivated an aphoristic style, the unpublished notebooks (dated from 1949 to 1977) contain very significant work. Although available for scholarly study since at least 2003, little of their contents has been published. A selection of fifty aphorisms, focusing on the development of unrealized human talents through the creative process, appeared in the July 2005 issue of "Harper's Magazine". On 1 January 2001, the Eric Hoffer Award for books and prose was launched internationally in his honor. In 2005, the Eric Hoffer Estate granted its permission for the award, and Christopher Klim became the award's Chairperson. Australian foreign minister Julie Bishop extensively referred to Hoffer's book "The True Believer" when in a 2015 speech she closely compared the psychological underpinnings of ISIS with those of Nazism.
https://en.wikipedia.org/wiki?curid=9574
European Coal and Steel Community The European Coal and Steel Community (ECSC) was an organisation of six European countries created after World War II to regulate their industrial production under a centralised authority. It was formally established in 1951 by the Treaty of Paris, signed by Belgium, France, Italy, Luxembourg, the Netherlands, and West Germany. The ECSC was the first international organisation to be based on the principles of supranationalism, and started the process of formal integration which ultimately led to the European Union. The ECSC was first proposed by French foreign minister Robert Schuman on 9 May 1950 as a way to prevent further war between France and Germany. He declared his aim was to "make war not only unthinkable but materially impossible" which was to be achieved by regional integration, of which the ECSC was the first step. The Treaty would create a common market for coal and steel among its member states which served to neutralise competition between European nations over natural resources, particularly in the Ruhr. The ECSC was overseen by four institutions: a High Authority composed of independent appointees, a Common Assembly composed of national parliamentarians, a Special Council composed of national ministers, and a Court of Justice. These would ultimately form the blueprint for today's European Commission, European Parliament, the Council of the European Union and the European Court of Justice. The ECSC stood as a model for the communities set up after it by the Treaty of Rome in 1957, the European Economic Community and European Atomic Energy Community, with whom it shared its membership and some institutions. The 1967 Merger (Brussels) Treaty led all of ECSC's institutions to merge into the European Economic Community, but the ECSC retained its own independent legal personality. In 2002, the Treaty of Paris expired and the ECSC ceased to exist in any form, its activities fully absorbed by the European Community under the framework of the Amsterdam and Nice treaties. As Prime Minister and Foreign Minister, Schuman was instrumental in turning French policy away from the Gaullist policy of permanent occupation or control of parts of German territory such as the Ruhr or the Saar. Despite stiff ultra-nationalist, Gaullist and communist opposition, the French Assembly voted a number of resolutions in favour of his new policy of integrating Germany into a community. The International Authority for the Ruhr changed in consequence. The Schuman Declaration of 9 May 1950 (in 1985 declared "Europe Day" by the European Communities) occurred after two Cabinet meetings, when the proposal became French government policy. France was thus the first government to agree to surrender sovereignty in a supranational Community. That decision was based on a text, written and edited by Schuman's friend and colleague, the Foreign Ministry lawyer, professor Paul Reuter with the assistance of economist Jean Monnet and Schuman's Directeur de Cabinet, Bernard Clappier. It laid out a plan for a European Community to pool the coal and steel of its members in a common market. Schuman proposed that "Franco-German production of coal and steel as a whole be placed under a common High Authority, within the framework of an organisation open to the participation of the other countries of Europe". Such an act was intended to help economic growth and cement peace between France and Germany, who were historic enemies. 
Coal and steel were vital resources needed for a country to wage war, so pooling those resources between two such enemies was seen as more than symbolic. Schuman saw the decision of the French government on his proposal as the first example of a democratic and supranational Community, a new development in world history. The plan was also seen by some, like Monnet (who crossed out Reuter's mention of "supranational" in the draft and inserted "federation"), as a first step to a "European federation". The Schuman Declaration that created the ECSC had several distinct aims. Firstly, it was intended to prevent further war between France and Germany and other states by tackling the root cause of war. The ECSC was primarily conceived with France and Germany in mind: "The coming together of the nations of Europe requires the elimination of the age-old opposition of France and Germany. Any action taken must in the first place concern these two countries." The coal and steel industries being essential for the production of munitions, Schuman believed that by uniting these two industries across France and Germany under an innovative supranational system that also included a European anti-cartel agency, he could "make war not only unthinkable but materially impossible". Schuman had another aim: "With increased resources Europe will be able to pursue the achievement of one of its essential tasks, namely, the development of the African continent." Industrial cartels tended to impose "restrictive practices" on national markets, whereas the ECSC would ensure the increased production necessary for their ambitions in Africa. In West Germany, Schuman kept the closest contacts with the new generation of democratic politicians. Karl Arnold, the Minister President of North Rhine-Westphalia, the state that included the coal and steel producing Ruhr, was initially spokesman for German foreign affairs. He gave a number of speeches and broadcasts on a supranational coal and steel community at the same time as Robert Schuman began to propose this Community in 1948 and 1949. The Social Democratic Party of Germany (Sozialdemokratische Partei Deutschlands, SPD), in spite of support from unions and other socialists in Europe, decided it would oppose the Schuman plan. Kurt Schumacher's personal distrust of France, capitalism, and Konrad Adenauer aside, he claimed that a focus on integrating with a "Little Europe of the Six" would override the SPD's prime objective of German reunification and thus empower ultra-nationalist and Communist movements in democratic countries. He also thought the ECSC would end any hopes of nationalising the steel industry and lock in a Europe of "cartels, clerics and conservatives". Younger members of the party, like Carlo Schmid, were, however, in favor of the Community and pointed to the long socialist support for the supranational idea. In France, Schuman had gained strong political and intellectual support from all sections of the nation and many non-communist parties. Notable amongst these were ministerial colleague André Philip, president of the Foreign Relations Committee Édouard Bonnefous, and former prime minister Paul Reynaud. Projects for a coal and steel authority and other supranational communities were formulated in specialist subcommittees of the Council of Europe in the period before it became French government policy. 
Charles de Gaulle, who was then out of power, had been an early supporter of "linkages" between economies, on French terms, and had spoken in 1945 of a "European confederation" that would exploit the resources of the Ruhr. However, he opposed the ECSC as a "faux" (false) pooling ("le pool, ce faux semblant") because he considered it an unsatisfactory "piecemeal approach" to European unity and because he considered the French government "too weak" to dominate the ECSC as he thought proper. De Gaulle also felt that the ECSC had insufficient supranational authority because the Assembly was not ratified by a European referendum, and he did not accept Raymond Aron's contention that the ECSC was intended as a movement away from United States domination. Consequently, de Gaulle and his followers in the RPF voted against ratification in the lower house of the French Parliament. Despite these attacks and those from the extreme left, the ECSC found substantial public support, and so it was established. It gained strong majority votes in all eleven chambers of the parliaments of the Six, as well as approval among associations and European public opinion. In 1950, many had thought another war was inevitable. The steel and coal interests, however, were quite vocal in their opposition. The Council of Europe, created by a proposal of Schuman's first government in May 1948, helped articulate European public opinion and gave the Community idea positive support. The UK Prime Minister Clement Attlee opposed Britain joining the proposed European Coal and Steel Community, saying that he 'would not accept the [UK] economy being handed over to an authority that is utterly undemocratic and is responsible to nobody.' The 100-article Treaty of Paris, which established the ECSC, was signed on 18 April 1951 by "the inner six": France, West Germany, Italy, Belgium, the Netherlands and Luxembourg (Benelux). The ECSC was the first international organisation to be based on supranational principles and was, through the establishment of a common market for coal and steel, intended to expand the economies, increase employment, and raise the standard of living within the Community. The market was also intended to progressively rationalise the distribution of high-level production whilst ensuring stability and employment. The common market for coal was opened on 10 February 1953, and for steel on 1 May 1953. Upon taking effect, the ECSC gradually replaced the International Authority for the Ruhr. On 11 August 1952, the United States was the first non-ECSC member to recognise the Community and stated it would now deal with the ECSC on coal and steel matters, establishing its delegation in Brussels. Monnet responded by choosing Washington, D.C. as the site of the ECSC's first external presence. The headline of the delegation's first bulletin read "Towards a Federal Government of Europe". Six years after the Treaty of Paris, the Treaties of Rome were signed by the six ECSC members, creating the European Economic Community (EEC) and the European Atomic Energy Community (EAEC or Euratom). These Communities were based, with some adjustments, on the ECSC. The Treaties of Rome were to be in force indefinitely, unlike the Treaty of Paris, which was to expire after fifty years. These two new Communities worked on the creation of a customs union and nuclear power community respectively. The Rome treaties were hurried through just before de Gaulle was given emergency powers and proclaimed the Fifth Republic. 
Despite his efforts to "chloroform" the Communities, their fields rapidly expanded and the EEC became the most important tool for political unification, overshadowing the ECSC. Despite being separate legal entities, the ECSC, EEC and Euratom initially shared the Common Assembly and the European Court of Justice, although the Councils and the High Authority/Commissions remained separate. To avoid duplication, the Merger Treaty merged these separate bodies of the ECSC and Euratom with the EEC. The EEC later became one of the three pillars of the present day European Union. The Treaty of Paris was frequently amended as the EC and EU evolved and expanded. With the treaty due to expire in 2002, debate began at the beginning of the 1990s on what to do with it. It was eventually decided that it should be left to expire. The areas covered by the ECSC's treaty were transferred to the Treaty of Rome and the financial loose ends and the ECSC research fund were dealt with via a protocol of the Treaty of Nice. The treaty finally expired on 23 July 2002. That day, the ECSC flag was lowered for the final time outside the European Commission in Brussels and replaced with the EU flag. The institutions of the ECSC were the High Authority, the Common Assembly, the Special Council of Ministers and the Court of Justice. A Consultative Committee was established alongside the High Authority, as a fifth institution representing civil society. This was the first international representation of consumers in history. These institutions were merged in 1967 with those of the European Community, which then governed the ECSC, except for the Committee, which continued to be independent until the expiration of the Treaty of Paris in 2002. The Treaty stated that the location of the institutions would be decided by common accord of the members, yet the issue was hotly contested. As a temporary compromise, the institutions were provisionally located in the City of Luxembourg, despite the Assembly being based in Strasbourg. The High Authority (the predecessor to the European Commission) was a nine-member executive body which governed the Community. The Authority consisted of nine members in office for a term of six years. Eight of these members were appointed by the governments of the six signatories. These eight members then themselves appointed a ninth person to be President of the High Authority. Despite being appointed by agreement of national governments acting together, the members were to pledge not to represent their national interest, but rather took an oath to defend the general interests of the Community as a whole. Their independence was aided by members being barred from having any occupation outside the Authority or having any business interests (paid or unpaid) during their tenure and for three years after they left office. To further ensure impartiality, one third of the membership was to be renewed every two years (article 10). The Authority's principal innovation was its supranational character. It had a broad area of competence to ensure the objectives of the treaty were met and that the common market functioned smoothly. The High Authority could issue three types of legal instruments: Decisions, which were entirely binding laws; Recommendations, which had binding aims but the methods were left to member states; and Opinions, which had no legal force. Up to the merger in 1967, the authority had five Presidents followed by an interim President serving for the final days. 
The Common Assembly (which later became the European Parliament) was composed of 78 representatives and exercised supervisory powers over the executive High Authority. The Common Assembly representatives were to be national MPs delegated each year by their Parliaments to the Assembly or directly elected "by universal suffrage" (article 21), though in practice it was the former, as there was no requirement for elections until the Treaties of Rome and no actual election until 1979, since Rome required prior agreement in the Council on the electoral system. However, to emphasise that the chamber was not a traditional international organisation composed of representatives of national governments, the Treaty of Paris used the term "representatives of the peoples". The Assembly was not originally specified in the Schuman Plan because it was hoped the Community would use the institutions (Assembly, Court) of the Council of Europe. When this became impossible because of British objections, separate institutions had to be created. The Assembly was intended as a democratic counter-weight and check to the High Authority, to advise but also to have power to sack the Authority for incompetence, injustice, corruption or fraud. The first President (akin to a Speaker) was Paul-Henri Spaak.
The Special Council of Ministers (equivalent to the current Council of the European Union) was composed of representatives of national governments. The Presidency was held by each state for a period of three months, rotating between them in alphabetical order. One of its key aspects was the harmonisation of the work of the High Authority and that of national governments, which were still responsible for the state's general economic policies. The Council was also required to issue opinions on certain areas of work of the High Authority. Issues relating only to coal and steel were in the exclusive domain of the High Authority, and in these areas the Council (unlike the modern Council) could only act as a check on the Authority. However, areas outside coal and steel required the consent of the Council.
The Court of Justice was to ensure the observation of ECSC law along with the interpretation and application of the Treaty. The Court was composed of seven judges, appointed by common accord of the national governments for six years. There were no requirements that the judges had to be of a certain nationality, simply that they be qualified and that their independence be beyond doubt. The Court was assisted by two Advocates General.
The Consultative Committee (similar to the Economic and Social Committee) had between 30 and 50 members equally divided between producers, workers, consumers and dealers in the coal and steel sector. Again, there were no national quotas, and the treaty required representatives of European associations to organise their own democratic procedures. They were to establish rules to make their membership fully representative of democratically organised civil society. Members were appointed for two years and were not bound by any mandate or instruction of the organisations which appointed them. The Committee had a plenary assembly, bureau and president. Again, the required democratic procedures were not introduced and nomination of these members remained in the hands of national ministers. The High Authority was obliged to consult the Committee in certain cases where it was appropriate and to keep it informed.
The Consultative Committee remained separate (despite the merger of the other institutions) until 2002, when the Treaty expired and its duties were taken over by the Economic and Social Committee (ESC). Despite its independence, the Committee did cooperate with the ESC when they were consulted on the same issue. Its mission (article 2) was general: to "contribute to the expansion of the economy, the development of employment and the improvement of the standard of living" of its citizens. The Community had little effect on coal and steel "production", which was influenced more by global trends. Trade between members did increase (tenfold for steel), which saved members' money by not having to import resources from the United States. The High Authority also issued 280 modernization loans to the industry, which helped the industry to improve output and reduce costs. Costs were further reduced by the abolition of tariffs at borders.
Among the ECSC's greatest achievements are those on welfare issues. Some mines, for example, were clearly unsustainable without government subsidies. Some miners had extremely poor housing. Over 15 years it financed 112,500 flats for workers, paying US$1,770 per flat, enabling workers to buy a home they could not have otherwise afforded. The ECSC also paid half the occupational redeployment costs of those workers who lost their jobs as coal and steel facilities began to close down. Combined with regional redevelopment aid, the ECSC spent $150 million creating 100,000 jobs, a third of which were for unemployed coal and steel workers. The welfare guarantees invented by the ECSC were extended to workers outside the coal and steel sector by some of its members.
It is argued that, far more important than creating Europe's first social and regional policy, the ECSC introduced European peace. It involved the continent's first European tax. This was a flat tax, a levy on production with a maximum rate of one percent. Given that the European Community countries have now experienced more than seventy years of peace, this has been described as the cheapest tax for peace in history. Another world war, or "world suicide" as Schuman called this threat in 1949, was avoided. In October 1953 Schuman said that the possibility of another European war had been eliminated. Reasoning had to prevail among member states.
However, the ECSC failed to achieve several fundamental aims of the Treaty of Paris. It was hoped the ECSC would prevent a resurgence of large coal and steel groups such as the "Konzerne", which helped Adolf Hitler rise to power. In the Cold War trade-offs, the cartels and major companies re-emerged, leading to apparent price fixing (another element that was meant to be tackled). With a democratic supervisory system, however, the worst aspects of past abuse were avoided through the anti-cartel powers of the Authority, the first international anti-cartel agency in the world. Efficient firms were allowed to expand into a European market without undue domination. Oil, gas and electricity became natural competitors to coal and also broke cartel powers. Furthermore, with the move to oil, the Community failed to define a proper energy policy. The Euratom treaty was largely stifled by de Gaulle, and the European governments refused the proposal, made at Messina in 1955, of an Energy Community involving electricity and other energy vectors.
In a time of high inflation and monetary instability, the ECSC also fell short of ensuring an upward equalisation of workers' pay within the market. These failures could be put down to overambition in a short period of time, or to the goals having been merely political posturing to be ignored. It has been argued that the greatest achievements of the European Coal and Steel Community lie in its revolutionary democratic concepts of a supranational Community.
https://en.wikipedia.org/wiki?curid=9577
European Economic Community The European Economic Community (EEC) was a regional organisation that aimed to bring about economic integration among its member states. It was created by the Treaty of Rome of 1957. Upon the formation of the European Union (EU) in 1993, the EEC was incorporated and renamed the European Community (EC). In 2009, the EC's institutions were absorbed into the EU's wider framework and the community ceased to exist. The Community's initial aim was to bring about economic integration, including a common market and customs union, among its six founding members: Belgium, France, Italy, Luxembourg, the Netherlands and West Germany. It gained a common set of institutions along with the European Coal and Steel Community (ECSC) and the European Atomic Energy Community (EURATOM) as one of the European Communities under the 1965 Merger Treaty (Treaty of Brussels). In 1993 a complete single market was achieved, known as the internal market, which allowed for the free movement of goods, capital, services, and people within the EEC. In 1994 the internal market was formalised by the EEA agreement. This agreement also extended the internal market to include most of the member states of the European Free Trade Association, forming the European Economic Area, which encompasses 15 countries. Upon the entry into force of the Maastricht Treaty in 1993, the EEC was renamed the European Community to reflect that it covered a wider range than economic policy. This was also when the three European Communities, including the EC, were collectively made to constitute the first of the three pillars of the European Union, which the treaty also founded. The EC existed in this form until it was abolished by the 2009 Treaty of Lisbon, which incorporated the EC's institutions into the EU's wider framework and provided that the EU would "replace and succeed the European Community". The EEC was also known as the Common Market in the English-speaking countries and sometimes referred to as the European Community even before it was officially renamed as such in 1993.
In 1951, the Treaty of Paris was signed, creating the European Coal and Steel Community (ECSC). This was an international community based on supranationalism and international law, designed to help the economy of Europe and prevent future war by integrating its members. With the aim of creating a federal Europe, two further communities were proposed: a European Defence Community and a European Political Community. While the treaty for the latter was being drawn up by the Common Assembly, the ECSC parliamentary chamber, the proposed defence community was rejected by the French Parliament. ECSC President Jean Monnet, a leading figure behind the communities, resigned from the High Authority in protest and began work on alternative communities, based on economic integration rather than political integration. After the Messina Conference in 1955, Paul-Henri Spaak was given the task of preparing a report on the idea of a customs union. The so-called Spaak Report of the Spaak Committee formed the cornerstone of the intergovernmental negotiations at Val Duchesse conference centre in 1956. Together with the Ohlin Report the Spaak Report would provide the basis for the Treaty of Rome. In 1956, Paul-Henri Spaak led the Intergovernmental Conference on the Common Market and Euratom at the Val Duchesse conference centre, which prepared for the Treaty of Rome in 1957.
The conference led to the signature, on 25 March 1957, of the Treaty of Rome establishing a European Economic Community. The resulting communities were the European Economic Community (EEC) and the European Atomic Energy Community (EURATOM or sometimes EAEC). These were markedly less supranational than the previous communities, due to protests from some countries that their sovereignty was being infringed (however there would still be concerns with the behaviour of the Hallstein Commission). West Germany became a founding member of the EEC, under Chancellor Konrad Adenauer. The first formal meeting of the Hallstein Commission was held on 16 January 1958 at the Chateau de Val-Duchesse. The EEC (direct ancestor of the modern Community) was to create a customs union while Euratom would promote co-operation in the nuclear power sphere. The EEC rapidly became the most important of these and expanded its activities. One of the first important accomplishments of the EEC was the establishment (1962) of common price levels for agricultural products. In 1968, internal tariffs (tariffs on trade between member nations) were removed on certain products.
Another crisis was triggered in regard to proposals for the financing of the Common Agricultural Policy, which came into force in 1962. The transitional period whereby decisions were made by unanimity had come to an end, and majority-voting in the Council had taken effect. Then-French President Charles de Gaulle's opposition to supranationalism and fear of the other members challenging the CAP led to an "empty chair policy" whereby French representatives were withdrawn from the European institutions until the French veto was reinstated. Eventually, a resolution was reached with the Luxembourg compromise on 29 January 1966, whereby a gentlemen's agreement permitted members to use a veto on areas of national interest. On 1 July 1967 the Merger Treaty came into operation, combining the institutions of the ECSC and Euratom into those of the EEC; they already shared a Parliamentary Assembly and Court of Justice. Collectively they were known as the "European Communities". The Communities still had independent personalities although were increasingly integrated. Future treaties granted the Community new powers beyond simple economic matters, in which a high level of integration had been achieved, moving it closer to the goal of political integration and a peaceful and united Europe: what Mikhail Gorbachev described as a "Common European Home".
The 1960s saw the first attempts at enlargement. In 1961, Denmark, Ireland and the United Kingdom applied to join the three Communities, followed by Norway in 1962. However, President Charles de Gaulle saw British membership as a Trojan horse for U.S. influence and vetoed membership, and the applications of all four countries were suspended. Greece became the first country to join the EC, in 1961, as an associate member; however, its membership was suspended in 1967 after the Colonels' coup d'état. A year later, in February 1962, Spain attempted to join the European Communities. However, because Francoist Spain was not a democracy, all members rejected the request in 1964. The four countries resubmitted their applications on 11 May 1967, and with Georges Pompidou succeeding Charles de Gaulle as French president in 1969, the veto was lifted.
Negotiations began in 1970 under the pro-European UK government of Edward Heath, who had to deal with disagreements relating to the Common Agricultural Policy and the UK's relationship with the Commonwealth of Nations. Nevertheless, two years later the accession treaties were signed so that Denmark, Ireland and the UK joined the Community effective 1 January 1973. The Norwegian people, however, rejected membership in a referendum on 25 September 1972.
The Treaties of Rome had stated that the European Parliament must be directly elected, however this required the Council to agree on a common voting system first. The Council procrastinated on the issue and the Parliament remained appointed; French President Charles de Gaulle was particularly active in blocking the development of the Parliament, and it was only granted budgetary powers following his resignation. Parliament pressed for agreement, and on 20 September 1976 the Council agreed part of the necessary instruments for election, deferring details on electoral systems, which remain varied to this day. During the tenure of President Jenkins, in June 1979, the elections were held in all the then member states (see 1979 European Parliament election). The new Parliament, galvanised by direct election and new powers, started working full-time and became more active than the previous assemblies.
Shortly after its election, the Parliament proposed that the Community adopt the flag of Europe design used by the Council of Europe. The European Council in 1984 appointed an "ad hoc" committee for this purpose. The European Council in 1985 largely followed the Committee's recommendations, but as the adoption of a flag was strongly reminiscent of a national flag representing statehood and was therefore controversial, the "flag of Europe" design was adopted only with the status of a "logo" or "emblem". The European Council, or European summit, had developed since the 1960s as an informal meeting of the Council at the level of heads of state. It had originated from then-French President Charles de Gaulle's resentment at the domination of supranational institutions (e.g. the Commission) over the integration process. It was mentioned in the treaties for the first time in the Single European Act (see below).
Greece re-applied to join the community on 12 June 1975, following the restoration of democracy, and joined on 1 January 1981. Following on from Greece, and after their own democratic restoration, Spain and Portugal applied to the communities in 1977 and joined together on 1 January 1986. In 1987 Turkey formally applied to join the Community and began the longest application process for any country. With the prospect of further enlargement, and a desire to increase areas of co-operation, the Single European Act was signed by the foreign ministers on 17 and 28 February 1986 in Luxembourg and The Hague respectively. In a single document it dealt with reform of institutions, extension of powers, foreign policy cooperation and the single market. It came into force on 1 July 1987. The act was followed by work on what would be the Maastricht Treaty, which was agreed on 10 December 1991, signed the following year and came into force on 1 November 1993, establishing the European Union and paving the way for the European Monetary Union. The EU absorbed the European Communities as one of its three pillars. The EEC's areas of activity were enlarged and it was renamed the "European Community", continuing to follow the supranational structure of the EEC.
The EEC institutions became those of the EU; however, the Court, Parliament and Commission had only limited input in the new pillars, as these worked on a more intergovernmental system than the European Communities. This was reflected in the names of the institutions: the Council was formally the "Council of the European Union" while the Commission was formally the "Commission of the European Communities". However, after the Treaty of Maastricht, Parliament gained a much bigger role. Maastricht brought in the codecision procedure, which gave it equal legislative power with the Council on Community matters. Hence, with the greater powers of the supranational institutions and the operation of Qualified Majority Voting in the Council, the Community pillar could be described as a far more federal method of decision making.
The Treaty of Amsterdam transferred responsibility for free movement of persons (e.g., visas, illegal immigration, asylum) from the Justice and Home Affairs (JHA) pillar to the European Community (JHA was renamed Police and Judicial Co-operation in Criminal Matters (PJCC) as a result). Both Amsterdam and the Treaty of Nice also extended the codecision procedure to nearly all policy areas, giving Parliament equal power to the Council in the Community. In 2002, the Treaty of Paris, which established the ECSC, expired, having reached its 50-year limit (as the first treaty, it was the only one with a limit). No attempt was made to renew its mandate; instead, the Treaty of Nice transferred certain of its elements to the Treaty of Rome and hence its work continued as part of the European Community's remit. After the entry into force of the Treaty of Lisbon in 2009 the pillar structure ceased to exist. The European Community, together with its legal personality, was absorbed into the newly consolidated European Union, which also took in the other two pillars (however, Euratom remained distinct). This was originally proposed under the European Constitution but that treaty failed ratification in 2005.
The main aim of the EEC, as stated in its preamble, was to "preserve peace and liberty and to lay the foundations of an ever closer union among the peoples of Europe". Calling for balanced economic growth, this was to be accomplished through, among other things, the establishment of a customs union and a common market. For the customs union, the treaty provided for a 10% reduction in customs duties and up to 20% of global import quotas. Progress on the customs union proceeded much faster than the twelve years planned. However, France faced some setbacks due to its war in Algeria.
The six states that founded the EEC and the other two Communities were known as the "inner six" (the "outer seven" were those countries who formed the European Free Trade Association). The six were France, West Germany, Italy and the three Benelux countries: Belgium, the Netherlands and Luxembourg. The first enlargement was in 1973, with the accession of Denmark, Ireland and the United Kingdom. Greece, Spain and Portugal joined in the 1980s. The former East Germany became part of the EEC upon German reunification in 1990. Following the creation of the EU in 1993, it enlarged to include an additional sixteen countries by 2013. Member states are represented in some form in each institution. The Council is composed of one national minister from each state, who represents their national government. Each state also has the right to one European Commissioner, although in the European Commission they are not supposed to represent their national interest but that of the Community.
Prior to 2004, the larger members (France, Germany, Italy and the United Kingdom) had two Commissioners each. In the European Parliament, members are allocated a set number of seats related to their population, however these (since 1979) have been directly elected and they sit according to political allegiance, not national origin. Most other institutions, including the European Court of Justice, have some form of national division of their members.
There were three political institutions which held the executive and legislative power of the EEC, plus one judicial institution and a fifth body created in 1975. These institutions (except for the auditors) were created in 1957 by the EEC but from 1967 onwards they applied to all three Communities. The Council represents governments, the Parliament represents citizens and the Commission represents the European interest. Essentially, the Council, Parliament or another party places a request for legislation to the Commission. The Commission then drafts this and presents it to the Council for approval and the Parliament for an opinion (in some cases it had a veto, depending upon the legislative procedure in use). The Commission's duty is to ensure it is implemented by dealing with the day-to-day running of the Union and taking others to Court if they fail to comply. After the Maastricht Treaty in 1993, these institutions became those of the European Union, though limited in some areas due to the pillar structure. Despite this, Parliament in particular has gained more power over legislation and scrutiny of the Commission. The Court was the highest authority in the law, settling legal disputes in the Community, while the Auditors had no power but to investigate.
The EEC inherited some of the institutions of the ECSC in that the Common Assembly and Court of Justice of the ECSC had their authority extended to the EEC and Euratom in the same role. However the EEC, and Euratom, had different executive bodies to the ECSC. In place of the ECSC's Council of Ministers was the Council of the European Economic Community, and in place of the High Authority was the Commission of the European Communities. There was more than a difference of name between these bodies: the French government of the day had grown suspicious of the supranational power of the High Authority and sought to curb its powers in favour of the intergovernmental-style Council. Hence the Council had a greater executive role in the running of the EEC than was the situation in the ECSC. By virtue of the Merger Treaty in 1967, the executives of the ECSC and Euratom were merged with that of the EEC, creating a single institutional structure governing the three separate Communities. From here on, the term "European Communities" was used for the institutions (for example, from "Commission of the European Economic Community" to the "Commission of the European Communities").
The Council of the European Communities was a body holding legislative and executive powers and was thus the main decision-making body of the Community. Its Presidency rotated between the member states every six months, and it was related to the European Council, which was an informal gathering of national leaders (started in 1961) on the same basis as the Council. The Council was composed of one national minister from each member state. However, the Council met in various forms depending upon the topic. For example, if agriculture was being discussed, the Council would be composed of each national minister for agriculture.
They represented their governments and were accountable to their national political systems. Votes were taken either by majority (with votes allocated according to population) or unanimity. In these various forms they shared some legislative and budgetary power with the Parliament. Since the 1960s the Council also began to meet informally at the level of national leaders; these European summits followed the same presidency system and secretariat as the Council but were not a formal formation of it.
The Commission of the European Communities was the executive arm of the community, drafting Community law, dealing with the day-to-day running of the Community and upholding the treaties. It was designed to be independent, representing the Community interest, but was composed of national representatives (two from each of the larger states, one from the smaller states). One of its members was the President, appointed by the Council, who chaired the body and represented it.
Under the Community, the European Parliament (formerly the European Parliamentary Assembly) had an advisory role to the Council and Commission. There were a number of Community legislative procedures; at first there was only the consultation procedure, which meant Parliament had to be consulted, although it was often ignored. The Single European Act gave Parliament more power, with the assent procedure giving it a right to veto proposals and the cooperation procedure giving it equal power with the Council if the Council was not unanimous. In 1970 and 1975, the Budgetary treaties gave Parliament power over the Community budget. The Parliament's members, up until the first direct elections in 1979, were national MPs serving part-time in the Parliament. The Treaties of Rome had required elections to be held once the Council had decided on a voting system, but this did not happen and elections were delayed until 1979 (see 1979 European Parliament election). After that, Parliament was elected every five years. In the following 20 years, it gradually won co-decision powers with the Council over the adoption of legislation, the right to approve or reject the appointment of the Commission President and the Commission as a whole, and the right to approve or reject international agreements entered into by the Community.
The Court of Justice of the European Communities was the highest court on matters of Community law and was composed of one judge per state, with a president elected from among them. Its role was to ensure that Community law was applied in the same way across all states and to settle legal disputes between institutions or states. It became a powerful institution as Community law overrides national law.
The fifth institution was the "European Court of Auditors", which despite its name had no judicial powers, unlike the Court of Justice. Instead, it ensured that taxpayer funds from the Community budget had been correctly spent. The court provided an audit report for each financial year to the Council and Parliament and gave opinions and proposals on financial legislation and anti-fraud actions. It was the only institution not mentioned in the original treaties, having been set up in 1975. At the time of its abolition, the European Community pillar covered a broad range of policy areas.
https://en.wikipedia.org/wiki?curid=9578
European Free Trade Association The European Free Trade Association (EFTA) is a regional trade organization and free trade area consisting of four European states: Iceland, Liechtenstein, Norway, and Switzerland. The organization operates in parallel with the European Union (EU), and all four member states participate in the European Single Market and are part of the Schengen Area. They are not, however, party to the European Union Customs Union. EFTA was historically one of the two dominant western European trade blocs, but is now much smaller and closely associated with its historical competitor, the European Union. It was established on 3 May 1960 to serve as an alternative trade bloc for those European states that were unable or unwilling to join the then European Economic Community (EEC), which subsequently became the European Union. The Stockholm Convention (1960), to establish the EFTA, was signed on 4 January 1960 in the Swedish capital by seven countries (known as the "outer seven": Austria, Denmark, Norway, Portugal, Sweden, Switzerland and the United Kingdom). A revised Convention, the Vaduz Convention, was signed on 21 June 2001 and entered into force on 1 June 2002. Since 1995, only two founding members remain, namely Norway and Switzerland. The other five, Austria, Denmark, Portugal, Sweden and the United Kingdom, joined the EU at some point in the intervening years. The initial Stockholm Convention was superseded by the Vaduz Convention, which aimed to provide a successful framework for continuing the expansion and liberalization of trade, both among the organization's member states and with the rest of the world. Whilst the EFTA is not a customs union and member states have full rights to enter into bilateral third-country trade arrangements, it does have a coordinated trade policy. As a result, its member states have jointly concluded free trade agreements with the EU and a number of other countries. To participate in the EU's single market, Iceland, Liechtenstein, and Norway are parties to the Agreement on a European Economic Area (EEA), with compliances regulated by the EFTA Surveillance Authority and the EFTA Court. Switzerland has a set of bilateral agreements with the EU instead. On 12 January 1960, the Treaty on the European Free Trade Association was initiated in the Golden Hall of the Stockholm City Hall. This established the progressive elimination of customs duties on industrial products, but did not affect agricultural or fisheries products. The main difference between the early EEC and the EFTA was that the latter did not operate common external customs tariffs unlike the former: each EFTA member was free to establish its individual customs duties against, or its individual free trade agreements with, non-EFTA countries. The founding members of the EFTA were: Austria, Denmark, Norway, Portugal, Sweden, Switzerland, and the United Kingdom. During the 1960s, these countries were often referred to as the "Outer Seven", as opposed to the Inner Six of the then European Economic Community (EEC). Finland became an associate member in 1961 and a full member in 1986, and Iceland joined in 1970. The United Kingdom and Denmark joined the EEC in 1973 and hence ceased to be EFTA members. Portugal also left EFTA for the European Community in 1986. Liechtenstein joined the EFTA in 1991 (previously its interests had been represented by Switzerland). Austria, Sweden, and Finland joined the EU in 1995 and thus ceased to be EFTA members. 
Twice, in 1973 and in 1995, the Norwegian government had tried to join the EU (still the EEC in 1973) and, by doing so, leave the EFTA. However, on both occasions EU membership was rejected in national referenda, keeping Norway in the EFTA. Iceland applied for EU membership in 2009 due to the 2008–2011 Icelandic financial crisis, but has since dropped its bid.
Between 1994 and 2011, EFTA memberships for Andorra, San Marino, Monaco, the Isle of Man, Turkey, Israel, Morocco, and other European Neighbourhood Policy partners were discussed. In November 2012, after the Council of the European Union had called for an evaluation of the EU's relations with Monaco, Andorra and San Marino, which they described as "fragmented", the European Commission published a report outlining the options for their further integration into the EU. Unlike Liechtenstein, which is a member of the EEA via the EFTA and the Schengen Agreement, these three states' relations with the EU are based on a collection of agreements covering specific issues. The report examined four alternatives to the current situation. The Commission argued, however, that the sectoral approach did not address the major issues and was still needlessly complicated, while EU membership was dismissed in the near future because "the EU institutions are currently not adapted to the accession of such small-sized countries". The remaining options, EEA membership and an FAA with the states, were found to be viable and were recommended by the Commission. In response, the Council requested that negotiations with the three microstates on further integration continue, and that a report be prepared by the end of 2013 detailing the implications of the two viable alternatives and recommendations on how to proceed.
As EEA membership is currently only open to EFTA or EU members, the consent of the existing EFTA member states is required for the microstates to join the EEA without becoming members of the EU. In 2011, Jonas Gahr Støre, then Foreign Minister of Norway, an EFTA member state, said that EFTA/EEA membership for the microstates was not the appropriate mechanism for their integration into the internal market due to their different requirements from those of large countries such as Norway, and suggested that a simplified association would be better suited for them. Espen Barth Eide, Støre's successor, responded to the Commission's report in late 2012 by questioning whether the microstates have sufficient administrative capabilities to meet the obligations of EEA membership. However, he stated that Norway was open to the possibility of EFTA membership for the microstates if they decide to submit an application, and that the country had not made a final decision on the matter. Pascal Schafhauser, the Counsellor of the Liechtenstein Mission to the EU, said that Liechtenstein, another EFTA member state, was willing to discuss EEA membership for the microstates provided their joining did not impede the functioning of the organization. However, he suggested that the option of direct membership in the EEA for the microstates, outside of both the EFTA and the EU, should be considered. On 18 November 2013, the EU Commission concluded that "the participation of the small-sized countries in the EEA is not judged to be a viable option at present due to the political and institutional reasons," and that Association Agreements were a more feasible mechanism to integrate the microstates into the internal market.
The Norwegian electorate had rejected treaties of accession to the EU in two referendums. At the time of the first referendum, in 1972, their neighbour Denmark joined. Since the second referendum in 1994, two other Nordic neighbours, Sweden and Finland, have joined the EU. The last two governments of Norway have not advanced the question, as they have both been coalition governments consisting of proponents and opponents of EU membership. Since Switzerland rejected EEA membership in a referendum in 1992, further referendums on EU membership have been initiated, the last time being in 2001. These were all rejected. Switzerland has been in a customs union with fellow EFTA member state and neighbour Liechtenstein since 1924.
On 16 July 2009, the government of Iceland formally applied for EU membership, but the negotiation process had been suspended since mid-2013, and in 2015 the foreign minister wrote to withdraw the application. In mid-2005, representatives of the Faroe Islands raised the possibility of their territory joining the EFTA. According to Article 56 of the EFTA Convention, only states may become members of the EFTA. The Faroes are a constituent country of the Kingdom of Denmark, and not a sovereign state in their own right. Consequently, they considered the possibility that the "Kingdom of Denmark in respect of the Faroes" could join the EFTA, though the Danish Government has stated that this mechanism would not allow the Faroes to become a separate member of the EEA because Denmark was already a party to the EEA Agreement. The Faroes already have an extensive bilateral free trade agreement with Iceland, known as the Hoyvík Agreement.
The United Kingdom was a co-founder of EFTA in 1960, but ceased to be a member upon joining the European Economic Community. The country held a referendum in 2016 on withdrawing from the EU (popularly referred to as "Brexit"), resulting in a 51.9% vote in favour of withdrawing. A 2013 research paper presented to the Parliament of the United Kingdom proposed a number of alternatives to EU membership which would continue to allow it access to the EU's internal market, including continuing EEA membership as an EFTA member state, or the Swiss model of a number of bilateral treaties covering the provisions of the single market. At its first meeting after the Brexit vote, EFTA reacted by saying both that it was open to a UK return and that Britain had many issues to work through. The president of Switzerland, Johann Schneider-Ammann, stated that its return would strengthen the association. However, in August 2016 the Norwegian Government expressed reservations. Norway's European affairs minister, Elisabeth Vik Aspaker, told the "Aftenposten" newspaper: "It’s not certain that it would be a good idea to let a big country into this organization. It would shift the balance, which is not necessarily in Norway’s interests." In late 2016, the Scottish First Minister said that her priority was to keep the whole of the UK in the European single market but that taking Scotland alone into the EEA was an option being "looked at". However, other EFTA states have stated that only sovereign states are eligible for membership, so it could only join if it became independent from the UK, unless the solution scouted for the Faroes in 2005 were to be adopted (see above). In early 2018, British MPs Antoinette Sandbach, Stephen Kinnock and Stephen Hammond all called for the UK to rejoin EFTA. EFTA is governed by the EFTA Council and serviced by the EFTA Secretariat.
In addition, in connection with the EEA Agreement of 1992, two other EFTA organisations were established, the EFTA Surveillance Authority and the EFTA Court. The EFTA Council is the highest governing body of EFTA. The Council usually meets eight times a year at the ambassadorial level (heads of permanent delegations to EFTA) and twice a year at Ministerial level. In the Council meetings, the delegations consult with one another, negotiate and decide on policy issues regarding EFTA. Each Member State is represented and has one vote, though decisions are usually reached through consensus. The Council discusses substantive matters, especially relating to the development of EFTA relations with third countries and the management of free trade agreements, and keeps under general review relations with the EU third-country policy and administration. It has a broad mandate to consider possible policies to promote the overall objectives of the Association and to facilitate the development of links with other states, unions of states or international organisations. The Council also manages relations between the EFTA States under the EFTA Convention. Questions relating to the EEA are dealt with by the Standing Committee in Brussels. The day-to-day running of the Secretariat is headed by the Secretary-General, Henri Gétaz, who is assisted by two Deputy Secretaries-General, one based in Geneva and the other in Brussels. The three posts are shared between the Member States. The division of the Secretariat reflects the division of EFTA's activities. The Secretariat employs approximately 100 staff members, of whom a third are based in Geneva and two thirds in Brussels and Luxembourg. The Headquarters in Geneva deals with the management and negotiation of free trade agreements with non-EU countries, and provides support to the EFTA Council. In Brussels, the Secretariat provides support for the management of the EEA Agreement and assists the Member States in the preparation of new legislation for integration into the EEA Agreement. The Secretariat also assists the Member States in the elaboration of input to EU decision making. The two duty stations work together closely to implement the Vaduz Convention's stipulations on the intra-EFTA Free Trade Area. The EFTA Statistical Office in Luxembourg contributes to the development of a broad and integrated European Statistical System. The EFTA Statistical Office (ESO) is located in the premises of Eurostat, the Statistical Office of the European Union in Luxembourg, and functions as a liaison office between Eurostat and the EFTA National Statistical Institutes. ESO's main objective is to promote the full inclusion of the EFTA States in the European Statistical System, thus providing harmonised and comparable statistics to support the general cooperation process between EFTA and the EU within and outside the EEA Agreement. The cooperation also entails technical cooperation programmes with third countries and training of European statisticians. The EFTA Secretariat is headquartered in Geneva, Switzerland, but also has duty stations in Brussels, Belgium and Luxembourg City, Luxembourg. The EFTA Surveillance Authority has its headquarters in Brussels, Belgium (the same location as the headquarters of the European Commission), while the EFTA Court has its headquarters in Luxembourg City (the same location as the headquarters of the European Court of Justice). In 1992, the EFTA and the EU signed the European Economic Area Agreement in Oporto, Portugal. 
However, the proposal that Switzerland ratify its participation was rejected by referendum. (Nevertheless, Switzerland has multiple bilateral treaties with the EU that allow it to participate in the European Single Market, the Schengen Agreement and other programmes). Thus, except for Switzerland, the EFTA members are also members of the European Economic Area (EEA). The EEA comprises three member states of the European Free Trade Association (EFTA) and 28 member states of the European Union (EU), including Croatia which is provisionally applying the agreement pending its ratification by all EEA countries. It was established on 1 January 1994 following an agreement with the European Community (which had become the EU two months earlier). It allows the EFTA-EEA states to participate in the EU's Internal Market without being members of the EU. They adopt almost all EU legislation related to the single market, except laws on agriculture and fisheries. However, they also contribute to and influence the formation of new EEA relevant policies and legislation at an early stage as part of a formal decision-shaping process. One EFTA member, Switzerland, has not joined the EEA but has a series of bilateral agreements, including a free trade agreement, with the EU. The following table summarises the various components of EU laws applied in the EFTA countries and their sovereign territories. Some territories of EU member states also have a special status in regard to EU laws applied as is the case with some European microstates.
A Joint Committee consisting of the EEA-EFTA States plus the European Commission (representing the EU) has the function of extending relevant EU law to the non-EU members. An EEA Council meets twice yearly to govern the overall relationship between the EEA members. Rather than setting up pan-EEA institutions, the activities of the EEA are regulated by the EFTA Surveillance Authority and the EFTA Court. The EFTA Surveillance Authority and the EFTA Court regulate the activities of the EFTA members in respect of their obligations in the European Economic Area (EEA). Since Switzerland is not an EEA member, it does not participate in these institutions. The EFTA Surveillance Authority performs a role for EFTA members that is equivalent to that of the European Commission for the EU, as "guardian of the treaties", and the EFTA Court performs the European Court of Justice-equivalent role. The original plan for the EEA lacked the EFTA Court or the EFTA Surveillance Authority: the European Court of Justice and the European Commission were to exercise those roles. However, during the negotiations for the EEA agreement, the European Court of Justice informed the Council of the European Union by way of letter that it considered that it would be a violation of the treaties to give to the EU institutions these powers with respect to non-EU member states. Therefore, the current arrangement was developed instead.
The EEA and Norway Grants are the financial contributions of Iceland, Liechtenstein and Norway to reduce social and economic disparities in Europe. They were established in conjunction with the 2004 enlargement of the European Economic Area (EEA), which brought together the EU, Iceland, Liechtenstein and Norway in the Internal Market. In the period from 2004 to 2009, €1.3 billion was made available for project funding in the 15 beneficiary states in Central and Southern Europe.
The EEA and Norway Grants are administered by the Financial Mechanism Office, which is affiliated to the EFTA Secretariat in Brussels. EFTA also originated the Hallmarking Convention and the Pharmaceutical Inspection Convention, both of which are open to non-EFTA states. EFTA has several free trade agreements with non-EU countries as well as declarations on cooperation and joint workgroups to improve trade. Currently, the EFTA States have established preferential trade relations with 24 states and territories, in addition to the 28 member states of the European Union. EFTA's interactive Free Trade Map gives an overview of the partners worldwide. EFTA member states' citizens enjoy freedom of movement in each other's territories in accordance with the EFTA convention. EFTA nationals also enjoy freedom of movement in the European Union (EU). EFTA nationals and EU citizens are not only visa-exempt but are legally entitled to enter and reside in each other's countries. The Citizens’ Rights Directive (also sometimes called the "Free Movement Directive") defines the right of free movement for citizens of the European Economic Area (EEA), which includes the three EFTA members Iceland, Norway and Liechtenstein plus the member states of the EU. Switzerland, which is a member of EFTA but not of the EEA, is not bound by the Directive but rather has a separate bilateral agreement on free movement with the EU. As a result a citizen of an EFTA country can live and work in all the other EFTA countries and in all the EU countries, and a citizen of an EU country can live and work in all the EFTA countries (but for voting and working in sensitive fields, such as government / police / military, citizenship is often required, and non-citizens may not have the same rights to welfare and unemployment benefits as citizens). Since each EFTA and EU country can make its own citizenship laws, dual citizenship is not always possible. Of the EFTA countries, Iceland, Norway and Switzerland allow it (in Switzerland, the conditions for the naturalization of immigrants vary regionally), while Liechtenstein allows it only for citizens by descent, but not for foreigners wanting to naturalize. Some non-EFTA/non-EU countries do not allow dual citizenship either, so immigrants wanting to naturalize must sometimes renounce their old citizenship. See also multiple citizenship and the nationality laws of the countries in question for more details. The Portugal Fund came into operation in February 1977 when Portugal was still a member of EFTA. It was to provide funding for the development of Portugal after the Carnation Revolution and the consequential restoration of democracy and the decolonization of the country's overseas possessions. This followed a period of economic sanctions by most of the international community, which left Portugal economically underdeveloped compared to the rest of the western Europe. When Portugal left EFTA in 1985 in order to join the EEC, the remaining EFTA members decided to continue the Portugal Fund so that Portugal would continue to benefit from it. The Fund originally took the form of a low-interest loan from the EFTA member states to the value of US$100 million. Repayment was originally to commence in 1988, however, EFTA then decided to postpone the start of repayments until 1998. The Portugal Fund was dissolved in January 2002.
https://en.wikipedia.org/wiki?curid=9580
European Parliament The European Parliament (EP) is the legislative branch of the European Union and one of its seven institutions. Together with the Council of the European Union, it adopts European legislation, normally on a proposal from the European Commission. The Parliament is composed of 705 members (MEPs). The Parliament represents the second-largest democratic electorate in the world (after the Parliament of India) and the largest trans-national democratic electorate in the world (375 million eligible voters in 2009). Since 1979, it has been directly elected every five years by European Union citizens, using universal suffrage. Voter turnout for parliamentary elections has decreased at each election after 1979 until 2019, when the voter turnout increased by 8 percentage points and went above 50% for the first time since 1994. Voting age is 18 in all member states except Malta and Austria, where it is 16, and Greece, where it is 17. Although the European Parliament has legislative power, as does the Council, it does not formally possess legislative initiative (which is the prerogative of the European Commission), as most national parliaments of European Union member states do. The Parliament is the "first institution" of the EU (mentioned first in the treaties, having ceremonial precedence over all authority at the European level), and shares equal legislative and budgetary powers with the Council (except in a few areas where the special legislative procedures apply). It likewise has equal control over the EU budget. Finally, the European Commission, the executive body of the EU (it exercises executive powers, but no legislative ones other than legislative initiative), is accountable to Parliament. In particular, Parliament can decide whether or not to approve the European Council's nominee for the President of the Commission, and it is further tasked with approving (or rejecting) the appointment of the Commission as a whole. It can subsequently force the Commission as a body to resign by adopting a motion of censure. The President of the European Parliament (Parliament's speaker) is David Sassoli (PD), elected in July 2019. He presides over a multi-party chamber, the five largest groups being the European People's Party group (EPP), the Progressive Alliance of Socialists and Democrats (S&D), Renew Europe (previously ALDE), the Greens/European Free Alliance (Greens–EFA) and Identity and Democracy (ID). The last union-wide elections were the 2019 elections. The European Parliament's headquarters are in Strasbourg (France). Luxembourg City (Luxembourg) is home to the administrative offices (the "General Secretariat"). Meetings of the whole Parliament ("plenary sessions") take place in Strasbourg and in Brussels (Belgium). Committee meetings are held in Brussels. The Parliament, like the other institutions, was not designed in its current form when it first met on 10 September 1952. One of the oldest common institutions, it began as the "Common Assembly" of the European Coal and Steel Community (ECSC). It was a consultative assembly of 78 appointed parliamentarians drawn from the national parliaments of member states, having no legislative powers. The change since its foundation was highlighted by Professor David Farrell of the University of Manchester: "For much of its life, the European Parliament could have been justly labelled a 'multi-lingual talking shop'." Its development since its foundation shows how the European Union's structures have evolved without a clear ‘master plan’. 
Some, such as Tom Reid of the "Washington Post", said of the union: "nobody would have deliberately designed a government as complex and as redundant as the EU". Even the Parliament's two seats, which have switched several times, are a result of various agreements or lack of agreements. Although most MEPs would prefer to be based just in Brussels, at John Major's 1992 Edinburgh summit, France engineered a treaty amendment to maintain Parliament's plenary seat permanently at Strasbourg.
The body was not mentioned in the original Schuman Declaration. It was assumed or hoped that difficulties with the British would be resolved to allow the Parliamentary Assembly of the Council of Europe to perform the task. A separate Assembly was introduced during negotiations on the Treaty as an institution which would counterbalance and monitor the executive while providing democratic legitimacy. The wording of the ECSC Treaty demonstrated the leaders' desire for more than a normal consultative assembly by using the term "representatives of the people" and allowed for direct election. Its early importance was highlighted when the Assembly was given the task of drawing up the draft treaty to establish a European Political Community. To this end, the Ad Hoc Assembly was established on 13 September 1952 with extra members, but after the failure of the proposed European Defence Community the project was dropped.
Despite this, the European Economic Community and Euratom were established in 1958 by the Treaties of Rome. The Common Assembly was shared by all three communities (which had separate executives) and it renamed itself the "European Parliamentary Assembly". The first meeting was held on 19 March 1958 in Luxembourg City, where the body had been set up; it elected Schuman as its president, and on 13 May it rearranged itself to sit according to political ideology rather than nationality. This is seen as the birth of the modern European Parliament, with Parliament's 50-year celebrations being held in March 2008 rather than 2002. The body's name was changed to the current "European Parliament" in 1962, and in 1967 the three communities merged their remaining organs as the European Communities. In 1970 the Parliament was granted power over areas of the Communities' budget, which were expanded to the whole budget in 1975. Under the Rome Treaties, the Parliament was to have become an elected body. However, the Council was required to agree a uniform voting system beforehand, which it failed to do. The Parliament threatened to take the Council to the European Court of Justice; this led to a compromise whereby the Council would agree to elections, but the issue of voting systems would be put off until a later date.
In 1979, its members were directly elected for the first time. This sets it apart from similar institutions such as the Parliamentary Assembly of the Council of Europe or the Pan-African Parliament, which are appointed. After that first election, the parliament held its first session on 11 July 1979, electing Simone Veil MEP as its president. Veil was also the first female president of the Parliament since it was formed as the Common Assembly. As an elected body, the Parliament began to draft proposals addressing the functioning of the EU. For example, in 1984, inspired by its previous work on the Political Community, it drafted the "draft Treaty establishing the European Union" (also known as the 'Spinelli Plan' after its rapporteur Altiero Spinelli MEP).
Although it was not adopted, many of its ideas were later implemented by other treaties. Furthermore, the Parliament began holding votes on proposed Commission Presidents from the 1980s, before it was given any formal right to veto. Since it became an elected body, the membership of the European Parliament has simply expanded whenever new nations have joined (the membership was also adjusted upwards in 1994 after German reunification). Following this, the Treaty of Nice imposed a cap on the number of members to be elected: 732.
Like the other institutions, the Parliament's seat was not yet fixed. The provisional arrangements placed Parliament in Strasbourg, while the Commission and Council had their seats in Brussels. In 1985 the Parliament, wishing to be closer to these institutions, built a second chamber in Brussels and moved some of its work there despite protests from some states. A final agreement was eventually reached by the European Council in 1992. It stated the Parliament would retain its formal seat in Strasbourg, where twelve sessions a year would be held, but with all other parliamentary activity in Brussels. This two-seat arrangement was contested by the Parliament, but was later enshrined in the Treaty of Amsterdam. To this day the institution's locations are a source of contention.
The Parliament gained more powers from successive treaties, namely through the extension of the ordinary legislative procedure (then called the codecision procedure), and in 1999, the Parliament forced the resignation of the Santer Commission. The Parliament had refused to approve the Community budget over allegations of fraud and mis-management in the Commission. The two main parties took on a government-opposition dynamic for the first time during the crisis, which ended with the Commission resigning en masse, the first forced resignation of a Commission, in the face of an impending censure from the Parliament.
In 2004, following the largest trans-national election in history, despite the European Council choosing a President from the largest political group (the EPP), the Parliament again exerted pressure on the Commission. During the Parliament's hearings of the proposed Commissioners, MEPs raised doubts about some nominees, with the Civil Liberties committee rejecting Rocco Buttiglione from the post of Commissioner for Justice, Freedom and Security over his views on homosexuality. That was the first time the Parliament had ever voted against an incoming Commissioner, and despite Barroso's insistence upon Buttiglione, the Parliament forced Buttiglione to be withdrawn. A number of other Commissioners also had to be withdrawn or reassigned before Parliament allowed the Barroso Commission to take office.
Along with the extension of the ordinary legislative procedure, the Parliament's democratic mandate has given it greater control over legislation against the other institutions. In voting on the Bolkestein directive in 2006, the Parliament voted by a large majority for over 400 amendments that changed the fundamental principle of the law, a shift whose significance was noted by the "Financial Times". In 2007, for the first time, Justice Commissioner Franco Frattini included Parliament in talks on the second Schengen Information System even though MEPs only needed to be consulted on parts of the package. After that experiment, Frattini indicated he would like to include Parliament in all justice and criminal matters, informally pre-empting the new powers it could gain as part of the Treaty of Lisbon.
Between 2007 and 2009, a special working group on parliamentary reform implemented a series of changes to modernise the institution, such as more speaking time for rapporteurs, increased committee co-operation and other efficiency reforms. The Lisbon Treaty finally came into force on 1 December 2009, granting Parliament powers over the entire EU budget, making Parliament's legislative powers equal to the Council's in nearly all areas and linking the appointment of the Commission President to Parliament's own elections. Despite some calls for the parties to put forward candidates beforehand, only the EPP (which had re-secured its position as the largest party) had one, in re-endorsing Barroso. Barroso gained the support of the European Council for a second term and secured majority support from the Parliament in September 2009. Parliament voted 382 in favour and 219 against, with 117 abstentions, with the support of the European People's Party, the European Conservatives and Reformists and the Alliance of Liberals and Democrats for Europe. The liberals gave support after Barroso gave them a number of concessions; the liberals had previously joined the socialists' call for a delayed vote (the EPP had wanted to approve Barroso in July of that year). Once Barroso put forward the candidates for his next Commission, another opportunity to gain concessions arose. Bulgarian nominee Rumiana Jeleva was forced to step down by Parliament due to concerns over her experience and financial interests. She had the support only of the EPP, which began to retaliate against left-wing candidates before Jeleva gave in and was replaced (setting back the final vote further). Before the final vote, Parliament demanded a number of concessions as part of a future working agreement under the new Lisbon Treaty. Under the deal, Parliament's President will attend high-level Commission meetings, and Parliament will have a seat in the EU's Commission-led international negotiations and a right to information on agreements; however, Parliament secured only an observer seat. Parliament also did not secure a say over the appointment of delegation heads and special representatives for foreign policy, although they will appear before Parliament after they have been appointed by the High Representative. One major internal demand was a pledge from the Commission that it would put forward legislation when Parliament requests it. Barroso considered this an infringement on the Commission's powers but did agree to respond within three months. Most such requests are already responded to positively. During the setting up of the European External Action Service (EEAS), Parliament used its control over the EU budget to influence the shape of the EEAS. MEPs had aimed at getting greater oversight over the EEAS by linking it to the Commission and having political deputies to the High Representative. MEPs did not manage to get everything they demanded, but they did secure broader financial control over the new body. In January 2019, Conservative MEPs supported proposals to boost opportunities for women and tackle sexual harassment in the European Parliament. The Parliament and Council have been compared to the two chambers of a bicameral legislature. However, there are some differences from national legislatures; for example, neither the Parliament nor the Council has the power of legislative initiative (except that the Council does in some intergovernmental matters). 
In Community matters, this is a power uniquely reserved for the European Commission (the executive). Therefore, while Parliament can amend and reject legislation, to make a proposal for legislation it needs the Commission to draft a bill before anything can become law. The value of such a power has been questioned on the grounds that, in the national legislatures of the member states, 85% of initiatives introduced without executive support fail to become law. Yet it has been argued by former Parliament president Hans-Gert Pöttering that as the Parliament does have the right to ask the Commission to draft such legislation, and as the Commission is following Parliament's proposals more and more, Parliament does have a "de facto" right of legislative initiative. The Parliament also has a great deal of indirect influence, through non-binding resolutions and committee hearings, as a "pan-European soapbox" with the ear of thousands of Brussels-based journalists. There is also an indirect effect on foreign policy; the Parliament must approve all development grants, including those overseas. For example, the support for post-war Iraq reconstruction, or incentives for the cessation of Iranian nuclear development, must be supported by the Parliament. Parliamentary support was also required for the transatlantic passenger data-sharing deal with the United States. Finally, Parliament holds a non-binding vote on new EU treaties but cannot veto them. However, when Parliament threatened to vote down the Nice Treaty, the Belgian and Italian Parliaments said they would veto the treaty on the European Parliament's behalf. With each new treaty, the powers of the Parliament, in terms of its role in the Union's legislative procedures, have expanded. The procedure which has slowly become dominant is the "ordinary legislative procedure" (previously named the "codecision procedure"), which places Parliament and the Council on an equal footing. Under the procedure, the Commission presents a proposal to Parliament and the Council, which can become law only if both agree on a text, which they do (or not) through successive readings up to a maximum of three. In its first reading, Parliament may send amendments to the Council, which can either adopt the text with those amendments or send back a "common position". Parliament may then approve that position, reject the text by an absolute majority (causing it to fail), or adopt further amendments, also by an absolute majority. If the Council does not approve these, then a "Conciliation Committee" is formed. The Committee is composed of the Council members plus an equal number of MEPs, who seek to agree a compromise. Once a position is agreed, it has to be approved by Parliament by a simple majority. This is also aided by Parliament's mandate as the only directly democratic institution, which has given it leeway to have greater control over legislation than other institutions, for example over its changes to the Bolkestein directive in 2006. The few other areas that operate under the "special legislative procedures" are justice and home affairs, budget and taxation, and certain aspects of other policy areas, such as the fiscal aspects of environmental policy. In these areas, the Council or the Parliament decides on legislation alone. The procedure also depends upon which type of institutional act is being used. The strongest act is a regulation, an act or law which is directly applicable in its entirety. 
Then there are directives, which bind member states to certain goals which they must achieve. They do this through their own laws and hence have room to manoeuvre in deciding upon them. A decision is an instrument which is focused at a particular person or group and is directly applicable. Institutions may also issue recommendations and opinions, which are merely non-binding declarations. There is a further document which does not follow normal procedures: the "written declaration", which is similar to an early day motion used in the Westminster system. It is a document proposed by up to five MEPs on a matter within the EU's activities, used to launch a debate on that subject. Once posted outside the entrance to the hemicycle, the declaration can be signed by members, and if a majority do so it is forwarded to the President and announced to the plenary before being forwarded to the other institutions and formally noted in the minutes. The legislative branch officially holds the Union's budgetary authority, with powers gained through the Budgetary Treaties of the 1970s and the Lisbon Treaty. The EU budget is subject to a form of the ordinary legislative procedure with a single reading giving Parliament power over the entire budget (before 2009, its influence was limited to certain areas) on an equal footing to the Council. If there is a disagreement between them, it is taken to a conciliation committee as it is for legislative proposals. If the joint conciliation text is not approved, the Parliament may adopt the budget definitively. The Parliament is also responsible for discharging the implementation of previous budgets based on the annual report of the European Court of Auditors. It has refused to approve the budget only twice, in 1984 and in 1998; on the latter occasion the refusal led to the resignation of the Santer Commission, highlighting how the budgetary power gives Parliament a great deal of power over the Commission. Parliament also makes extensive use of its budgetary and other powers elsewhere; for example, in the setting up of the European External Action Service, Parliament had a de facto veto over its design, as it had to approve the budgetary and staff changes. The President of the European Commission is proposed by the European Council on the basis of the European elections to Parliament. That proposal has to be approved by the Parliament (by a simple majority), which "elects" the President according to the treaties. Following the approval of the Commission President, the members of the Commission are proposed by the President in agreement with the member states. Each Commissioner comes before a relevant parliamentary committee hearing covering the proposed portfolio. They are then, as a body, approved or rejected by the Parliament. In practice, the Parliament has never voted against a President or his Commission, but it did seem likely when the Barroso Commission was put forward. The resulting pressure forced the proposal to be withdrawn and changed to be more acceptable to parliament. That pressure was seen as an important sign by some of the evolving nature of the Parliament and its ability to make the Commission accountable, rather than being a rubber stamp for candidates. Furthermore, in voting on the Commission, MEPs also voted along party lines, rather than national lines, despite frequent pressure from national governments on their MEPs. 
This cohesion and willingness to use the Parliament's power ensured greater attention from national leaders, other institutions and the public, whose turnout at the Parliament's elections had previously been the lowest ever. The Parliament also has the power to censure the Commission by a two-thirds majority, which forces the resignation of the entire Commission from office. As with approval, this power has never been used, but it was threatened against the Santer Commission, which subsequently resigned of its own accord. There are a few other controls, such as: the requirement for the Commission to submit reports to the Parliament and answer questions from MEPs; the requirement for the President-in-Office of the Council to present its programme at the start of the presidency; the obligation on the President of the European Council to report to Parliament after each of its meetings; the right of MEPs to make requests for legislation and policy to the Commission; and the right to question members of those institutions (e.g. "Commission Question Time" every Tuesday). At present, MEPs may ask a question on any topic whatsoever, but in July 2008 MEPs voted to limit questions to those within the EU's mandate and ban offensive or personal questions. The Parliament also has other powers of general supervision, mainly granted by the Maastricht Treaty. The Parliament has the power to set up a Committee of Inquiry, for example over mad cow disease or CIA detention flights; the former led to the creation of the European veterinary agency. The Parliament can call on other institutions to answer questions and, if necessary, take them to court if they break EU law or treaties. Furthermore, it has powers over the appointment of the members of the Court of Auditors and the president and executive board of the European Central Bank. The ECB president is also obliged to present an annual report to the parliament. The European Ombudsman, who deals with public complaints against all institutions, is elected by the Parliament. Petitions can also be brought forward by any EU citizen on a matter within the EU's sphere of activities. The Committee on Petitions hears cases, some 1500 each year, sometimes presented by the citizens themselves at the Parliament. While the Parliament attempts to resolve the issue as a mediator, it does resort to legal proceedings if necessary to resolve the citizen's dispute. The parliamentarians are known in English as Members of the European Parliament (MEPs). They are elected every five years by universal adult suffrage and sit according to political allegiance; about a third are women. Before 1979 they were appointed by their national parliaments. In 2017, an estimated 17 MEPs were not white. Of these, three were black; if the numbers were proportionate to the EU population, then 22 would be black. Under the Lisbon Treaty, seats are allocated to each state according to population and the maximum number of members is set at 751 (however, as the President cannot vote while in the chair, there will only be 750 voting members at any one time). Since 1 February 2020, 705 MEPs (including the President of the Parliament) sit in the European Parliament, as the United Kingdom is no longer represented following Brexit. Representation is currently limited to a maximum of 96 seats and a minimum of 6 seats per state, and the seats are distributed according to "degressive proportionality", i.e., the larger the state, the more citizens are represented per MEP. 
As a result, Maltese and Luxembourgish voters have roughly ten times more influence per voter than citizens of the six largest countries. Germany (80.9 million inhabitants) has 96 seats (previously 99), i.e. one seat for 843,000 inhabitants; Malta (0.4 million inhabitants) has 6 seats, i.e. one seat for 70,000 inhabitants. The new system implemented under the Lisbon Treaty, including revising seat allocations well before elections, was intended to avoid political horse-trading when the allocations have to be revised to reflect demographic changes. Pursuant to this apportionment, the constituencies are formed. In four EU member states (Belgium, Ireland, Italy and Poland), the national territory is divided into a number of constituencies. In the remaining member states, the whole country forms a single constituency. All member states hold elections to the European Parliament using various forms of proportional representation. Due to the delay in ratifying the Lisbon Treaty, the seventh parliament was elected under the lower Nice Treaty cap. A small-scale treaty amendment was ratified on 29 November 2011. This amendment brought in transitional provisions to allow the 18 additional MEPs created under the Lisbon Treaty to be elected or appointed before the 2014 election. Under the Lisbon Treaty reforms, Germany was the only state to lose members, from 99 to 96. However, these seats were not removed until the 2014 election. Before 2009, members received the same salary as members of their national parliament. However, from 2009 a new members' statute came into force, after years of attempts, which gave all members an equal monthly salary, of €8,484.05 each in 2016, subject to a European Union tax and which can also be taxed nationally. MEPs are entitled to a pension, paid by Parliament, from the age of 63. Members are also entitled to allowances for office costs and subsistence, and travelling expenses, based on actual cost. Besides their pay, members are granted a number of privileges and immunities. To ensure their free movement to and from the Parliament, they are accorded by their own states the facilities accorded to senior officials travelling abroad and, by other state governments, the status of visiting foreign representatives. When in their own state, they have all the immunities accorded to national parliamentarians, and, in other states, they have immunity from detention and legal proceedings. However, immunity cannot be claimed when a member is found committing a criminal offence, and the Parliament also has the right to strip a member of their immunity. MEPs in Parliament are organised into eight different parliamentary groups, alongside thirty non-attached members known as "non-inscrits". The two largest groups are the European People's Party (EPP) and the Socialists & Democrats (S&D). These two groups have dominated the Parliament for much of its life, continuously holding between 50 and 70 percent of the seats between them. No single group has ever held a majority in Parliament. As broad alliances of national parties, the European political groups are very decentralised and hence have more in common with parties in federal states like Germany or the United States than with parties in unitary states like the majority of the EU states. Nevertheless, the European groups were actually more cohesive than their US counterparts between 2004 and 2009. Groups are often based on a single European political party such as the European People's Party. 
However, they can, like the liberal group, include more than one European party as well as national parties and independents. For a group to be recognised, it needs 25 MEPs from seven different countries. Once recognised, groups receive financial subsidies from the parliament and guaranteed seats on committees, creating an incentive for the formation of groups. However, some controversy occurred with the establishment of the short-lived Identity, Tradition, Sovereignty (ITS) group due to its ideology; the members of the group were far-right, so there were concerns about public funds going towards such a group. There were attempts to change the rules to block the formation of ITS, but they never came to fruition. The group was, however, blocked from gaining leading positions on committees, which are traditionally (by agreement, not by rule) shared among all parties. When this group engaged in infighting, leading to the withdrawal of some members, its size fell below the threshold for recognition, causing its collapse. Given that the Parliament does not form the government in the traditional sense of a parliamentary system, its politics have developed along more consensual lines rather than the majority rule of competing parties and coalitions. Indeed, for much of its life it has been dominated by a grand coalition of the European People's Party and the Party of European Socialists. The two major parties tend to co-operate to find a compromise between their two groups, leading to proposals endorsed by huge majorities. However, this does not always produce agreement, and each may instead try to build other alliances, the EPP normally with other centre-right or right wing groups and the PES with centre-left or left wing groups. Sometimes, the Liberal Group is then in the pivotal position. There are also occasions where very sharp party political divisions have emerged, for example over the resignation of the Santer Commission. When the initial allegations against the Commission emerged, they were directed primarily against Édith Cresson and Manuel Marín, both socialist members. When the parliament was considering refusing to discharge the Community budget, President Jacques Santer stated that a no vote would be tantamount to a vote of no confidence. The Socialist group supported the Commission and saw the issue as an attempt by the EPP to discredit their party ahead of the 1999 elections. The Socialist leader, Pauline Green MEP, attempted a vote of confidence and the EPP put forward counter-motions. During this period the two parties took on similar roles to a government-opposition dynamic, with the Socialists supporting the executive and the EPP renouncing its previous coalition support and voting it down. Politicisation such as this has been increasing, as Simon Hix of the London School of Economics noted in 2007. During the fifth term, 1999 to 2004, there was a break in the grand coalition resulting in a centre-right coalition between the Liberal and People's parties. This was reflected in the Presidency of the Parliament, with the terms being shared between the EPP and the ELDR, rather than the EPP and the Socialists. In the following term the liberal group grew to hold 88 seats, the largest number of seats held by any third party in Parliament. Elections have taken place directly in every member state every five years since 1979; there have been nine elections to date. When a nation joins mid-term, a by-election will be held to elect its representatives. This has happened six times, most recently when Croatia joined in 2013. 
Elections take place across four days according to local custom and, apart from having to be proportional, the electoral system is chosen by the member state. This includes the allocation of sub-national constituencies; while most member states have a single national list, some, like the UK and Poland, divide their allocation between regions. Seats are allocated to member states according to their population; since 2014 no state has had more than 96 or fewer than 6 seats, to maintain proportionality. The most recent Union-wide elections to the European Parliament were the European elections of 2019, held from 23 to 26 May 2019. They were the largest simultaneous transnational elections ever held anywhere in the world. The first session of the ninth parliament started 2 July 2019. European political parties have the exclusive right to campaign during the European elections (as opposed to their corresponding EP groups). There have been a number of proposals designed to attract greater public attention to the elections. One such innovation in the 2014 elections was that the pan-European political parties fielded "candidates" for president of the Commission, the so-called "Spitzenkandidaten" (German, "leading candidates" or "top candidates"). However, European Union governance is based on a mixture of intergovernmental and supranational features: the President of the European Commission is nominated by the European Council, representing the governments of the member states, and there is no obligation for it to nominate the successful "candidate". The Lisbon Treaty merely states that the European Council should take account of the results of the elections when choosing whom to nominate. The so-called "Spitzenkandidaten" were Jean-Claude Juncker for the European People's Party, Martin Schulz for the Party of European Socialists, Guy Verhofstadt for the Alliance of Liberals and Democrats for Europe Party, Ska Keller and José Bové jointly for the European Green Party and Alexis Tsipras for the Party of the European Left. Turnout dropped consistently at every election after the first, and from 1999 until 2019 was below 50%. In 2007 both Bulgaria and Romania elected their MEPs in by-elections, having joined at the beginning of 2007. The Bulgarian and Romanian elections saw two of the lowest turnouts for European elections, just 28.6% and 28.3% respectively. This trend was interrupted in the 2019 election, when turnout increased by 8% EU-wide, rising to 50.6%, the highest since 1994. In England, Scotland and Wales, EP elections were originally held for a constituency MEP on a first-past-the-post basis. In 1999 the system was changed to a form of proportional representation in which a large group of candidates stand for seats within a very large regional constituency. One can vote for a party, but not a candidate (unless that party has a single candidate). Each year the activities of the Parliament cycle between committee weeks, where reports are discussed in committees and interparliamentary delegations meet; political group weeks, for members to discuss work within their political groups; and session weeks, where members spend 3½ days in Strasbourg for part-sessions. In addition, six two-day part-sessions are organised in Brussels throughout the year. Four weeks are allocated as constituency weeks to allow members to do exclusively constituency work. Finally, no meetings are planned during the summer weeks. The Parliament has the power to meet without being convened by another authority. 
Its meetings are partly controlled by the treaties but are otherwise up to Parliament according to its own "Rules of Procedure" (the regulations governing the parliament). During sessions, members may speak after being called on by the President. Members of the Council or Commission may also attend and speak in debates. Partly due to the need for translation, and the politics of consensus in the chamber, debates tend to be calmer and more polite than in, say, the Westminster system. Voting is conducted primarily by a show of hands, which may be checked on request by electronic voting. Votes of MEPs are not recorded in either case, however; that only occurs when there is a roll-call ballot. This is required for the final votes on legislation and also whenever a political group or 30 MEPs request it. The number of roll-call votes has increased with time. Votes can also be held by completely secret ballot (for example, when the president is elected). All recorded votes, along with minutes and legislation, are recorded in the "Official Journal of the European Union" and can be accessed online. Votes usually do not immediately follow a debate; rather, they are grouped with other pending votes on specific occasions, usually at noon on Tuesdays, Wednesdays or Thursdays. This is because the length of the vote is unpredictable and, if it continues for longer than allocated, it can disrupt other debates and meetings later in the day. Members are arranged in a hemicycle according to their political groups (in the Common Assembly, prior to 1958, members sat alphabetically), which are ordered mainly from left to right, but some smaller groups are placed towards the outer ring of the Parliament. All desks are equipped with microphones, headphones for translation and electronic voting equipment. The leaders of the groups sit on the front benches at the centre, and in the very centre is a podium for guest speakers. The remaining half of the circular chamber is primarily composed of the raised area where the President and staff sit. Further benches are provided between the sides of this area and the MEPs; these are taken up by the Council on the far left and the Commission on the far right. Both the Brussels and Strasbourg hemicycles roughly follow this layout, with only minor differences. The hemicycle design is a compromise between the different parliamentary systems. The British-based system has the different groups directly facing each other, while the French-based system is a semicircle (and the traditional German system had all members in rows facing a rostrum for speeches). Although the design is mainly based on a semicircle, the opposite ends of the spectrum do still face each other. With access to the chamber limited, entrance is controlled by ushers, who aid MEPs in the chamber (for example in delivering documents). The ushers can also occasionally act as a form of police in enforcing the President's rulings, for example in ejecting an MEP who is disrupting the session (although this is rare). The first head of protocol in the Parliament was French, so many of the duties in the Parliament are based on the French model first developed following the French Revolution. The 180 ushers are highly visible in the Parliament, dressed in black tails and wearing a silver chain, and are recruited in the same manner as the European civil service. The President is allocated a personal usher. The President is essentially the speaker of the Parliament and presides over the plenary when it is in session. 
The President's signature is required for all acts adopted by co-decision, including the EU budget. The President is also responsible for representing the Parliament externally, including in legal matters, and for the application of the rules of procedure. He or she is elected for two-and-a-half-year terms, meaning two elections per parliamentary term. The President is currently David Sassoli (S&D). In most countries, the protocol of the head of state comes before all others; however, in the EU the Parliament is listed as the first institution, and hence the protocol of its president comes before any other European or national protocol. The gifts given to numerous visiting dignitaries depend upon the President. President Josep Borrell MEP of Spain gave his counterparts a crystal cup created by an artist from Barcelona who had engraved upon it parts of the Charter of Fundamental Rights, among other things. A number of notable figures have been President of the Parliament and its predecessors. The first President was Paul-Henri Spaak MEP, one of the founding fathers of the Union. Other founding fathers include Alcide de Gasperi MEP and Robert Schuman MEP. The two female Presidents were Simone Veil MEP in 1979 (first President of the elected Parliament) and Nicole Fontaine MEP in 1999, both Frenchwomen. The previous president, Jerzy Buzek, was the first East-Central European to lead an EU institution, a former Prime Minister of Poland who rose out of the Solidarity movement that helped overthrow communism in the Eastern Bloc. During the election of a President, the previous President (or, if unable to, one of the previous Vice-Presidents) presides over the chamber. Prior to 2009, the oldest member fulfilled this role, but the rule was changed to prevent the far-right French MEP Jean-Marie Le Pen from taking the chair. Below the President, there are 14 Vice-Presidents who chair debates when the President is not in the chamber. There are a number of other bodies and posts responsible for the running of parliament besides these speakers. The two main bodies are the Bureau, which is responsible for budgetary and administration issues, and the Conference of Presidents, which is a governing body composed of the presidents of each of the parliament's political groups. Looking after the financial and administrative interests of members are five Quaestors. The European Parliament's budget was EUR 1.756 billion. A 2008 report on the Parliament's finances highlighted certain overspending and mis-payments. Despite some MEPs calling for the report to be published, Parliamentary authorities had refused until an MEP broke confidentiality and leaked it. The Parliament has 20 Standing Committees consisting of 25 to 73 MEPs each (reflecting the political make-up of the whole Parliament), including a chair, a bureau and a secretariat. They meet twice a month in public to draw up, amend and adopt legislative proposals and reports to be presented to the plenary. The rapporteurs for a committee are supposed to present the view of the committee, although notably this has not always been the case. In the events leading to the resignation of the Santer Commission, the rapporteur went against the Budgetary Control Committee's narrow vote to discharge the budget, and urged the Parliament to reject it. Committees can also set up sub-committees (e.g. the Subcommittee on Human Rights) and temporary committees to deal with a specific topic (e.g. on extraordinary rendition). 
The chairs of the Committees co-ordinate their work through the "Conference of Committee Chairmen". When co-decision was introduced, it increased the Parliament's powers in a number of areas, but most notably those covered by the Committee on the Environment, Public Health and Food Safety. Previously this committee was considered by MEPs as a "Cinderella committee"; however, as it gained a new importance, it became more professional and rigorous, attracting increasing attention to its work. The nature of the committees differs from their national counterparts as, although smaller in comparison to those of the United States Congress, the European Parliament's committees are unusually large by European standards, with between eight and twelve dedicated members of staff and three to four support staff. Considerable administration, archives and research resources are also at the disposal of the whole Parliament when needed. Delegations of the Parliament are formed in a similar manner and are responsible for relations with parliaments outside the EU. There are 34 delegations made up of around 15 MEPs each; the chairpersons of the delegations also cooperate in a conference, as the committee chairs do. They include "interparliamentary delegations" (maintaining relations with parliaments outside the EU), "joint parliamentary committees" (maintaining relations with parliaments of states which are candidates or associates of the EU), the delegation to the ACP-EU Joint Parliamentary Assembly and the delegation to the Euro-Mediterranean Parliamentary Assembly. MEPs also participate in other international activities such as the Euro-Latin American Parliamentary Assembly, the Transatlantic Legislators' Dialogue and through election observation in third countries. The Intergroups in the European Parliament are informal fora which gather MEPs from various political groups around any topic. They do not express the view of the European Parliament. They serve a double purpose: to address topics which cut across several committees, and to do so in a less formal manner. Their daily secretariat can be run either through the offices of MEPs or through interest groups, be they corporate lobbies or NGOs. The favoured access to MEPs which the organisation running the secretariat enjoys may be one explanation for the multiplication of Intergroups in the 1990s. They are now strictly regulated, and financial support, direct or otherwise (via secretariat staff, for example), must be officially specified in a declaration of financial interests. Intergroups are also established or renewed at the beginning of each legislature through a specific process. Indeed, the proposal for the constitution or renewal of an Intergroup must be supported by at least three political groups, whose support is limited to a specific number of proposals in proportion to their size (for example, for the 2014-2019 legislature, the EPP or S&D political groups could support 22 proposals whereas the Greens/EFA or EFDD political groups could support only 7). Speakers in the European Parliament are entitled to speak in any of the 24 official languages of the European Union, ranging from French and German to Maltese and Irish. Simultaneous interpreting is offered in all plenary sessions, and all final texts of legislation are translated. With twenty-four languages, the European Parliament is the most multilingual parliament in the world and the biggest employer of interpreters in the world (employing 350 full-time interpreters and 400 freelancers when there is higher demand). 
Citizens may also address the Parliament in Basque, Catalan/Valencian and Galician. Usually a language is interpreted from a foreign tongue into the interpreter's native tongue. Due to the large number of languages, some being minor ones, since 1995 interpreting is sometimes done the opposite way, out of an interpreter's native tongue (the "retour" system). In addition, a speech in a minor language may be interpreted through a third language for lack of interpreters ("relay" interpreting); for example, when interpreting out of Estonian into Maltese. Due to the complexity of the issues, interpretation is not word for word. Instead, interpreters have to convey the political meaning of a speech, regardless of their own views. This requires detailed understanding of the politics and terms of the Parliament, involving a great deal of preparation beforehand (e.g. reading the documents in question). Difficulty can often arise when MEPs use profanities, jokes and word play or speak too fast. While some see speaking their native language as an important part of their identity, and can speak more fluently in debates, interpretation and its cost have been criticised by some. A 2006 report by Alexander Stubb MEP highlighted that by using only English, French and German, costs could be reduced from €118,000 per day (for the 21 languages then in use, Romanian, Bulgarian and Croatian not yet having been included) to €8,900 per day. There has also been a small-scale campaign to make French the reference language for all legal texts, on the basis of an argument that it is clearer and more precise for legal purposes. Because the proceedings are translated into all of the official EU languages, they have been used to make a multilingual corpus known as Europarl. It is widely used to train statistical machine translation systems. According to the European Parliament website, the annual parliament budget for 2016 was €1.838 billion. According to a European Parliament study prepared in 2013, the Strasbourg seat costs an extra €103 million over maintaining a single location, and according to the Court of Auditors an additional €5 million is related to travel expenses caused by having two seats. As a comparison, the German lower house of parliament (Bundestag) is estimated to cost €517 million in total for 2018, for a parliament with 709 members. The British House of Commons reported total annual costs in 2016-2017 of £249 million (€279 million) for its 650 seats. According to "The Economist", the European Parliament costs more than the British, French and German parliaments combined. A quarter of the costs is estimated to relate to translation and interpretation (c. €460 million), and the double seats are estimated to add a further €180 million a year. For a like-for-like comparison, these two cost blocks can be excluded. On 2 July 2018, MEPs rejected proposals to tighten the rules around the General Expenditure Allowance (GEA), which "is a controversial €4,416 per month payment that MEPs are given to cover office and other expenses, but they are not required to provide any evidence of how the money is spent". The Parliament is based in three different cities with numerous buildings. A protocol attached to the Treaty of Amsterdam requires that 12 plenary sessions be held in Strasbourg (none in August but two in September), which is the Parliament's official seat, while extra part-sessions as well as committee meetings are held in Brussels. 
Luxembourg City hosts the Secretariat of the European Parliament. The European Parliament is one of at least two assemblies in the world with more than one meeting place (another being the parliament of the Isle of Man, Tynwald) and one of the few that does not have the power to decide its own location. The Strasbourg seat is seen as a symbol of reconciliation between France and Germany, the Strasbourg region having been fought over by the two countries in the past. However, the cost and inconvenience of having two seats are questioned. While Strasbourg is the official seat, and sits alongside the Council of Europe, Brussels is home to nearly all other major EU institutions, with the majority of Parliament's work being carried out there. Critics have described the two-seat arrangement as a "travelling circus", and there is a strong movement to establish Brussels as the sole seat. This is because the other political institutions (the Commission, Council and European Council) are located there, and hence Brussels is treated as the 'capital' of the EU. This movement has received strong backing from numerous figures, including Margot Wallström, First Vice-President of the Commission from 2004 to 2010, who stated that "something that was once a very positive symbol of the EU reuniting France and Germany has now become a negative symbol of wasting money, bureaucracy and the insanity of the Brussels institutions". The Green Party has also noted the environmental cost in a study led by Jean Lambert MEP and Caroline Lucas MEP; in addition to the extra €200 million spent on the second seat, there are some 20,268 tonnes of additional carbon dioxide, undermining any environmental stance of the institution and the Union. The campaign is further backed by a million-strong online petition started by Cecilia Malmström MEP. In August 2014, an assessment by the European Court of Auditors calculated that relocating the Strasbourg seat of the European Parliament to Brussels would save €113.8 million per year. In 2006, there were allegations of irregularities in the charges made by the city of Strasbourg on buildings the Parliament rented, thus further harming the case for the Strasbourg seat. Most MEPs prefer Brussels as a single base. A poll of MEPs found 89% of the respondents wanting a single seat, and 81% preferring Brussels. Another, more academic, survey found 68% support. In July 2011, an absolute majority of MEPs voted in favour of a single seat. In early 2011, the Parliament voted to scrap one of the Strasbourg sessions by holding two within a single week. The mayor of Strasbourg officially reacted by stating "we will counter-attack by upturning the adversary's strength to our own profit, as a judoka would do." However, as Parliament's seat is now fixed by the treaties, it can only be changed by the Council acting unanimously, meaning that France could veto any move. The former French President Nicolas Sarkozy has stated that the Strasbourg seat is "non-negotiable", and that France has no intention of surrendering the only EU institution on French soil. Given France's declared intention to veto any relocation to Brussels, some MEPs have advocated civil disobedience by refusing to take part in the monthly exodus to Strasbourg. Over the last few years, European institutions have committed to promoting transparency, openness, and the availability of information about their work. 
In particular, transparency is regarded as pivotal to the action of European institutions and a general principle of EU law, to be applied to the activities of EU institutions in order to strengthen the Union's democratic foundation. The general principles of openness and transparency are reaffirmed in Article 8 A, point 3, of the Treaty of Lisbon and Article 10.3 of the Maastricht Treaty respectively, which state that "every citizen shall have the right to participate in the democratic life of the Union. Decisions shall be taken as openly and as closely as possible to the citizen". Furthermore, both treaties acknowledge the value of dialogue between citizens, representative associations, civil society, and European institutions. Article 17 of the Treaty on the Functioning of the European Union (TFEU) lays the juridical foundation for an open, transparent dialogue between European institutions and churches, religious associations, and non-confessional and philosophical organisations. In July 2014, at the beginning of the 8th term, then President of the European Parliament Martin Schulz tasked Antonio Tajani, then Vice-President, with implementing the dialogue with the religious and confessional organisations included in Article 17. In this framework, the European Parliament hosts high-level conferences on inter-religious dialogue, also with a focus on current issues and in relation to parliamentary work. The chair of the European Parliament Mediator for International Parental Child Abduction was established in 1987 at the initiative of British MEP Charles Henry Plumb, with the goal of helping minor children of international couples who fall victim to parental abduction. The Mediator seeks negotiated solutions in the best interest of the minor when the minor has been abducted by a parent following the separation of the couple, regardless of whether the couple was married or unmarried. Since its creation, the chair has been held by Mairead McGuinness (since 2014), Roberta Angelilli (2009-2014), Evelyne Gebhardt (2004-2009), Mary Banotti (1995-2004), and Marie-Claude Vayssade (1987-1994). The Mediator's main task is to assist parents in finding a solution in the minor's best interest through mediation, i.e. a form of dispute resolution that serves as an alternative to litigation. The Mediator acts at the request of a citizen and, after evaluating the request, starts a mediation process aimed at reaching an agreement. Once signed by both parties and the Mediator, the agreement is official. The nature of the agreement is that of a private contract between the parties. In drawing up the agreement, the European Parliament offers the parties the legal support necessary to reach a sound, lawful agreement based on legality and equity. The agreement can be ratified by the competent national courts and can also lay the foundation for consensual separation or divorce. The European Parliamentary Research Service (EPRS) is the European Parliament's in-house research department and think tank. It provides Members of the European Parliament and, where appropriate, parliamentary committees with independent, objective and authoritative analysis of, and research on, policy issues relating to the European Union, in order to assist them in their parliamentary work. It is also designed to increase Members' and EP committees' capacity to scrutinise and oversee the European Commission and other EU executive bodies. 
EPRS aims to provide a comprehensive range of products and services, backed by specialist internal expertise and knowledge sources in all policy fields, so empowering Members and committees through knowledge and contributing to the Parliament's effectiveness and influence as an institution. In undertaking this work, the EPRS supports and promotes parliamentary outreach to the wider public, including dialogue with relevant stakeholders in the EU's system of multi-level governance. All publications by EPRS are publicly available on the EP Think Tank platform. The European Parliament periodically commissions opinion polls and studies on public opinion trends in Member States to survey perceptions and expectations of citizens about its work and the overall activities of the European Union. Topics include citizens' perceptions of the European Parliament's role, their knowledge of the institution, their sense of belonging to the European Union, opinions on European elections and European integration, identity, citizenship and political values, as well as current issues such as climate change, the economy and politics. Eurobarometer analyses seek to provide an overall picture of national situations, regional specificities, socio-demographic cleavages, and historical trends. Annually, the European Parliament awards four prizes to individuals and organisations that have distinguished themselves in the areas of human rights, film, youth projects, and European participation and citizenship. With the Sakharov Prize for Freedom of Thought, created in 1988, the European Parliament supports human rights by awarding individuals who contribute to promoting human rights worldwide, thus raising awareness of human rights violations. Priorities include: protection of human rights and fundamental liberties, with particular focus on freedom of expression; protection of minority rights; compliance with international law; and development of democracy and authentic rule of law. The European Charlemagne Youth Prize seeks to encourage youth participation in the European integration process. It is awarded by the European Parliament and the Foundation of the International Charlemagne Prize of Aachen to youth projects aimed at nurturing a common European identity and European citizenship. The European Citizens' Prize is awarded by the European Parliament to activities and actions carried out by citizens and associations to promote integration between the citizens of EU member states and transnational cooperation projects in the EU. Since 2007, the LUX Prize has been awarded by the European Parliament to films dealing with current topics of European public interest that encourage reflection on Europe and its future. Over time, the LUX Prize has become a prestigious cinema award which supports European film production, including outside the EU.
https://en.wikipedia.org/wiki?curid=9581
European Council The European Council (informally EUCO) is a collective body that defines the European Union's overall political direction and priorities. It comprises the heads of state or government of the EU member states, along with the President of the European Council and the President of the European Commission. The High Representative of the Union for Foreign Affairs and Security Policy also takes part in its meetings. Established as an informal summit in 1975, the European Council was formalised as an institution in 2009 upon the entry into force of the Treaty of Lisbon. Its current president is Charles Michel, former Prime Minister of Belgium. While the European Council has no legislative power, it is a strategic (and crisis-solving) body that provides the Union with general political directions and priorities, and acts as a collective presidency. The European Commission remains the sole initiator of legislation, but the European Council is able to provide an impetus to guide legislative policy. The meetings of the European Council, still commonly referred to as EU summits, are chaired by its president and take place at least twice every six months, usually in the Europa building in Brussels. Decisions of the European Council are taken by consensus, except where the Treaties provide otherwise. The European Council officially gained the status of an EU institution after the Treaty of Lisbon in 2007, distinct from the Council of the European Union (Council of Ministers). Before that, the first summits of EU heads of state or government were held in February and July 1961 (in Paris and Bonn respectively). They were informal summits of the leaders of the European Community, and were started due to then-French President Charles de Gaulle's resentment at the domination of supranational institutions (notably the European Commission) over the integration process, but petered out. The first influential summit held after the departure of de Gaulle was the Hague summit of 1969, which reached an agreement on the admittance of the United Kingdom into the Community and initiated foreign policy cooperation (the European Political Cooperation), taking integration beyond economics. The summits were only formalised in the period between 1974 and 1988. At the December summit in Paris in 1974, following a proposal from then-French president Valéry Giscard d'Estaing, it was agreed that more high-level political input was needed following the "empty chair crisis" and economic problems. The inaugural "European Council", as it became known, was held in Dublin on 10 and 11 March 1975 during Ireland's first Presidency of the Council of Ministers. In 1987, it was included in the treaties for the first time (the Single European Act) and had a defined role for the first time in the Maastricht Treaty. At first only a minimum of two meetings per year was required, which resulted in an average of three meetings per year being held for the 1975-1995 period. Since 1996, a minimum of four meetings per year has been required. For the 2008-2014 period, this minimum was well exceeded, with an average of seven meetings held per year. The seat of the Council was formalised in 2002, basing it in Brussels. Three types of European Council meetings exist: informal, scheduled and extraordinary. 
While the informal meetings are also scheduled 1½ years in advance, they differ from the scheduled ordinary meetings by not ending with official "Council conclusions"; they instead end with broader political "statements" on selected policy matters. The extraordinary meetings always end with official "Council conclusions", but differ from the scheduled meetings by not being planned more than a year in advance, as for example in 2001 when the European Council gathered to lead the European Union's response to the 11 September attacks. Some meetings of the European Council—and, before the European Council was formalised, meetings of the heads of government—are seen by some as turning points in the history of the European Union. The European Council had already existed before it gained the status of an institution of the European Union with the entry into force of the Treaty of Lisbon, but even after it had been mentioned in the treaties (since the Single European Act) it could only take political decisions, not formal legal acts. However, when necessary, the Heads of State or Government could also meet as the Council of Ministers and take formal decisions in that role. Sometimes, this was even compulsory, e.g. Article 214(2) of the Treaty establishing the European Community provided (before it was amended by the Treaty of Lisbon) that 'the Council, meeting "in the composition of Heads of State or Government" and acting by a qualified majority, shall nominate the person it intends to appoint as President of the Commission' (emphasis added); the same rule applied in some monetary policy provisions introduced by the Maastricht Treaty (e.g. Article 109j TEC). In that case, what was politically part of a European Council meeting was legally a meeting of the Council of Ministers. When the European Council, already introduced into the treaties by the Single European Act, became an institution by virtue of the Treaty of Lisbon, this was no longer necessary, and the "Council [of the European Union] meeting in the composition of the Heads of State or Government" was replaced in these instances by the European Council, which now takes formal, legally binding decisions in these cases. The Treaty of Lisbon made the European Council a formal institution distinct from the (ordinary) Council of the EU, and created the present longer-term, full-time presidency. As an outgrowth of the Council of the EU, the European Council had previously followed the same Presidency, rotating between each member state. While the Council of the EU retains that system, the European Council established, with no change in powers, a system of appointing an individual (rather than a serving national leader) for a two-and-a-half-year term, which can be renewed for the same person only once. Following the ratification of the treaty in December 2009, the European Council elected the then-Prime Minister of Belgium Herman Van Rompuy as its first permanent president; Van Rompuy resigned as Belgian Prime Minister to take up the post. The European Council is an official institution of the EU, mentioned by the Lisbon Treaty as a body which "shall provide the Union with the necessary impetus for its development". Essentially it defines the EU's policy agenda and has thus been considered to be the motor of European integration. 
Beyond the need to provide "impetus", the Council has developed further roles: to "settle issues outstanding from discussions at a lower level", to lead in foreign policy — acting externally as a "collective Head of State" — and to provide "formal ratification of important documents" and "involvement in the negotiation of the treaty changes". Since the institution is composed of national leaders, it gathers the executive power of the member states and thus has great influence in high-profile policy areas such as foreign policy. It also exercises powers of appointment, such as the appointment of its own President, the High Representative of the Union for Foreign Affairs and Security Policy, and the President of the European Central Bank. It proposes, to the European Parliament, a candidate for President of the European Commission. Moreover, the European Council influences police and justice planning, the composition of the Commission, matters relating to the organisation of the rotating Council presidency, the suspension of membership rights, and changing the voting systems through the Passerelle Clause. Although the European Council has no direct legislative power, under the "emergency brake" procedure, a state outvoted in the Council of Ministers may refer contentious legislation to the European Council. However, the state may still be outvoted in the European Council. Hence, with powers over the supranational executive of the EU in addition to its other powers, the European Council has been described by some as the Union's "supreme political authority". The European Council consists of the heads of state or government of the member states, alongside its own President and the Commission President (both non-voting). The meetings used to be regularly attended by the national foreign ministers as well, and the Commission President was likewise accompanied by another member of the Commission. However, since the Treaty of Lisbon, this has been discontinued, as the size of the body had become somewhat large following successive accessions of new Member States to the Union. Meetings can also include other invitees, such as the President of the European Central Bank, as required. The Secretary-General of the Council attends, and is responsible for organisational matters, including minutes. The President of the European Parliament also attends to give an opening speech outlining the European Parliament's position before talks begin. Additionally, the negotiations involve a large number of other people working behind the scenes. Most of those people, however, are not allowed into the conference room, except for two delegates per state to relay messages. At the push of a button, members can also call for advice from a Permanent Representative via the "Antici Group" in an adjacent room. The group is composed of diplomats and assistants who convey information and requests. Interpreters are also required for meetings, as members are permitted to speak in their own languages. As the composition is not precisely defined, some states which have a considerable division of executive power can find it difficult to decide who should attend the meetings. While an MEP, Alexander Stubb argued that there was no need for the President of Finland to attend Council meetings with or instead of the Prime Minister of Finland (who was head of European foreign policy). 
In 2008, having become Finnish Foreign Minister, Stubb was forced out of the Finnish delegation to the emergency council meeting on the Georgian crisis because the President wanted to attend the high-profile summit as well as the Prime Minister (only two people from each country could attend the meetings). This was despite Stubb being Chair-in-Office of the Organisation for Security and Co-operation in Europe, which was heavily involved in the crisis, at the time. Problems also occurred in Poland, where the President of Poland and the Prime Minister of Poland were of different parties and had differing foreign policy responses to the crisis. A similar situation arose in Romania between President Traian Băsescu and Prime Minister Călin Popescu-Tăriceanu in 2007–2008 and again in 2012 with Prime Minister Victor Ponta, both of whom opposed the president. A number of ad hoc meetings of Heads of State or Government of the euro area countries were held in 2010 and 2011 to discuss the sovereign debt crisis. It was agreed in October 2011 that they should meet regularly twice a year (with extra meetings if needed). This will normally be at the end of a European Council meeting and according to the same format (chaired by the President of the European Council and including the President of the Commission), but usually restricted to the (currently 19) Heads of State or Government of countries whose currency is the euro. The President of the European Council is elected by the European Council by a qualified majority for a once-renewable term of two and a half years. The President must report to the European Parliament after each European Council meeting. The post was created by the Treaty of Lisbon and was subject to a debate over its exact role. Prior to Lisbon, the Presidency rotated in accordance with the Presidency of the Council of the European Union. The role of that President-in-Office was in no sense (other than protocol) equivalent to an office of a head of state, merely a "primus inter pares" (first among equals) role among other European heads of government. The President-in-Office was primarily responsible for preparing and chairing the Council meetings, and had no executive powers other than the task of representing the Union externally. Now the leader of the Council Presidency country can still act as president when the permanent president is absent. Almost all members of the European Council are members of a political party at national level, and most of these are members of a European-level political party or other alliances such as Renew Europe. These frequently hold pre-meetings of their European Council members prior to its meetings. However, the European Council is composed to represent the EU's states rather than political alliances, and decisions are generally made on these lines, though ideological alignment can colour their political agreements and their choice of appointments (such as their president). The European Council is required by Article 15.3 TEU to meet at least twice every six months, but convenes more frequently in practice. Despite efforts to contain business, meetings typically last for at least two days, and run long into the night. 
Until 2002, the venue for European Council summits was the member state that held the rotating Presidency of the Council of the European Union. However, European leaders agreed during ratification of the Nice Treaty to forgo this arrangement once the total membership of the European Union surpassed 18 member states. An early implementation of this agreement occurred in 2002, with certain states agreeing to waive their right to host meetings, favouring Brussels as the location. Following the growth of the EU to 25 member states, with the 2004 enlargement, all subsequent official summits of the European Council have been in Brussels, with the exception of occasional ad hoc meetings, such as the 2017 informal European Council in Malta. The logistical, environmental, financial and security arrangements of hosting large summits are usually cited as the primary factors in the decision by EU leaders to move towards a permanent seat for the European Council. Additionally, some scholars argue that the move, when coupled with the formalisation of the European Council in the Lisbon Treaty, represents an institutionalisation of an ad hoc EU organ that had its origins in the Luxembourg compromise, with national leaders reasserting their dominance as the EU's "supreme political authority". Originally, both the European Council and the Council of the European Union utilised the Justus Lipsius building as their Brussels venue. In order to make room for additional meeting space, a number of renovations were made, including the conversion of an underground carpark into additional press briefing rooms. However, in 2004 leaders decided the logistical problems created by the outdated facilities warranted the construction of a new purpose-built seat able to cope with the nearly 6,000 meetings, working groups, and summits per year. This resulted in the Europa building, which opened its doors in 2017. The focal point of the new building, the distinctive multi-storey "lantern-shaped" structure in which the main meeting room is located, is utilised in both the European Council's and the Council of the European Union's official logos.
Euthanasia Euthanasia (from Greek: "good death": εὖ, "eu"; "well" or "good" + θάνατος, "thanatos"; "death") is the practice of intentionally ending a life to relieve pain and suffering. Different countries have different euthanasia laws. The British House of Lords Select Committee on Medical Ethics defines euthanasia as "a deliberate intervention undertaken with the express intention of ending a life, to relieve intractable suffering". In the Netherlands and Belgium, euthanasia is understood as "termination of life by a doctor at the request of a patient". The Dutch law, however, does not use the term 'euthanasia' but includes the concept under the broader definition of "assisted suicide and termination of life on request". Euthanasia is categorized in different ways, which include voluntary, non-voluntary, or involuntary. In some countries, divisive public controversy occurs over the moral, ethical, and legal issues associated with euthanasia. Passive euthanasia (known as "pulling the plug") is legal under some circumstances in many countries. Active euthanasia, however, is legal or "de facto" legal in only a handful of countries (for example: Belgium, Canada and Switzerland), which limit it to specific circumstances and require the approval of counselors and doctors or other specialists. In some countries, such as Nigeria, Saudi Arabia and Pakistan, support for active euthanasia is almost non-existent. Like other terms borrowed from history, "euthanasia" has had different meanings depending on usage. The first apparent usage of the term "euthanasia" belongs to the historian Suetonius, who described how the Emperor Augustus, "dying quickly and without suffering in the arms of his wife, Livia, experienced the 'euthanasia' he had wished for." The word "euthanasia" was first used in a medical context by Francis Bacon in the 17th century, to refer to an easy, painless, happy death, during which it was a "physician's responsibility to alleviate the 'physical sufferings' of the body." Bacon referred to an "outward euthanasia"—the term "outward" he used to distinguish from a spiritual concept—the euthanasia "which regards the preparation of the soul." In current usage, euthanasia has been defined as the "painless inducement of a quick death". However, it is argued that this approach fails to properly define euthanasia, as it leaves open a number of possible actions which would meet the requirements of the definition, but would not be seen as euthanasia. In particular, these include situations where a person kills another, painlessly, but for no reason beyond that of personal gain; or accidental deaths that are quick and painless, but not intentional. Another approach incorporates the notion of suffering into the definition. The definition offered by the Oxford English Dictionary incorporates suffering as a necessary condition, with "the painless killing of a patient suffering from an incurable and painful disease or in an irreversible coma". This approach is included in Marvin Kohl and Paul Kurtz's definition of it as "a mode or act of inducing or permitting death painlessly as a relief from suffering". Counterexamples can be given: such definitions may encompass killing a person suffering from an incurable disease for personal gain (such as to claim an inheritance), and commentators such as Tom Beauchamp and Arnold Davidson have argued that doing so would constitute "murder simpliciter" rather than euthanasia. 
The third element incorporated into many definitions is that of intentionality – the death must be intended, rather than being accidental, and the intent of the action must be a "merciful death". Michael Wreen argued that "the principal thing that distinguishes euthanasia from intentional killing simpliciter is the agent's motive: it must be a good motive insofar as the good of the person killed is concerned." Likewise, James Field argued that euthanasia entails a sense of compassion towards the patient, in contrast to the diverse non-compassionate motives of serial killers who work in health care professions. Similarly, Heather Draper speaks to the importance of motive, arguing that "the motive forms a crucial part of arguments for euthanasia, because it must be in the best interests of the person on the receiving end." Definitions such as that offered by the House of Lords Select Committee on Medical Ethics take this path, where euthanasia is defined as "a deliberate intervention undertaken with the express intention of ending a life, to relieve intractable suffering." Beauchamp and Davidson also highlight Baruch Brody's "an act of euthanasia is one in which one person ... (A) kills another person (B) for the benefit of the second person, who actually does benefit from being killed". Draper argued that any definition of euthanasia must incorporate four elements: an agent and a subject; an intention; a causal proximity, such that the actions of the agent lead to the outcome; and an outcome. Based on this, she offered a definition incorporating those elements, stating that euthanasia "must be defined as death that results from the intention of one person to kill another person, using the most gentle and painless means possible, that is motivated solely by the best interests of the person who dies." Prior to Draper, Beauchamp and Davidson had also offered a definition that includes these elements; their definition specifically discounts fetuses in order to distinguish between abortions and euthanasia. Wreen, in part responding to Beauchamp and Davidson, offered a six-part definition. Wreen also considered a seventh requirement: "(7) The good specified in (6) is, or at least includes, the avoidance of evil", although as Wreen noted in the paper, he was not convinced that the restriction was required. In discussing his definition, Wreen noted the difficulty of justifying euthanasia when faced with the notion of the subject's "right to life". In response, Wreen argued that euthanasia has to be voluntary, and that "involuntary euthanasia is, as such, a great wrong". Other commentators incorporate consent more directly into their definitions. For example, in a discussion of euthanasia presented in 2003 by the European Association for Palliative Care (EAPC) Ethics Task Force, the authors offered: "Medicalized killing of a person without the person's consent, whether nonvoluntary (where the person is unable to consent) or involuntary (against the person's will) is not euthanasia: it is murder. Hence, euthanasia can be voluntary only." Although the EAPC Ethics Task Force argued that both non-voluntary and involuntary euthanasia could not be included in the definition of euthanasia, there is discussion in the literature about excluding one but not the other. Euthanasia may be classified into three types, according to whether a person gives informed consent: voluntary, non-voluntary and involuntary. 
There is a debate within the medical and bioethics literature about whether or not the non-voluntary (and by extension, involuntary) killing of patients can be regarded as euthanasia, irrespective of intent or the patient's circumstances. In the definitions offered by Beauchamp and Davidson and, later, by Wreen, consent on the part of the patient was not considered as one of their criteria, although it may have been required to justify euthanasia. However, others see consent as essential. Voluntary euthanasia is conducted with the consent of the patient. Active voluntary euthanasia is legal in Belgium, Luxembourg and the Netherlands. Passive voluntary euthanasia is legal throughout the US per "Cruzan v. Director, Missouri Department of Health". When the patient brings about their own death with the assistance of a physician, the term assisted suicide is often used instead. Assisted suicide is legal in Switzerland and the U.S. states of California, Oregon, Washington, Montana and Vermont. Non-voluntary euthanasia is conducted when the consent of the patient is unavailable. Examples include child euthanasia, which is illegal worldwide but decriminalised under certain specific circumstances in the Netherlands under the Groningen Protocol. Involuntary euthanasia is conducted against the will of the patient. Voluntary, non-voluntary and involuntary types can be further divided into passive or active variants. Passive euthanasia entails the withholding of treatment necessary for the continuance of life. Active euthanasia entails the use of lethal substances or forces (such as administering a lethal injection), and is the more controversial. While some authors consider these terms to be misleading and unhelpful, they are nonetheless commonly used. In some cases, such as the administration of increasingly necessary, but toxic doses of painkillers, there is a debate over whether to regard the practice as active or passive. Euthanasia was practiced in Ancient Greece and Rome: for example, hemlock was employed as a means of hastening death on the island of Kea, a technique also employed in Marseilles. Euthanasia, in the sense of the deliberate hastening of a person's death, was supported by Socrates, Plato and Seneca the Elder in the ancient world, although Hippocrates appears to have spoken against the practice, writing "I will not prescribe a deadly drug to please someone, nor give advice that may cause his death" (noting there is some debate in the literature about whether or not this was intended to encompass euthanasia). The term "euthanasia", in the earlier sense of supporting someone as they died, was used for the first time by Francis Bacon. In his work, "Euthanasia medica", he chose this ancient Greek word and, in doing so, distinguished between "euthanasia interior", the preparation of the soul for death, and "euthanasia exterior", which was intended to make the end of life easier and painless, in exceptional circumstances by shortening life. That the ancient meaning of an easy death came to the fore again in the early modern period can be seen from its definition in the 18th-century "Zedlers Universallexikon": Euthanasia: a very gentle and quiet death, which happens without painful convulsions. The word comes from ευ, "bene", well, and θανατος, "mors", death. The concept of euthanasia in the sense of alleviating the process of death goes back to the medical historian Karl Friedrich Heinrich Marx, who drew on Bacon's philosophical ideas. 
According to Marx, a doctor had a moral duty to ease the suffering of death through encouragement, support and mitigation using medication. Such an "alleviation of death" reflected the contemporary "zeitgeist", but was brought into the medical canon of responsibility for the first time by Marx. Marx also distinguished the theological care of the soul of sick people from the physical care and medical treatment provided by doctors. Euthanasia in its modern sense has always been strongly opposed in the Judeo-Christian tradition. Thomas Aquinas opposed both euthanasia and suicide and argued that the practice of euthanasia contradicted our natural human instincts of survival, as did Francois Ranchin (1565–1641), a French physician and professor of medicine, and Michael Boudewijns (1601–1681), a physician and teacher. Other voices argued for euthanasia, such as John Donne in 1624, and euthanasia continued to be practised. In 1678, the publication of Caspar Questel's "De pulvinari morientibus non subtrahendo" ("On the pillow of which the dying should not be deprived") initiated debate on the topic. Questel described various customs which were employed at the time to hasten the death of the dying (including the sudden removal of a pillow, which was believed to accelerate death), and argued against their use, as doing so was "against the laws of God and Nature". This view was shared by others who followed, including Philipp Jakob Spener, Veit Riedlin and Johann Georg Krünitz. Despite opposition, euthanasia continued to be practised, involving techniques such as bleeding, suffocation, and removing people from their beds to be placed on the cold ground. Suicide and euthanasia became more accepted during the Age of Enlightenment. Thomas More wrote of euthanasia in "Utopia", although it is not clear if More was intending to endorse the practice. Other cultures have taken different approaches: for example, in Japan suicide has not traditionally been viewed as a sin, as it is used in cases of honor, and accordingly, the perceptions of euthanasia are different from those in other parts of the world. In the mid-1800s, the use of morphine to treat "the pains of death" emerged, with John Warren recommending its use in 1848. A similar use of chloroform was revealed by Joseph Bullar in 1866. However, in neither case was it recommended that the use should be to hasten death. In 1870 Samuel Williams, a schoolteacher, initiated the contemporary euthanasia debate through a speech given at the Birmingham Speculative Club in England, which was subsequently published in a one-off publication entitled "Essays of the Birmingham Speculative Club", the collected works of a number of members of an amateur philosophical society. Williams' proposal was to use chloroform to deliberately hasten the death of terminally ill patients. The essay was favourably reviewed in "The Saturday Review", but an editorial against the essay appeared in "The Spectator". From there it proved to be influential, and other writers came out in support of such views: Lionel Tollemache wrote in favour of euthanasia, as did Annie Besant, the essayist and reformer who later became involved with the National Secular Society, considering it a duty to society to "die voluntarily and painlessly" when one reaches the point of becoming a 'burden'. "Popular Science" analyzed the issue in May 1873, assessing both sides of the argument. Kemp notes that at the time, medical doctors did not participate in the discussion; it was "essentially a philosophical enterprise ... 
tied inextricably to a number of objections to the Christian doctrine of the sanctity of human life". The rise of the euthanasia movement in the United States coincided with the so-called Gilded Age, a time of social and technological change that encompassed an "individualistic conservatism that praised laissez-faire economics, scientific method, and rationalism", along with major depressions, industrialisation and conflict between corporations and labour unions. It was also the period in which the modern hospital system was developed, which has been seen as a factor in the emergence of the euthanasia debate. Robert Ingersoll argued for euthanasia, stating in 1894 that where someone is suffering from a terminal illness, such as terminal cancer, they should have a right to end their pain through suicide. Felix Adler offered a similar approach, although, unlike Ingersoll, Adler did not reject religion. In fact, he argued from an Ethical Culture framework. In 1891, Adler argued that those suffering from overwhelming pain should have the right to commit suicide, and, furthermore, that it should be permissible for a doctor to assist – thus making Adler the first "prominent American" to argue for suicide in cases where people were suffering from chronic illness. Both Ingersoll and Adler argued for voluntary euthanasia of adults suffering from terminal ailments. Dowbiggin argues that by breaking down prior moral objections to euthanasia and suicide, Ingersoll and Adler enabled others to stretch the definition of euthanasia. The first attempt to legalise euthanasia took place in the United States, when Henry Hunt introduced legislation into the General Assembly of Ohio in 1906. Hunt did so at the behest of Anna Sophina Hall, a wealthy heiress who was a major figure in the euthanasia movement during the early 20th century in the United States. Hall had watched her mother die after an extended battle with liver cancer, and had dedicated herself to ensuring that others would not have to endure the same suffering. Towards this end she engaged in an extensive letter-writing campaign, recruited Lurana Sheldon and Maud Ballington Booth, and organised a debate on euthanasia at the annual meeting of the American Humane Association in 1905 – described by Jacob Appel as the first significant public debate on the topic in the 20th century. Hunt's bill called for the administration of an anesthetic to bring about a patient's death, so long as the person was of lawful age and sound mind, and was suffering from a fatal injury, an irrevocable illness, or great physical pain. It also required that the case be heard by a physician, required informed consent in front of three witnesses, and required the attendance of three physicians who had to agree that the patient's recovery was impossible. A motion to reject the bill outright was voted down, but the bill failed to pass, 79 to 23. Along with the Ohio euthanasia proposal, in 1906 Assemblyman Ross Gregory introduced a proposal to permit euthanasia to the Iowa legislature. However, the Iowa legislation was broader in scope than that offered in Ohio. It allowed for the death of any person of at least ten years of age who suffered from an ailment that would prove fatal and cause extreme pain, should they be of sound mind and express a desire to artificially hasten their death. In addition, it allowed for infants to be euthanised if they were sufficiently deformed, and permitted guardians to request euthanasia on behalf of their wards. 
The proposed legislation also imposed penalties on physicians who refused to perform euthanasia when requested: a 6–12-month prison term and a fine of between $200 and $1,000. The proposal proved to be controversial. It engendered considerable debate and failed to pass, having been withdrawn from consideration after being passed to the Committee on Public Health. After 1906 the euthanasia debate reduced in intensity, resurfacing periodically but not returning to the same level of debate until the 1930s in the United Kingdom. Euthanasia opponent Ian Dowbiggin argues that the early membership of the Euthanasia Society of America (ESA) reflected how many perceived euthanasia at the time, often seeing it as a eugenics matter rather than an issue concerning individual rights. Dowbiggin argues that not every eugenist joined the ESA "solely for eugenic reasons", but he postulates that there were clear ideological connections between the eugenics and euthanasia movements. The Voluntary Euthanasia Legalisation Society (now called Dignity in Dying) was founded in 1935 by Charles Killick Millard. The movement campaigned for the legalisation of euthanasia in Great Britain. In January 1936, King George V was given a fatal dose of morphine and cocaine to hasten his death. At the time he was suffering from cardio-respiratory failure, and the decision to end his life was made by his physician, Lord Dawson. Although this event was kept a secret for over 50 years, the death of George V coincided with proposed legislation in the House of Lords to legalise euthanasia. A 24 July 1939 killing of a severely disabled infant in Nazi Germany was described in a BBC "Genocide Under the Nazis Timeline" as the first "state-sponsored euthanasia". Parties that consented to the killing included Hitler's office, the parents, and the Reich Committee for the Scientific Registration of Serious and Congenitally Based Illnesses. "The Telegraph" noted that the killing of the disabled infant—whose name was Gerhard Kretschmar, born blind, with missing limbs, subject to convulsions, and reportedly "an idiot"—provided "the rationale for a secret Nazi decree that led to 'mercy killings' of almost 300,000 mentally and physically handicapped people". While Kretschmar's killing received parental consent, most of the 5,000 to 8,000 children killed afterwards were forcibly taken from their parents. The "euthanasia campaign" of mass murder gathered momentum on 14 January 1940 when the "handicapped" were killed with gas vans and at killing centres, eventually leading to the deaths of 70,000 adult Germans. Professor Robert Jay Lifton, author of "The Nazi Doctors" and a leading authority on the T4 program, contrasts this program with what he considers to be a genuine euthanasia. He explains that the Nazi version of "euthanasia" was based on the work of Adolf Jost, who published "The Right to Death" (Das Recht auf den Tod) in 1895. Lifton writes: Jost argued that control over the death of the individual must ultimately belong to the social organism, the state. This concept is in direct opposition to the Anglo-American concept of euthanasia, which emphasizes the "individual's" 'right to die' or 'right to death' or 'right to his or her own death,' as the ultimate human claim. In contrast, Jost was pointing to the state's right to kill. ... Ultimately the argument was biological: 'The rights to death [are] the key to the fitness of life.' The state must own death—must kill—in order to keep the social organism alive and healthy. 
In modern terms, the use of "euthanasia" in the context of Action T4 is seen to be a euphemism to disguise a program of genocide, in which people were killed on the grounds of "disabilities, religious beliefs, and discordant individual values". Compared to the discussions of euthanasia that emerged post-war, the Nazi program may have been worded in terms that appear similar to the modern use of "euthanasia", but there was no "mercy" and the patients were not necessarily terminally ill. Despite these differences, historian and euthanasia opponent Ian Dowbiggin writes that "the origins of Nazi euthanasia, like those of the American euthanasia movement, predate the Third Reich and were intertwined with the history of eugenics and Social Darwinism, and with efforts to discredit traditional morality and ethics." On 6 January 1949, the Euthanasia Society of America presented to the New York State Legislature a petition to legalize euthanasia, signed by 379 leading Protestant and Jewish ministers, the largest group of religious leaders ever to have taken this stance. A similar petition had been sent to the New York Legislature in 1947, signed by approximately 1,000 New York physicians. Roman Catholic religious leaders criticized the petition, saying that such a bill would "legalize a suicide-murder pact" and a "rationalization of the fifth commandment of God, 'Thou Shalt Not Kill.'" The Right Reverend Robert E. McCormick also spoke out against the proposal. The petition brought tensions between the American Euthanasia Society and the Catholic Church to a head, which contributed to a climate of anti-Catholic sentiment generally, regarding issues such as birth control, eugenics, and population control. However, the petition did not result in any legal changes. Historically, the euthanasia debate has tended to focus on a number of key concerns. According to euthanasia opponent Ezekiel Emanuel, proponents of euthanasia have presented four main arguments: a) that people have a right to self-determination, and thus should be allowed to choose their own fate; b) that assisting a subject to die might be a better choice than requiring that they continue to suffer; c) that the distinction between passive euthanasia, which is often permitted, and active euthanasia, which is not, is not substantive (or that the underlying principle–the doctrine of double effect–is unreasonable or unsound); and d) that permitting euthanasia will not necessarily lead to unacceptable consequences. Pro-euthanasia activists often point to countries like the Netherlands and Belgium, and states like Oregon, where euthanasia has been legalized, to argue that it is mostly unproblematic. Similarly, Emanuel argues that there are four major arguments presented by opponents of euthanasia: a) not all deaths are painful; b) alternatives, such as cessation of active treatment, combined with the use of effective pain relief, are available; c) the distinction between active and passive euthanasia is morally significant; and d) legalising euthanasia will place society on a slippery slope, which will lead to unacceptable consequences. In fact, in Oregon in 2013, pain was not one of the top five reasons people sought euthanasia; the top reasons were a loss of dignity and a fear of burdening others. In the United States in 2013, 47% nationwide supported doctor-assisted suicide. This included 32% of Latinos, 29% of African-Americans, and almost nobody with disabilities. A 2015 Populus poll in the United Kingdom found broad public support for assisted dying. 
82% of people supported the introduction of assisted dying laws, including 86% of people with disabilities. One concern is that euthanasia might undermine filial responsibility. In some countries, adult children of impoverished parents are legally entitled to support payments under filial responsibility laws. Thirty out of the fifty United States as well as France, Germany, Singapore, and Taiwan have filial responsibility laws. West's "Encyclopedia of American Law" states that "a 'mercy killing' or euthanasia is generally considered to be a criminal homicide" and is normally used as a synonym of homicide committed at a request made by the patient. The judicial sense of the term "homicide" includes any intervention undertaken with the express intention of ending a life, even to relieve intractable suffering. Not all homicide is unlawful. Two designations of homicide that carry no criminal punishment are justifiable and excusable homicide. In most countries this is not the status of euthanasia. The term "euthanasia" is usually confined to the active variety; the University of Washington website states that "euthanasia generally means that the physician would act directly, for instance by giving a lethal injection, to end the patient's life". Physician-assisted suicide is thus not classified as euthanasia by the US State of Oregon, where it is legal under the Oregon Death with Dignity Act, and despite its name, it is not legally classified as suicide either. Unlike physician-assisted suicide, withholding or withdrawing life-sustaining treatments with patient consent (voluntary) is almost unanimously considered, at least in the United States, to be legal. The use of pain medication to relieve suffering, even if it hastens death, has been held as legal in several court decisions. Some governments around the world have legalized voluntary euthanasia but most commonly it is still considered to be criminal homicide. In the Netherlands and Belgium, where euthanasia has been legalized, it still remains homicide although it is not prosecuted and not punishable if the perpetrator (the doctor) meets certain legal conditions. In a historic judgment, the Supreme court of India legalized passive euthanasia. The apex court remarked in the judgment that the Constitution of India values liberty, dignity, autonomy, and privacy. A bench headed by Chief Justice Dipak Misra delivered a unanimous judgment. A 2010 survey in the United States of more than 10,000 physicians found that 16.3% of physicians would consider halting life-sustaining therapy because the family demanded it, even if they believed that it was premature. Approximately 54.5% would not, and the remaining 29.2% responded "it depends". The study also found that 45.8% of physicians agreed that physician-assisted suicide should be allowed in some cases; 40.7% did not, and the remaining 13.5% felt it depended. In the United Kingdom, the assisted dying campaign group Dignity in Dying cites research in which 54% of General Practitioners support or are neutral towards a law change on assisted dying. Similarly, a 2017 Doctors.net.uk poll reported in the British Medical Journal stated that 55% of doctors believe assisted dying, in defined circumstances, should be legalised in the UK. One concern among healthcare professionals is the possibility of being asked to participate in euthanasia in a situation where they personally believe it to be wrong. In a 1996 study of 852 nurses in adult ICUs, 19% admitted to participating in euthanasia. 
30% of those who admitted to it also believed that euthanasia is unethical. The Roman Catholic Church condemns euthanasia and assisted suicide as morally wrong. It states that "intentional euthanasia, whatever its forms or motives, is murder. It is gravely contrary to the dignity of the human person and to the respect due to the living God, his Creator". Because of this, the practice is unacceptable within the Church. The Orthodox Church in America, along with other Eastern Orthodox Churches, also opposes euthanasia, stating that "euthanasia is the deliberate cessation of human life, and, as such, must be condemned as murder." Many non-Catholic churches in the United States take a stance against euthanasia. Among Protestant denominations, the Episcopal Church passed a resolution in 1991 opposing euthanasia and assisted suicide, stating that it is "morally wrong and unacceptable to take a human life to relieve the suffering caused by incurable illnesses." A number of other Protestant churches also oppose euthanasia. The Church of England accepts passive euthanasia under some circumstances, but is strongly against active euthanasia, and has led opposition to recent attempts to legalise it. The United Church of Canada accepts passive euthanasia under some circumstances, but is in general against active euthanasia, with growing acceptance now that active euthanasia has been partly legalised in Canada. Euthanasia is a complex issue in Islamic theology; however, in general it is considered contrary to Islamic law and holy texts. Among interpretations of the Koran and Hadith, the early termination of life is a crime, be it by suicide or by helping one commit suicide. The various positions on the cessation of medical treatment are mixed and considered a different class of action than direct termination of life, especially if the patient is suffering. Suicide and euthanasia are both crimes in almost all Muslim-majority countries. There is much debate on the topic of euthanasia in Judaic theology, ethics, and general opinion (especially in Israel and the United States). Passive euthanasia was declared legal by Israel's highest court under certain conditions and has reached some level of acceptance. Active euthanasia remains illegal; however, the topic is actively debated, with no clear consensus across legal, ethical, theological and spiritual perspectives.
Extraterrestrial life Extraterrestrial life is hypothetical life which may occur outside of Earth and which did not originate on Earth. Such life might range from simple prokaryotes (or comparable life forms) to beings with civilizations far more advanced than humanity. The Drake equation speculates about the existence of intelligent life elsewhere in the universe. The science of extraterrestrial life in all its forms is known as astrobiology. Since the mid-20th century, active ongoing research has taken place to look for signs of extraterrestrial life. This encompasses a search for current and historic extraterrestrial life, and a narrower search for extraterrestrial intelligent life. Depending on the category of search, methods range from the analysis of telescope and specimen data to radios used to detect and send communication signals. The concept of extraterrestrial life, and particularly extraterrestrial intelligence, has had a major cultural impact, chiefly in works of science fiction. Over the years, science fiction has introduced a number of theoretical ideas, each having a wide range of possibilities. Many have piqued public interest in the possibilities of extraterrestrial life. One particular concern is the wisdom of attempting communication with extraterrestrial intelligence. Some encourage aggressive methods to make contact with intelligent extraterrestrial life. Others argue that doing so may give away the location of Earth, making an invasion possible in the future. Alien life, such as microorganisms, has been hypothesized to exist in the Solar System and throughout the universe. This hypothesis relies on the vast size and consistent physical laws of the observable universe. According to this argument, made by scientists such as Carl Sagan and Stephen Hawking, as well as notable personalities such as Winston Churchill, it would be improbable for life "not" to exist somewhere other than Earth. This argument is embodied in the Copernican principle, which states that Earth does not occupy a unique position in the Universe, and the mediocrity principle, which states that there is nothing special about life on Earth. The chemistry of life may have begun shortly after the Big Bang, 13.8 billion years ago, during a habitable epoch when the universe was only 10–17 million years old. Life may have emerged independently at many places throughout the universe. Alternatively, life may have formed less frequently, then spread—by meteoroids, for example—between habitable planets in a process called panspermia. In any case, complex organic molecules may have formed in the protoplanetary disk of dust grains surrounding the Sun before the formation of Earth. According to these studies, this process may occur outside Earth on several planets and moons of the Solar System and on planets of other stars. Since the 1950s, astronomers have proposed that "habitable zones" around stars are the most likely places for life to exist. Numerous discoveries in such zones since 2007 have generated numerical estimates of many billions of planets with Earth-like compositions. At first, only a few planets had been discovered in these zones. Nonetheless, on 4 November 2013, astronomers reported, based on "Kepler" space mission data, that there could be as many as 40 billion Earth-sized planets orbiting in the habitable zones of Sun-like stars and red dwarfs in the Milky Way, 11 billion of which may be orbiting Sun-like stars. The nearest such planet may be 12 light-years away, according to the scientists. 
Astrobiologists have also considered a "follow the energy" view of potential habitats. A study published in 2017 suggests that due to how complexity evolved in species on Earth, the level of predictability for alien evolution elsewhere would make them look similar to life on our planet. One of the study authors, Sam Levin, notes "Like humans, we predict that they are made-up of a hierarchy of entities, which all cooperate to produce an alien. At each level of the organism there will be mechanisms in place to eliminate conflict, maintain cooperation, and keep the organism functioning. We can even offer some examples of what these mechanisms will be." There is also research in assessing the capacity of life for developing intelligence. It has been suggested that this capacity arises with the number of potential niches a planet contains, and that the complexity of life itself is reflected in the information density of planetary environments, which in turn can be computed from its niches. Life on Earth requires water as a solvent in which biochemical reactions take place. Sufficient quantities of carbon and other elements, along with water, might enable the formation of living organisms on terrestrial planets with a chemical make-up and temperature range similar to that of Earth. Life based on ammonia (rather than water) has been suggested as an alternative, though this solvent appears less suitable than water. It is also conceivable that there are forms of life whose solvent is a liquid hydrocarbon, such as methane, ethane or propane. About 29 chemical elements play active roles in living organisms on Earth. About 95% of living matter is built upon only six elements: carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur. These six elements form the basic building blocks of virtually all life on Earth, whereas most of the remaining elements are found only in trace amounts. The unique characteristics of carbon make it unlikely that it could be replaced, even on another planet, to generate the biochemistry necessary for life. The carbon atom has the unique ability to make four strong chemical bonds with other atoms, including other carbon atoms. These covalent bonds have a direction in space, so that carbon atoms can form the skeletons of complex 3-dimensional structures with definite architectures such as nucleic acids and proteins. Carbon forms more compounds than all other elements combined. The great versatility of the carbon atom, and its abundance in the visible universe, makes it the element most likely to provide the bases—even exotic ones—for the chemical composition of life on other planets. Some bodies in the Solar System have the potential for an environment in which extraterrestrial life can exist, particularly those with possible subsurface oceans. Should life be discovered elsewhere in the Solar System, astrobiologists suggest that it will more likely be in the form of extremophile microorganisms. According to NASA's 2015 Astrobiology Strategy, "Life on other worlds is most likely to include microbes, and any complex living system elsewhere is likely to have arisen from and be founded upon microbial life. Important insights on the limits of microbial life can be gleaned from studies of microbes on modern Earth, as well as their ubiquity and ancestral characteristics." 
Researchers found a stunning array of subterranean organisms, mostly microbial, deep underground, and estimate that approximately 70 percent of the total number of Earth's bacteria and archaea organisms live within the Earth's crust. Rick Colwell, a member of the Deep Carbon Observatory team from Oregon State University, told the BBC: "I think it’s probably reasonable to assume that the subsurface of other planets and their moons are habitable, especially since we’ve seen here on Earth that organisms can function far away from sunlight using the energy provided directly from the rocks deep underground". Mars may have niche subsurface environments where microbial life might exist. A subsurface marine environment on Jupiter's moon Europa might be the most likely habitat in the Solar System, outside Earth, for extremophile microorganisms. The panspermia hypothesis proposes that life elsewhere in the Solar System may have a common origin. If extraterrestrial life were found on another body in the Solar System, it could have originated from Earth just as life on Earth could have been seeded from elsewhere (exogenesis). The first known mention of the term 'panspermia' was in the writings of the 5th-century BC Greek philosopher Anaxagoras. In the 19th century it was revived in modern form by several scientists, including Jöns Jacob Berzelius (1834), Kelvin (1871), Hermann von Helmholtz (1879) and, somewhat later, by Svante Arrhenius (1903). Sir Fred Hoyle (1915–2001) and Chandra Wickramasinghe (born 1939) are important proponents of the hypothesis who further contended that life forms continue to enter Earth's atmosphere, and may be responsible for epidemic outbreaks, new diseases, and the genetic novelty necessary for macroevolution. Directed panspermia concerns the deliberate transport of microorganisms in space, sent to Earth to start life here, or sent from Earth to seed new stellar systems with life. The Nobel Prize winner Francis Crick, along with Leslie Orgel, proposed that seeds of life may have been purposely spread by an advanced extraterrestrial civilization, but, considering an early "RNA world", Crick later noted that life may have originated on Earth. There may be scientific support, based on studies reported in March 2020, for considering that parts of the planet Mercury may have been habitable, and perhaps that life forms, albeit likely primitive microorganisms, may have existed on the planet. In the early 20th century, Venus was considered to be similar to Earth for habitability, but observations since the beginning of the Space Age revealed that the Venus surface temperature is around 465 °C (870 °F), making it inhospitable for Earth-like life. Likewise, the atmosphere of Venus is almost completely carbon dioxide, which can be toxic to Earth-like life. Between the altitudes of 50 and 65 kilometers, the pressure and temperature are Earth-like, and it may accommodate thermoacidophilic extremophile microorganisms in the acidic upper layers of the Venusian atmosphere. Furthermore, Venus likely had liquid water on its surface for at least a few million years after its formation. Life on Mars has long been the subject of speculation. Liquid water is widely thought to have existed on Mars in the past, and now can occasionally be found as low-volume liquid brines in shallow Martian soil. The origin of the potential biosignature of methane observed in Mars' atmosphere is unexplained, although hypotheses not involving life have also been proposed. 
There is evidence that Mars had a warmer and wetter past: dried-up river beds, polar ice caps, volcanoes, and minerals that form in the presence of water have all been found. Nevertheless, present conditions in Mars' subsurface may support life. Evidence obtained by the "Curiosity" rover studying Aeolis Palus, Gale Crater in 2013 strongly suggests an ancient freshwater lake that could have been a hospitable environment for microbial life. Current studies on Mars by the "Curiosity" and "Opportunity" rovers are searching for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on Mars is now a primary NASA objective. Ceres, the only dwarf planet in the asteroid belt, has a thin water-vapor atmosphere. The vapor could have been produced by ice volcanoes or by ice near the surface sublimating (transforming from solid to gas). Nevertheless, the presence of water on Ceres has led to speculation that life may be possible there. It is one of the few places in our solar system where scientists would like to search for possible signs of life. It is therefore possible that the dwarf planet could support very small microbes similar to bacteria; although Ceres might not have living things today, there could be signs it harbored life in the past. Carl Sagan and others in the 1960s and 1970s computed conditions for hypothetical microorganisms living in the atmosphere of Jupiter. The intense radiation and other conditions, however, do not appear to permit encapsulation and molecular biochemistry, so life there is thought unlikely. In contrast, some of Jupiter's moons may have habitats capable of sustaining life. Scientists have indications that heated subsurface oceans of liquid water may exist deep under the crusts of the three outer Galilean moons—Europa, Ganymede, and Callisto. The EJSM/Laplace mission is planned to determine the habitability of these environments. Jupiter's moon Europa has been subject to speculation about the existence of life due to the strong possibility of a liquid water ocean beneath its ice surface. Hydrothermal vents on the bottom of the ocean, if they exist, may warm the water and could be capable of supplying nutrients and energy to microorganisms. It is also possible that Europa could support aerobic macrofauna using oxygen created by cosmic rays impacting its surface ice. The case for life on Europa was greatly enhanced in 2011 when it was discovered that vast lakes exist within Europa's thick, icy shell. Scientists found that ice shelves surrounding the lakes appear to be collapsing into them, thereby providing a mechanism through which life-forming chemicals created in sunlit areas on Europa's surface could be transferred to its interior. On 11 December 2013, NASA reported the detection of "clay-like minerals" (specifically, phyllosilicates), often associated with organic materials, on the icy crust of Europa. The presence of the minerals may have been the result of a collision with an asteroid or comet, according to the scientists. The "Europa Clipper", which would assess the habitability of Europa, is planned for launch in 2024. Europa's subsurface ocean is considered the best target for the discovery of life. Like Jupiter, Saturn is not likely to host life. 
However, Titan and Enceladus have been speculated to have possible habitats supportive of life. Enceladus, a moon of Saturn, has some of the conditions for life, including geothermal activity and water vapor, as well as possible under-ice oceans heated by tidal effects. The "Cassini–Huygens" probe detected carbon, hydrogen, nitrogen and oxygen—all key elements for supporting life—during its 2005 flyby through one of Enceladus's geysers spewing ice and gas. The temperature and density of the plumes indicate a warmer, watery source beneath the surface. Titan, the largest moon of Saturn, is the only known moon in the Solar System with a significant atmosphere. Data from the "Cassini–Huygens" mission refuted the hypothesis of a global hydrocarbon ocean, but later demonstrated the existence of liquid hydrocarbon lakes in the polar regions—the first stable bodies of surface liquid discovered outside Earth. Analysis of data from the mission has uncovered aspects of atmospheric chemistry near the surface that are consistent with—but do not prove—the hypothesis that organisms there, if present, could be consuming hydrogen, acetylene and ethane, and producing methane. NASA's Dragonfly mission, a VTOL-capable rotorcraft with a launch date set for 2026, is slated to land on Titan in the mid-2030s. Small Solar System bodies have also been speculated to host habitats for extremophiles. Fred Hoyle and Chandra Wickramasinghe have proposed that microbial life might exist on comets and asteroids. Models of heat retention and heating via radioactive decay in smaller icy Solar System bodies suggest that Rhea, Titania, Oberon, Triton, Pluto, Eris, Sedna, and Orcus may have oceans underneath solid icy crusts approximately 100 km thick. Of particular interest in these cases is the fact that the models indicate that the liquid layers are in direct contact with the rocky core, which allows efficient mixing of minerals and salts into the water. This is in contrast with the oceans that may be inside larger icy satellites like Ganymede, Callisto, or Titan, where layers of high-pressure phases of ice are thought to underlie the liquid water layer. Hydrogen sulfide has been proposed as a hypothetical solvent for life and is quite plentiful on Jupiter's moon Io, and may be in liquid form a short distance below the surface. The scientific search for extraterrestrial life is being carried out both directly and indirectly. To date, 3,667 exoplanets in 2,747 systems have been identified, and other planets and moons in our own solar system hold the potential for hosting primitive life such as microorganisms. Scientists search for biosignatures within the Solar System by studying planetary surfaces and examining meteorites. Some claim to have identified evidence that microbial life has existed on Mars. An experiment on the two Viking Mars landers reported gas emissions from heated Martian soil samples that some scientists argue are consistent with the presence of living microorganisms. Lack of corroborating evidence from other experiments on the same samples suggests that a non-biological reaction is a more likely hypothesis. In 1996, a controversial report stated that structures resembling nanobacteria were discovered in a meteorite, ALH84001, formed of rock ejected from Mars. In February 2005, NASA scientists reported that they may have found some evidence of present life on Mars. 
The two scientists, Carol Stoker and Larry Lemke of NASA's Ames Research Center, based their claim on methane signatures found in Mars's atmosphere resembling the methane production of some forms of primitive life on Earth, as well as on their own study of primitive life near the Rio Tinto river in Spain. NASA officials soon distanced NASA from the scientists' claims, and Stoker herself backed off from her initial assertions. Though such methane findings are still debated, some scientists continue to support the possibility of life on Mars. In November 2011, NASA launched the Mars Science Laboratory, which landed the "Curiosity" rover on Mars. It is designed to assess the past and present habitability of Mars using a variety of scientific instruments. The rover landed on Mars at Gale Crater in August 2012. The Gaia hypothesis stipulates that any planet with a robust population of life will have an atmosphere in chemical disequilibrium, which is relatively easy to determine from a distance by spectroscopy. However, significant advances in the ability to find and resolve light from smaller rocky worlds near their star are necessary before such spectroscopic methods can be used to analyze extrasolar planets. To that effect, the Carl Sagan Institute was founded in 2014 and is dedicated to the atmospheric characterization of exoplanets in circumstellar habitable zones. Planetary spectroscopic data will be obtained from telescopes like WFIRST and ELT. In August 2011, findings by NASA, based on studies of meteorites found on Earth, suggested that DNA and RNA components (adenine, guanine and related organic molecules), building blocks for life as we know it, may be formed extraterrestrially in outer space. In October 2011, scientists reported that cosmic dust contains complex organic matter ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. One of the scientists suggested that these compounds may have been related to the development of life on Earth and said that, "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." In August 2012, and in a world first, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary "IRAS 16293-2422", which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. Projects such as SETI are monitoring the galaxy for electromagnetic interstellar communications from civilizations on other worlds. If there is an advanced extraterrestrial civilization, there is no guarantee that it is transmitting radio communications in the direction of Earth or that this information could be interpreted as such by humans. The length of time required for a signal to travel across the vastness of space means that any signal detected would come from the distant past. The presence of heavy elements in a star's light-spectrum is another potential biosignature; such elements would (in theory) be found if the star was being used as an incinerator/repository for nuclear waste products. 
Some astronomers search for extrasolar planets that may be conducive to life, narrowing the search to terrestrial planets within the habitable zone of their star. Since 1992 over four thousand exoplanets have been discovered. The extrasolar planets so far discovered range in size from terrestrial planets similar to Earth to gas giants larger than Jupiter. The number of observed exoplanets is expected to increase greatly in the coming years. The "Kepler" space telescope has also detected a few thousand candidate planets, of which about 11% may be false positives. There is at least one planet on average per star. About 1 in 5 Sun-like stars have an "Earth-sized" planet in the habitable zone, with the nearest expected to be within 12 light-years of Earth. Assuming 200 billion stars in the Milky Way, that would be 11 billion potentially habitable Earth-sized planets in the Milky Way, rising to 40 billion if red dwarfs are included. The rogue planets in the Milky Way possibly number in the trillions. The nearest known exoplanet is Proxima Centauri b, located about 4.2 light-years from Earth in the southern constellation of Centaurus. The least massive planet known is PSR B1257+12 A, which is about twice the mass of the Moon. The most massive planet listed on the NASA Exoplanet Archive is DENIS-P J082303.1-491201 b, about 29 times the mass of Jupiter, although according to most definitions of a planet, it is too massive to be a planet and may be a brown dwarf instead. Almost all of the planets detected so far are within the Milky Way, but there have also been a few possible detections of extragalactic planets. The study of planetary habitability also considers a wide range of other factors in determining the suitability of a planet for hosting life. One sign that a planet probably already contains life is the presence of an atmosphere with significant amounts of oxygen, since that gas is highly reactive and generally would not last long without constant replenishment. This replenishment occurs on Earth through photosynthetic organisms. One way to analyze the atmosphere of an exoplanet is through spectrography when it transits its star, though this might only be feasible with dim stars like white dwarfs. The science of astrobiology considers life on Earth as well, and in the broader astronomical context. In 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia, when the young Earth was about 400 million years old. According to one of the researchers, "If life arose relatively quickly on Earth, then it could be common in the universe." In 1961, University of California, Santa Cruz, astronomer and astrophysicist Frank Drake devised the Drake equation as a way to stimulate scientific dialogue at a meeting on the search for extraterrestrial intelligence (SETI). The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy. The equation is best understood not as an equation in the strictly mathematical sense, but as a summary of all the various concepts which scientists must contemplate when considering the question of life elsewhere. The Drake equation is N = R* × fp × ne × fl × fi × fc × L, where N is the number of civilizations in the Milky Way with which communication might be possible, R* is the average rate of star formation in the galaxy, fp is the fraction of those stars that have planets, ne is the number of planets per planetary system with an environment suitable for life, fl is the fraction of suitable planets on which life actually appears, fi is the fraction of life-bearing planets on which intelligent life emerges, fc is the fraction of civilizations that develop technology releasing detectable signals into space, and L is the length of time over which such civilizations release those signals. 
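The arithmetic behind the equation is a straightforward product of the seven factors. The minimal Python sketch below shows how the calculation works; every parameter value in it is a purely illustrative placeholder chosen for demonstration, not an estimate asserted by Drake or by the astronomical literature.

# Minimal sketch of evaluating the Drake equation N = R* * fp * ne * fl * fi * fc * L.
# All parameter values below are illustrative placeholders, not accepted estimates.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N, the number of communicating civilizations implied by the chosen factors."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Hypothetical inputs chosen only to demonstrate the arithmetic:
n = drake_equation(r_star=1.0,       # stars formed per year in the galaxy
                   f_p=0.5,          # fraction of stars with planetary systems
                   n_e=2,            # suitable planets per planetary system
                   f_l=1.0,          # fraction of suitable planets developing life
                   f_i=0.01,         # fraction of those developing intelligence
                   f_c=0.1,          # fraction emitting detectable signals
                   lifetime=10_000)  # years over which signals are emitted
print(f"Implied number of communicating civilizations: {n:.0f}")  # prints 10 with these inputs

Because the factors simply multiply, halving or doubling any single estimate rescales the result proportionally, which is part of why the equation is treated as a framework for discussion rather than a predictive formula.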
Drake proposed numerical estimates for each of these factors, but the values on the right side of the equation are agreed to be speculative and open to substitution. The Drake equation has proved controversial since several of its factors are uncertain and based on conjecture, not allowing conclusions to be made. This has led critics to label the equation a guesstimate, or even meaningless. Based on observations from the "Hubble Space Telescope", there are between 125 and 250 billion galaxies in the observable universe. It is estimated that at least ten percent of all Sun-like stars have a system of planets, i.e. roughly 6.25 billion billion stars with planets orbiting them in the observable universe. Even if it is assumed that only one out of a billion of these stars has planets supporting life, there would be some 6.25 billion life-supporting planetary systems in the observable universe. A 2013 study based on results from the "Kepler" spacecraft estimated that the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets. Also based on "Kepler" data, scientists estimate that at least one in six stars has an Earth-sized planet. The apparent contradiction between high estimates of the probability of the existence of extraterrestrial civilizations and the lack of evidence for such civilizations is known as the Fermi paradox. Cosmic pluralism, the plurality of worlds, or simply pluralism, describes the philosophical belief in numerous "worlds" in addition to Earth, which might harbor extraterrestrial life. Before the development of the heliocentric theory and a recognition that the Sun is just one of many stars, the notion of pluralism was largely mythological and philosophical. The earliest recorded assertion of extraterrestrial human life is found in ancient scriptures of Jainism. There are multiple "worlds" mentioned in Jain scriptures that support human life. These include "Bharat Kshetra", "Mahavideh Kshetra", "Airavat Kshetra", "Hari kshetra", etc. Medieval Muslim writers like Fakhr al-Din al-Razi and Muhammad al-Baqir supported cosmic pluralism on the basis of the Qur'an. With the scientific and Copernican revolutions, and later, during the Enlightenment, cosmic pluralism became a mainstream notion, supported by the likes of Bernard le Bovier de Fontenelle in his 1686 work "Entretiens sur la pluralité des mondes". Pluralism was also championed by philosophers such as John Locke, Giordano Bruno and astronomers such as William Herschel. The astronomer Camille Flammarion promoted the notion of cosmic pluralism in his 1862 book "La pluralité des mondes habités". None of these notions of pluralism were based on any specific observation or scientific information. There was a dramatic shift in thinking initiated by the invention of the telescope and the Copernican assault on geocentric cosmology. Once it became clear that Earth was merely one planet amongst countless bodies in the universe, the theory of extraterrestrial life started to become a topic in the scientific community. The best known early-modern proponent of such ideas was the Italian philosopher Giordano Bruno, who argued in the 16th century for an infinite universe in which every star is surrounded by its own planetary system. Bruno wrote that other worlds "have no less virtue nor a nature different to that of our earth" and, like Earth, "contain animals and inhabitants". In the early 17th century, the Czech astronomer Anton Maria Schyrleus of Rheita mused that "if Jupiter has (...) 
inhabitants (...) they must be larger and more beautiful than the inhabitants of Earth, in proportion to the [characteristics] of the two spheres". In Baroque literature such as "The Other World: The Societies and Governments of the Moon" by Cyrano de Bergerac, extraterrestrial societies are presented as humorous or ironic parodies of earthly society. The didactic poet Henry More took up the classical theme of the Greek Democritus in "Democritus Platonissans, or an Essay Upon the Infinity of Worlds" (1647). In "The Creation: a Philosophical Poem in Seven Books" (1712), Sir Richard Blackmore observed: "We may pronounce each orb sustains a race / Of living things adapted to the place". With the new relative viewpoint that the Copernican revolution had wrought, he suggested "our world's sunne / Becomes a starre elsewhere". Fontenelle's "Conversations on the Plurality of Worlds" (translated into English in 1686) offered similar excursions on the possibility of extraterrestrial life, expanding, rather than denying, the creative sphere of a Maker. The possibility of extraterrestrials remained a widespread speculation as scientific discovery accelerated. William Herschel, the discoverer of Uranus, was one of many 18th–19th-century astronomers who believed that the Solar System is populated by alien life. Other luminaries of the period who championed "cosmic pluralism" included Immanuel Kant and Benjamin Franklin. At the height of the Enlightenment, even the Sun and Moon were considered candidates for extraterrestrial inhabitants. Speculation about life on Mars increased in the late 19th century, following telescopic observation of apparent Martian canals—which soon, however, turned out to be optical illusions. Despite this, in 1895, American astronomer Percival Lowell published his book "Mars," followed by "Mars and its Canals" in 1906, proposing that the canals were the work of a long-gone civilization. The idea of life on Mars led British writer H. G. Wells to write the novel "The War of the Worlds" in 1897, telling of an invasion by aliens from Mars who were fleeing the planet's desiccation. Spectroscopic analysis of Mars's atmosphere began in earnest in 1894, when U.S. astronomer William Wallace Campbell showed that neither water nor oxygen was present in the Martian atmosphere. By 1909 better telescopes and the best perihelic opposition of Mars since 1877 conclusively put an end to the canal hypothesis. The science fiction genre, although not so named at the time, developed during the late 19th century. Jules Verne's "Around the Moon" (1870) features a discussion of the possibility of life on the Moon, but with the conclusion that it is barren. Stories involving extraterrestrials are found in, for example, Garrett P. Serviss's "Edison's Conquest of Mars" (1898), an unauthorized sequel to Wells's "The War of the Worlds" that stands at the beginning of the popular idea of the "Martian invasion" of Earth prominent in 20th-century pop culture. Most unidentified flying objects or UFO sightings can be readily explained as sightings of Earth-based aircraft, known astronomical objects, or as hoaxes. Nonetheless, a certain fraction of the public believe that UFOs might actually be of extraterrestrial origin, and the notion has had influence on popular culture. 
The possibility of extraterrestrial life on the Moon was ruled out in the 1960s, and during the 1970s it became clear that most of the other bodies of the Solar System do not harbor highly developed life, although the question of primitive life on bodies in the Solar System remains open. The failure so far of the SETI program to detect an intelligent radio signal after decades of effort has at least partially dimmed the prevailing optimism of the beginning of the space age. Nonetheless, belief in extraterrestrial beings continues to be voiced in pseudoscience, conspiracy theories, and in popular folklore, notably the legends surrounding "Area 51". It has become a pop culture trope given less-than-serious treatment in popular entertainment. In the words of SETI's Frank Drake, "All we know for sure is that the sky is not littered with powerful microwave transmitters". Drake noted that it is entirely possible that advanced technology results in communication being carried out in some way other than conventional radio transmission. At the same time, the data returned by space probes, and giant strides in detection methods, have allowed science to begin delineating habitability criteria on other worlds, and to confirm that other planets, at least, are plentiful, though aliens remain a question mark. The Wow! signal, detected in 1977 by a SETI project, remains a subject of speculative debate. In 2000, geologist and paleontologist Peter Ward and astrobiologist Donald Brownlee published a book entitled "Rare Earth: Why Complex Life is Uncommon in the Universe". In it, they discussed the Rare Earth hypothesis, in which they claim that Earth-like life is rare in the universe, whereas microbial life is common. Ward and Brownlee are open to the idea of evolution on other planets that is not based on essential Earth-like characteristics (such as DNA and carbon). Theoretical physicist Stephen Hawking in 2010 warned that humans should not try to contact alien life forms. He warned that aliens might pillage Earth for resources. "If aliens visit us, the outcome would be much as when Columbus landed in America, which didn't turn out well for the Native Americans", he said. Jared Diamond had earlier expressed similar concerns. In November 2011, the White House released an official response to two petitions asking the U.S. government to acknowledge formally that aliens have visited Earth and to disclose any intentional withholding of government interactions with extraterrestrial beings. According to the response, "The U.S. government has no evidence that any life exists outside our planet, or that an extraterrestrial presence has contacted or engaged any member of the human race." Also, according to the response, there is "no credible information to suggest that any evidence is being hidden from the public's eye." The response noted "odds are pretty high" that there may be life on other planets but "the odds of us making contact with any of them—especially any intelligent ones—are extremely small, given the distances involved." In 2013, the exoplanet Kepler-62f was discovered, along with Kepler-62e and Kepler-62c. A related special issue of the journal "Science", published earlier, described the discovery of the exoplanets. On 17 April 2014, the discovery of the Earth-size exoplanet Kepler-186f, 500 light-years from Earth, was publicly announced; it is the first Earth-size planet to be discovered in the habitable zone and it has been hypothesized that there may be liquid water on its surface. 
On 13 February 2015, scientists (including Geoffrey Marcy, Seth Shostak, Frank Drake and David Brin) at a convention of the American Association for the Advancement of Science discussed Active SETI and whether transmitting a message to possible intelligent extraterrestrials in the Cosmos was a good idea; one result was a statement, signed by many, that a "worldwide scientific, political and humanitarian discussion must occur before any message is sent". On 20 July 2015, British physicist Stephen Hawking and Russian billionaire Yuri Milner, along with the SETI Institute, announced a well-funded effort, called the Breakthrough Initiatives, to expand efforts to search for extraterrestrial life. The group contracted the services of the 100-meter Robert C. Byrd Green Bank Telescope in West Virginia in the United States and the 64-meter Parkes Telescope in New South Wales, Australia. In June 2020, astronomers from the University of Nottingham reported an estimate, based on the latest astrophysical information, that about 30 "active communicating intelligent civilizations", or Communicating Extra-Terrestrial Intelligent (CETI) civilizations, should exist in our own Milky Way galaxy, none of them within our current ability to detect for reasons including distance and size.
https://en.wikipedia.org/wiki?curid=9588
European Strategic Program on Research in Information Technology European Strategic Programme on Research in Information Technology (ESPRIT) was a series of integrated programmes of information technology research and development projects and industrial technology transfer measures. It was a European Union initiative managed by the Directorate General for Industry (DG III) of the European Commission. Five ESPRIT programmes (ESPRIT 0 to ESPRIT 4) ran consecutively from 1983 to 1998. ESPRIT 4 was succeeded by the Information Society Technologies (IST) programme in 1999. One of the projects supported by ESPRIT was StatLog, a comparative study of machine learning and statistical classification methods whose results were published in a 1994 volume edited by D. Michie, D. J. Spiegelhalter, and C. C. Taylor. Originally published by Ellis Horwood, the book is now out of print; the editors, who hold the copyright, have made the material freely available on the web at http://www1.maths.leeds.ac.uk/~charles/statlog/
https://en.wikipedia.org/wiki?curid=9589
E. E. Cummings Edward Estlin "E. E." Cummings (October 14, 1894 – September 3, 1962), often styled as e e cummings, as he is attributed in many of his published works, was an American poet, painter, essayist, author, and playwright. He wrote approximately 2,900 poems, two autobiographical novels, four plays, and several essays. He is often regarded as one of the most important American poets of the 20th century. Cummings is associated with modernist free-form poetry. Much of his work has idiosyncratic syntax and uses lower case spellings for poetic expression. Edward Estlin Cummings was born on October 14, 1894 in Cambridge, Massachusetts to Edward Cummings and the former Rebecca Haswell Clarke, a well-known Unitarian couple in the city. His father was a professor at Harvard University who later became nationally known as the minister of South Congregational Church (Unitarian) in Boston, Massachusetts. His mother, who loved to spend time with her children, played games with Cummings and his sister, Elizabeth. From an early age, Cummings' parents supported his creative gifts. Cummings wrote poems and drew as a child, and he often played outdoors with the many other children who lived in his neighborhood. He grew up in the company of such family friends as the philosophers William James and Josiah Royce. Many of Cummings' summers were spent on Silver Lake in Madison, New Hampshire, where his father had built two houses along the eastern shore. The family ultimately purchased the nearby Joy Farm where Cummings had his primary summer residence. He expressed transcendental leanings his entire life. As he matured, Cummings moved to an "I, Thou" relationship with God. His journals are replete with references to ""le bon Dieu"", as well as prayers for inspiration in his poetry and artwork (such as "Bon Dieu! may i some day do something truly great. amen."). Cummings "also prayed for strength to be his essential self ('may I be I is the only prayer—not may I be great or good or beautiful or wise or strong'), and for relief of spirit in times of depression ('almighty God! I thank thee for my soul; & may I never die spiritually into a mere mind through disease of loneliness')." Cummings wanted to be a poet from childhood and wrote poetry daily from age 8 to 22, exploring assorted forms. He graduated from Harvard University with a Bachelor of Arts degree "magna cum laude" and Phi Beta Kappa in 1915 and received a Master of Arts degree from the university in 1916. In his studies at Harvard, he developed an interest in modern poetry, which ignored conventional grammar and syntax, while aiming for a dynamic use of language. Upon graduating, he worked for a book dealer. In 1917, with the First World War ongoing in Europe, Cummings enlisted in the Norton-Harjes Ambulance Corps. On the boat to France, he met William Slater Brown and they would become friends. Due to an administrative error, Cummings and Brown did not receive an assignment for five weeks, a period they spent exploring Paris. Cummings fell in love with the city, to which he would return throughout his life. During their service in the ambulance corps, the two young writers sent letters home that drew the attention of the military censors. They were known to prefer the company of French soldiers over fellow ambulance drivers. The two openly expressed anti-war views; Cummings spoke of his lack of hatred for the Germans. 
On September 21, 1917, five months after starting his belated assignment, Cummings and William Slater Brown were arrested by the French military on suspicion of espionage and undesirable activities. They were held for three and a half months in a military detention camp at the "Dépôt de Triage", in La Ferté-Macé, Orne, Normandy. They were imprisoned with other detainees in a large room. Cummings' father failed to obtain his son's release through diplomatic channels, and in December 1917 he wrote a letter to President Woodrow Wilson. Cummings was released on December 19, 1917, and Brown was released two months later. Cummings used his prison experience as the basis for his novel, "The Enormous Room" (1922), about which F. Scott Fitzgerald said, "Of all the work by young men who have sprung up since 1920 one book survives—"The Enormous Room" by e e cummings... Those few who cause books to live have not been able to endure the thought of its mortality." Cummings returned to the United States on New Year's Day 1918. Later in 1918 he was drafted into the army. He served in the 12th Division at Camp Devens, Massachusetts, until November 1918. Cummings returned to Paris in 1921 and lived there for two years before returning to New York. His collection "Tulips and Chimneys" was published in 1923 and his inventive use of grammar and syntax is evident. The book was heavily cut by his editor. "XLI Poems" was published in 1925. With these collections, Cummings made his reputation as an avant garde poet. During the rest of the 1920s and 1930s, Cummings returned to Paris a number of times, and traveled throughout Europe, meeting, among others, artist Pablo Picasso. In 1931 Cummings traveled to the Soviet Union, recounting his experiences in "Eimi", published two years later. During these years Cummings also traveled to Northern Africa and Mexico. He worked as an essayist and portrait artist for "Vanity Fair" magazine (1924–1927). In 1926, Cummings' parents were in a car crash; only his mother survived, although she was severely injured. Cummings later described the crash in the following passage from his "i: six nonlectures" series given at Harvard (as part of the Charles Eliot Norton Lectures) in 1952 and 1953: A locomotive cut the car in half, killing my father instantly. When two brakemen jumped from the halted train, they saw a woman standing – dazed but erect – beside a mangled machine; with blood spouting (as the older said to me) out of her head. One of her hands (the younger added) kept feeling her dress, as if trying to discover why it was wet. These men took my sixty-six-year old mother by the arms and tried to lead her toward a nearby farmhouse; but she threw them off, strode straight to my father's body, and directed a group of scared spectators to cover him. When this had been done (and only then) she let them lead her away. His father's death had a profound effect on Cummings, who entered a new period in his artistic life. He began to focus on more important aspects of life in his poetry. He started this new period by paying homage to his father in the poem "my father moved through dooms of love". In the 1930s Samuel Aiwaz Jacobs was Cummings' publisher; he had started the Golden Eagle Press after working as a typographer and publisher. In 1952, his alma mater, Harvard University, awarded Cummings an honorary seat as a guest professor. The Charles Eliot Norton Lectures he gave in 1952 and 1955 were later collected as "i: six nonlectures". 
Cummings spent the last decade of his life traveling, fulfilling speaking engagements, and spending time at his summer home, Joy Farm, in Silver Lake, New Hampshire. He died of a stroke on September 3, 1962, at the age of 67 at Memorial Hospital in North Conway, New Hampshire. Cummings was buried at Forest Hills Cemetery in Boston, Massachusetts. At the time of his death, Cummings was recognized as the second most read poet in the United States, behind Robert Frost. Cummings' papers are held at the Houghton Library at Harvard University and the Harry Ransom Center at the University of Texas at Austin. Cummings was married briefly twice, first to Elaine Orr, then to Anne Minnerly Barton. His longest relationship lasted more than three decades, a common-law marriage to Marion Morehouse. Cummings' first marriage, to Elaine Orr, began as a love affair in 1918 while she was still married to Scofield Thayer, one of Cummings' friends from Harvard. During this time he wrote a good deal of his erotic poetry. After divorcing Thayer, Orr married Cummings on March 19, 1924. The couple had a daughter together out of wedlock. However, the couple separated after two months of marriage and divorced less than nine months later. Cummings married his second wife Anne Minnerly Barton on May 1, 1929, and they separated three years later in 1932. That same year, Minnerly obtained a Mexican divorce; it was not officially recognized in the United States until August 1934. Anne died in 1970 aged 72. In 1934, after his separation from his second wife, Cummings met Marion Morehouse, a fashion model and photographer. Although it is not clear whether the two were ever formally married, Morehouse lived with Cummings in a common-law marriage until his death in 1962. She died on May 18, 1969, while living at 4 Patchin Place, Greenwich Village, New York City, where Cummings had resided since September 1924. According to his testimony in "EIMI", Cummings had little interest in politics until his trip to the Soviet Union in 1931. He subsequently shifted rightward on many political and social issues. Despite his radical and bohemian public image, he was a Republican, and later, an ardent supporter of Joseph McCarthy. Despite Cummings' familiarity with avant-garde styles (likely affected by the "Calligrammes" of French poet Apollinaire, according to a contemporary observation), much of his work is quite traditional. Many of his poems are sonnets, albeit often with a modern twist. He occasionally used the blues form and acrostics. Cummings' poetry often deals with themes of love and nature, as well as the relationship of the individual to the masses and to the world. His poems are also often rife with satire. While his poetic forms and themes share an affinity with the Romantic tradition, Cummings' work universally shows a particular idiosyncrasy of syntax, or way of arranging individual words into larger phrases and sentences. Many of his most striking poems do not involve any typographical or punctuation innovations at all, but purely syntactic ones. As well as being influenced by notable modernists, including Gertrude Stein and Ezra Pound, Cummings in his early work drew upon the imagist experiments of Amy Lowell. Later, his visits to Paris exposed him to Dada and Surrealism, which he reflected in his work. He began to rely on symbolism and allegory, where he once had used simile and metaphor. 
In his later work, he rarely used comparisons that required objects that were not previously mentioned in the poem, choosing to use a symbol instead. Due to this, his later poetry is "frequently more lucid, more moving, and more profound than his earlier." Cummings also liked to incorporate imagery of nature and death into much of his poetry. While some of his poetry is free verse (with no concern for rhyme or meter), many have a recognizable sonnet structure of 14 lines, with an intricate rhyme scheme. A number of his poems feature a typographically exuberant style, with words, parts of words, or punctuation symbols scattered across the page, often making little sense until read aloud, at which point the meaning and emotion become clear. Cummings, who was also a painter, understood the importance of presentation, and used typography to "paint a picture" with some of his poems. The seeds of Cummings' unconventional style appear well established even in his earliest work. At age six, he wrote to his father: FATHER DEAR. BE, YOUR FATHER-GOOD AND GOOD, HE IS GOOD NOW, IT IS NOT GOOD TO SEE IT RAIN, FATHER DEAR IS, IT, DEAR, NO FATHER DEAR, LOVE, YOU DEAR, ESTLIN. Following his autobiographical novel, "The Enormous Room", Cummings' first published work was a collection of poems titled "Tulips and Chimneys" (1923). This work was the public's first encounter with his characteristic eccentric use of grammar and punctuation. Some of Cummings' most famous poems do not involve much, if any, odd typography or punctuation, but still carry his unmistakable style, particularly in unusual and impressionistic word order. Cummings' work often does not proceed in accordance with the conventional combinatorial rules that generate typical English sentences (for example, "they sowed their isn't"). In addition, a number of Cummings' poems feature, in part or in whole, intentional misspellings, and several incorporate phonetic spellings intended to represent particular dialects. Cummings also made use of inventive formations of compound words, as in "in Just" which features words such as "mud-luscious", "puddle-wonderful", and "eddieandbill." This poem is part of a sequence of poems titled "Chansons Innocentes"; it has many references comparing the "balloonman" to Pan, the mythical creature that is half-goat and half-man. Literary critic R.P. Blackmur has commented that this use of language is "frequently unintelligible because [Cummings] disregards the historical accumulation of meaning in words in favour of merely private and personal associations." Fellow poet Edna St. Vincent Millay, in her equivocal letter recommending Cummings for the Guggenheim Fellowship he was awarded in 1934, expressed her frustration at his opaque symbolism. "[I]f he prints and offers for sale poetry which he is quite content should be, after hours of sweating concentration, inexplicable from any point of view to a person as intelligent as myself, then he does so with a motive which is frivolous from the point of view of art, and should not be helped or encouraged by any serious person or group of persons... there is fine writing and powerful writing (as well as some of the most pompous nonsense I ever let slip to the floor with a wide yawn)... What I propose, then, is this: that you give Mr. Cummings enough rope. He may hang himself; or he may lasso a unicorn." 
Many of Cummings' poems are satirical and address social issues but have an equal or even stronger bias toward romanticism: time and again his poems celebrate love, sex, and the season of rebirth. Cummings also wrote children's books and novels. A notable example of his versatility is an introduction he wrote for a collection of the comic strip "Krazy Kat". Cummings is known for controversial subject matter, as he wrote numerous erotic poems. He also sometimes included ethnic slurs in his writing. For instance, in his 1950 collection "Xaipe: Seventy-One Poems", Cummings published two poems containing words that caused outrage in some quarters. Cummings biographer Catherine Reef notes of the controversy: Friends begged Cummings to reconsider publishing these poems, and the book's editor pleaded with him to withdraw them, but he insisted that they stay. All the fuss perplexed him. The poems were commenting on prejudice, he pointed out, and not condoning it. He intended to show how derogatory words cause people to see others in terms of stereotypes rather than as individuals. "America (which turns Hungarian into 'hunky' & Irishman into 'mick' and Norwegian into 'square- head') is to blame for 'kike,'" he said. William Carlos Williams spoke out in his defense. During his lifetime, Cummings published four plays. "HIM", a three-act play, was first produced in 1928 by the Provincetown Players in New York City. The production was directed by James Light. The play's main characters are "Him", a playwright, portrayed by William Johnstone, and "Me", his girlfriend, portrayed by Erin O'Brien-Moore. Cummings said of the unorthodox play: Relax and give the play a chance to strut its stuff—relax, stop wondering what it is all 'about'—like many strange and familiar things, Life included, this play isn't 'about,' it simply is. . . . Don't try to enjoy it, let it try to enjoy you. DON'T TRY TO UNDERSTAND IT, LET IT TRY TO UNDERSTAND YOU." "Anthropos, or the Future of Art" is a short, one-act play that Cummings contributed to the anthology "Whither, Whither or After Sex, What? A Symposium to End Symposium". The play consists of dialogue between Man, the main character, and three "infrahumans", or inferior beings. The word "anthropos" is the Greek word for "man", in the sense of "mankind". "Tom, A Ballet" is a ballet based on "Uncle Tom's Cabin". The ballet is detailed in a "synopsis" as well as descriptions of four "episodes", which were published by Cummings in 1935. It has never been performed. "Santa Claus: A Morality" was probably Cummings' most successful play. It is an allegorical Christmas fantasy presented in one act of five scenes. The play was inspired by his daughter Nancy, with whom he was reunited in 1946. It was first published in the Harvard College magazine, "Wake". The play's main characters are Santa Claus, his family (Woman and Child), Death, and Mob. At the outset of the play, Santa Claus' family has disintegrated due to their lust for knowledge (Science). After a series of events, however, Santa Claus' faith in love and his rejection of the materialism and disappointment he associates with Science are reaffirmed, and he is reunited with Woman and Child. Cummings' publishers and others have often echoed the unconventional orthography in his poetry by writing his name in lowercase and without periods (full stops), but normal orthography for his name (uppercase and periods) is supported by scholarship and preferred by publishers today. 
Cummings himself used both the lowercase and capitalized versions, though he most often signed his name with capitals. The use of lowercase for his initials was popularized in part by the titles of some books, particularly in the 1960s, which printed his name in lower case on the cover and spine. In the preface to "E. E. Cummings: The Growth of a Writer" by Norman Friedman, critic Harry T. Moore notes Cummings "had his name put legally into lower case, and in his later books the titles and his name were always in lower case." According to Cummings' widow, however, this is incorrect. She wrote to Friedman: "You should not have allowed H. Moore to make such a stupid & childish statement about Cummings & his signature." On February 27, 1951, Cummings wrote to his French translator D. Jon Grossman that he preferred the use of upper case for the particular edition they were working on. One Cummings scholar believes that on the rare occasions that Cummings signed his name in all lowercase, he may have intended it as a gesture of humility, not as an indication that it was the preferred orthography for others to use. Additionally, the "Chicago Manual of Style", which prescribes favoring non-standard capitalization of names in accordance with the bearer's strongly stated preference, notes "E. E. Cummings can be safely capitalized; it was one of his publishers, not he himself, who lowercased his name." In 1943, modern dancer and choreographer Jean Erdman presented "The Transformations of Medusa, Forever and Sunsmell" with a commissioned score by John Cage and a spoken text from the title poem by E. E. Cummings, sponsored by the Arts Club of Chicago. Erdman also choreographed "Twenty Poems" (1960), a cycle of E. E. Cummings' poems for eight dancers and one actor, with a commissioned score by Teiji Ito. It was performed in the round at the Circle in the Square Theatre in Greenwich Village. Numerous composers have set Cummings' poems to music. During his lifetime, Cummings received numerous awards in recognition of his work.
https://en.wikipedia.org/wiki?curid=9591
East River The East River is a salt water tidal estuary in New York City. The waterway, which is actually not a river despite its name, connects Upper New York Bay on its south end to Long Island Sound on its north end. It separates the borough of Queens on Long Island from the Bronx on the North American mainland, and also divides Manhattan from Queens and Brooklyn, which is also on Long Island. Because of its connection to Long Island Sound, it was once also known as the Sound River. The tidal strait changes its direction of flow frequently, and is subject to strong fluctuations in its current, which are accentuated by its narrowness and variety of depths. The waterway is navigable for its entire length, and was historically the center of maritime activities in the city. Technically a drowned valley, like the other waterways around New York City, the strait was formed approximately 11,000 years ago at the end of the Wisconsin glaciation. The distinct change in the shape of the strait between the lower and upper portions is evidence of this glacial activity. The upper portion (from Long Island Sound to Hell Gate), running largely perpendicular to the glacial motion, is wide, meandering, and has deep narrow bays on both banks, scoured out by the glacier's movement. The lower portion (from Hell Gate to New York Bay) runs north–south, parallel to the glacial motion. It is much narrower, with straight banks. The bays that exist, as well as those that used to exist before being filled in by human activity, are largely wide and shallow. The section known as "Hell Gate" – from the Dutch name "Hellegat", meaning either "bright strait" or "clear opening", given to the entire river in 1614 by explorer Adriaen Block when he passed through it in his ship "Tyger" – is a narrow, turbulent, and particularly treacherous stretch of the river. Tides from the Long Island Sound, New York Harbor and the Harlem River meet there, making it difficult to navigate, especially because of the number of rocky islets which once dotted it, with names such as "Frying Pan", "Pot, Bread and Cheese", "Hen and Chicken", "Heel Top", "Flood", and "Gridiron", roughly 12 islets and reefs in all, all of which led to a number of shipwrecks, including HMS "Hussar", a British frigate that sank in 1780 while supposedly carrying gold and silver intended to pay British troops. The stretch has since been cleared of rocks and widened. Washington Irving wrote of Hell Gate that the current sounded "like a bull bellowing for more drink" at half tide, while at full tide it slept "as soundly as an alderman after dinner." He said it was like "a peaceable fellow enough when he has no liquor at all, or when he has a skinful, but who, when half-seas over, plays the very devil." The tidal regime is complex, with the two major tides – from the Long Island Sound and from the Atlantic Ocean – separated by about two hours; and this is without consideration of the tidal influence of the Harlem River, all of which creates a "dangerous cataract", as one ship's captain put it. The river is navigable for its entire length. In 1939 the navigable depths at mean low tide were reported for three stretches: from The Battery to the former Brooklyn Navy Yard near Wallabout Bay; the long section from there, running to the west of Roosevelt Island, through Hell Gate and to Throgs Neck; and the river eastward from there. 
The breadth of the river's channel south of Roosevelt Island results from the hard Fordham gneiss that underlies the island dipping beneath the weaker Inwood marble that underlies the riverbed. Why the river turns to the east as it approaches the three lower Manhattan bridges remains geologically unexplained. Roosevelt Island, a long, narrow landmass, lies in the stretch of the river between Manhattan Island and the borough of Queens, roughly paralleling Manhattan's East 46th-86th Streets. The abrupt termination of the island on its north end is due to an extension of the 125th Street Fault. Politically, the island constitutes part of the borough of Manhattan. It is connected to Queens by the Roosevelt Island Bridge, to Manhattan by the Roosevelt Island Tramway, and to both boroughs by a subway station served by the F train. The Queensboro Bridge also runs across Roosevelt Island, and an elevator allowing both pedestrian and vehicular access to the island was added to the bridge in 1930, but elevator service was discontinued in 1955 following the opening of the Roosevelt Island Bridge, and the elevator was demolished in 1970. The island, which was formerly known as Blackwell's Island and Welfare Island before being renamed in honor of US President Franklin Delano Roosevelt, historically served as the site of a penitentiary and a number of hospitals; today, it is dominated by residential neighborhoods consisting of large apartment buildings and park land (much of which is dotted with the ruins of older structures). The largest land mass in the River south of Roosevelt Island is U Thant Island, an artificial islet created during the construction of the Steinway Tunnel (which currently serves the subway's 7 and <7> lines). Officially named Belmont Island after one of the tunnel's financiers, the landmass owes its popular name (after U Thant, former Secretary-General of the United Nations) to the efforts of a group associated with the guru Sri Chinmoy that held meditation meetings on the island in the 1970s. Today, the island is owned by New York State and serves as a migratory bird sanctuary that is closed to visitors. Proceeding north and east from Roosevelt Island, the River's principal islands include Manhattan's Mill Rock, an 8.6-acre island located about 1000 feet from Manhattan's East 96th Street; Manhattan's 520-acre Randalls and Wards Islands, two formerly separate islands joined together by landfill that are home to a large public park, a number of public institutions, and the supports for the Triborough and the Hell Gate Bridges; the Bronx's Rikers Island, greatly enlarged by extensive landfill expansion after the island's 1884 purchase by the city as a prison farm and still home to New York City's massive and controversial primary jail complex; and North and South Brother Islands, both of which also constitute part of the Bronx. The Bronx River, Pugsley Creek, and Westchester Creek drain into the northern bank of the East River in the northern section of the strait. The Flushing River, historically known as Flushing Creek, empties into the strait's southern bank near LaGuardia Airport via Flushing Bay. Further west, Luyster Creek drains into the East River in Astoria, Queens. North of Randalls Island, it is joined by the Bronx Kill. Along the east of Wards Island, at approximately the strait's midpoint, it narrows into a channel called Hell Gate, which is spanned by both the Robert F. 
Kennedy Bridge (formerly the Triborough), and the Hell Gate Bridge. On the south side of Wards Island, it is joined by the Harlem River. Newtown Creek on Long Island, which itself contained several tributaries, drains into the East River and forms part of the boundary between Queens and Brooklyn. Bushwick Inlet and Wallabout Bay also drain into the strait on the Long Island side. The Gowanus Canal was built from Gowanus Creek, which emptied into the river. Historically, there were other small streams which emptied into the river, though these and their associated wetlands have been filled in and built over. These small streams included the Harlem Creek, one of the most significant tributaries originating in Manhattan. Other streams that emptied into the East River included the Sawkill in Manhattan, Mill Brook in the Bronx, and Sunswick Creek in Queens. Prior to the arrival of Europeans, the land north of the East River was occupied by the Siwanoys, one of many groups of Algonquin-speaking Lenapes in the area. Those of the Lenapes who lived in the northern part of Manhattan Island in a campsite known as Konaande Kongh used a landing at around the current location of East 119th Street to paddle into the river in canoes fashioned from tree trunks in order to fish. Dutch settlement of what became New Amsterdam began in 1623. Some of the earliest of the small settlements in the area were along the west bank of the East River on sites that had previously been Native American settlements. As with the Native Americans, the river was central to their lives for transportation, trading, and fishing. They gathered marsh grass to feed their cattle, and the East River's tides helped to power mills which ground grain into flour. By 1642 there was a ferry running on the river between Manhattan island and what is now Brooklyn, and the first pier on the river was built in 1647 at Pearl and Broad Streets. After the British took over the colony in 1664 and renamed it "New York", the development of the waterfront continued, and a shipbuilding industry grew up once New York started exporting flour. By the end of the 17th century, the Great Dock, located at Corlear's Hook on the East River, had been built. Historically, the lower portion of the strait, which separates Manhattan from Brooklyn, was one of the busiest and most important channels in the world, particularly during the first three centuries of New York City's history. Because the water along the lower Manhattan shoreline was too shallow for large boats to tie up and unload their goods, from 1686 on – after the signing of the Dongan Charter, which allowed intertidal land to be owned and sold – the shoreline was "wharfed out" to the high-water mark by constructing retaining walls that were filled in with every conceivable kind of landfill: excrement, dead animals, ships deliberately sunk in place, ship ballast, and muck dredged from the bottom of the river. On the new land were built warehouses and other structures necessary for the burgeoning sea trade. Many of the "water-lot" grants went to the rich and powerful families of the merchant class, although some went to tradesmen. By 1700, the Manhattan bank of the river had been "wharfed-out" up to around Whitehall Street, narrowing the strait of the river. 
After the signing of the Montgomerie Charter in the late 1720s, another 127 acres of land along the Manhattan shore of the East River was authorized to be filled in, this time to a point 400 feet beyond the low-water mark; the parts that had already been expanded to the low-water mark – much of which had been devastated by a coastal storm in the early 1720s and a nor'easter in 1723 – were also expanded, narrowing the channel even further. What had been quiet beach land was to become new streets and buildings, and the core of the city's sea-borne trade. This infilling went as far north as Corlear's Hook. In addition, the city was given control of the western shore of the river from Wallabout Bay south. Expansion of the waterfront halted during the American Revolution, in which the East River played an important role early in the conflict. On August 28, 1776, while British and Hessian troops rested after besting the Americans at the Battle of Long Island, General George Washington rounded up all the boats on the east shore of the river, in what is now Brooklyn, and used them to successfully move his troops across the river – under cover of night, rain, and fog – to Manhattan island, before the British could press their advantage. Thus, though the battle was a victory for the British, the failure of Sir William Howe to destroy the Continental Army when he had the opportunity allowed the Americans to continue fighting. Without the stealthy withdrawal across the East River, the American Revolution might have ended much earlier. Wallabout Bay on the River was the site of most of the British prison ships – most notoriously HMS "Jersey" – where thousands of American prisoners of war were held in terrible conditions. These prisoners had come into the hands of the British after the fall of New York City on September 15, 1776, after the American loss at the Battle of Long Island and the loss of Fort Washington on November 16. Prisoners began to be housed on the broken-down warships and transports in December; about 24 ships were used in total, but generally only 5 or 6 at a time. Almost twice as many Americans died from neglect in these ships as died in all the battles of the war: as many as 12,000 soldiers, sailors and civilians. The bodies were thrown overboard or were buried in shallow graves on the riverbanks, but their bones – some of which were collected when they washed ashore – were later relocated and are now inside the Prison Ship Martyrs' Monument in nearby Fort Greene Park. The existence of the ships and the conditions the men were held in was widely known at the time through letters, diaries and memoirs, and was a factor not only in the attitude of Americans toward the British, but in the negotiations to formally end the war. After the war, East River waterfront development continued once more. New York State legislation, which in 1807 had authorized what would become the Commissioners Plan of 1811, authorized the creation of new land out to 400 feet from the low water mark into the river, and with the advent of gridded streets along the new waterline – Joseph Mangin had laid out such a grid in 1803 in his "A Plan and Regulation of the City of New York", which was rejected by the city, but established the concept – the coastline became regularized at the same time that the strait became even narrower. 
One result of the narrowing of the East River along the shoreline of Manhattan and, later, Brooklyn – which continued until the mid-19th century when the state put a stop to it – was an increase in the speed of its current. Buttermilk Channel, the strait that divides Governors Island from Red Hook in Brooklyn, and which is located directly south of the "mouth" of the East River, was in the early 17th century a fordable waterway across which cattle could be driven. Further investigation by Colonel Jonathan Williams determined that the channel was by 1776 three fathoms (18 feet) deep, five fathoms (30 feet) deep in the same spot by 1798, and when surveyed by Williams in 1807 had deepened to 7 fathoms (42 feet) at low tide. What had been almost a bridge between two landforms that were once connected had become a fully navigable channel, thanks to the constriction of the East River and the increased flow it caused. Soon, the current in the East River had become so strong that larger ships had to use auxiliary steam power in order to turn. The continued narrowing of the channel on both sides may have been the reasoning behind the suggestion of one New York State Senator, who wanted to fill in the East River and annex Brooklyn, with the cost of doing so being covered by selling the newly made land. Others proposed a dam at Roosevelt Island (then Blackwell's Island) to create a wet basin for shipping. Filling in part of the river was also proposed in 1867 by engineer James E. Serrell, later a city surveyor, but with emphasis on solving the problem of Hell Gate. Serrell proposed filling in Hell Gate and building a "New East River" through Queens with an extension to Westchester County. Serrell's plan – which he publicized with maps, essays and lectures as well as presentations to the city, state and federal governments – would have filled in the river from 14th Street to 125th Street. The New East River through Queens would be about three times the average width of the existing one, with an even width throughout, and would run as straight as an arrow for five miles. The new land, together with the portions of Queens which would become part of Manhattan, would be covered with an extension of the existing street grid of Manhattan. Variations on Serrell's plan would be floated over the years. A pseudonymous "Terra Firma" brought up filling in the East River again in the "Evening Post" and "Scientific American" in 1904, and Thomas Alva Edison took it up in 1906. Then Thomas Kennard Thompson, a bridge and railway engineer, proposed in 1913 to fill in the river from Hell Gate to the tip of Manhattan and, as Serrell had suggested, make a new canalized East River, only this time from Flushing Bay to Jamaica Bay. He would also expand Brooklyn into the Upper Harbor, put up a dam from Brooklyn to Staten Island, and make extensive landfill in the Lower Bay. At around the same time, in the 1920s, Dr. John A. Harriss, New York City's chief traffic engineer, who had developed the first traffic signals in the city, also had plans for the river. Harriss wanted to dam the East River at Hell Gate and the Williamsburg Bridge, then remove the water, put a roof over it on stilts, and build boulevards and pedestrian lanes on the roof along with "majestic structures", with transportation services below. The East River's course would, once again, be shifted to run through Queens, and this time Brooklyn as well, to channel it to the Harbor. 
Periodically, merchants and other interested parties would try to get something done about the difficulty of navigating through Hell Gate. In 1832, the New York State legislature was presented with a petition for a canal to be built through nearby Hallett's Point, thus avoiding Hell Gate altogether. Instead, the legislature responded by providing ships with pilots trained to navigate the shoals for the next 15 years. In 1849, a French engineer whose specialty was underwater blasting, Benjamin Maillefert, cleared some of the rocks which, along with the mix of tides, made the Hell Gate stretch of the river so dangerous to navigate. Ebenezer Meriam had organized a subscription to pay Maillefert $6,000 to, for instance, reduce "Pot Rock" to provide greater depth at mean low water. While ships continued to run aground (in the 1850s about 2% of ships did so) and petitions continued to call for action, the federal government undertook surveys of the area which ended in 1851 with a detailed and accurate map. By then Maillefert had cleared the rock "Baldheaded Billy", and it was reported that Pot Rock had been reduced to the promised depth, which encouraged the United States Congress to appropriate $20,000 for further clearing of the strait. However, a more accurate survey showed that the clearance over Pot Rock was somewhat less than had been reported, and eventually Congress withdrew its funding. With the main shipping channels through The Narrows into the harbor silting up with sand due to littoral drift, thus providing ships with less depth, and a new generation of larger ships coming online – epitomized by Isambard Kingdom Brunel's SS "Great Eastern", popularly known as "Leviathan" – New York began to be concerned that it would start to lose its status as a great port if a "back door" entrance into the harbor was not created. In the 1850s the depth continued to lessen – the harbor commission reported falling mean and extreme low-water levels in 1850 – while the draft required by the new ships continued to increase, meaning it was only safe for them to enter the harbor at high tide. The U.S. Congress, realizing that the problem needed to be addressed, appropriated $20,000 for the Army Corps of Engineers to continue Maillefert's work, but the money was soon spent without appreciable change in the hazards of navigating the strait. An advisory council recommended in 1856 that the strait be cleared of all obstacles, but nothing was done, and the Civil War soon broke out. In the late 1860s, after the Civil War, Congress realized the military importance of having easily navigable waterways, and charged the Army Corps of Engineers with clearing Hell Gate of the rocks there that caused a danger to navigation. The Corps' Colonel John Newton estimated that the project would cost $1 million, as compared to the approximate annual loss in shipping of $2 million. Initial forays foundered, and Newton, by that time a general, took over direct control of the project. In 1868 Newton decided, with the support of both New York's mercantile class and local real estate interests, to focus on the Hallett's Point Reef off Queens. The project would involve a network of tunnels equipped with trains to haul debris out as the reef was eviscerated, creating a reef structured like "Swiss cheese" which Newton would then blow up. 
After seven years of digging seven thousand holes, and filling four thousand of them with dynamite, on September 24, 1876, in front of an audience including the inhabitants of the insane asylum on Wards Island, but not the prisoners of Roosevelt Island – then called Blackwell's Island – who remained in their cells, Newton's daughter set off the explosion. The effect was immediate in decreased turbulence through the strait, and fewer accidents and shipwrecks. The city's Chamber of Commerce commented that "The Centennial year will be for ever known in the annals of commerce for this destruction of one of the terrors of navigation." Clearing out the debris from the explosion took until 1891. Then, in 1885, Flood Rock, a reef that Newton had begun to undermine, removing rock from within it, even before starting on Hallett's Point, was blown up as well, with Civil War General Philip Sheridan and abolitionist Henry Ward Beecher among those in attendance, and Newton's daughter once more setting off the blast, the biggest ever to that date, and reportedly the largest man-made explosion until the advent of the atomic bomb, although the detonation at the Battle of Messines in 1917 was several times larger. Two years later, plans were in place to dredge Hell Gate to a consistent depth. At the same time that Hell Gate was being cleared, the Harlem River Ship Canal was being planned. When it was completed in 1895, the "back door" to New York's center of ship-borne trade in the docks and warehouses of the East River was open from two directions, through the cleared East River, and from the Hudson River through the Harlem River to the East River. Ironically, though, while both forks of the northern shipping entrance to the city were now open, modern dredging techniques had cut through the sandbars of the Atlantic Ocean entrance, allowing new, even larger ships to use that traditional passage into New York's docks. At the beginning of the 19th century, the East River was the center of New York's shipping industry, but by the end of the century, much of it had moved to the Hudson River, leaving the East River wharves and slips to begin a long process of decay, until the area was finally rehabilitated in the mid-1960s, and the South Street Seaport Museum was opened in 1967. By 1870, the condition of the Port of New York along both the East and Hudson Rivers had so deteriorated that the New York State legislature created the Department of Docks to renovate the port and keep New York competitive with other ports on the American East Coast. The Department of Docks was given the task of creating the master plan for the waterfront, and General George B. McClellan was engaged to head the project. McClellan held public hearings and invited plans to be submitted, ultimately receiving 70 of them, although in the end he and his successors put his own plan into effect. That plan called for the building of a seawall around Manhattan island from West 61st Street on the Hudson, around The Battery, and up to East 51st Street on the East River. The area behind the masonry wall (mostly concrete but in some parts granite blocks) would be filled in with landfill, and wide streets would be laid down on the new land. In this way, a new edge for the island (or at least the part of it used as a commercial port) would be created. The Department had surveyed a substantial portion of the shoreline by 1878, as well as documenting the currents and tides. 
By 1900, had been surveyed and core samples had been taken to inform the builders of how deep the bedrock was. The work was completed just as World War I began, allowing the Port of New York to be a major point of embarkation for troops and materiel. The new seawall helps protect Manhattan island from storm surges, although it is only above the mean sea level, so that particularly dangerous storms, such as the nor'easter of 1992 and Hurricane Sandy in 2012, which hit the city in ways that created much higher surges, can still do significant damage. (The hurricane of September 3, 1821, created the biggest storm surge on record in New York City: a rise of in one hour at the Battery, flooding all of lower Manhattan up to Canal Street.) Still, the new seawall begun in 1871 gave the island a firmer edge, improved the quality of the port, and continues to protect Manhattan from normal storm surges. The Brooklyn Bridge, completed in 1883, was the first bridge to span the East River, connecting the cities of New York and Brooklyn, and all but replacing the frequent ferry service between them, which did not return until the late 20th century. The bridge offered cable car service across the span. The Brooklyn Bridge was followed by the Williamsburg Bridge (1903), the Queensboro Bridge (1909), the Manhattan Bridge (1909) and the Hell Gate Railroad Bridge (1916). Later would come the Triborough Bridge (1936), the Bronx-Whitestone Bridge (1939), the Throgs Neck Bridge (1961) and the Rikers Island Bridge (1966). In addition, numerous rail tunnels pass under the East River – most of them part of the New York City Subway system – as do the Brooklyn-Battery Tunnel and the Queens-Midtown Tunnel. (See Crossings below for details.) Also under the river is Water Tunnel #1 of the New York City water supply system, built in 1917 to extend the Manhattan portion of the tunnel to Brooklyn, and via City Tunnel #2 (1936) to Queens; these boroughs became part of New York City after the city's consolidation in 1898. City Tunnel #3 will also run under the river, under the northern tip of Roosevelt Island, and is expected to be completed by 2018; the Manhattan portion of the tunnel went into service in 2013. Philanthropist John D. Rockefeller founded what is now Rockefeller University in 1901, between 63rd and 64th Streets on the river side of York Avenue, overlooking the river. The university is a research university for doctoral and post-doctoral scholars, primarily in the fields of medicine and biological science. North of it is one of the major medical centers in the city, NewYork-Presbyterian/Weill Cornell Medical Center, which is associated with the medical schools of both Columbia University and Cornell University. Although it can trace its history back to 1771, the center on York Avenue, much of which overlooks the river, was built in 1932. The East River was the site of one of the greatest disasters in the history of New York City when, in June 1904, the PS "General Slocum" caught fire and sank near North Brother Island. It was carrying 1,400 German-Americans to a picnic site on Long Island for an annual outing. There were only 321 survivors of the disaster, one of the worst losses of life in the city's long history, and a devastating blow to the Little Germany neighborhood on the Lower East Side. 
The captain of the ship and the managers of the company that owned it were indicted, but only the captain was convicted; he spent three and a half years of his 10-year sentence at Sing Sing Prison before being released by a Federal parole board, and was then pardoned by President William Howard Taft. Beginning in 1934, and then again from 1948 to 1966, the Manhattan shore of the river became the location for the limited-access East River Drive, which was later renamed after Franklin Delano Roosevelt, and is universally known by New Yorkers as the "FDR Drive". The road is sometimes at grade, sometimes runs under locations such as the site of the Headquarters of the United Nations, Carl Schurz Park, and Gracie Mansion – the mayor's official residence – and is at times double-decked, because Hell Gate provides no room for more landfill. It begins at Battery Park, runs past the Brooklyn, Manhattan, Williamsburg and Queensboro Bridges, and the Ward's Island Footbridge, and terminates just before the Robert F. Kennedy (Triborough) Bridge, where it connects to the Harlem River Drive. Between most of the FDR Drive and the river is the East River Greenway, part of the Manhattan Waterfront Greenway. The East River Greenway was primarily built in connection with the building of the FDR Drive, although some portions were built as recently as 2002, and other sections are still incomplete. In 1963, Con Edison built the Ravenswood Generating Station on the Long Island City shore of the river, on land some of which once held stone quarries that provided granite and marble slabs for Manhattan's buildings. The plant has since been owned by KeySpan, National Grid, and TransCanada as a result of the deregulation of the electrical power industry. The station, which can generate about 20% of the electrical needs of New York City – approximately 2,500 megawatts – receives some of its fuel by oil barge. North of the power plant can be found Socrates Sculpture Park, a former illegal dumpsite and abandoned landfill that in 1986 was turned into an outdoor museum, exhibition space for artists, and public park by sculptor Mark di Suvero and local activists. The area also contains Rainey Park, which honors Thomas C. Rainey, who attempted for 40 years to get a bridge built in that location from Manhattan to Queens. The Queensboro Bridge was eventually built south of this location. In 2011, NY Waterway started operating its East River Ferry line. The route was a seven-stop East River service that ran in a loop between East 34th Street and Hunters Point, making two intermediate stops in Brooklyn and three in Queens. The ferry, an alternative to the New York City Subway, cost $4 per one-way ticket. It was instantly popular: from June to November 2011, the ferry saw 350,000 riders, over 250% of the initial ridership forecast of 134,000 riders. In December 2016, in preparation for the start of NYC Ferry service the next year, Hornblower Cruises purchased the rights to operate the East River Ferry. NYC Ferry started service on May 1, 2017, with the East River Ferry as part of the system. In February 2012 the federal government announced an agreement with Verdant Power to install 30 tidal turbines in the channel of the East River. The turbines were projected to begin operations in 2015 and were expected to produce 1.05 megawatts of power. The strength of the current had foiled an earlier effort in 2007 to tap the river for tidal power. 
On May 7, 2017, the catastrophic failure of a Con Edison substation in Brooklyn caused a spill into the river of over of dielectric fluid, a synthetic mineral oil used to cool electrical equipment and prevent electrical discharges. (See below.) Throughout most of the history of New York City, and New Amsterdam before it, the East River has been the receptacle for the city's garbage and sewage. "Night men" who collected "night soil" from outdoor privies would dump their loads into the river, and even after the construction of the Croton Aqueduct (1842) and then the New Croton Aqueduct (1890) gave rise to indoor plumbing, the waste that was flushed away into the sewers, where it mixed with ground runoff, ran directly into the river, untreated. The sewers terminated at the slips where ships docked, until the waste began to build up, preventing dockage, after which the outfalls were moved to the end of the piers. The "landfill" which created new land along the shoreline when the river was "wharfed out" by the sale of "water lots" was largely garbage such as bones, offal, and even whole dead animals, along with excrement – human and animal. The result was that by the 1850s, if not before, the East River, like the other waterways around the city, was undergoing the process of eutrophication, in which the increase in nitrogen from excrement and other sources fed an increase in phytoplankton such as algae, which in turn led to a decrease in free oxygen and a decline in other life forms, breaking the area's established food chain. The East River became very polluted, and its animal life decreased drastically. In an earlier time, one observer had described the transparency of the water: "I remember the time, gentlemen, when you could go in twelve feet of water and you could see the pebbles on the bottom of this river." As the water got more polluted, it darkened, underwater vegetation (such as photosynthesizing seagrass) began dying, and as the seagrass beds declined, the many associated species of their ecosystems declined as well, contributing to the decline of the river. Also harmful was the general destruction of the once plentiful oyster beds in the waters around the city, and the over-fishing of menhaden, or mossbunker, a small silvery fish which had been used since the time of the Native Americans for fertilizing crops. However, it took 8,000 of these schooling fish to fertilize a single acre, so mechanized fishing using the purse seine was developed, and eventually the menhaden population collapsed. Menhaden feed on phytoplankton, helping to keep them in check, and are also a vital step in the food chain, as bluefish, striped bass and other fish species which do not eat phytoplankton feed on the menhaden. The oyster is another filter feeder: oysters purify 10 to 100 gallons a day, while each menhaden filters four gallons in a minute, and their schools were immense: one report had a farmer collecting 20 oxcarts' worth of menhaden using simple fishing nets deployed from the shore. The combination of more sewage, due to the availability of more potable water – New York's per capita water consumption was twice that of Europe – indoor plumbing, the destruction of filter feeders, and the collapse of the food chain damaged the ecosystem of the waters around New York, including the East River, almost beyond repair. 
Because of these changes to the ecosystem, by 1909, the level of dissolved oxygen in the lower part of the river had declined to less than 65% of saturation; 55% of saturation is the point at which the amount of fish and the number of their species begin to be affected. Only 17 years later, by 1926, the level of dissolved oxygen in the river had fallen to 13%, below the point at which most fish species can survive. Due to heavy pollution, the East River is dangerous to people who fall in or attempt to swim in it, although as of mid-2007 the water was cleaner than it had been in decades. The New York City Department of Environmental Protection (DEP) categorizes the East River as Use Classification I, meaning it is safe for secondary contact activities such as boating and fishing. According to the marine sciences section of the DEP, the channel is swift, with water moving as fast as four knots, just as it does in the Hudson River on the other side of Manhattan. That speed can push casual swimmers out to sea. A few people drown in the waters around New York City each year. It was reported that the level of bacteria in the river was below Federal guidelines for swimming on most days, although the readings may vary significantly, so that the outflow from Newtown Creek or the Gowanus Canal can be tens or hundreds of times higher than recommended, according to Riverkeeper, a non-profit environmentalist advocacy group. The counts are also higher along the shores of the strait than they are in the middle of its flow. Nevertheless, the "Brooklyn Bridge Swim" is an annual event where swimmers cross the channel from Brooklyn Bridge Park to Manhattan. Still, thanks to reductions in pollution, cleanups, the restriction of development, and other environmental controls, the East River along Manhattan is one of the areas of New York's waterways – including the Hudson-Raritan Estuary and both shores of Long Island – which have shown signs of the return of biodiversity. On the other hand, the river is also under attack from hardy, competitive alien species, such as the European green crab, which is considered to be one of the world's ten worst invasive species. On May 7, 2017, the catastrophic failure of Con Edison's Farragut Substation at 89 John Street in Dumbo, Brooklyn, caused a spill of dielectric fluid – an insoluble synthetic mineral oil, considered non-toxic by New York state, used to cool electrical equipment and prevent electrical discharges – into the East River from a tank. The National Response Center received a report of the spill at 1:30pm that day, although the public did not learn of the spill for two days, and then only from tweets from NYC Ferry. A "safety zone" was established, extending from a line drawn between Dupont Street in Greenpoint, Brooklyn, to East 25th Street in Kips Bay, Manhattan, south to Buttermilk Channel. Recreational and human-powered vessels such as kayaks and paddleboards were banned from the zone while the oil was being cleaned up, and the speed of commercial vessels was restricted so as not to spread the oil in their wakes, causing delays in NYC Ferry service. The clean-up efforts were undertaken by Con Edison personnel and private environmental contractors, the U.S. Coast Guard, and the New York State Department of Environmental Conservation, with the assistance of NYC Emergency Management. 
The loss of the substation caused a voltage dip in the power provided by Con Ed to the Metropolitan Transportation Authority's New York City Subway system, which disrupted its signals. The Coast Guard estimated that of oil spilled into the water, with the remainder soaking into the soil at the substation. In the past, the Coast Guard has on average been able to recover about 10% of spilled oil; however, the complex tides in the river make recovery much more difficult, with the turbulent water caused by the river's change of tides pushing contaminated water over the containment booms, after which it is carried out to sea and cannot be recovered. By Friday, May 12, officials from Con Edison reported that almost had been taken out of the water. Environmental damage to wildlife was expected to be less than if the spill had been of petroleum-based oil, but the oil can still block the sunlight necessary for the river's fish and other organisms to live. Nesting birds were also in possible danger from the oil contaminating their nests and potentially poisoning the birds or their eggs. Water from the East River was reported to have tested positive for low levels of PCBs, known carcinogens. Putting the spill into perspective, John Lipscomb, the vice president of advocacy for Riverkeeper, said that the chronic release after heavy rains of overflow from the city's wastewater treatment system was "a bigger problem for the harbor than this accident." The state Department of Environmental Conservation is investigating the spill. It was later reported that, according to DEC data dating back to 1978, the substation involved had spilled 179 times previously, more than any other Con Ed facility. The spills have included 8,400 gallons of dielectric oil, hydraulic oil, and antifreeze which leaked at various times into the soil around the substation, the sewers, and the East River. On June 22, Con Edison used non-toxic green dye and divers in the river to find the source of the leak. As a result, a hole was plugged. The utility continued to believe that the bulk of the spill went into the ground around the substation, and excavated and removed several hundred cubic yards of soil from the area. They estimated that about went into the river, of which were recovered. Con Edison said that it had installed a new transformer and intended to add a new barrier around the facility to help guard against future spills propagating into the river. 
Existentialism Existentialism is a tradition of philosophical enquiry that explores the nature of existence by emphasizing the experience of the human subject—not merely the thinking subject, but the acting, feeling, living human individual. In the view of the existentialist, the individual's starting point is characterized by what has been called "the existential angst" (or, variably, existential attitude, dread, etc.), a sense of disorientation, confusion, or anxiety in the face of an apparently meaningless or absurd world. Existentialism is associated mainly with certain 19th- and 20th-century European philosophers who shared an emphasis on the human subject, despite profound doctrinal differences. Many existentialists regarded traditional systematic or academic philosophies, in both style and content, as too abstract and remote from concrete human experience. A primary virtue in existentialist thought is authenticity. Søren Kierkegaard is generally considered to have been the first existentialist philosopher, though he did not use the term existentialism. He proposed that each individual—not society or religion—is solely responsible for giving meaning to life and living it passionately and sincerely, or "authentically". Existentialism became popular in the years following World War II, thanks largely to Jean-Paul Sartre, who had read Martin Heidegger while in a POW camp, and it strongly influenced many disciplines besides philosophy, including theology, drama, art, literature, and psychology. The term "existentialism" (French: "L'existentialisme") was coined by the French Catholic philosopher Gabriel Marcel in the mid-1940s. At first, when Marcel applied the term to Jean-Paul Sartre at a colloquium in 1945, Sartre rejected it. Sartre subsequently changed his mind and, on October 29, 1945, publicly adopted the existentialist label in a lecture to the "Club Maintenant" in Paris. The lecture was published as "L'existentialisme est un humanisme" ("Existentialism is a Humanism"), a short book that did much to popularize existentialist thought. Marcel later came to reject the label himself in favour of the term Neo-Socratic, in honor of Kierkegaard's essay "On the Concept of Irony". Some scholars argue that the term should be used only to refer to the cultural movement in Europe in the 1940s and 1950s associated with the works of the philosophers Sartre, Simone de Beauvoir, Maurice Merleau-Ponty, and Albert Camus. Other scholars extend the term to Kierkegaard, and yet others extend it as far back as Socrates. However, the term is often identified with the philosophical views of Sartre. The labels "existentialism" and "existentialist" are often seen as historical conveniences inasmuch as they were first applied to many philosophers in hindsight, long after they had died. In fact, while existentialism is generally considered to have originated with Kierkegaard, the first prominent existentialist philosopher to adopt the term as a self-description was Sartre. Sartre posits the idea that "what all existentialists have in common is the fundamental doctrine that existence precedes essence", as the philosopher Frederick Copleston explains. According to the philosopher Steven Crowell, defining existentialism has been relatively difficult, and he argues that it is better understood as a general approach used to reject certain systematic philosophies rather than as a systematic philosophy itself. 
Sartre himself, in a lecture delivered in 1945, described existentialism as "the attempt to draw all the consequences from a position of consistent atheism". Although many outside Scandinavia consider the term existentialism to have originated from Kierkegaard himself, it is more likely that Kierkegaard adopted this term (or at least the term "existential" as a description of his philosophy) from the Norwegian poet and literary critic Johan Sebastian Cammermeyer Welhaven. This assertion comes from two sources. The Norwegian philosopher Erik Lundestad refers to the Danish philosopher Fredrik Christian Sibbern. Sibbern is supposed to have had two conversations in 1841, the first with Welhaven and the second with Kierkegaard. It is believed that in the first conversation Welhaven came up with "a word that he said covered a certain thinking, which had a close and positive attitude to life, a relationship he described as existential". This was then brought to Kierkegaard by Sibbern. The second claim comes from the Norwegian historian Rune Slagstad, who argues that Kierkegaard himself said the term "existential" was borrowed from the poet, citing Kierkegaard's remark that "Hegelians do not study philosophy "existentially"; to use a phrase by Welhaven from one time when I spoke with him about philosophy". Sartre argued that a central proposition of existentialism is that "existence precedes essence", which means that the most important consideration for individuals is that they are individuals—independently acting and responsible, conscious beings ("existence")—rather than what labels, roles, stereotypes, definitions, or other preconceived categories the individuals fit ("essence"). The actual life of the individuals is what constitutes what could be called their "true essence", instead of there being an arbitrarily attributed essence that others use to define them. Thus, human beings, through their own consciousness, create their own values and determine a meaning to their life. This view is in contradistinction to what Aristotle and Aquinas held; they taught that essence precedes individual existence. Although it was Sartre who explicitly coined the phrase, similar notions can be found in the thought of existentialist philosophers such as Heidegger and Kierkegaard. Some interpret the imperative to define oneself as meaning that anyone can wish to be anything. However, an existentialist philosopher would say such a wish constitutes an inauthentic existence – what Sartre would call "bad faith". Instead, the phrase should be taken to say that people are (1) defined only insofar as they act and (2) that they are responsible for their actions. For example, someone who acts cruelly towards other people is, by that act, defined as a cruel person. Furthermore, by this action of cruelty, such persons are themselves responsible for their new identity (cruel persons). This is as opposed to their genes, or "human nature", bearing the blame. As Sartre says in his lecture "Existentialism is a Humanism": "man first of all exists, encounters himself, surges up in the world—and defines himself afterwards". The more positive, therapeutic aspect of this is also implied: a person can choose to act in a different way, and to be a good person instead of a cruel person. Sartre's definition of existentialism was based on Heidegger's magnum opus "Being and Time" (1927). 
In the correspondence with Jean Beaufret later published as the "Letter on Humanism", Heidegger implies that Sartre misunderstood him for his own purposes of subjectivism, and that he did not mean that actions take precedence over being so long as those actions were not reflected upon. Heidegger commented that "the reversal of a metaphysical statement remains a metaphysical statement", meaning that he thought Sartre had simply switched the roles traditionally attributed to essence and existence without interrogating these concepts and their history in the way that Heidegger claimed to have done. The notion of the absurd contains the idea that there is no meaning in the world beyond what meaning we give it. This meaninglessness also encompasses the amorality or "unfairness" of the world. This conceptualization can be highlighted in the way it opposes the traditional Abrahamic religious perspective, which establishes that life's purpose is about the fulfillment of God's commandments. Such a purpose is what gives meaning to people's lives. To live the life of the absurd means rejecting a life that finds or pursues specific meaning for man's existence since there is nothing to be discovered. According to Albert Camus, the world or the human being is not in itself absurd. The concept only emerges through the juxtaposition of the two, where life becomes absurd due to the incompatibility between human beings and the world they inhabit. This view constitutes one of the two interpretations of the absurd in existentialist literature. The second view, which was first elaborated by Søren Kierkegaard, holds that absurdity is limited to actions and choices of human beings. These are considered absurd since they issue from human freedom, undermining their foundation outside of themselves. The notion of the absurd in existentialism contrasts with the claim that "bad things don't happen to good people"; to the world, metaphorically speaking, there is no such thing as a good person or a bad person; what happens happens, and it may just as well happen to a "good" person as to a "bad" person. Because of the world's absurdity, at any point in time, anything can happen to anyone, and a tragic event could plunge someone into direct confrontation with the Absurd. The notion of the Absurd has been prominent in literature throughout history. Many of the literary works of Kierkegaard, Samuel Beckett, Franz Kafka, Fyodor Dostoyevsky, Eugène Ionesco, Miguel de Unamuno, Luigi Pirandello, Sartre, Joseph Heller, and Camus contain descriptions of people who encounter the absurdity of the world. It is in relation to the concept of the devastating awareness of meaninglessness that Camus claimed that "there is only one truly serious philosophical problem, and that is suicide" in his "The Myth of Sisyphus". Although "prescriptions" against the possibly deleterious consequences of these kinds of encounters vary, from Kierkegaard's religious "stage" to Camus' insistence on persevering in spite of absurdity, the concern with helping people avoid living their lives in ways that put them in the perpetual danger of having everything meaningful break down is common to most existentialist philosophers. The possibility of having everything meaningful break down poses a threat of quietism, which is inherently against the existentialist philosophy. It has been said that the possibility of suicide makes all humans existentialists. The ultimate hero of absurdism lives without meaning and faces suicide without succumbing to it. 
Facticity is a concept defined by Sartre in "Being and Nothingness" (1943) as the "in-itself", which delineates for humans the modalities of being and not being. This can be more easily understood when considering facticity in relation to the temporal dimension of our past: one's past is what one is, in the sense that it co-constitutes oneself. However, to say that one is only one's past would be to ignore a significant part of reality (the present and the future), while saying that one's past is only what one was would entirely detach it from oneself now. A denial of one's own concrete past constitutes an inauthentic lifestyle, and the same goes for all other kinds of facticity (having a human body—e.g., one that does not allow a person to run faster than the speed of sound—identity, values, etc.). Facticity is both a limitation and a condition of freedom. It is a limitation in that a large part of one's facticity consists of things one could not have chosen (birthplace, etc.), but a condition of freedom in the sense that one's values most likely depend on it. However, even though one's facticity is "set in stone" (as being past, for instance), it cannot determine a person: the value ascribed to one's facticity is still ascribed to it freely by that person. As an example, consider two men, one of whom has no memory of his past and the other of whom remembers everything. They both have committed many crimes, but the first man, knowing nothing about this, leads a rather normal life while the second man, feeling trapped by his own past, continues a life of crime, blaming his own past for "trapping" him in this life. There is nothing essential about his committing crimes, but he ascribes this meaning to his past. However, to disregard one's facticity when, in the continual process of self-making, one projects oneself into the future would be to put oneself in denial of oneself, and thus would be inauthentic. In other words, the origin of one's projection must still be one's facticity, though in the mode of not being it (essentially). An example of focusing solely on one's possible projects without reflecting on one's current facticity would be continually thinking about future possibilities related to being rich (e.g., a better car, a bigger house, a better quality of life) without considering the facticity of not currently having the financial means to do so. In this example, considering both facticity and transcendence, an authentic mode of being would be considering future projects that might improve one's current finances (e.g., putting in extra hours or investing savings) in order to arrive at a "future-facticity" of a modest pay rise, further leading to the purchase of an affordable car. Another aspect of facticity is that it entails angst, both in the sense that freedom "produces" angst when limited by facticity, and in the sense that the impossibility of having facticity "step in" to take responsibility for something one has done also produces angst. Another aspect of existential freedom is that one can change one's values. Thus, one is responsible for one's values, regardless of society's values. The focus on freedom in existentialism is related to the limits of the responsibility one bears, as a result of one's freedom: the relationship between freedom and responsibility is one of interdependency, and a clarification of freedom also clarifies that for which one is responsible. 
Many noted existentialist writers consider the theme of authentic existence important. Authentic existence involves the idea that one has to "create oneself" and then live in accordance with this self. What is meant by authenticity is that in acting, one should act as oneself, not as "one's acts" or as "one's genes" or any other essence requires. The authentic act is one that is in accordance with one's freedom. As a condition of freedom is facticity, this includes one's facticity, but not to the degree that this facticity can in any way determine one's transcendent choices (in the sense that one could then blame one's background [facticity] for making the choice one made [chosen project, from one's transcendence]). The role of facticity in relation to authenticity involves letting one's actual values come into play when one makes a choice (instead of, like Kierkegaard's Aesthete, "choosing" randomly), so that one also takes responsibility for the act instead of choosing either-or without allowing the options to have different values. In contrast to this, the inauthentic is the refusal to live in accordance with one's freedom. This can take many forms, from pretending choices are meaningless or random, through convincing oneself that some form of determinism is true, to a sort of "mimicry" where one acts as "one should". How "one should" act is often determined by an image one has of how one such as oneself (say, a bank manager, lion tamer, prostitute, etc.) acts. In "Being and Nothingness", Sartre relates an example of a "waiter" in "bad faith": he merely takes part in the "act" of being a typical waiter, albeit very convincingly. This image usually corresponds to some sort of social norm, but this does not mean that all acting in accordance with social norms is inauthentic: The main point is the attitude one takes to one's own freedom and responsibility, and the extent to which one acts in accordance with this freedom. The Other (when written with a capital "O") is a concept more properly belonging to phenomenology and its account of intersubjectivity. However, the concept has seen widespread use in existentialist writings, and the conclusions drawn from it differ slightly from the phenomenological accounts. The experience of the Other is the experience of another free subject who inhabits the same world as a person does. In its most basic form, it is this experience of the Other that constitutes intersubjectivity and objectivity. To clarify, when one experiences someone else, and this Other person experiences the world (the same world that a person experiences)—only from "over there"—the world itself is constituted as objective in that it is something that is "there" as identical for both of the subjects; a person experiences the other person as experiencing the same things. This experience of the Other's look is what is termed the Look (sometimes the Gaze). While this experience, in its basic phenomenological sense, constitutes the world as objective, and oneself as objectively existing subjectivity (one experiences oneself as seen in the Other's Look in precisely the same way that one experiences the Other as seen by him, as subjectivity), in existentialism, it also acts as a kind of limitation of freedom. This is because the Look tends to objectify what it sees. As such, when one experiences oneself in the Look, one does not experience oneself as nothing (no thing), but as something. 
Sartre's own example of a man peeping at someone through a keyhole can help clarify this: at first, this man is entirely caught up in the situation he is in; he is in a pre-reflexive state where his entire consciousness is directed at what goes on in the room. Suddenly, he hears a creaking floorboard behind him, and he becomes aware of himself as seen by the Other. He is thus filled with shame for he perceives himself as he would perceive someone else doing what he was doing, as a Peeping Tom. For Sartre, this phenomenological experience of shame establishes a proof for the existence of other minds and defeats the problem of solipsism. For the conscious state of shame to be experienced, one has to become aware of oneself as an object of another look, proving a priori, that other minds exist. The Look is then co-constitutive of one's facticity. Another characteristic feature of the Look is that no Other really needs to have been there: It is quite possible that the creaking floorboard was nothing but the movement of an old house; the Look is not some kind of mystical telepathic experience of the actual way the other sees one (there may also have been someone there, but he could have not noticed that the person was there). It is only one's perception of the way another might perceive him. "Existential angst", sometimes called existential dread, anxiety, or anguish, is a term that is common to many existentialist thinkers. It is generally held to be a negative feeling arising from the experience of human freedom and responsibility. The archetypical example is the experience one has when standing on a cliff where one not only fears falling off it, but also dreads the possibility of throwing oneself off. In this experience that "nothing is holding me back", one senses the lack of anything that predetermines one to either throw oneself off or to stand still, and one experiences one's own freedom. It can also be seen in relation to the previous point how angst is before nothing, and this is what sets it apart from fear that has an object. While in the case of fear, one can take definitive measures to remove the object of fear, in the case of angst, no such "constructive" measures are possible. The use of the word "nothing" in this context relates both to the inherent insecurity about the consequences of one's actions, and to the fact that, in experiencing freedom as angst, one also realizes that one is fully responsible for these consequences. There is nothing in people (genetically, for instance) that acts in their stead—that they can blame if something goes wrong. Therefore, not every choice is perceived as having dreadful possible consequences (and, it can be claimed, human lives would be unbearable if every choice facilitated dread). However, this does not change the fact that freedom remains a condition of every action. Despair is generally defined as a loss of hope. In existentialism, it is more specifically a loss of hope in reaction to a breakdown in one or more of the defining qualities of one's self or identity. If a person is invested in being a particular thing, such as a bus driver or an upstanding citizen, and then finds their being-thing compromised, they would normally be found in a state of despair—a hopeless state. For example, a singer who loses the ability to sing may despair if they have nothing else to fall back on—nothing to rely on for their identity. They find themselves unable to be what defined their being. 
What sets the existentialist notion of despair apart from the conventional definition is that existentialist despair is a state one is in even when one is not overtly in despair. So long as a person's identity depends on qualities that can crumble, they are in perpetual despair—and as there is, in Sartrean terms, no human essence found in conventional reality on which to constitute the individual's sense of identity, despair is a universal human condition. As Kierkegaard defines it in "Either/Or": "Let each one learn what he can; both of us can learn that a person's unhappiness never lies in his lack of control over external conditions, since this would only make him completely unhappy." Existentialists oppose definitions of human beings as primarily rational, and, therefore, oppose positivism and rationalism. Existentialism asserts that people actually make decisions based on subjective meaning rather than pure rationality. The rejection of reason as the source of meaning is a common theme of existentialist thought, as is the focus on the feelings of anxiety and dread that we feel in the face of our own radical freedom and our awareness of death. Kierkegaard advocated rationality as a means to interact with the objective world (e.g., in the natural sciences), but when it comes to existential problems, reason is insufficient: "Human reason has boundaries". Like Kierkegaard, Sartre saw problems with rationality, calling it a form of "bad faith", an attempt by the self to impose structure on a world of phenomena—"the Other"—that is fundamentally irrational and random. According to Sartre, rationality and other forms of bad faith hinder people from finding meaning in freedom. To try to suppress their feelings of anxiety and dread, people confine themselves within everyday experience, Sartre asserts, thereby relinquishing their freedom and acquiescing to being possessed in one form or another by "the Look" of "the Other" (i.e., possessed by another person—or at least one's idea of that other person). An existentialist reading of the Bible would demand that the reader recognize that they are an existing subject studying the words more as a recollection of events. This is in contrast to looking at a collection of "truths" that are outside and unrelated to the reader, but may develop a sense of reality/God. Such a reader is not obligated to follow the commandments as if an external agent is forcing these commandments upon them, but as though they are inside them and guiding them from inside. This is the task Kierkegaard takes up when he asks: "Who has the more difficult task: the teacher who lectures on earnest things a meteor's distance from everyday life—or the learner who should put it to use?" Although nihilism and existentialism are distinct philosophies, they are often confused with one another as both are rooted in the human experience of anguish and confusion stemming from the apparent meaninglessness of a world in which humans are compelled to find or create meaning. A primary cause of confusion is that Friedrich Nietzsche is an important philosopher in both fields. Existentialist philosophers often stress the importance of Angst as signifying the absolute lack of any objective ground for action, a move that is often reduced to a moral or an existential nihilism. 
A pervasive theme in the works of existentialist philosophy, however, is to persist through encounters with the absurd, as seen in Camus' "The Myth of Sisyphus" ("One must imagine Sisyphus happy"), and it is only very rarely that existentialist philosophers dismiss morality or one's self-created meaning: Kierkegaard regained a sort of morality in the religious (although he wouldn't himself agree that it was ethical; the religious suspends the ethical), and Sartre's final words in "Being and Nothingness" are "All these questions, which refer us to a pure and not an accessory (or impure) reflection, can find their reply only on the ethical plane. We shall devote to them a future work." Kierkegaard and Nietzsche were two of the first philosophers considered fundamental to the existentialist movement, though neither used the term "existentialism" and it is unclear whether they would have supported the existentialism of the 20th century. They focused on subjective human experience rather than the objective truths of mathematics and science, which they believed were too detached or observational to truly get at the human experience. Like Pascal, they were interested in people's quiet struggle with the apparent meaninglessness of life and the use of diversion to escape from boredom. Unlike Pascal, Kierkegaard and Nietzsche also considered the role of making free choices, particularly regarding fundamental values and beliefs, and how such choices change the nature and identity of the chooser. Kierkegaard's knight of faith and Nietzsche's Übermensch are representative of people who exhibit Freedom, in that they define the nature of their own existence. Nietzsche's idealized individual invents his own values and creates the very terms under which he excels. By contrast, Kierkegaard, opposed to the level of abstraction in Hegel, and not nearly as hostile (actually welcoming) to Christianity as Nietzsche, argues through a pseudonym that the objective certainty of religious truths (specifically Christian) is not only impossible, but even founded on logical paradoxes. Yet he continues to imply that a leap of faith is a possible means for an individual to reach a higher stage of existence that transcends and contains both an aesthetic and ethical value of life. Kierkegaard and Nietzsche were also precursors to other intellectual movements, including postmodernism and various strands of psychotherapy. However, Kierkegaard believed that individuals should live in accordance with their thinking. The first literary author of importance to existentialism was the Russian novelist Dostoyevsky. Dostoyevsky's "Notes from Underground" portrays a man unable to fit into society and unhappy with the identities he creates for himself. Sartre, in his book on existentialism "Existentialism is a Humanism", quoted Dostoyevsky's "The Brothers Karamazov" as an example of existential crisis. Sartre attributes Ivan Karamazov's claim, "If God did not exist, everything would be permitted", to Dostoyevsky himself, though this quote does not appear in the novel. However, a similar sentiment is explicitly stated when Alyosha visits Dimitri in prison. Dimitri mentions his conversations with Rakitin, in which the idea arises that "Then, if He doesn't exist, man is king of the earth, of the universe", allowing the inference contained in Sartre's attribution to remain a valid idea contested within the novel. 
Other Dostoyevsky novels covered issues raised in existentialist philosophy while presenting story lines divergent from secular existentialism: for example, in "Crime and Punishment", the protagonist Raskolnikov experiences an existential crisis and then moves toward a Christian Orthodox worldview similar to that advocated by Dostoyevsky himself. In the first decades of the 20th century, a number of philosophers and writers explored existentialist ideas. The Spanish philosopher Miguel de Unamuno y Jugo, in his 1913 book "The Tragic Sense of Life in Men and Nations", emphasized the life of "flesh and bone" as opposed to that of abstract rationalism. Unamuno rejected systematic philosophy in favor of the individual's quest for faith. He retained a sense of the tragic, even absurd nature of the quest, symbolized by his enduring interest in Cervantes' fictional character Don Quixote. A novelist, poet and dramatist as well as philosophy professor at the University of Salamanca, Unamuno wrote a short story about a priest's crisis of faith, "Saint Manuel the Good, Martyr", which has been collected in anthologies of existentialist fiction. Another Spanish thinker, Ortega y Gasset, writing in 1914, held that human existence must always be defined as the individual person combined with the concrete circumstances of his life: "Yo soy yo y mi circunstancia" ("I am myself and my circumstances"). Sartre likewise believed that human existence is not an abstract matter, but is always situated ("en situation"). Although Martin Buber wrote his major philosophical works in German, and studied and taught at the Universities of Berlin and Frankfurt, he stands apart from the mainstream of German philosophy. Born into a Jewish family in Vienna in 1878, he was also a scholar of Jewish culture and involved at various times in Zionism and Hasidism. In 1938, he moved permanently to Jerusalem. His best-known philosophical work was the short book "I and Thou", published in 1922. For Buber, the fundamental fact of human existence, too readily overlooked by scientific rationalism and abstract philosophical thought, is "man with man", a dialogue that takes place in the so-called "sphere of between" ("das Zwischenmenschliche"). Two Ukrainian-born thinkers, Lev Shestov and Nikolai Berdyaev, became well known as existentialist thinkers during their post-Revolutionary exiles in Paris. Shestov, born into a Ukrainian-Jewish family in Kiev, had launched an attack on rationalism and systematization in philosophy as early as 1905 in his book of aphorisms "All Things Are Possible". Berdyaev, also from Kiev but with a background in the Eastern Orthodox Church, drew a radical distinction between the world of spirit and the everyday world of objects. Human freedom, for Berdyaev, is rooted in the realm of spirit, a realm independent of scientific notions of causation. To the extent the individual human being lives in the objective world, he is estranged from authentic spiritual freedom. "Man" is not to be interpreted naturalistically, but as a being created in God's image, an originator of free, creative acts. He published a major work on these themes, "The Destiny of Man", in 1931. Marcel, long before coining the term "existentialism", introduced important existentialist themes to a French audience in his early essay "Existence and Objectivity" (1925) and in his "Metaphysical Journal" (1927). 
A dramatist as well as a philosopher, Marcel found his philosophical starting point in a condition of metaphysical alienation: the human individual searching for harmony in a transient life. Harmony, for Marcel, was to be sought through "secondary reflection", a "dialogical" rather than "dialectical" approach to the world, characterized by "wonder and astonishment" and open to the "presence" of other people and of God rather than merely to "information" about them. For Marcel, such presence implied more than simply being there (as one thing might be in the presence of another thing); it connoted "extravagant" availability, and the willingness to put oneself at the disposal of the other. Marcel contrasted "secondary reflection" with abstract, scientific-technical "primary reflection", which he associated with the activity of the abstract Cartesian ego. For Marcel, philosophy was a concrete activity undertaken by a sensing, feeling human being incarnate—embodied—in a concrete world. Although Sartre adopted the term "existentialism" for his own philosophy in the 1940s, Marcel's thought has been described as "almost diametrically opposed" to that of Sartre. Unlike Sartre, Marcel was a Christian, and became a Catholic convert in 1929. In Germany, the psychologist and philosopher Karl Jaspers—who later described existentialism as a "phantom" created by the public—called his own thought, heavily influenced by Kierkegaard and Nietzsche, "Existenzphilosophie". For Jaspers, "Existenz-philosophy is the way of thought by means of which man seeks to become himself... This way of thought does not cognize objects, but elucidates and makes actual the being of the thinker". Jaspers, a professor at the University of Heidelberg, was acquainted with Heidegger, who held a professorship at Marburg before acceding to Husserl's chair at Freiburg in 1928. They held many philosophical discussions, but later became estranged over Heidegger's support of National Socialism (Nazism). They shared an admiration for Kierkegaard, and in the 1930s, Heidegger lectured extensively on Nietzsche. Nevertheless, the extent to which Heidegger should be considered an existentialist is debatable. In "Being and Time" he presented a method of rooting philosophical explanations in human existence ("Dasein") to be analysed in terms of existential categories ("existentiale"); and this has led many commentators to treat him as an important figure in the existentialist movement. Following the Second World War, existentialism became a well-known and significant philosophical and cultural movement, mainly through the public prominence of two French writers, Jean-Paul Sartre and Albert Camus, who wrote best-selling novels, plays and widely read journalism as well as theoretical texts. These years also saw the growing reputation of "Being and Time" outside Germany. Sartre dealt with existentialist themes in his 1938 novel "Nausea" and the short stories in his 1939 collection "The Wall", and had published his treatise on existentialism, "Being and Nothingness", in 1943, but it was in the two years following the liberation of Paris from the German occupying forces that he and his close associates—Camus, Simone de Beauvoir, Maurice Merleau-Ponty, and others—became internationally famous as the leading figures of a movement known as existentialism. In a very short period of time, Camus and Sartre in particular became the leading public intellectuals of post-war France, achieving by the end of 1945 "a fame that reached across all audiences." 
Camus was an editor of the most popular leftist (former French Resistance) newspaper "Combat"; Sartre launched his journal of leftist thought, "Les Temps Modernes", and two weeks later gave the widely reported lecture on existentialism and secular humanism to a packed meeting of the Club Maintenant. Beauvoir wrote that "not a week passed without the newspapers discussing us"; existentialism became "the first media craze of the postwar era." By the end of 1947, Camus' earlier fiction and plays had been reprinted, his new play "Caligula" had been performed and his novel "The Plague" published; the first two novels of Sartre's "The Roads to Freedom" trilogy had appeared, as had Beauvoir's novel "The Blood of Others". Works by Camus and Sartre were already appearing in foreign editions. The Paris-based existentialists had become famous. Sartre had traveled to Germany in 1930 to study the phenomenology of Edmund Husserl and Martin Heidegger, and he included critical comments on their work in his major treatise "Being and Nothingness". Heidegger's thought had also become known in French philosophical circles through its use by Alexandre Kojève in explicating Hegel in a series of lectures given in Paris in the 1930s. The lectures were highly influential; members of the audience included not only Sartre and Merleau-Ponty, but Raymond Queneau, Georges Bataille, Louis Althusser, André Breton, and Jacques Lacan. A selection from "Being and Time" was published in French in 1938, and his essays began to appear in French philosophy journals. Heidegger read Sartre's work and was initially impressed, commenting: "Here for the first time I encountered an independent thinker who, from the foundations up, has experienced the area out of which I think. Your work shows such an immediate comprehension of my philosophy as I have never before encountered." Later, however, in response to a question posed by his French follower Jean Beaufret, Heidegger distanced himself from Sartre's position and existentialism in general in his "Letter on Humanism". Heidegger's reputation continued to grow in France during the 1950s and 1960s. In the 1960s, Sartre attempted to reconcile existentialism and Marxism in his work "Critique of Dialectical Reason". A major theme throughout his writings was freedom and responsibility. Camus was a friend of Sartre, until their falling-out, and wrote several works with existential themes including "The Rebel", "Summer in Algiers", "The Myth of Sisyphus", and "The Stranger", the latter being "considered—to what would have been Camus's irritation—the exemplary existentialist novel." Camus, like many others, rejected the existentialist label, and considered his works concerned with facing the absurd. In the titular book, Camus uses the analogy of the Greek myth of Sisyphus to demonstrate the futility of existence. In the myth, Sisyphus is condemned for eternity to roll a rock up a hill, but when he reaches the summit, the rock will roll to the bottom again. Camus believes that this existence is pointless but that Sisyphus ultimately finds meaning and purpose in his task, simply by continually applying himself to it. The first half of the book contains an extended rebuttal of what Camus took to be existentialist philosophy in the works of Kierkegaard, Shestov, Heidegger, and Jaspers. Simone de Beauvoir, an important existentialist who spent much of her life as Sartre's partner, wrote about feminist and existentialist ethics in her works, including "The Second Sex" and "The Ethics of Ambiguity". 
Although often overlooked due to her relationship with Sartre, de Beauvoir integrated existentialism with other forms of thinking such as feminism – unheard of at the time – resulting in alienation from fellow writers such as Camus. Paul Tillich, an important existentialist theologian following Kierkegaard and Karl Barth, applied existentialist concepts to Christian theology, and helped introduce existential theology to the general public. His seminal work "The Courage to Be" follows Kierkegaard's analysis of anxiety and life's absurdity, but puts forward the thesis that modern humans must, via God, achieve selfhood in spite of life's absurdity. Rudolf Bultmann used Kierkegaard's and Heidegger's philosophy of existence to demythologize Christianity by interpreting Christian mythical concepts into existentialist concepts. Maurice Merleau-Ponty, an existential phenomenologist, was for a time a companion of Sartre. Merleau-Ponty's "Phenomenology of Perception" (1945) was recognized as a major statement of French existentialism. It has been said that Merleau-Ponty's work "Humanism and Terror" greatly influenced Sartre. However, in later years they were to disagree irreparably, dividing many existentialists such as de Beauvoir, who sided with Sartre. Colin Wilson, an English writer, published his study "The Outsider" in 1956, initially to critical acclaim. In this book and others (e.g. "Introduction to the New Existentialism"), he attempted to reinvigorate what he perceived as a pessimistic philosophy and bring it to a wider audience. He was not, however, academically trained, and his work was attacked by professional philosophers for lack of rigor and critical standards. Stanley Kubrick's 1957 anti-war film "Paths of Glory" "illustrates, and even illuminates...existentialism" by examining the "necessary absurdity of the human condition" and the "horror of war". The film tells the story of a fictional World War I French army regiment ordered to attack an impregnable German stronghold; when the attack fails, three soldiers are chosen at random, court-martialed by a "kangaroo court", and executed by firing squad. The film examines existentialist ethics, such as the issue of whether objectivity is possible and the "problem of authenticity". Orson Welles' 1962 film "The Trial", based upon Franz Kafka's book of the same name (Der Process), is characteristic of both existentialist and absurdist themes in its depiction of a man (Joseph K.) arrested for a crime, the charges of which are revealed neither to him nor to the reader. "Neon Genesis Evangelion" is a Japanese science fiction animation series created by the anime studio Gainax and both directed and written by Hideaki Anno. Existential themes of individuality, consciousness, freedom, choice, and responsibility are heavily relied upon throughout the entire series, particularly through the philosophies of Jean-Paul Sartre and Søren Kierkegaard. Episode 16's title is a reference to Kierkegaard's book "The Sickness Unto Death". Some contemporary films dealing with existentialist issues include "Melancholia", "Fight Club", "I Heart Huckabees", "Waking Life", "The Matrix", "Ordinary People", and "Life in a Day". 
Likewise, films throughout the 20th century such as "The Seventh Seal", "Ikiru", "Taxi Driver", the "Toy Story" films, "The Great Silence", "Ghost in the Shell", "Harold and Maude", "High Noon", "Easy Rider", "One Flew Over the Cuckoo's Nest", "A Clockwork Orange", "Groundhog Day", "Apocalypse Now", "Badlands", and "Blade Runner" also have existentialist qualities. Notable directors known for their existentialist films include Ingmar Bergman, François Truffaut, Jean-Luc Godard, Michelangelo Antonioni, Akira Kurosawa, Terrence Malick, Stanley Kubrick, Andrei Tarkovsky, Hideaki Anno, Wes Anderson, Gaspar Noé, Woody Allen, and Christopher Nolan. Charlie Kaufman's "Synecdoche, New York" focuses on the protagonist's desire to find existential meaning. Similarly, in Kurosawa's "Red Beard", the protagonist's experiences as an intern in a rural health clinic in Japan lead him to an existential crisis whereby he questions his reason for being. This, in turn, leads him to a better understanding of humanity. The French film "Mood Indigo" (directed by Michel Gondry) embraced various elements of existentialism. The film "The Shawshank Redemption", released in 1994, depicts life in a prison in Maine, United States, to explore several existentialist concepts. Existential perspectives are also found in modern literature to varying degrees, especially since the 1920s. Louis-Ferdinand Céline's "Journey to the End of the Night" (Voyage au bout de la nuit, 1932), celebrated by both Sartre and Beauvoir, contained many of the themes that would be found in later existential literature, and is, in some ways, the proto-existential novel. Jean-Paul Sartre's 1938 novel "Nausea" was "steeped in Existential ideas", and is considered an accessible way of grasping his philosophical stance. Between 1900 and 1960, other authors such as Albert Camus, Franz Kafka, Rainer Maria Rilke, T.S. Eliot, Hermann Hesse, Luigi Pirandello, Ralph Ellison, and Jack Kerouac composed literature or poetry that contained, to varying degrees, elements of existential or proto-existential thought. The philosophy's influence even reached pulp literature shortly after the turn of the 20th century, as seen in the existential disparity witnessed in man's lack of control of his fate in the works of H.P. Lovecraft. Since the late 1960s, a great deal of cultural activity in literature contains postmodernist as well as existential elements. Books such as "Do Androids Dream of Electric Sheep?" (1968) (now republished as "Blade Runner") by Philip K. Dick, "Slaughterhouse-Five" by Kurt Vonnegut, and "Fight Club" by Chuck Palahniuk all distort the line between reality and appearance while simultaneously espousing existential themes. Sartre wrote "No Exit" in 1944, an existentialist play originally published in French as "Huis Clos" (meaning "In Camera" or "behind closed doors"), which is the source of the popular quote, "Hell is other people." (In French, "L'enfer, c'est les autres"). The play begins with a Valet leading a man into a room that the audience soon realizes is in hell. Eventually he is joined by two women. After their entry, the Valet leaves and the door is shut and locked. All three expect to be tortured, but no torturer arrives. Instead, they realize they are there to torture each other, which they do effectively by probing each other's sins, desires, and unpleasant memories. 
Existentialist themes are displayed in the Theatre of the Absurd, notably in Samuel Beckett's "Waiting for Godot", in which two men divert themselves while they wait expectantly for someone (or something) named Godot who never arrives. They claim Godot is an acquaintance, but in fact, hardly know him, admitting they would not recognize him if they saw him. Samuel Beckett, once asked who or what Godot is, replied, "If I knew, I would have said so in the play." To occupy themselves, the men eat, sleep, talk, argue, sing, play games, exercise, swap hats, and contemplate suicide—anything "to hold the terrible silence at bay". The play "exploits several archetypal forms and situations, all of which lend themselves to both comedy and pathos." The play also illustrates an attitude toward human experience on earth: the poignancy, oppression, camaraderie, hope, corruption, and bewilderment of human experience that can be reconciled only in the mind and art of the absurdist. The play examines questions such as death, the meaning of human existence and the place of God in human existence. Tom Stoppard's "Rosencrantz & Guildenstern Are Dead" is an absurdist tragicomedy first staged at the Edinburgh Festival Fringe in 1966. The play expands upon the exploits of two minor characters from Shakespeare's "Hamlet". Comparisons have also been drawn to Samuel Beckett's "Waiting For Godot", for the presence of two central characters who appear almost as two halves of a single character. Many plot features are similar as well: the characters pass time by playing Questions, impersonating other characters, and interrupting each other or remaining silent for long periods of time. The two characters are portrayed as two clowns or fools in a world beyond their understanding. They stumble through philosophical arguments while not realizing the implications, and muse on the irrationality and randomness of the world. Jean Anouilh's "Antigone" also presents arguments founded on existentialist ideas. It is a tragedy inspired by Greek mythology and the play of the same name (Antigone, by Sophocles) from the 5th century BC. In English, it is often distinguished from its antecedent by being pronounced in its original French form, approximately "Ante-GŌN." The play was first performed in Paris on 6 February 1944, during the Nazi occupation of France. Produced under Nazi censorship, the play is purposefully ambiguous with regards to the rejection of authority (represented by Antigone) and the acceptance of it (represented by Creon). The parallels to the French Resistance and the Nazi occupation have been drawn. Antigone rejects life as desperately meaningless but without affirmatively choosing a noble death. The crux of the play is the lengthy dialogue concerning the nature of power, fate, and choice, during which Antigone says that she is, "... disgusted with [the]...promise of a humdrum happiness." She states that she would rather die than live a mediocre existence. Critic Martin Esslin in his book "Theatre of the Absurd" pointed out how many contemporary playwrights such as Samuel Beckett, Eugène Ionesco, Jean Genet, and Arthur Adamov wove into their plays the existentialist belief that we are absurd beings loose in a universe empty of real meaning. Esslin noted that many of these playwrights demonstrated the philosophy better than did the plays by Sartre and Camus. 
Though most such playwrights, subsequently labeled "Absurdist" (based on Esslin's book), denied affiliations with existentialism and were often staunchly anti-philosophical (for example, Ionesco often claimed he identified more with 'Pataphysics or with Surrealism than with existentialism), the playwrights are often linked to existentialism based on Esslin's observation. A major offshoot of existentialism as a philosophy is existentialist psychology and psychoanalysis, which first crystallized in the work of Otto Rank, Freud's closest associate for 20 years. Without awareness of the writings of Rank, Ludwig Binswanger was influenced by Freud, Edmund Husserl, Heidegger, and Sartre. A later figure was Viktor Frankl, who briefly met Freud as a young man. His logotherapy can be regarded as a form of existentialist therapy. The existentialists would also influence social psychology, antipositivist micro-sociology, symbolic interactionism, and post-structuralism, with the work of thinkers such as Georg Simmel and Michel Foucault. Foucault was a great reader of Kierkegaard even though he almost never refers to this author, who nonetheless had for him an importance as secret as it was decisive. An early contributor to existentialist psychology in the United States was Rollo May, who was strongly influenced by Kierkegaard and Otto Rank. One of the most prolific writers on techniques and theory of existentialist psychology in the USA is Irvin D. Yalom. Yalom states that, aside from their reaction against Freud's mechanistic, deterministic model of the mind and their assumption of a phenomenological approach in therapy, the existentialist analysts have little in common and have never been regarded as a cohesive ideological school. These thinkers—who include Ludwig Binswanger, Medard Boss, Eugène Minkowski, V. E. Gebsattel, Roland Kuhn, G. Caruso, F. T. Buytendijk, G. Bally and Viktor Frankl—were almost entirely unknown to the American psychotherapeutic community until Rollo May's highly influential 1958 book "Existence"—and especially his introductory essay—introduced their work into this country. A more recent contributor to the development of a European version of existentialist psychotherapy is the British-based Emmy van Deurzen. Anxiety's importance in existentialism makes it a popular topic in psychotherapy. Therapists often offer existentialist philosophy as an explanation for anxiety. The assertion is that anxiety is a manifestation of an individual's complete freedom to decide, and complete responsibility for the outcome of such decisions. Psychotherapists using an existentialist approach believe that a patient can harness his anxiety and use it constructively. Instead of suppressing anxiety, patients are advised to use it as grounds for change. By embracing anxiety as inevitable, a person can use it to achieve his full potential in life. Humanistic psychology also had major impetus from existentialist psychology and shares many of the fundamental tenets. Terror management theory, based on the writings of Ernest Becker and Otto Rank, is a developing area of study within the academic study of psychology. It looks at what researchers claim are implicit emotional reactions of people confronted with the knowledge that they will eventually die. Also, Gerd B. Achenbach has refreshed the Socratic tradition with his own blend of philosophical counseling. So did Michel Weber with his Chromatiques Center in Belgium. 
Walter Kaufmann criticized 'the profoundly unsound methods and the dangerous contempt for reason that have been so prominent in existentialism.' Logical positivist philosophers, such as Rudolf Carnap and A. J. Ayer, assert that existentialists are often confused about the verb "to be" in their analyses of "being". Specifically, they argue that the verb "is" is transitive and pre-fixed to a predicate (e.g., an apple "is red") (without a predicate, the word "is" is meaningless), and that existentialists frequently misuse the term in this manner. Wilson has stated in his book "The Angry Years" that existentialism has created many of its own difficulties: "we can see how this question of freedom of the will has been vitiated by post-romantic philosophy, with its inbuilt tendency to laziness and boredom, we can also see how it came about that existentialism found itself in a hole of its own digging, and how the philosophical developments since then have amounted to walking in circles round that hole". Many critics argue Sartre's philosophy is contradictory. Specifically, they argue that Sartre makes metaphysical arguments despite his claiming that his philosophical views ignore metaphysics. Herbert Marcuse criticized "Being and Nothingness" for projecting anxiety and meaninglessness onto the nature of existence itself: "Insofar as Existentialism is a philosophical doctrine, it remains an idealistic doctrine: it hypostatizes specific historical conditions of human existence into ontological and metaphysical characteristics. Existentialism thus becomes part of the very ideology which it attacks, and its radicalism is illusory". In "Letter on Humanism", Heidegger criticized Sartre's existentialism: Existentialism says existence precedes essence. In this statement he is taking "existentia" and "essentia" according to their metaphysical meaning, which, from Plato's time on, has said that "essentia" precedes "existentia". Sartre reverses this statement. But the reversal of a metaphysical statement remains a metaphysical statement. With it, he stays with metaphysics, in oblivion of the truth of Being.
https://en.wikipedia.org/wiki?curid=9593
Ellipsis The ellipsis, …, (plural ellipses; from the Ancient Greek élleipsis, 'omission' or 'falling short'), also known informally as dot-dot-dot, is a series of (usually three) dots that indicates an intentional omission of a word, sentence, or whole section from a text without altering its original meaning. Opinions differ as to how to render ellipses in printed material. According to "The Chicago Manual of Style", it should consist of three periods, each separated from its neighbor by a non-breaking space, thus: . . . Such spaces should be omitted, however, according to the Associated Press (thus: ...). A third option, illustrated in the opening sentence of this article, is to use the precomposed Unicode character with code point U+2026, in which the gaps are often not as wide as standard spaces, though not every font follows this pattern—in Cambria, for example, the gaps are wider, not narrower, than standard spaces. In monospaced fonts, the three dots are set extremely tight (thus: …, since the glyph may be no wider than an em). The ellipsis is also called a suspension point, points of ellipsis, periods of ellipsis, or (colloquially) "dot-dot-dot". Depending on their context and placement in a sentence, ellipses can indicate an unfinished thought, a leading statement, a slight pause, an echoing voice, or a nervous or awkward silence. Aposiopesis is the use of an ellipsis to trail off into silence—for example: "But I thought he was …" When placed at the beginning or end of a sentence, the ellipsis can also inspire a feeling of melancholy or longing. The most common forms of an ellipsis include a row of three periods or full points or a precomposed triple-dot glyph, the horizontal ellipsis (…). Style guides often have their own rules governing the use of ellipses. For example, "The Chicago Manual of Style" ("Chicago" style) recommends that an ellipsis be formed by typing three periods, each with a space on both sides, while the "Associated Press Stylebook" ("AP" style) puts the dots together, but retains a space before and after the group, thus: ... Whether an ellipsis at the end of a sentence needs a fourth dot to finish the sentence is a matter of debate; "Chicago" advises it, as does the "Publication Manual of the American Psychological Association" (APA style), while some other style guides do not; the "Merriam-Webster Dictionary" and related works treat this style as optional, saying that it "may" be used. More commonly, a normal full stop (period) terminates the sentence, then a separate three-dot ellipsis is used to indicate one or more subsequent omitted sentences before continuing a longer quotation. "Business Insider" magazine suggests this style, and it is also used in many academic journals. Even the "Associated Press Stylebook", notably hostile to punctuation that journalists may consider optional and removable to save newsprint column width, favors this approach. It is consistent in intent if not exact form with the agreement among those in favor of a fused four-dot ellipsis that the first of them is a full stop terminating the sentence and the other three are the ellipsis. In her book on the ellipsis, "Ellipsis in English Literature: Signs of Omission" (Cambridge University Press, 2015), Anne Toner suggests that the first use of the punctuation in the English language dates to a 1588 translation of Terence's "Andria", by Maurice Kyffin. In this case, however, the ellipsis consists not of dots but of short dashes. "Subpuncting" of medieval manuscripts also denotes omitted meaning and may be related. 
As commonly used, this juxtaposition of characters is referred to as "dots of ellipsis" in the English language. Occasionally, it would be used in pulp fiction and other works of early 20th-century fiction to denote expletives that would otherwise have been censored. An ellipsis may also imply an unstated alternative indicated by context. For example, when Sue says "I never drink wine . . . ", the implication is that she does drink something else, such as vodka. In reported speech, the ellipsis can be used to represent an intentional silence. In poetry, an ellipsis is used as a thought-pause or line break at the caesura, or to highlight sarcasm or make the reader think about the last points in the poem. In news reporting, often associated with brackets, it is used to indicate that a quotation has been condensed for space, brevity or relevance, as in "The President said that […] he would not be satisfied", where the exact quotation was "The President said that, for as long as this situation continued, he would not be satisfied". Herb Caen, Pulitzer Prize-winning columnist for the "San Francisco Chronicle", became famous for his "three-dot journalism". "The Chicago Manual of Style" suggests the use of an ellipsis for any omitted word, phrase, line, or paragraph from within but not at the end of a quoted passage. There are two commonly used methods of using ellipses: one uses three dots for any omission, while the second one makes a distinction between omissions within a sentence (using three dots: . . .) and omissions between sentences (using a period and a space followed by three dots: . ...). The Chicago Style Q&A recommends that writers avoid using the precomposed ellipsis character (U+2026) in manuscripts and place three periods plus two nonbreaking spaces (. . .) instead, leaving the editor, publisher, or typographer to replace them later. The Modern Language Association (MLA) used to indicate that an ellipsis must include spaces before and after each dot in all uses. If an ellipsis is meant to represent an omission, square brackets must surround the ellipsis to make it clear that there was no pause in the original quote: [. . .]. Currently, the MLA has removed the requirement of brackets in its style handbooks. However, some maintain that the use of brackets is still correct because it avoids confusion. The MLA now indicates that a three-dot, spaced ellipsis should be used for removing material from within one sentence within a quote. When crossing sentences (when the omitted text contains a period, so that omitting the end of a sentence counts), a four-dot, spaced (except for before the first dot) ellipsis should be used. When ellipsis points are used in the original text, ellipsis points that are not in the original text should be distinguished by enclosing them in square brackets (e.g. […]). According to the Associated Press, the ellipsis should be used to condense quotations. It is less commonly used to indicate a pause in speech or an unfinished thought or to separate items in material such as show business gossip. The stylebook indicates that if the shortened sentence before the mark can stand as a sentence, it should do so, with an ellipsis placed after the period or other ending punctuation. When material is omitted at the end of a paragraph and also immediately following it, an ellipsis goes both at the end of that paragraph and at the beginning of the next, according to this style. 
According to Robert Bringhurst's "Elements of Typographic Style", the details of typesetting ellipses depend on the character and size of the font being set and the typographer's preference. Bringhurst writes that a full space between each dot is "another Victorian eccentricity. In most contexts, the Chicago ellipsis is much too wide"—he recommends using flush dots (with a normal word space before and after), or "thin"-spaced dots (up to one-fifth of an em), or the prefabricated ellipsis character (…). Bringhurst suggests that normally an ellipsis should be spaced fore-and-aft to separate it from the text, but when it combines with other punctuation, the leading space disappears and the other punctuation follows. This is the usual practice in typesetting. He provides several examples of such combinations. In legal writing in the United States, Rule 5.3 in the "Bluebook" citation guide governs the use of ellipses and requires a space before the first dot and between the two subsequent dots. If an ellipsis ends the sentence, then there are three dots, each separated by a space, followed by the final punctuation. In some legal writing, an ellipsis is written as three asterisks (*** or * * *) to make it obvious that text has been omitted. "The Oxford Style Guide" recommends setting the ellipsis as a single character or as a series of three (narrow) spaced dots surrounded by spaces, thus: . . . If there is an ellipsis at the end of an incomplete sentence, the final full stop is omitted. However, it is retained if the following ellipsis represents an omission between two complete sentences. Contrary to "The Oxford Style Guide", the "University of Oxford Style Guide" demands that an ellipsis not be surrounded by spaces, except when it stands for a pause; then, a space has to be set after the ellipsis (but not before). An ellipsis is never preceded or followed by a full stop. When applied in Polish language syntax, the ellipsis is called "wielokropek", which means "multidot". The word "wielokropek" distinguishes the ellipsis of Polish syntax from that of mathematical notation, in which it is known as an ellipsis. When an ellipsis replaces a fragment omitted from a quotation, the ellipsis is enclosed in parentheses or square brackets. An unbracketed ellipsis indicates an interruption or pause in speech. The syntactical rules for ellipses are standardized by the 1983 Polska Norma document PN-83/P-55366 ("Rules for setting texts in the Polish Language"). The combination "ellipsis+period" is replaced by the ellipsis. The combinations "ellipsis+exclamation mark" and "ellipsis+question mark" are written in this way: !.. ?.. In Japanese, the most common character corresponding to an ellipsis is called the "3"-ten rīdā ("3-dot leader"). A 2-ten rīdā exists as a character, but it is used less commonly. In writing, the ellipsis consists usually of six dots (two "3"-ten rīdā characters). Three dots (one "3"-ten rīdā character) may be used where space is limited, such as in a header. However, variations in the number of dots exist. In horizontally written text the dots are commonly vertically centered within the text height (between the baseline and the ascent line), as in the standard Japanese Windows fonts; in vertically written text the dots are always centered horizontally. As the Japanese word for dot is pronounced "ten", the dots are colloquially called "ten-ten-ten" (akin to the English "dot dot dot"). 
In text in Japanese media, such as in manga or video games, ellipses are much more frequent than in English, and are often changed to another punctuation sign in translation. The ellipsis by itself represents speechlessness, or a "pregnant pause". Depending on the context, this could be anything from an admission of guilt to an expression of being dumbfounded at another person's words or actions. As a device, the "ten-ten-ten" is intended to focus the reader on a character while allowing the character to not speak any dialogue. This conveys to the reader a focus of the narrative "camera" on the silent subject, implying an expectation of some motion or action. It is not unheard of to see inanimate objects "speaking" the ellipsis. In Chinese, the ellipsis is six dots (in two groups of three dots, occupying the same horizontal or vertical space as two characters). In Spanish, the ellipsis is commonly used as a substitute for "et cetera" at the end of unfinished lists. So it means "and so forth" or "and other things". Another use is the suspension of a part of a text, or a paragraph, or a phrase or a part of a word because it is obvious, or unnecessary, or implied. For instance, sometimes the ellipsis is used to avoid the complete use of expletives. When the ellipsis is placed alone inside parentheses (...) or—less often—between brackets [...], which usually happens within a text transcription, it means that the original text had more content at that position which is not relevant to the purpose of the transcription. When the suppressed text is at the beginning or at the end of a text, the ellipsis does not need to be placed in parentheses. The number of dots is three and only three. In French, the ellipsis is commonly used at the end of lists to represent "et cetera". In French typography, the ellipsis is written close up to the preceding word but has a space after it, for example: "comme ça… pas comme ceci". If, exceptionally, it begins a sentence, there is a space before and after, for example: "Lui ? … vaut rien, je crois… ". However, any omitted word, phrase or line at the end of a quoted passage would be indicated like this: [...] (space before and after the square brackets but not inside), for example: " … à Paris, Nice, Nantes, Toulouse […] ". In German, the ellipsis is in general surrounded by spaces if it stands for one or more omitted words. On the other hand, there is no space between a letter or (part of) a word and an ellipsis if it stands for one or more omitted letters; it sticks to the written letter or letters. An example for both cases, using German style: "The first el…is stands for omitted letters, the second … for an omitted word." If the ellipsis is at the end of a sentence, the final full stop is omitted. Example: "I think that …" In computer menu functions or buttons, an ellipsis means that upon selection more options (sometimes in the form of a dialog box) will be displayed, where the user can or must make a choice. If the ellipsis is absent, the function is immediately executed upon selection. For example, the menu item "Save" indicates that the file will be overwritten without further input, whereas "Save as…" indicates that a dialog follows where the user can, for example, select another location, file name, or format. Ellipses are also used as a separate button (particularly considering the limited screen area of mobile apps) to represent partially or completely hidden options. 
This usage may alternatively be described as a "More button" (see also the hamburger button, signifying completely hidden options). In mobile, web, and general application design, the vertical ellipsis, ⋮, is sometimes used as an interface element, where it is sometimes called a kebab icon. The element typically indicates that a navigation menu can be accessed when the element is activated, and is a smaller version of the hamburger icon (≡), which is a stylized rendering of a menu. An ellipsis is also often used in mathematics to mean "and so forth". In a list, between commas, or following a comma, a normal ellipsis is used, as in 1, 2, 3, …, 100, or to mean an infinite list, as in 1, 2, 3, … To indicate the omission of values in a repeated operation, an ellipsis raised to the center of the line is used between two operation symbols or following the last operation symbol, as in 1 + 2 + ⋯ + 100. Sometimes, e.g. in Russian mathematical texts, normal, non-raised, ellipses are used even in repeated summations. The latter formula means the sum of all natural numbers from 1 to 100. However, it is not a formally defined mathematical symbol. Repeated summations or products may similarly be denoted using capital sigma and capital pi notation, respectively. Normally dots should be used only where the pattern to be followed is clear, the exception being to show the indefinite continuation of an irrational number such as π = 3.14159… Sometimes it is useful to display a formula compactly with an ellipsis; another example is the set of positive zeros of the cosine function, {π/2, 3π/2, 5π/2, …}. There are many related uses of the ellipsis in set notation. The diagonal and vertical forms of the ellipsis are particularly useful for showing missing terms in matrices, such as the size-"n" identity matrix. A two- or three-dot ellipsis is used as an operator in some programming languages. The precise meaning varies by language, but it generally involves something dealing with multiple items. One of its most common uses is in defining ranges or sequences. This is used in many languages, including Pascal, Modula, Oberon, Ada, Haskell, Perl, Python, Ruby, Kotlin, Bash shell and F#. It is also used to indicate so-called variadic functions in the C, C++ and Java languages. "See Ellipsis (programming operator)". The CSS text-overflow property can be set to ellipsis, which cuts off text with an ellipsis when it overflows the content area. The ellipsis is a non-verbal cue that is often used in computer-mediated interactions, in particular in synchronous genres, such as chat. The reason behind its popularity is that it allows people to indicate several functions in writing. Although an ellipsis is technically complete with three periods (...), its rise in popularity as a "trailing-off" or "silence" indicator, particularly in mid-20th-century comic strip and comic book prose writing, has led to expanded uses online. Today, extended ellipses of anywhere from two to dozens of periods have become common constructions in Internet chat rooms and text messages. The extent of repetition in itself might serve as an additional contextualization or paralinguistic cue, to "extend the lexical meaning of the words, add character to the sentences, and allow fine-tuning and personalisation of the message". In computing, several ellipsis characters have been codified, depending on the system used. In the Unicode standard, there are several such characters. Unicode recognizes a series of three period characters (U+002E) as compatibility equivalent (though not canonical) to the horizontal ellipsis character. 
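How the three dots behave differs from language to language, and a minimal sketch in Python (one of the languages listed above) may help illustrate the point. Python does not use the dots as a range operator the way Pascal or Ruby do; instead, the literal ... is the built-in Ellipsis object, commonly used as a stub placeholder, in multi-axis array slicing, and in type hints. The array example assumes the third-party NumPy package is installed; the function name is purely illustrative.

```python
# Sketch of Python's ellipsis token: a singleton object, not a range operator.
import numpy as np
from typing import Callable

print(... is Ellipsis)            # True: the literal ... is the Ellipsis singleton

def not_yet_written() -> None:
    ...                           # common stub body, roughly equivalent to "pass"

arr = np.arange(24).reshape(2, 3, 4)
print(arr[..., 0].shape)          # (2, 3): ... stands for "all remaining axes"

Handler = Callable[..., int]      # type hint: any argument list, returning int
```

By contrast, a range such as 1..10 in Pascal or Ruby enumerates values between two endpoints, which is the "defining ranges or sequences" use described above.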
In HTML, the horizontal ellipsis character may be represented by the entity reference &hellip; (since HTML 4.0), and the vertical ellipsis character by the entity reference &vellip; (since HTML 5.0). Alternatively, in HTML, XML, and SGML, a numeric character reference such as &#8230; or &#x2026; can be used. In the TeX typesetting system, several types of ellipsis are available, including \ldots (on the baseline), \cdots (centered), \vdots (vertical), and \ddots (diagonal). In LaTeX, note that the reverse orientation of \ddots can be achieved with \reflectbox provided by the graphicx package: \reflectbox{$\ddots$} yields a mirrored diagonal ellipsis. With the amsmath package from AMS-LaTeX, more specific ellipses are provided for math mode. The horizontal ellipsis character also appears in several older character maps. Note that the ISO/IEC 8859 encoding series provides no code point for the ellipsis. As with all characters, especially those outside the ASCII range, the author, sender and receiver of an encoded ellipsis must be in agreement upon what bytes are being used to represent the character. Naive text processing software may improperly assume that a particular encoding is being used, resulting in mojibake. In Abstract Syntax Notation One (ASN.1), the ellipsis is used as an extension marker to indicate the possibility of type extensions in future revisions of a protocol specification. In a type constraint expression, an ellipsis is used to separate the extension root from extension additions. The definition of type A in a version 1 system and the definition of type A in a version 2 system constitute an extension series of the same type A in different versions of the same specification. The ellipsis can also be used in compound type definitions to separate the set of fields belonging to the extension root from the set of fields constituting extension additions. In Windows, the horizontal ellipsis can be inserted with Alt+0133, using the numeric keypad. In macOS, it can be inserted with Option+; (on an English language keyboard). In some Linux distributions, it can be inserted with an AltGr key combination (which produces an interpunct on other systems) or with a Compose key sequence. In Chinese and sometimes in Japanese, ellipsis characters are made by entering two consecutive "horizontal ellipses", each with Unicode code point U+2026. In vertical texts, the application should rotate the symbol accordingly.
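A short LaTeX sketch ties together the TeX commands named above and the diagonal and vertical forms used for matrices in the mathematics discussion earlier; it assumes only the standard amsmath package and is intended as an illustration, not as the only correct markup.

```latex
% Standard ellipsis commands: \ldots (baseline), \cdots (centered),
% \vdots (vertical), \ddots (diagonal), plus a matrix example.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Text dots: one, two, \ldots, ten.
\[ 1 + 2 + \cdots + 100 = \sum_{n=1}^{100} n = 5050 \]
\[ I_n =
   \begin{pmatrix}
     1      & 0      & \cdots & 0      \\
     0      & 1      & \cdots & 0      \\
     \vdots & \vdots & \ddots & \vdots \\
     0      & 0      & \cdots & 1
   \end{pmatrix} \]
\end{document}
```

In HTML, the same horizontal ellipsis can be written as &hellip; or as the numeric reference &#x2026;, as noted above.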
https://en.wikipedia.org/wiki?curid=9596
Enola Gay The Enola Gay is a Boeing B-29 Superfortress bomber, named after Enola Gay Tibbets, the mother of the pilot, Colonel Paul Tibbets. On 6 August 1945, during the final stages of World War II, piloted by Tibbets and Robert A. Lewis, it became the first aircraft to drop an atomic bomb. The bomb, code-named "Little Boy", was targeted at the city of Hiroshima, Japan, and caused the near-complete destruction of the city. "Enola Gay" participated in the second atomic attack as the weather reconnaissance aircraft for the primary target of Kokura. Clouds and drifting smoke resulted in a secondary target, Nagasaki, being bombed instead. After the war, the "Enola Gay" returned to the United States, where it was operated from Roswell Army Air Field, New Mexico. In May 1946, it was flown to Kwajalein for the Operation Crossroads nuclear tests in the Pacific, but was not chosen to make the test drop at Bikini Atoll. Later that year it was transferred to the Smithsonian Institution, and spent many years parked at air bases exposed to the weather and souvenir hunters, before being disassembled and transported to the Smithsonian's storage facility at Suitland, Maryland, in 1961. In the 1980s, veterans groups engaged in a call for the Smithsonian to put the aircraft on display, leading to an acrimonious debate about exhibiting the aircraft without a proper historical context. The cockpit and nose section of the aircraft were exhibited at the National Air and Space Museum (NASM) on the National Mall for the bombing's 50th anniversary in 1995, amid controversy. Since 2003, the entire restored B-29 has been on display at NASM's Steven F. Udvar-Hazy Center. The last survivor of its crew, Theodore Van Kirk, died on 28 July 2014 at the age of 93. The "Enola Gay" (Model number B-29-45-MO, Serial number 44-86292, Victor number 82) was built by the Glenn L. Martin Company (later part of Lockheed Martin) at its Bellevue, Nebraska, plant, located at what is now known as Offutt Air Force Base. The bomber was one of the first 15 B-29s built to the "Silverplate" specification—of 65 eventually completed during and after World War II—giving them the primary ability to function as nuclear "weapon delivery" aircraft. These modifications included an extensively modified bomb bay with pneumatic doors and British bomb attachment and release systems, reversible pitch propellers that gave more braking power on landing, improved engines with fuel injection and better cooling, and the removal of protective armor and gun turrets. "Enola Gay" was personally selected by Colonel Paul W. Tibbets Jr., the commander of the 509th Composite Group, on 9 May 1945, while still on the assembly line. The aircraft was accepted by the United States Army Air Forces (USAAF) on 18 May 1945 and assigned to the 393d Bombardment Squadron, Heavy, 509th Composite Group. Crew B-9, commanded by Captain Robert A. Lewis, took delivery of the bomber and flew it from Omaha to the 509th base at Wendover Army Air Field, Utah, on 14 June 1945. Thirteen days later, the aircraft left Wendover for Guam, where it received a bomb-bay modification, and flew to North Field, Tinian, on 6 July. It was initially given the Victor (squadron-assigned identification) number 12, but on 1 August, was given the circle R tail markings of the 6th Bombardment Group as a security measure and had its Victor number changed to 82 to avoid misidentification with actual 6th Bombardment Group aircraft. 
During July, the bomber made eight practice or training flights, and flew two missions, on 24 and 26 July, to drop pumpkin bombs on industrial targets at Kobe and Nagoya. "Enola Gay" was used on 31 July on a rehearsal flight for the actual mission. The partially assembled Little Boy gun-type fission weapon L-11 was contained inside a wooden crate that was secured to the deck of the USS "Indianapolis". Unlike the six uranium-235 target discs, which were later flown to Tinian on three separate aircraft arriving 28 and 29 July, the assembled projectile with the nine uranium-235 rings installed was shipped in a single lead-lined steel container that was locked to brackets welded to the deck of Captain Charles B. McVay III's quarters. Both the L-11 and projectile were dropped off at Tinian on 26 July 1945. On 5 August 1945, during preparation for the first atomic mission, Tibbets assumed command of the aircraft and named it after his mother, Enola Gay Tibbets, who, in turn, had been named for the heroine of a novel. Tibbets later recalled how he came to select the name for the plane. The name was painted on the aircraft on 5 August by Allan L. Karl, an enlisted man in the 509th. Regularly assigned aircraft commander Robert Lewis was unhappy to be displaced by Tibbets for this important mission, and became furious when he arrived at the aircraft on the morning of 6 August to see it painted with the now-famous nose art. Hiroshima was the primary target of the first nuclear bombing mission on 6 August, with Kokura and Nagasaki as alternative targets. "Enola Gay", piloted by Tibbets, took off from North Field, in the Northern Mariana Islands, about six hours' flight time from Japan, accompanied by two other B-29s, "The Great Artiste", carrying instrumentation, and a then-nameless aircraft later called "Necessary Evil", commanded by Captain George Marquardt, to take photographs. The director of the Manhattan Project, Major General Leslie R. Groves, Jr., wanted the event recorded for posterity, so the takeoff was illuminated by floodlights. When he wanted to taxi, Tibbets leaned out the window to direct the bystanders out of the way. On request, he gave a friendly wave for the cameras. After leaving Tinian, the aircraft made their way separately to Iwo Jima, where they rendezvoused and set course for Japan. The aircraft arrived over the target in clear visibility. Captain William S. "Deak" Parsons of Project Alberta, who was in command of the mission, armed the bomb during the flight to minimize the risks during takeoff. His assistant, Second Lieutenant Morris R. Jeppson, removed the safety devices 30 minutes before reaching the target area. The release at 08:15 (Hiroshima time) went as planned, and the Little Boy took 53 seconds to fall from the aircraft to its predetermined detonation height above the city. "Enola Gay" had flown some distance from the drop point before it felt the shock waves from the blast. Although buffeted by the shock, neither "Enola Gay" nor "The Great Artiste" was damaged. The detonation created a blast equivalent to about 15 kilotons of TNT. The U-235 weapon was considered very inefficient, with only 1.7% of its fissile material reacting. The radius of total destruction was about one mile (1.6 km), with resulting fires across a much larger area. American observers estimated that a large portion of the city had been destroyed; Japanese officials determined that 69% of Hiroshima's buildings were destroyed and another 6–7% damaged. 
Some 70,000–80,000 people, 30% of the city's population, were killed by the blast and resultant firestorm, and another 70,000 injured. Out of those killed, 20,000 were soldiers and 20,000 Korean slave laborers. "Enola Gay" returned safely to its base on Tinian to great fanfare, touching down at 2:58 pm, after 12 hours 13 minutes. "The Great Artiste" and "Necessary Evil" followed at short intervals. Several hundred people, including journalists and photographers, had gathered to watch the planes return. Tibbets was the first to disembark, and was presented with the Distinguished Service Cross on the spot. The Hiroshima mission was followed by another atomic strike. Originally scheduled for 11 August, it was brought forward by two days to 9 August owing to a forecast of bad weather. This time, a Fat Man nuclear weapon was carried by B-29 "Bockscar", piloted by Major Charles W. Sweeney. "Enola Gay", flown by Captain George Marquardt's Crew B-10, was the weather reconnaissance aircraft for Kokura, the primary target. "Enola Gay" reported clear skies over Kokura, but by the time "Bockscar" arrived, the city was obscured by smoke from fires caused by the conventional bombing of Yahata by 224 B-29s the day before. After three unsuccessful passes, "Bockscar" diverted to its secondary target, Nagasaki, where it dropped its bomb. In contrast to the Hiroshima mission, the Nagasaki mission has been described as tactically botched, although the mission did meet its objectives. The crew encountered a number of problems in execution, and had very little fuel by the time they landed at the emergency backup landing site Yontan Airfield on Okinawa. "Enola Gay"'s crew on 6 August 1945 consisted of 12 men. Of mission commander Parsons, it was said: "There is no one more responsible for getting this bomb out of the laboratory and into some form useful for combat operations than Captain Parsons, by his plain genius in the ordnance business." For the Nagasaki mission, "Enola Gay" was flown by Crew B-10, normally assigned to "Up An' Atom". On 6 November 1945, Lewis flew the "Enola Gay" back to the United States, arriving at the 509th's new base at Roswell Army Air Field, New Mexico, on 8 November. On 29 April 1946, "Enola Gay" left Roswell as part of the Operation Crossroads nuclear weapons tests in the Pacific. It flew to Kwajalein Atoll on 1 May. It was not chosen to make the test drop at Bikini Atoll and left Kwajalein on 1 July, the date of the test, reaching Fairfield-Suisun Army Air Field, California, the next day. The decision was made to preserve the "Enola Gay", and on 24 July 1946, the aircraft was flown to Davis–Monthan Air Force Base, Tucson, Arizona, in preparation for storage. On 30 August 1946, the title to the aircraft was transferred to the Smithsonian Institution and the "Enola Gay" was removed from the USAAF inventory. From 1946 to 1961, the "Enola Gay" was put into temporary storage at a number of locations. It was at Davis-Monthan from 1 September 1946 until 3 July 1949, when it was flown to Orchard Place Air Field, Park Ridge, Illinois, by Tibbets for acceptance by the Smithsonian. It was moved to Pyote Air Force Base, Texas, on 12 January 1952, and then to Andrews Air Force Base, Maryland, on 2 December 1953, because the Smithsonian had no storage space for the aircraft. It was hoped that the Air Force would guard the plane, but, lacking hangar space, it was left outdoors on a remote part of the air base, exposed to the elements. Souvenir hunters broke in and removed parts. 
Insects and birds then gained access to the aircraft. Paul E. Garber of the Smithsonian Institution became concerned about the "Enola Gay"'s condition, and on 10 August 1960, Smithsonian staff began dismantling the aircraft. The components were transported to the Smithsonian storage facility at Suitland, Maryland, on 21 July 1961. "Enola Gay" remained at Suitland for many years. By the early 1980s, two veterans of the 509th, Don Rehl and his former navigator, Frank B. Stewart, began lobbying for the aircraft to be restored and put on display. They enlisted Tibbets and Senator Barry Goldwater in their campaign. In 1983, Walter J. Boyne, a former B-52 pilot with the Strategic Air Command, became director of the National Air and Space Museum, and he made the "Enola Gay"'s restoration a priority. Looking at the aircraft, Tibbets recalled, was a "sad meeting. [My] fond memories, and I don't mean the dropping of the bomb, were the numerous occasions I flew the airplane ... I pushed it very, very hard and it never failed me ... It was probably the most beautiful piece of machinery that any pilot ever flew." Restoration of the bomber began on 5 December 1984, at the Paul E. Garber Preservation, Restoration, and Storage Facility in Suitland-Silver Hill, Maryland. The propellers that were used on the bombing mission were later shipped to Texas A&M University. One of these propellers was trimmed for use in the university's Oran W. Nicks Low Speed Wind Tunnel. The lightweight aluminum variable-pitch propeller is powered by a 1,250 kVA electric motor. Two engines were rebuilt at Garber and two at San Diego Air & Space Museum. Some parts and instruments had been removed and could not be located. Replacements were found or fabricated, and marked so that future curators could distinguish them from the original components. "Enola Gay" became the center of a controversy at the Smithsonian Institution when the museum planned to put its fuselage on public display in 1995 as part of an exhibit commemorating the 50th anniversary of the atomic bombing of Hiroshima. The exhibit, "The Crossroads: The End of World War II, the Atomic Bomb and the Cold War," was drafted by the Smithsonian's National Air and Space Museum staff, and arranged around the restored "Enola Gay". Critics of the planned exhibit, especially those of the American Legion and the Air Force Association, charged that the exhibit focused too much attention on the Japanese casualties inflicted by the nuclear bomb, rather than on the motives for the bombing or the discussion of the bomb's role in ending the conflict with Japan. The exhibit brought to national attention many long-standing academic and political issues related to retrospective views of the bombings. After attempts to revise the exhibit to meet the satisfaction of competing interest groups, the exhibit was canceled on 30 January 1995. Martin O. Harwit, Director of the National Air and Space Museum, was compelled to resign over the controversy, and he later reflected on the episode. The forward fuselage went on display on 28 June 1995. On 2 July 1995, three people were arrested for throwing ash and human blood on the aircraft's fuselage, following an earlier incident in which a protester had thrown red paint over the gallery's carpeting. The exhibition closed on 18 May 1998 and the fuselage was returned to the Garber Facility for final restoration. Restoration work began in 1984, and would eventually require 300,000 staff hours. 
While the fuselage was on display, from 1995 to 1998, work continued on the remaining unrestored components. The aircraft was shipped in pieces to the National Air and Space Museum's Steven F. Udvar-Hazy Center in Chantilly, Virginia, from March to June 2003, with the fuselage and wings reunited for the first time since 1960 on 10 April 2003 and assembly completed on 8 August 2003. The aircraft has been on display at the Udvar-Hazy Center since the museum annex opened on 15 December 2003. As a result of the earlier controversy, the signage around the aircraft provided only the same succinct technical data as is provided for other aircraft in the museum, without discussion of the controversial issues. The display of the "Enola Gay" without reference to the historical context of World War II, the Cold War, or the development and deployment of nuclear weapons aroused controversy. A petition from a group calling themselves the Committee for a National Discussion of Nuclear History and Current Policy bemoaned the display of "Enola Gay" as a technological achievement, which it described as an "extraordinary callousness toward the victims, indifference to the deep divisions among American citizens about the propriety of these actions, and disregard for the feelings of most of the world's peoples". It attracted signatures from notable figures including historian Gar Alperovitz, social critic Noam Chomsky, whistleblower Daniel Ellsberg, physicist Joseph Rotblat, writer Kurt Vonnegut, producer Norman Lear, actor Martin Sheen and filmmaker Oliver Stone.
https://en.wikipedia.org/wiki?curid=9597
Electronvolt In physics, an electronvolt (symbol eV, also written electron-volt and electron volt) is the amount of kinetic energy gained (or lost) by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. When used as a unit of energy, the numerical value of 1 eV in joules (symbol J) is equivalent to the numerical value of the charge of an electron in coulombs (symbol C). Under the 2019 redefinition of the SI base units, this sets 1 eV equal to exactly 1.602176634×10⁻¹⁹ J. Historically, the electronvolt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with electric charge "q" gains an energy "E" = "qV" after passing through the potential "V"; if "q" is quoted in integer units of the elementary charge and the potential in volts, one gets an energy in eV. It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics. It is commonly used with the metric prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion (109) electronvolts; it is equivalent to the GeV. An electronvolt is the amount of kinetic energy gained or lost by a single electron accelerating from rest through an electric potential difference of one volt in vacuum. Hence, it has a value of one volt, which is 1 J/C, multiplied by the electron's elementary charge "e" = 1.602176634×10⁻¹⁹ C. Therefore, one electronvolt is equal to 1.602176634×10⁻¹⁹ J. The electronvolt, as opposed to the volt, is not an SI unit. The electronvolt (eV) is a unit of energy whereas the volt (V) is the derived SI unit of electric potential. The SI unit for energy is the joule (J). By mass–energy equivalence, the electronvolt is also a unit of mass. It is common in particle physics, where units of mass and energy are often interchanged, to express mass in units of eV/"c"2, where "c" is the speed of light in vacuum (from "E" = "mc"2). It is common to simply express mass in terms of "eV" as a unit of mass, effectively using a system of natural units with "c" set to 1. The mass equivalent of 1 eV/"c"2 is about 1.78×10⁻³⁶ kg. For example, an electron and a positron, each with a mass of 0.511 MeV/"c"2, can annihilate to yield 1.022 MeV of energy. The proton has a mass of 0.938 GeV/"c"2. In general, the masses of all hadrons are of the order of 1 GeV/"c"2, which makes the GeV (gigaelectronvolt) a convenient unit of mass for particle physics. The unified atomic mass unit (u), almost exactly 1 gram divided by the Avogadro number, is almost the mass of a hydrogen atom, which is mostly the mass of the proton. To convert to electron volts, use the conversion 1 u ≈ 931.494 MeV/"c"2. In high-energy physics, the electronvolt is often used as a unit of momentum. A potential difference of 1 volt causes an electron to gain an amount of energy (i.e., 1 eV). This gives rise to usage of eV (and keV, MeV, GeV or TeV) as units of momentum, for the energy supplied results in acceleration of the particle. The dimensions of momentum units are M L T⁻¹. The dimensions of energy units are M L² T⁻². Then, dividing the units of energy (such as eV) by a fundamental constant that has units of velocity (L T⁻¹) facilitates the required conversion for using energy units to describe momentum. In the field of high-energy particle physics, the fundamental velocity unit is the speed of light in vacuum "c". By dividing energy in eV by the speed of light, one can describe the momentum of an electron in units of eV/"c". 
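The relations above are simple enough to check numerically. The following sketch, with the constants written out explicitly (the exact post-2019 value of the elementary charge and CODATA values for the speed of light and the electron mass), converts between electronvolts and joules and recovers the electron's rest energy of about 0.511 MeV; the function names are illustrative only.

```python
# Sketch of the eV <-> joule relation and the mass-energy equivalence above.
EV_IN_JOULES = 1.602176634e-19       # 1 eV in joules (exact since 2019)
C = 299_792_458.0                    # speed of light in vacuum, m/s
ELECTRON_MASS_KG = 9.1093837015e-31  # electron rest mass, kg (CODATA)

def ev_to_joules(energy_ev: float) -> float:
    """Convert an energy from electronvolts to joules."""
    return energy_ev * EV_IN_JOULES

def mass_kg_to_ev(mass_kg: float) -> float:
    """Rest energy E = m c^2, expressed in electronvolts."""
    return mass_kg * C**2 / EV_IN_JOULES

print(ev_to_joules(1.0))                # 1.602176634e-19 J
print(mass_kg_to_ev(ELECTRON_MASS_KG))  # ~5.11e5 eV, i.e. about 0.511 MeV
```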
The fundamental velocity constant "c" is often "dropped" from the units of momentum by way of defining units of length such that the value of "c" is unity. For example, if the momentum "p" of an electron is quoted in eV/"c", the conversion to MKS units can be achieved by multiplying by the elementary charge (to convert eV to joules) and dividing by the speed of light. In particle physics, a system of "natural units" in which the speed of light in vacuum "c" and the reduced Planck constant "ħ" are dimensionless and equal to unity is widely used: "c" = "ħ" = 1. In these units, both distances and times are expressed in inverse energy units (while energy and mass are expressed in the same units, see mass–energy equivalence). In particular, particle scattering lengths are often presented in units of inverse particle masses. Outside this system of units, the conversion factors between electronvolt, second, and nanometer are the following: "ħ" ≈ 6.582×10⁻¹⁶ eV·s and "ħc" ≈ 197.327 eV·nm. The above relations also allow expressing the mean lifetime "τ" of an unstable particle (in seconds) in terms of its decay width "Γ" (in eV) via "τ" = "ħ"/"Γ". For example, the B0 meson has a lifetime of 1.530(9) picoseconds, corresponding to a mean decay length of "cτ" ≈ 459 μm, or a decay width of about 4.3×10⁻⁴ eV. Conversely, the tiny meson mass differences responsible for meson oscillations are often expressed in the more convenient inverse picoseconds. Energy in electronvolts is sometimes expressed through the wavelength of light with photons of the same energy: "E" (eV) ≈ 1239.84/"λ" (nm). In certain fields, such as plasma physics, it is convenient to use the electronvolt to express temperature. The electronvolt is divided by the Boltzmann constant to convert to the Kelvin scale: 1 eV/"k"B ≈ 11604.5 K, where "k"B is the Boltzmann constant, K is kelvin, J is joules, eV is electronvolts. The "k"B is assumed when using the electronvolt to express temperature; for example, a typical magnetic confinement fusion plasma is 15 keV (kilo-electronvolts), which is equal to 170 MK (million kelvin). As an approximation: "k"B"T" is about 0.025 eV (≈ 25 meV) at a temperature of about 20 °C (293 K). The energy "E", frequency "ν", and wavelength λ of a photon are related by "E" = "hν" = "hc"/"λ", where "h" is the Planck constant and "c" is the speed of light. This reduces to "E" (eV) ≈ 1239.84/"λ" (nm). A photon with a wavelength of 532 nm (green light) would have an energy of approximately 2.33 eV. Similarly, 1 eV would correspond to an infrared photon of wavelength about 1240 nm or frequency about 241.8 THz. In a low-energy nuclear scattering experiment, it is conventional to refer to the nuclear recoil energy in units of eVr, keVr, etc. This distinguishes the nuclear recoil energy from the "electron equivalent" recoil energy (eVee, keVee, etc.) measured by scintillation light. For example, the yield of a phototube is measured in phe/keVee (photoelectrons per keV electron-equivalent energy). The relationship between eV, eVr, and eVee depends on the medium the scattering takes place in, and must be established empirically for each material. One mole of particles given 1 eV of energy has approximately 96.5 kJ of energy — this corresponds to the Faraday constant ("F" ≈ 96485 C/mol), where the energy in joules of "n" moles of particles each with energy "E" eV is equal to "E"·"F"·"n".
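Two of the conversions discussed above, photon energy from wavelength (E = hc/λ) and temperature expressed in electronvolts (E = kB·T), can be sketched the same way; the constants are the exact post-2019 SI values, and the function names are again only illustrative.

```python
# Sketch of the photon-energy and temperature conversions described above.
H_PLANCK = 6.62607015e-34       # Planck constant, J*s (exact since 2019)
C = 299_792_458.0               # speed of light in vacuum, m/s
K_BOLTZMANN = 1.380649e-23      # Boltzmann constant, J/K (exact since 2019)
EV_IN_JOULES = 1.602176634e-19  # 1 eV in joules (exact since 2019)

def photon_energy_ev(wavelength_m: float) -> float:
    """Photon energy E = h c / lambda, returned in electronvolts."""
    return H_PLANCK * C / wavelength_m / EV_IN_JOULES

def ev_to_kelvin(energy_ev: float) -> float:
    """Temperature corresponding to an energy, via E = kB * T."""
    return energy_ev * EV_IN_JOULES / K_BOLTZMANN

print(photon_energy_ev(532e-9))  # green light: about 2.33 eV
print(ev_to_kelvin(1.0))         # about 11,605 K, so 15 keV is roughly 1.7e8 K
```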
https://en.wikipedia.org/wiki?curid=9598
Electrochemistry Electrochemistry is the branch of physical chemistry that studies the relationship between electricity, as a measurable and quantitative phenomenon, and identifiable chemical change, with either electricity considered an outcome of a particular chemical change or vice versa. These reactions involve electric charges moving between electrodes and an electrolyte (or ionic species in a solution). Thus electrochemistry deals with the interaction between electrical energy and chemical change. When a chemical reaction is caused by an externally supplied current, as in electrolysis, or if an electric current is produced by a spontaneous chemical reaction as in a battery, it is called an "electrochemical" reaction. Chemical reactions where electrons are transferred directly between molecules and/or atoms are called oxidation-reduction or redox reactions. In general, electrochemistry describes the overall reactions when individual redox reactions are separate but connected by an external electric circuit and an intervening electrolyte. Understanding of electrical matters began in the sixteenth century. During this century, the English scientist William Gilbert spent 17 years experimenting with magnetism and, to a lesser extent, electricity. For his work on magnets, Gilbert became known as the "Father of Magnetism." He discovered various methods for producing and strengthening magnets. In 1663, the German physicist Otto von Guericke created the first electric generator, which produced static electricity by applying friction in the machine. The generator was made of a large sulfur ball cast inside a glass globe, mounted on a shaft. The ball was rotated by means of a crank and an electric spark was produced when a pad was rubbed against the ball as it rotated. The globe could be removed and used as a source for experiments with electricity. By the mid-18th century, the French chemist Charles François de Cisternay du Fay had discovered two types of static electricity, and that like charges repel each other whilst unlike charges attract. Du Fay announced that electricity consisted of two fluids: "vitreous" (from the Latin for "glass"), or positive, electricity; and "resinous," or negative, electricity. This was the "two-fluid theory" of electricity, which was to be opposed by Benjamin Franklin's "one-fluid theory" later in the century. In 1785, Charles-Augustin de Coulomb developed the law of electrostatic attraction as an outgrowth of his attempt to investigate the law of electrical repulsions as stated by Joseph Priestley in England. In the late 18th century the Italian physician and anatomist Luigi Galvani marked the birth of electrochemistry by establishing a bridge between chemical reactions and electricity in his essay "De Viribus Electricitatis in Motu Musculari Commentarius" (Latin for Commentary on the Effect of Electricity on Muscular Motion) in 1791, where he proposed a "nerveo-electrical substance" in biological life forms. In his essay Galvani concluded that animal tissue contained a heretofore neglected innate, vital force, which he termed "animal electricity," which activated nerves and muscles spanned by metal probes. He believed that this new force was a form of electricity in addition to the "natural" form produced by lightning or by the electric eel and torpedo ray as well as the "artificial" form produced by friction (i.e., static electricity). 
Galvani's scientific colleagues generally accepted his views, but Alessandro Volta rejected the idea of an "animal electric fluid," replying that the frog's legs responded to differences in metal temper, composition, and bulk. Galvani refuted this by obtaining muscular action with two pieces of the same material. In 1800, William Nicholson and Johann Wilhelm Ritter succeeded in decomposing water into hydrogen and oxygen by electrolysis. Soon thereafter Ritter discovered the process of electroplating. He also observed that the amount of metal deposited and the amount of oxygen produced during an electrolytic process depended on the distance between the electrodes. By 1801, Ritter observed thermoelectric currents and anticipated the discovery of thermoelectricity by Thomas Johann Seebeck. By the 1810s, William Hyde Wollaston made improvements to the galvanic cell. Sir Humphry Davy's work with electrolysis led to the conclusion that the production of electricity in simple electrolytic cells resulted from chemical action and that chemical combination occurred between substances of opposite charge. This work led directly to the isolation of sodium and potassium from their compounds and of the alkaline earth metals from theirs in 1808. Hans Christian Ørsted's discovery of the magnetic effect of electric currents in 1820 was immediately recognized as an epoch-making advance, although he left further work on electromagnetism to others. André-Marie Ampère quickly repeated Ørsted's experiment and formulated its results mathematically. In 1821, the Estonian-German physicist Thomas Johann Seebeck demonstrated the electrical potential between the junction points of two dissimilar metals when there is a temperature difference between the junctions. In 1827, the German scientist Georg Ohm expressed his law in his famous book "Die galvanische Kette, mathematisch bearbeitet" (The Galvanic Circuit Investigated Mathematically), in which he gave his complete theory of electricity. In 1832, Michael Faraday's experiments led him to state his two laws of electrochemistry. In 1836, John Daniell invented a primary cell which solved the problem of polarization by eliminating hydrogen gas generation at the positive electrode. Later results revealed that amalgamating the zinc with mercury would produce a higher voltage. William Grove produced the first fuel cell in 1839. In 1846, Wilhelm Weber developed the electrodynamometer. In 1868, Georges Leclanché patented a new cell which eventually became the forerunner to the world's first widely used battery, the zinc carbon cell. Svante Arrhenius published his thesis in 1884 on "Recherches sur la conductibilité galvanique des électrolytes" (Investigations on the galvanic conductivity of electrolytes). From his results the author concluded that electrolytes, when dissolved in water, become to varying degrees split or dissociated into electrically opposite positive and negative ions. In 1886, Paul Héroult and Charles M. Hall developed an efficient method (the Hall–Héroult process) to obtain aluminium using electrolysis of molten alumina. In 1894, Friedrich Ostwald concluded important studies of the conductivity and electrolytic dissociation of organic acids. Walther Hermann Nernst developed the theory of the electromotive force of the voltaic cell in 1888. In 1889, he showed how the characteristics of the current produced could be used to calculate the free energy change in the chemical reaction producing the current. 
He constructed an equation, known as the Nernst equation, which related the voltage of a cell to its properties. In 1898, Fritz Haber showed that definite reduction products can result from electrolytic processes if the potential at the cathode is kept constant. In 1898, he explained the reduction of nitrobenzene in stages at the cathode, and this became the model for other similar reduction processes. In 1902, The Electrochemical Society (ECS) was founded. In 1909, Robert Andrews Millikan began a series of experiments (see oil drop experiment) to determine the electric charge carried by a single electron. In 1923, Johannes Nicolaus Brønsted and Martin Lowry published essentially the same theory about how acids and bases behave, using an electrochemical basis. In 1937, Arne Tiselius developed the first sophisticated electrophoretic apparatus. Some years later, he was awarded the 1948 Nobel Prize for his work in protein electrophoresis. A year later, in 1949, the International Society of Electrochemistry (ISE) was founded. By the 1960s–1970s quantum electrochemistry was developed by Revaz Dogonadze and his pupils. The term "redox" stands for reduction-oxidation. It refers to electrochemical processes involving electron transfer to or from a molecule or ion changing its oxidation state. This reaction can occur through the application of an external voltage or through the release of chemical energy. Oxidation and reduction describe the change of oxidation state that takes place in the atoms, ions or molecules involved in an electrochemical reaction. Formally, oxidation state is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic. An atom or ion that gives up an electron to another atom or ion has its oxidation state increase, and the recipient of the negatively charged electron has its oxidation state decrease. For example, when atomic sodium reacts with atomic chlorine, sodium donates one electron and attains an oxidation state of +1. Chlorine accepts the electron and its oxidation state is reduced to −1. The sign of the oxidation state (positive/negative) actually corresponds to the value of each ion's electronic charge. The attraction of the differently charged sodium and chlorine ions is the reason they then form an ionic bond. The loss of electrons from an atom or molecule is called oxidation, and the gain of electrons is reduction. This can be easily remembered through the use of mnemonic devices. Two of the most popular are "OIL RIG" (Oxidation Is Loss, Reduction Is Gain) and "LEO" the lion says "GER" (Lose Electrons: Oxidation, Gain Electrons: Reduction). Oxidation and reduction always occur in a paired fashion such that one species is oxidized when another is reduced. For cases where electrons are shared (covalent bonds) between atoms with large differences in electronegativity, the electron is assigned to the atom with the largest electronegativity in determining the oxidation state. The atom or molecule which loses electrons is known as the "reducing agent", or "reductant", and the substance which accepts the electrons is called the "oxidizing agent", or "oxidant". Thus, the oxidizing agent is always being reduced in a reaction; the reducing agent is always being oxidized. Oxygen is a common oxidizing agent, but not the only one. Despite the name, an oxidation reaction does not necessarily need to involve oxygen. 
In fact, a fire can be fed by an oxidant other than oxygen; fluorine fires are often unquenchable, as fluorine is an even stronger oxidant (it has a higher electronegativity and thus accepts electrons even better) than oxygen. For reactions involving oxygen, the gain of oxygen implies the oxidation of the atom or molecule to which the oxygen is added (and the oxygen is reduced). In organic compounds, such as butane or ethanol, the loss of hydrogen implies oxidation of the molecule from which it is lost (and the hydrogen is reduced). This follows because the hydrogen donates its electron in covalent bonds with non-metals but it takes the electron along when it is lost. Conversely, loss of oxygen or gain of hydrogen implies reduction. Electrochemical reactions in water are better understood by balancing redox reactions using the ion-electron method, where H+ ions, OH− ions, H2O and electrons (to compensate for the oxidation changes) are added to the cell's half-reactions for oxidation and reduction. In an acid medium, H+ ions and water are added to the half-reactions to balance the overall reaction. Consider, for example, the reaction between manganese and sodium bismuthate. Finally, the reaction is balanced by multiplying each half-reaction by the number of electrons in the other half-reaction (so that the electrons cancel) and adding the two half-reactions, thus solving the equation. Reaction balanced: In a basic medium, OH− ions and water are added to the half-reactions to balance the overall reaction. Consider, for example, the reaction between potassium and sodium sulfite. The same procedure as in acid medium is followed: multiplying the electrons into the opposite half-reactions solves the equation, thus balancing the overall reaction. Equation balanced: The same procedure as used in acid medium can also be applied, for example, to balance the complete combustion of propane by the electron-ion method. As in acid and basic media, the electrons used to compensate for oxidation changes are multiplied into the opposite half-reactions, thus solving the equation. Equation balanced: An electrochemical cell is a device that produces an electric current from the energy released by a spontaneous redox reaction; the reverse, driving a reaction with an applied current, is electrolysis. This kind of cell includes the galvanic cell, or voltaic cell, named after Luigi Galvani and Alessandro Volta, both scientists who conducted several experiments on chemical reactions and electric current during the late 18th century. Electrochemical cells have two conductive electrodes (the anode and the cathode). The anode is defined as the electrode where oxidation occurs and the cathode is the electrode where the reduction takes place. Electrodes can be made from any sufficiently conductive materials, such as metals, semiconductors, graphite, and even conductive polymers. In between these electrodes is the electrolyte, which contains ions that can freely move. The galvanic cell uses two different metal electrodes, each in an electrolyte where the positively charged ions are the oxidized form of the electrode metal. One electrode will undergo oxidation (the anode) and the other will undergo reduction (the cathode). The metal of the anode will oxidize, going from an oxidation state of 0 (in the solid form) to a positive oxidation state, and become an ion. At the cathode, the metal ion in solution will accept one or more electrons from the cathode and the ion's oxidation state is reduced to 0. This forms a solid metal that electrodeposits on the cathode. 
The two electrodes must be electrically connected to each other, allowing for a flow of electrons that leave the metal of the anode and flow through this connection to the ions at the surface of the cathode. This flow of electrons is an electric current that can be used to do work, such as turn a motor or power a light. A galvanic cell whose electrodes are zinc and copper submerged in zinc sulfate and copper sulfate, respectively, is known as a Daniell cell. Half reactions for a Daniell cell are these: In this example, the anode is the zinc metal which is oxidized (loses electrons) to form zinc ions in solution, and copper ions accept electrons from the copper metal electrode and the ions deposit at the copper cathode as an electrodeposit. This cell forms a simple battery as it will spontaneously generate a flow of electric current from the anode to the cathode through the external connection. This reaction can be driven in reverse by applying a voltage, resulting in the deposition of zinc metal at the anode and formation of copper ions at the cathode. To provide a complete electric circuit, there must also be an ionic conduction path between the anode and cathode electrolytes in addition to the electron conduction path. The simplest ionic conduction path is to provide a liquid junction. To avoid mixing between the two electrolytes, the liquid junction can be provided through a porous plug that allows ion flow while reducing electrolyte mixing. To further minimize mixing of the electrolytes, a salt bridge can be used which consists of an electrolyte saturated gel in an inverted U-tube. As the negatively charged electrons flow in one direction around this circuit, the positively charged metal ions flow in the opposite direction in the electrolyte. A voltmeter is capable of measuring the change of electrical potential between the anode and the cathode. Electrochemical cell voltage is also referred to as electromotive force or emf. A cell diagram can be used to trace the path of the electrons in the electrochemical cell. For example, here is a cell diagram of a Daniell cell: First, the reduced form of the metal to be oxidized at the anode (Zn) is written. This is separated from its oxidized form by a vertical line, which represents the limit between the phases (oxidation changes). The double vertical lines represent the saline bridge on the cell. Finally, the oxidised form of the metal to be reduced at the cathode, is written, separated from its reduced form by the vertical line. The electrolyte concentration is given as it is an important variable in determining the cell potential. To allow prediction of the cell potential, tabulations of standard electrode potential are available. Such tabulations are referenced to the standard hydrogen electrode (SHE). The standard hydrogen electrode undergoes the reaction which is shown as reduction but, in fact, the SHE can act as either the anode or the cathode, depending on the relative oxidation/reduction potential of the other electrode/electrolyte combination. The term standard in SHE requires a supply of hydrogen gas bubbled through the electrolyte at a pressure of 1 atm and an acidic electrolyte with H+ activity equal to 1 (usually assumed to be [H+] = 1 mol/liter). The SHE electrode can be connected to any other electrode by a salt bridge to form a cell. If the second electrode is also at standard conditions, then the measured cell potential is called the standard electrode potential for the electrode. 
The standard electrode potential for the SHE is zero, by definition. The polarity of the standard electrode potential provides information about the relative reduction potential of the electrode compared to the SHE. If the electrode has a positive potential with respect to the SHE, its species is more easily reduced than hydrogen ions, so it acts as the cathode and forces the SHE to be the anode (an example is Cu in aqueous CuSO4 with a standard electrode potential of 0.337 V). Conversely, if the measured potential is negative, the electrode's species is more easily oxidized than hydrogen, so it acts as the anode (such as Zn in ZnSO4, where the standard electrode potential is −0.76 V). Standard electrode potentials are usually tabulated as reduction potentials. However, the reactions are reversible and the role of a particular electrode in a cell depends on the relative oxidation/reduction potential of both electrodes. The oxidation potential for a particular electrode is just the negative of the reduction potential. A standard cell potential can be determined by looking up the standard electrode potentials for both electrodes (sometimes called half cell potentials). The one that is smaller will be the anode and will undergo oxidation. The cell potential is then calculated as the sum of the reduction potential for the cathode and the oxidation potential for the anode. For example, the standard electrode potential for a copper electrode is: At standard temperature, pressure and concentration conditions, the cell's emf (measured by a multimeter) is 0.34 V. By definition, the electrode potential for the SHE is zero. Thus, the Cu is the cathode and the SHE is the anode. Changes in the stoichiometric coefficients of a balanced cell equation will not change the E°red value, because the standard electrode potential is an intensive property. During operation of electrochemical cells, chemical energy is transformed into electrical energy and is expressed mathematically as the product of the cell's emf and the electric charge transferred through the external circuit, where Ecell is the cell potential measured in volts (V) and Ctrans is the cell current integrated over time and measured in coulombs (C); Ctrans can also be determined by multiplying the total number of electrons transferred (measured in moles) times Faraday's constant (F). The emf of the cell at zero current is the maximum possible emf. It is used to calculate the maximum possible electrical energy that could be obtained from a chemical reaction. This energy is referred to as electrical work and is expressed by the following equation, where work is defined as positive into the system. Since the free energy is the maximum amount of work that can be extracted from a system, one can write: A positive cell potential gives a negative change in Gibbs free energy. This is consistent with the cell producing an electric current from the cathode to the anode through the external circuit. If the current is driven in the opposite direction by imposing an external potential, then work is done on the cell to drive electrolysis. A spontaneous electrochemical reaction (change in Gibbs free energy less than zero) can be used to generate an electric current in electrochemical cells. This is the basis of all batteries and fuel cells. For example, gaseous oxygen (O2) and hydrogen (H2) can be combined in a fuel cell to form water and energy, typically a combination of heat and electrical energy. 
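The relations just described, a standard cell potential obtained from tabulated half-cell reduction potentials and the associated free energy change, can be illustrated with a short calculation. This is a minimal sketch using the values quoted above (Cu2+/Cu = +0.34 V, Zn2+/Zn = −0.76 V, SHE = 0 V); the function names are ad hoc.

```python
# Hedged sketch of the relations described above: the standard cell potential
# from tabulated half-cell reduction potentials, and the maximum electrical
# work / Gibbs free energy change, Delta G = -n * F * E_cell.
F = 96485.33212  # Faraday constant, C/mol

def standard_cell_potential(e_cathode, e_anode):
    """E_cell = E_red(cathode) - E_red(anode), both given as reduction potentials."""
    return e_cathode - e_anode

def gibbs_free_energy(n_electrons, e_cell):
    """Delta G = -n * F * E_cell, in joules per mole of reaction."""
    return -n_electrons * F * e_cell

# Values quoted in the text: Cu2+/Cu = +0.34 V, Zn2+/Zn = -0.76 V, SHE = 0 V.
e_daniell = standard_cell_potential(0.34, -0.76)    # ~1.10 V for the Daniell cell
print(e_daniell, gibbs_free_energy(2, e_daniell))   # ~-212 kJ/mol with n = 2
```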
Conversely, non-spontaneous electrochemical reactions can be driven forward by the application of a current at sufficient voltage. The electrolysis of water into gaseous oxygen and hydrogen is a typical example. The relation between the equilibrium constant, "K", and the Gibbs free energy for an electrochemical cell is expressed as follows: Rearranging to express the relation between standard potential and equilibrium constant yields an expression that can also be written using the base-10 (Briggsian) logarithm, as shown below: The standard potential of an electrochemical cell requires standard conditions (ΔG°) for all of the reactants. When reactant concentrations differ from standard conditions, the cell potential will deviate from the standard potential. In the late 19th century, the German chemist Walther Nernst proposed a mathematical model to determine the effect of reactant concentration on electrochemical cell potential. In the late 19th century, Josiah Willard Gibbs had formulated a theory to predict whether a chemical reaction is spontaneous based on the free energy. Here "ΔG" is the change in Gibbs free energy, "ΔG°" is the standard free energy change (the value of "ΔG" when "Q" is equal to 1), "T" is absolute temperature (Kelvin), "R" is the gas constant and "Q" is the reaction quotient, which can be found by dividing products by reactants using only those products and reactants that are aqueous or gaseous. Gibbs' key contribution was to formalize the understanding of the effect of reactant concentration on spontaneity. Based on Gibbs' work, Nernst extended the theory to include the contribution from electric potential on charged species. As shown in the previous section, the change in Gibbs free energy for an electrochemical cell can be related to the cell potential. Thus, Gibbs' theory becomes the following, where "n" is the number of electrons per mole of product, "F" is the Faraday constant (coulombs/mole), and "ΔE" is the cell potential. Finally, Nernst divided through by the amount of charge transferred to arrive at a new equation which now bears his name: Assuming standard conditions (T = 25 °C) and R = 8.3145 J/(K·mol), the equation above can be expressed using the base-10 logarithm as shown below: Note that "RT/F" is also known as the thermal voltage "VT" and is found in the study of plasmas and semiconductors as well. The value 0.05916 V in the above equation is just the thermal voltage at standard temperature multiplied by the natural logarithm of 10. A concentration cell is an electrochemical cell where the two electrodes are the same material, the electrolytes on the two half-cells involve the same ions, but the electrolyte concentration differs between the two half-cells. An example is an electrochemical cell where two copper electrodes are submerged in two copper(II) sulfate solutions, whose concentrations are 0.05 M and 2.0 M, connected through a salt bridge. This type of cell will generate a potential that can be predicted by the Nernst equation. Both can undergo the same chemistry (although the reaction proceeds in reverse at the anode). Le Chatelier's principle indicates that the reaction is more favorable to reduction as the concentration of Cu2+ ions increases. Reduction will take place in the cell's compartment where concentration is higher and oxidation will occur on the more dilute side. 
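A short sketch of the Nernst equation as stated above, including the thermal voltage RT/F and the copper concentration cell (0.05 M and 2.0 M) described in the preceding paragraph. Concentrations are treated here as activities, the same approximation noted below, so the numbers are indicative only.

```python
import math

# Hedged sketch of the Nernst equation derived above:
# E = E0 - (R*T / (n*F)) * ln(Q).  At 25 C the prefactor R*T/F (the "thermal
# voltage") is ~0.0257 V, and multiplying by ln(10) gives the 0.05916 V figure.
R = 8.3145       # gas constant, J/(K*mol), the value used in the text
F = 96485.0      # Faraday constant, C/mol
T = 298.15       # 25 C in kelvin

thermal_voltage = R * T / F
print(thermal_voltage, thermal_voltage * math.log(10))   # ~0.0257 V, ~0.0592 V

def nernst(e_standard, n, reaction_quotient, temperature=T):
    """Cell potential corrected for non-standard concentrations (activities)."""
    return e_standard - (R * temperature / (n * F)) * math.log(reaction_quotient)

# Copper concentration cell from the text: E0 = 0, n = 2,
# Q = [Cu2+]_anode / [Cu2+]_cathode = 0.05 / 2.0.
print(round(nernst(0.0, 2, 0.05 / 2.0), 4))   # ~0.047 V
```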
The following cell diagram describes the cell mentioned above: where the half-cell reactions for oxidation and reduction are: The cell's emf is calculated through the Nernst equation as follows: The value of "E"° in this kind of cell is zero, as the electrodes and ions are the same in both half-cells. After replacing values from the case mentioned, it is possible to calculate the cell's potential: or by: However, this value is only approximate, as the reaction quotient is defined in terms of ion activities, which are only approximated by the concentrations used here. The Nernst equation plays an important role in understanding electrical effects in cells and organelles. Such effects include nerve synapses and cardiac beat as well as the resting potential of a somatic cell. Many types of battery have been commercialized and represent an important practical application of electrochemistry. Early wet cells powered the first telegraph and telephone systems, and were the source of current for electroplating. The zinc-manganese dioxide dry cell was the first portable, non-spillable battery type that made flashlights and other portable devices practical. The mercury battery using zinc and mercuric oxide provided higher levels of power and capacity than the original dry cell for early electronic devices, but has been phased out of common use due to the danger of mercury pollution from discarded cells. The lead–acid battery was the first practical secondary (rechargeable) battery that could have its capacity replenished from an external source. The electrochemical reaction that produced current was (to a useful degree) reversible, allowing electrical energy and chemical energy to be interchanged as needed. Common lead acid batteries contain a mixture of sulfuric acid and water, as well as lead plates. The most common mixture used today is 30% acid. One problem, however, is that if the battery is left uncharged, acid will crystallize within its lead plates, rendering it useless. These batteries last an average of 3 years with daily use; however, it is not unheard of for a lead acid battery to still be functional after 7–10 years. Lead-acid cells continue to be widely used in automobiles. All the preceding types have water-based electrolytes, which limits the maximum voltage per cell. The freezing of water limits low temperature performance. The lithium battery, which does not (and cannot) use water in the electrolyte, provides improved performance over other types; a rechargeable lithium-ion battery is an essential part of many mobile devices. The flow battery, an experimental type, offers the option of vastly larger energy capacity because its reactants can be replenished from external reservoirs. The fuel cell can turn the chemical energy bound in hydrocarbon gases or hydrogen directly into electrical energy with much higher efficiency than any combustion process; such devices have powered many spacecraft and are being applied to grid energy storage for the public power system. Corrosion is an electrochemical process, which reveals itself in rust or tarnish on metals like iron or copper and their respective alloys, steel and brass. For iron rust to occur the metal has to be in contact with oxygen and water, although the chemical reactions for this process are relatively complex and not all of them are completely understood. 
It is believed the causes are the following: electron transfer (reduction-oxidation). Iron corrosion takes place in an acid medium; H+ ions come from the reaction between carbon dioxide in the atmosphere and water, forming carbonic acid. Fe2+ ions oxidize, following this equation: Iron(III) oxide hydrate is known as rust. The concentration of water associated with iron oxide varies, thus the chemical formula is represented by Fe2O3·xH2O. An electric circuit is formed as the passage of electrons and ions occurs, thus if an electrolyte is present it will facilitate oxidation, explaining why rusting is quicker in salt water. Coinage metals, such as copper and silver, slowly corrode through use. A patina of green-blue copper carbonate forms on the surface of copper with exposure to the water and carbon dioxide in the air. Silver coins or cutlery that are exposed to high sulfur foods such as eggs, or to the low levels of sulfur species in the air, develop a layer of black silver sulfide. Gold and platinum are extremely difficult to oxidize under normal circumstances, and require exposure to a powerful chemical oxidizing agent such as aqua regia. Some common metals oxidize extremely rapidly in air. Titanium and aluminium oxidize instantaneously in contact with the oxygen in the air. These metals form an extremely thin layer of oxidized metal on the surface which bonds with the underlying metal. This thin layer of oxide protects the underlying layers of the metal from the air, preventing the entire metal from oxidizing. These metals are used in applications where corrosion resistance is important. Iron, in contrast, has an oxide that forms in air and water, called rust, that does not bond with the iron and therefore does not stop the further oxidation of the iron. Thus iron left exposed to air and water will continue to rust until all of the iron is oxidized. Attempts to save a metal from becoming anodic are of two general types. Anodic regions dissolve and destroy the structural integrity of the metal. While it is almost impossible to prevent anode/cathode formation, if a non-conducting material covers the metal, contact with the electrolyte is not possible and corrosion will not occur. Metals can be coated with paint or other less conductive metals ("passivation"). This prevents the metal surface from being exposed to electrolytes. Scratches exposing the metal substrate will result in corrosion. The region under the coating adjacent to the scratch acts as the anode of the reaction (see anodizing). A method commonly used to protect a structural metal is to attach a metal which is more anodic than the metal to be protected. This forces the structural metal to be cathodic, thus sparing it from corrosion. It is called "sacrificial" because the anode dissolves and has to be replaced periodically. Zinc bars are attached to various locations on steel ship hulls to render the ship hull cathodic. The zinc bars are replaced periodically. Other metals, such as magnesium, would work very well, but zinc is the least expensive useful metal. To protect pipelines, an ingot of magnesium (or zinc), buried or exposed, is placed beside the pipeline and connected electrically to the pipe above ground. The pipeline is forced to be a cathode and is protected from being oxidized and rusting. The magnesium anode is sacrificed. At intervals new ingots are buried to replace those lost. 
The spontaneous redox reactions of a conventional battery produce electricity through the different chemical potentials of the cathode and anode in the electrolyte. However, electrolysis requires an external source of electrical energy to induce a chemical reaction, and this process takes place in a compartment called an electrolytic cell. When molten, the salt sodium chloride can be electrolyzed to yield metallic sodium and gaseous chlorine. Industrially this process takes place in a special cell called a Downs cell. The cell is connected to an electrical power supply, allowing electrons to migrate from the power supply to the electrolytic cell. The reactions that take place in a Downs cell are the following: This process can yield large amounts of metallic sodium and gaseous chlorine, and is widely used in the mineral dressing and metallurgical industries. The emf for this process is approximately −4 V, indicating a (very) non-spontaneous process. In order for this reaction to occur, the power supply should provide at least a potential of 4 V. However, larger voltages must be used for this reaction to occur at a high rate. Water can be converted to its component elemental gases, H2 and O2, through the application of an external voltage. Water does not decompose into hydrogen and oxygen spontaneously, as the Gibbs free energy for the process at standard conditions is about 474.4 kJ. The decomposition of water into hydrogen and oxygen can be performed in an electrolytic cell. In it, a pair of inert electrodes, usually made of platinum, immersed in water act as anode and cathode in the electrolytic process. The electrolysis starts with the application of an external voltage between the electrodes. This process will not occur except at extremely high voltages without an electrolyte such as sodium chloride or sulfuric acid (most commonly used at 0.1 M). Bubbles from the gases will be seen near both electrodes. The following half reactions describe the process mentioned above: Although strong acids may be used in the apparatus, the reaction will not consume the acid overall. While this reaction will work at any conductive electrode at a sufficiently large potential, platinum catalyzes both hydrogen and oxygen formation, allowing for relatively mild voltages (~2 V depending on the pH). Electrolysis in an aqueous solution is a process similar to the electrolysis of water described above. However, it is considered to be a complex process because the contents in solution have to be analyzed in half reactions, whether reduced or oxidized. The presence of water in a solution of sodium chloride must be examined in respect to its reduction and oxidation at both electrodes. Usually, water is electrolysed as described above, yielding gaseous oxygen at the anode and gaseous hydrogen at the cathode. On the other hand, sodium chloride in water dissociates into Na+ and Cl− ions; the cation, which is the positive ion, is attracted to the cathode (−), thus reducing the sodium ion, while the anion is attracted to the anode (+), oxidizing the chloride ion. The following half reactions describe the process mentioned: Reaction 1 is discarded, as it has the most negative standard reduction potential, making it less thermodynamically favorable in the process. When comparing the reduction potentials in reactions 2 and 4, the reduction of chloride ion is favored. 
Thus, if the Cl− ion is favored for reduction, then the water reaction is favored for oxidation, producing gaseous oxygen; however, experiments show that gaseous chlorine is produced and not oxygen. Although the initial analysis is correct, there is another effect that can happen, known as the overvoltage effect. Additional voltage is sometimes required, beyond the voltage predicted by the E°cell. This may be due to kinetic rather than thermodynamic considerations. In fact, it has been shown that the activation energy for the chloride ion is very low, hence favorable in kinetic terms. In other words, although the voltage applied is thermodynamically sufficient to drive electrolysis, the rate is so slow that to make the process proceed in a reasonable time frame, the voltage of the external source has to be increased (hence, overvoltage). Finally, reaction 3 is favorable because it describes the proliferation of OH− ions, thus making the possible reduction of H+ ions a less favorable option. The overall reaction for the process according to the analysis would be the following: As the overall reaction indicates, the concentration of chloride ions is reduced in comparison to OH− ions (whose concentration increases). The reaction also shows the production of gaseous hydrogen, chlorine and aqueous sodium hydroxide. Quantitative aspects of electrolysis were originally developed by Michael Faraday in 1834. Faraday is also credited with coining the terms "electrolyte" and "electrolysis", among many others, while he studied the quantitative analysis of electrochemical reactions. He was also an advocate of the law of conservation of energy. After several experiments on electric current in non-spontaneous processes, Faraday concluded that the mass of the products yielded at the electrodes was proportional to the value of the current supplied to the cell, the length of time the current existed, and the molar mass of the substance analyzed. In other words, the amount of a substance deposited on each electrode of an electrolytic cell is directly proportional to the quantity of electricity passed through the cell. Below is a simplified equation of Faraday's first law. Faraday devised the laws of chemical electrodeposition of metals from solutions in 1857. He formulated the second law of electrolysis, stating that "the amounts of bodies which are equivalent to each other in their ordinary chemical action have equal quantities of electricity naturally associated with them." In other words, the quantities of different elements deposited by a given amount of electricity are in the ratio of their chemical equivalent weights. An important aspect of the second law of electrolysis is electroplating, which, together with the first law of electrolysis, has a significant number of applications in industry, for example when used to protect metals against corrosion. There are various extremely important electrochemical processes in both nature and industry, like the coating of objects with metals or metal oxides through electrodeposition, the addition (electroplating) or removal (electropolishing) of thin layers of metal from an object's surface, and the detection of alcohol in drunk drivers through the redox reaction of ethanol. The generation of chemical energy through photosynthesis is inherently an electrochemical process, as is the production of metals like aluminum and titanium from their ores. Certain diabetes blood sugar meters measure the amount of glucose in the blood through its redox potential. 
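Faraday's first law, as summarized above, lends itself to a brief worked sketch. The copper values (molar mass 63.55 g/mol, two electrons per ion) and the one-ampere, one-hour example are illustrative assumptions, not figures from the text.

```python
# Hedged sketch of Faraday's first law of electrolysis: the mass deposited is
# m = (I * t / F) * (M / z).  The copper values (M = 63.55 g/mol, z = 2) and
# the 1 A for 1 hour example are illustrative assumptions.
F = 96485.0  # Faraday constant, C/mol

def mass_deposited(current_a, time_s, molar_mass_g, electrons_per_ion):
    """Mass (g) deposited at an electrode for a given current and time."""
    moles_of_electrons = current_a * time_s / F
    return moles_of_electrons / electrons_per_ion * molar_mass_g

print(round(mass_deposited(1.0, 3600, 63.55, 2), 3))   # ~1.186 g of copper
```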
As well as the established electrochemical technologies (like deep-cycle lead-acid batteries), there is also a wide range of new, emerging technologies, such as fuel cells, large-format lithium-ion batteries, electrochemical reactors and supercapacitors, that are becoming increasingly commercial. Electrochemistry also has important applications in the food industry, such as the assessment of food/package interactions, the analysis of milk composition, the characterization and determination of the freezing end-point of ice-cream mixes, and the determination of free acidity in olive oil. The action potentials that travel down connected neurons are based on electric current generated by the movement of sodium and potassium ions into and out of cells. Specialized cells in certain animals like the electric eel can generate electric currents powerful enough to disable much larger animals.
https://en.wikipedia.org/wiki?curid=9601
Alexander Graham Bell Alexander Graham Bell (; March 3, 1847 – August 2, 1922) was a Scottish-born inventor, scientist, and engineer who is credited with inventing and patenting the first practical telephone. He also co-founded the American Telephone and Telegraph Company (AT&T) in 1885. Bell's father, grandfather, and brother had all been associated with work on elocution and speech and both his mother and wife were deaf, profoundly influencing Bell's life's work. His research on hearing and speech further led him to experiment with hearing devices which eventually culminated in Bell being awarded the first U.S. patent for the telephone, on March 7, 1876. Bell considered his invention an intrusion on his real work as a scientist and refused to have a telephone in his study. Many other inventions marked Bell's later life, including groundbreaking work in optical telecommunications, hydrofoils, and aeronautics. Although Bell was not one of the 33 founders of the National Geographic Society, he had a strong influence on the magazine while serving as the second president from January 7, 1898, until 1903. Beyond his scientific work, Bell was an advocate of compulsory sterilization, and served as chairman or president of several eugenics organizations. Alexander Bell was born in Edinburgh, Scotland, on March 3, 1847. The family home was at South Charlotte Street, and has a stone inscription marking it as Alexander Graham Bell's birthplace. He had two brothers: Melville James Bell (1845–1870) and Edward Charles Bell (1848–1867), both of whom would die of tuberculosis. His father was Professor Alexander Melville Bell, a phonetician, and his mother was Eliza Grace (née Symonds). Born as just "Alexander Bell", at age 10, he made a plea to his father to have a middle name like his two brothers. For his 11th birthday, his father acquiesced and allowed him to adopt the name "Graham", chosen out of respect for Alexander Graham, a Canadian being treated by his father who had become a family friend. To close relatives and friends he remained "Aleck". As a child, young Bell displayed a curiosity about his world; he gathered botanical specimens and ran experiments at an early age. His best friend was Ben Herdman, a neighbour whose family operated a flour mill. At the age of 12, Bell built a homemade device that combined rotating paddles with sets of nail brushes, creating a simple dehusking machine that was put into operation at the mill and used steadily for a number of years. In return, Ben's father John Herdman gave both boys the run of a small workshop in which to "invent". From his early years, Bell showed a sensitive nature and a talent for art, poetry, and music that was encouraged by his mother. With no formal training, he mastered the piano and became the family's pianist. Despite being normally quiet and introspective, he revelled in mimicry and "voice tricks" akin to ventriloquism that continually entertained family guests during their occasional visits. Bell was also deeply affected by his mother's gradual deafness (she began to lose her hearing when he was 12), and learned a manual finger language so he could sit at her side and tap out silently the conversations swirling around the family parlour. He also developed a technique of speaking in clear, modulated tones directly into his mother's forehead wherein she would hear him with reasonable clarity. Bell's preoccupation with his mother's deafness led him to study acoustics. 
His family was long associated with the teaching of elocution: his grandfather, Alexander Bell, in London, his uncle in Dublin, and his father, in Edinburgh, were all elocutionists. His father published a variety of works on the subject, several of which are still well known, especially his "The Standard Elocutionist" (1860), which appeared in Edinburgh in 1868. "The Standard Elocutionist" appeared in 168 British editions and sold over a quarter of a million copies in the United States alone. In this treatise, his father explains his methods of how to instruct deaf-mutes (as they were then known) to articulate words and read other people's lip movements to decipher meaning. Bell's father taught him and his brothers not only to write Visible Speech but to identify any symbol and its accompanying sound. Bell became so proficient that he became a part of his father's public demonstrations and astounded audiences with his abilities. He could decipher Visible Speech representing virtually every language, including Latin, Scottish Gaelic, and even Sanskrit, accurately reciting written tracts without any prior knowledge of their pronunciation. As a young child, Bell, like his brothers, received his early schooling at home from his father. At an early age, he was enrolled at the Royal High School, Edinburgh, Scotland, which he left at the age of 15, having completed only the first four forms. His school record was undistinguished, marked by absenteeism and lacklustre grades. His main interest remained in the sciences, especially biology, while he treated other school subjects with indifference, to the dismay of his father. Upon leaving school, Bell travelled to London to live with his grandfather, Alexander Bell. During the year he spent with his grandfather, a love of learning was born, with long hours spent in serious discussion and study. The elder Bell took great efforts to have his young pupil learn to speak clearly and with conviction, the attributes that his pupil would need to become a teacher himself. At the age of 16, Bell secured a position as a "pupil-teacher" of elocution and music, in Weston House Academy at Elgin, Moray, Scotland. Although he was enrolled as a student in Latin and Greek, he instructed classes himself in return for board and £10 per session. The following year, he attended the University of Edinburgh; joining his older brother Melville who had enrolled there the previous year. In 1868, not long before he departed for Canada with his family, Bell completed his matriculation exams and was accepted for admission to University College London. His father encouraged Bell's interest in speech and, in 1863, took his sons to see a unique automaton developed by Sir Charles Wheatstone based on the earlier work of Baron Wolfgang von Kempelen. The rudimentary "mechanical man" simulated a human voice. Bell was fascinated by the machine and after he obtained a copy of von Kempelen's book, published in German, and had laboriously translated it, he and his older brother Melville built their own automaton head. Their father, highly interested in their project, offered to pay for any supplies and spurred the boys on with the enticement of a "big prize" if they were successful. While his brother constructed the throat and larynx, Bell tackled the more difficult task of recreating a realistic skull. His efforts resulted in a remarkably lifelike head that could "speak", albeit only a few words. 
The boys would carefully adjust the "lips" and when a bellows forced air through the windpipe, a very recognizable "Mama" ensued, to the delight of neighbours who came to see the Bell invention. Intrigued by the results of the automaton, Bell continued to experiment with a live subject, the family's Skye Terrier, "Trouve". After he taught it to growl continuously, Bell would reach into its mouth and manipulate the dog's lips and vocal cords to produce a crude-sounding "Ow ah oo ga ma ma". With little convincing, visitors believed his dog could articulate "How are you, grandma?" Indicative of his playful nature, his experiments convinced onlookers that they saw a "talking dog". These initial forays into experimentation with sound led Bell to undertake his first serious work on the transmission of sound, using tuning forks to explore resonance. At age 19, Bell wrote a report on his work and sent it to philologist Alexander Ellis, a colleague of his father (who would later be portrayed as Professor Henry Higgins in "Pygmalion"). Ellis immediately wrote back indicating that the experiments were similar to existing work in Germany, and also lent Bell a copy of Hermann von Helmholtz's work, "The Sensations of Tone as a Physiological Basis for the Theory of Music". Dismayed to find that groundbreaking work had already been undertaken by Helmholtz who had conveyed vowel sounds by means of a similar tuning fork "contraption", Bell pored over the German scientist's book. Working from his own erroneous mistranslation of a French edition, Bell fortuitously then made a deduction that would be the underpinning of all his future work on transmitting sound, reporting: "Without knowing much about the subject, it seemed to me that if vowel sounds could be produced by electrical means, so could consonants, so could articulate speech." He also later remarked: "I thought that Helmholtz had done it ... and that my failure was due only to my ignorance of electricity. It was a valuable blunder ... If I had been able to read German in those days, I might never have commenced my experiments!" In 1865, when the Bell family moved to London, Bell returned to Weston House as an assistant master and, in his spare hours, continued experiments on sound using a minimum of laboratory equipment. Bell concentrated on experimenting with electricity to convey sound and later installed a telegraph wire from his room in Somerset College to that of a friend. Throughout late 1867, his health faltered mainly through exhaustion. His younger brother, Edward "Ted," was similarly bed-ridden, suffering from tuberculosis. While Bell recovered (by then referring to himself in correspondence as "A. G. Bell") and served the next year as an instructor at Somerset College, Bath, England, his brother's condition deteriorated. Edward would never recover. Upon his brother's death, Bell returned home in 1867. His older brother Melville had married and moved out. With aspirations to obtain a degree at University College London, Bell considered his next years as preparation for the degree examinations, devoting his spare time at his family's residence to studying. Helping his father in Visible Speech demonstrations and lectures brought Bell to Susanna E. Hull's private school for the deaf in South Kensington, London. His first two pupils were deaf-mute girls who made remarkable progress under his tutelage. 
While his older brother seemed to achieve success on many fronts including opening his own elocution school, applying for a patent on an invention, and starting a family, Bell continued as a teacher. However, in May 1870, Melville died from complications due to tuberculosis, causing a family crisis. His father had also suffered a debilitating illness earlier in life and had been restored to health by a convalescence in Newfoundland. Bell's parents embarked upon a long-planned move when they realized that their remaining son was also sickly. Acting decisively, Alexander Melville Bell asked Bell to arrange for the sale of all the family property, conclude all of his brother's affairs (Bell took over his last student, curing a pronounced lisp), and join his father and mother in setting out for the "New World". Reluctantly, Bell also had to conclude a relationship with Marie Eccleston, who, as he had surmised, was not prepared to leave England with him. In 1870, 23-year-old Bell travelled with his parents and his brother's widow, Caroline Margaret Ottaway, to Paris, Ontario, to stay with the Reverend Thomas Henderson, a family friend. The Bell family soon purchased a farm at Tutelo Heights (now called Tutela Heights), near Brantford, Ontario. The property consisted of an orchard, large farmhouse, stable, pigsty, hen-house, and a carriage house, which bordered the Grand River. At the homestead, Bell set up his own workshop in the converted carriage house near what he called his "dreaming place", a large hollow nestled in trees at the back of the property above the river. Despite his frail condition upon arriving in Canada, Bell found the climate and environs to his liking, and rapidly improved. He continued his interest in the study of the human voice and when he discovered the Six Nations Reserve across the river at Onondaga, he learned the Mohawk language and translated its unwritten vocabulary into Visible Speech symbols. For his work, Bell was awarded the title of Honorary Chief and participated in a ceremony where he donned a Mohawk headdress and danced traditional dances. After setting up his workshop, Bell continued experiments based on Helmholtz's work with electricity and sound. He also modified a melodeon (a type of pump organ) so that it could transmit its music electrically over a distance. Once the family was settled in, both Bell and his father made plans to establish a teaching practice and in 1871, he accompanied his father to Montreal, where Melville was offered a position to teach his System of Visible Speech. Bell's father was invited by Sarah Fuller, principal of the Boston School for Deaf Mutes (which continues today as the public Horace Mann School for the Deaf), in Boston, Massachusetts, United States, to introduce the Visible Speech System by providing training for Fuller's instructors, but he declined the post in favour of his son. Travelling to Boston in April 1871, Bell proved successful in training the school's instructors. He was subsequently asked to repeat the programme at the American Asylum for Deaf-mutes in Hartford, Connecticut, and the Clarke School for the Deaf in Northampton, Massachusetts. Returning home to Brantford after six months abroad, Bell continued his experiments with his "harmonic telegraph". The basic concept behind his device was that messages could be sent through a single wire if each message was transmitted at a different pitch, but work on both the transmitter and receiver was needed. 
Unsure of his future, he first contemplated returning to London to complete his studies, but decided to return to Boston as a teacher. His father helped him set up his private practice by contacting Gardiner Greene Hubbard, the president of the Clarke School for the Deaf for a recommendation. Teaching his father's system, in October 1872, Alexander Bell opened his "School of Vocal Physiology and Mechanics of Speech" in Boston, which attracted a large number of deaf pupils, with his first class numbering 30 students. While he was working as a private tutor, one of his pupils was Helen Keller, who came to him as a young child unable to see, hear, or speak. She was later to say that Bell dedicated his life to the penetration of that "inhuman silence which separates and estranges". In 1893, Keller performed the sod-breaking ceremony for the construction of Bell's new Volta Bureau, dedicated to "the increase and diffusion of knowledge relating to the deaf". Several influential people of the time, including Bell, viewed deafness as something that should be eradicated, and also believed that with resources and effort, they could teach the deaf to speak and avoid the use of sign language, thus enabling their integration within the wider society from which many were often being excluded. Owing to his efforts to suppress the teaching of sign language, Bell is often viewed negatively by those embracing Deaf culture. In 1872, Bell became professor of Vocal Physiology and Elocution at the Boston University School of Oratory. During this period, he alternated between Boston and Brantford, spending summers in his Canadian home. At Boston University, Bell was "swept up" by the excitement engendered by the many scientists and inventors residing in the city. He continued his research in sound and endeavored to find a way to transmit musical notes and articulate speech, but although absorbed by his experiments, he found it difficult to devote enough time to experimentation. While days and evenings were occupied by his teaching and private classes, Bell began to stay awake late into the night, running experiment after experiment in rented facilities at his boarding house. Keeping "night owl" hours, he worried that his work would be discovered and took great pains to lock up his notebooks and laboratory equipment. Bell had a specially made table where he could place his notes and equipment inside a locking cover. Worse still, his health deteriorated as he suffered severe headaches. Returning to Boston in fall 1873, Bell made a fateful decision to concentrate on his experiments in sound. Deciding to give up his lucrative private Boston practice, Bell retained only two students, six-year-old "Georgie" Sanders, deaf from birth, and 15-year-old Mabel Hubbard. Each pupil would play an important role in the next developments. George's father, Thomas Sanders, a wealthy businessman, offered Bell a place to stay in nearby Salem with Georgie's grandmother, complete with a room to "experiment". Although the offer was made by George's mother and followed the year-long arrangement in 1872 where her son and his nurse had moved to quarters next to Bell's boarding house, it was clear that Mr. Sanders was backing the proposal. The arrangement was for teacher and student to continue their work together, with free room and board thrown in. Mabel was a bright, attractive girl who was ten years Bell's junior but became the object of his affection. 
Having lost her hearing after a near-fatal bout of scarlet fever close to her fifth birthday, she had learned to read lips, but her father, Gardiner Greene Hubbard, Bell's benefactor and personal friend, wanted her to work directly with her teacher. By 1874, Bell's initial work on the harmonic telegraph had entered a formative stage, with progress made both at his new Boston "laboratory" (a rented facility) and at his family home in Canada. While working that summer in Brantford, Bell experimented with a "phonautograph", a pen-like machine that could draw shapes of sound waves on smoked glass by tracing their vibrations. Bell thought it might be possible to generate undulating electrical currents that corresponded to sound waves. Bell also thought that multiple metal reeds tuned to different frequencies like a harp would be able to convert the undulating currents back into sound. But he had no working model to demonstrate the feasibility of these ideas. In 1874, telegraph message traffic was rapidly expanding and, in the words of Western Union President William Orton, had become "the nervous system of commerce". Antonio Meucci sent a telephone model and technical details to the Western Union telegraph company but failed to win a meeting with executives. When he asked for his materials to be returned, in 1874, he was told they had been lost. Two years later Bell, who shared a laboratory with Meucci, filed a patent for a telephone, became a celebrity and made a lucrative deal with Western Union. Meucci sued and was nearing victory (the Supreme Court agreed to hear the case and fraud charges were initiated against Bell) when the Florentine died in 1889. The legal action died with him. Orton had contracted with inventors Thomas Edison and Elisha Gray to find a way to send multiple telegraph messages on each telegraph line to avoid the great cost of constructing new lines. When Bell mentioned to Gardiner Hubbard and Thomas Sanders that he was working on a method of sending multiple tones on a telegraph wire using a multi-reed device, the two wealthy patrons began to financially support Bell's experiments. Patent matters would be handled by Hubbard's patent attorney, Anthony Pollok. In March 1875, Bell and Pollok visited the scientist Joseph Henry, who was then director of the Smithsonian Institution, and asked Henry's advice on the electrical multi-reed apparatus that Bell hoped would transmit the human voice by telegraph. Henry replied that Bell had "the germ of a great invention". When Bell said that he did not have the necessary knowledge, Henry replied, "Get it!" That declaration greatly encouraged Bell to keep trying, even though he did not have the equipment needed to continue his experiments, nor the ability to create a working model of his ideas. However, a chance meeting in 1874 between Bell and Thomas A. Watson, an experienced electrical designer and mechanic at the electrical machine shop of Charles Williams, changed all that. With financial support from Sanders and Hubbard, Bell hired Thomas Watson as his assistant, and the two of them experimented with acoustic telegraphy. On June 2, 1875, Watson accidentally plucked one of the reeds and Bell, at the receiving end of the wire, heard the overtones of the reed; overtones that would be necessary for transmitting speech. That demonstrated to Bell that only one reed or armature was necessary, not multiple reeds. 
This led to the "gallows" sound-powered telephone, which could transmit indistinct, voice-like sounds, but not clear speech. In 1875, Bell developed an acoustic telegraph and drew up a patent application for it. Since he had agreed to share U.S. profits with his investors Gardiner Hubbard and Thomas Sanders, Bell requested that an associate in Ontario, George Brown, attempt to patent it in Britain, instructing his lawyers to apply for a patent in the U.S. only after they received word from Britain (Britain would issue patents only for discoveries not previously patented elsewhere). Meanwhile, Elisha Gray was also experimenting with acoustic telegraphy and thought of a way to transmit speech using a water transmitter. On February 14, 1876, Gray filed a caveat with the U.S. Patent Office for a telephone design that used a water transmitter. That same morning, Bell's lawyer filed Bell's application with the patent office. There is considerable debate about who arrived first and Gray later challenged the primacy of Bell's patent. Bell was in Boston on February 14 and did not arrive in Washington until February 26. Bell's patent 174,465, was issued to Bell on March 7, 1876, by the U.S. Patent Office. Bell's patent covered "the method of, and apparatus for, transmitting vocal or other sounds telegraphically ... by causing electrical undulations, similar in form to the vibrations of the air accompanying the said vocal or other sound" Bell returned to Boston the same day and the next day resumed work, drawing in his notebook a diagram similar to that in Gray's patent caveat. On March 10, 1876, three days after his patent was issued, Bell succeeded in getting his telephone to work, using a liquid transmitter similar to Gray's design. Vibration of the diaphragm caused a needle to vibrate in the water, varying the electrical resistance in the circuit. When Bell spoke the sentence "Mr. Watson—Come here—I want to see you" into the liquid transmitter, Watson, listening at the receiving end in an adjoining room, heard the words clearly. Although Bell was, and still is, accused of stealing the telephone from Gray, Bell used Gray's water transmitter design only after Bell's patent had been granted, and only as a proof of concept scientific experiment, to prove to his own satisfaction that intelligible "articulate speech" (Bell's words) could be electrically transmitted. After March 1876, Bell focused on improving the electromagnetic telephone and never used Gray's liquid transmitter in public demonstrations or commercial use. The question of priority for the variable resistance feature of the telephone was raised by the examiner before he approved Bell's patent application. He told Bell that his claim for the variable resistance feature was also described in Gray's caveat. Bell pointed to a variable resistance device in his previous application in which he described a cup of mercury, not water. He had filed the mercury application at the patent office a year earlier on February 25, 1875, long before Elisha Gray described the water device. In addition, Gray abandoned his caveat, and because he did not contest Bell's priority, the examiner approved Bell's patent on March 3, 1876. Gray had reinvented the variable resistance telephone, but Bell was the first to write down the idea and the first to test it in a telephone. The patent examiner, Zenas Fisk Wilber, later stated in an affidavit that he was an alcoholic who was much in debt to Bell's lawyer, Marcellus Bailey, with whom he had served in the Civil War. 
Wilber claimed he showed Gray's patent caveat to Bailey. Wilber also claimed (after Bell arrived in Washington, D.C., from Boston) that he showed Gray's caveat to Bell and that Bell paid him $100. Bell claimed they discussed the patent only in general terms, although in a letter to Gray, Bell admitted that he learned some of the technical details. Bell denied in an affidavit that he ever gave Wilber any money. On March 10, 1876, Bell used "the instrument" in Boston to call Thomas Watson, who was in another room but out of earshot. He said, "Mr. Watson, come here – I want to see you," and Watson soon appeared at his side. Continuing his experiments in Brantford, Bell brought home a working model of his telephone. On August 3, 1876, from the telegraph office in Brantford, Ontario, Bell sent a tentative telegram to the nearby village of Mount Pleasant, indicating that he was ready. He made a telephone call via telegraph wires and faint voices were heard replying. The following night, he amazed guests as well as his family with a call between the Bell Homestead and the office of the Dominion Telegraph Company in Brantford along an improvised wire strung up along telegraph lines and fences, and laid through a tunnel. This time, guests at the household distinctly heard people in Brantford reading and singing. The third test, on August 10, 1876, was made via the telegraph line between Brantford and Paris, Ontario. This test was said by many sources to be the "world's first long-distance call". The final test certainly proved that the telephone could work over long distances, at least as a one-way call. The first two-way (reciprocal) conversation over a line occurred between Cambridge and Boston (roughly 2.5 miles) on October 9, 1876. During that conversation, Bell was on Kilby Street in Boston and Watson was at the offices of the Walworth Manufacturing Company. Bell and his partners, Hubbard and Sanders, offered to sell the patent outright to Western Union for $100,000. The president of Western Union balked, countering that the telephone was nothing but a toy. Two years later, he told colleagues that if he could get the patent for $25 million he would consider it a bargain. By then, the Bell company no longer wanted to sell the patent. Bell's investors would become millionaires, while he fared well from residuals and at one point had assets of nearly one million dollars. Bell began a series of public demonstrations and lectures to introduce the new invention to the scientific community as well as the general public. A short time later, his demonstration of an early telephone prototype at the 1876 Centennial Exposition in Philadelphia brought the telephone to international attention. Influential visitors to the exhibition included Emperor Pedro II of Brazil. One of the judges at the Exhibition, Sir William Thomson (later, Lord Kelvin), a renowned Scottish scientist, described the telephone as "the greatest by far of all the marvels of the electric telegraph". On January 14, 1878, at Osborne House, on the Isle of Wight, Bell demonstrated the device to Queen Victoria, placing calls to Cowes, Southampton and London. These were the first publicly witnessed long-distance telephone calls in the UK. The queen considered the process to be "quite extraordinary", although the sound was "rather faint". She later asked to buy the equipment that was used, but Bell offered to make "a set of telephones" specifically for her. 
The Bell Telephone Company was created in 1877, and by 1886, more than 150,000 people in the U.S. owned telephones. Bell Company engineers made numerous other improvements to the telephone, which emerged as one of the most successful products ever. In 1879, the Bell company acquired Edison's patents for the carbon microphone from Western Union. This made the telephone practical for longer distances, and it was no longer necessary to shout to be heard at the receiving telephone. Emperor Pedro II of Brazil was the first person to buy stock in Bell's company, the Bell Telephone Company. One of the first telephones in a private residence was installed in his palace in Petrópolis, his summer retreat from Rio de Janeiro. In January 1915, Bell made the first ceremonial transcontinental telephone call. Calling from the AT&T head office at 15 Dey Street in New York City, Bell was heard by Thomas Watson at 333 Grant Avenue in San Francisco; "The New York Times" reported on the event. As sometimes happens with scientific discoveries, simultaneous developments can occur, as evidenced by a number of inventors who were at work on the telephone. Over a period of 18 years, the Bell Telephone Company faced 587 court challenges to its patents, including five that went to the U.S. Supreme Court, but none was successful in establishing priority over the original Bell patent, and the Bell Telephone Company never lost a case that had proceeded to a final trial stage. Bell's laboratory notes and family letters were key to establishing the long lineage of his experiments. The Bell company lawyers successfully fought off myriad lawsuits generated initially around the challenges by Elisha Gray and Amos Dolbear. In personal correspondence to Bell, both Gray and Dolbear had acknowledged his prior work, which considerably weakened their later claims. On January 13, 1887, the U.S. Government moved to annul the patent issued to Bell on the grounds of fraud and misrepresentation. After a series of decisions and reversals, the Bell company won a decision in the Supreme Court, though a couple of the original claims from the lower court cases were left undecided. By the time that the trial wound its way through nine years of legal battles, the U.S. prosecuting attorney had died and the two Bell patents (No. 174,465 dated March 7, 1876, and No. 186,787 dated January 30, 1877) were no longer in effect, although the presiding judges agreed to continue the proceedings due to the case's importance as a precedent. With a change in administration and charges of conflict of interest (on both sides) arising from the original trial, the US Attorney General dropped the lawsuit on November 30, 1897, leaving several issues undecided on the merits. During a deposition filed for the 1887 trial, Italian inventor Antonio Meucci also claimed to have created the first working model of a telephone in Italy in 1834. In 1886, in the first of three cases in which he was involved, Meucci took the stand as a witness in the hope of establishing his invention's priority. Meucci's testimony in this case was disputed due to a lack of material evidence for his inventions, as his working models were purportedly lost at the laboratory of American District Telegraph (ADT) of New York, which was later incorporated as a subsidiary of Western Union in 1901. Meucci's work, like that of many other inventors of the period, was based on earlier acoustic principles, and despite evidence of earlier experiments, the final case involving Meucci was eventually dropped upon Meucci's death. 
However, due to the efforts of Congressman Vito Fossella, the U.S. House of Representatives on June 11, 2002, stated that Meucci's "work in the invention of the telephone should be acknowledged". This did not put an end to the still-contentious issue. Some modern scholars do not agree with the claims that Bell's work on the telephone was influenced by Meucci's inventions. The value of the Bell patent was acknowledged throughout the world, and patent applications were made in most major countries, but when Bell delayed the German patent application, the electrical firm of Siemens & Halske (S&H) set up a rival manufacturer of Bell telephones under its own patent. The Siemens company produced near-identical copies of the Bell telephone without having to pay royalties. The establishment of the International Bell Telephone Company in Brussels, Belgium in 1880, as well as a series of agreements in other countries, eventually consolidated a global telephone operation. The strain put on Bell by his constant appearances in court, necessitated by the legal battles, eventually resulted in his resignation from the company. On July 11, 1877, a few days after the Bell Telephone Company was established, Bell married Mabel Hubbard (1857–1923) at the Hubbard estate in Cambridge, Massachusetts. His wedding present to his bride was to turn over 1,487 of his 1,497 shares in the newly formed Bell Telephone Company. Shortly thereafter, the newlyweds embarked on a year-long honeymoon in Europe. During that excursion, Bell took a handmade model of his telephone with him, making it a "working holiday". The courtship had begun years earlier; however, Bell waited until he was more financially secure before marrying. Although the telephone appeared to be an "instant" success, it was not initially a profitable venture, and Bell's main sources of income were from lectures until after 1897. One unusual request exacted by his fiancée was that he use "Alec" rather than the family's earlier familiar name of "Aleck". From 1876, he would sign his name "Alec Bell". They had four children: two daughters, Elsie May and Marian, and two sons who died in infancy. The Bell family home was in Cambridge, Massachusetts, until 1880, when Bell's father-in-law bought a house in Washington, D.C.; in 1882 he bought a home in the same city for Bell's family, so they could be with him while he attended to the numerous court cases involving patent disputes. Bell was a British subject throughout his early life in Scotland and later in Canada until 1882, when he became a naturalized citizen of the United States. In 1915, he characterized his status as: "I am not one of those hyphenated Americans who claim allegiance to two countries." Despite this declaration, Bell has been proudly claimed as a "native son" by all three countries he resided in: the United States, Canada, and the United Kingdom. By 1885, a new summer retreat was contemplated. That summer, the Bells had a vacation on Cape Breton Island in Nova Scotia, spending time at the small village of Baddeck. Returning in 1886, Bell started building an estate on a point across from Baddeck, overlooking Bras d'Or Lake. By 1889, a large house, christened "The Lodge", was completed, and two years later a larger complex of buildings, including a new laboratory, was begun; the Bells would name the estate Beinn Bhreagh (Gaelic: "beautiful mountain") after Bell's ancestral Scottish Highlands. 
Bell also built the Bell Boatyard on the estate, employing up to 40 people building experimental craft as well as wartime lifeboats and workboats for the Royal Canadian Navy and pleasure craft for the Bell family. He was an enthusiastic boater, and Bell and his family sailed or rowed a long series of vessels on Bras d'Or Lake, ordering additional vessels from the H.W. Embree and Sons boatyard in Port Hawkesbury, Nova Scotia. In his final, and some of his most productive years, Bell split his residency between Washington, D.C., where he and his family initially resided for most of the year, and Beinn Bhreagh, where they spent increasing amounts of time. Until the end of his life, Bell and his family would alternate between the two homes, but "Beinn Bhreagh" would, over the next 30 years, become more than a summer home as Bell became so absorbed in his experiments that his annual stays lengthened. Both Mabel and Bell became immersed in the Baddeck community and were accepted by the villagers as "their own". The Bells were still in residence at "Beinn Bhreagh" when the Halifax Explosion occurred on December 6, 1917. Mabel and Bell mobilized the community to help victims in Halifax. Although Alexander Graham Bell is most often associated with the invention of the telephone, his interests were extremely varied. According to one of his biographers, Charlotte Gray, Bell's work ranged "unfettered across the scientific landscape" and he often went to bed voraciously reading the "Encyclopædia Britannica", scouring it for new areas of interest. The range of Bell's inventive genius is represented only in part by the 18 patents granted in his name alone and the 12 he shared with his collaborators. These included 14 for the telephone and telegraph, four for the photophone, one for the phonograph, five for aerial vehicles, four for "hydroairplanes", and two for selenium cells. Bell's inventions spanned a wide range of interests and included a metal jacket to assist in breathing, the audiometer to detect minor hearing problems, a device to locate icebergs, investigations on how to separate salt from seawater, and work on finding alternative fuels. Bell worked extensively in medical research and invented techniques for teaching speech to the deaf. During his Volta Laboratory period, Bell and his associates considered impressing a magnetic field on a record as a means of reproducing sound. Although the trio briefly experimented with the concept, they could not develop a workable prototype. They abandoned the idea, never realizing they had glimpsed a basic principle which would one day find its application in the tape recorder, the hard disc and floppy disc drive, and other magnetic media. Bell's own home used a primitive form of air conditioning, in which fans blew currents of air across great blocks of ice. He also anticipated modern concerns with fuel shortages and industrial pollution. Methane gas, he reasoned, could be produced from the waste of farms and factories. At his Canadian estate in Nova Scotia, he experimented with composting toilets and devices to capture water from the atmosphere. In a magazine interview published shortly before his death, he reflected on the possibility of using solar panels to heat houses. Bell and his assistant Charles Sumner Tainter jointly invented a wireless telephone, named a photophone, which allowed for the transmission of both sounds and normal human conversations on a beam of light. Both men later became full associates in the Volta Laboratory Association. 
On June 21, 1880, Bell's assistant transmitted a wireless voice telephone message a considerable distance, from the roof of the Franklin School in Washington, D.C., to Bell at the window of his laboratory some distance away, 19 years before the first voice radio transmissions. Bell believed the photophone's principles were his life's "greatest achievement", telling a reporter shortly before his death that the photophone was "the greatest invention [I have] ever made, greater than the telephone". The photophone was a precursor to the fiber-optic communication systems which achieved popular worldwide usage in the 1980s. Its master patent was issued in December 1880, many decades before the photophone's principles came into popular use. Bell is also credited with developing one of the early versions of a metal detector through the use of an induction balance, after the shooting of U.S. President James A. Garfield in 1881. According to some accounts, the metal detector worked flawlessly in tests but did not find Guiteau's bullet, partly because the metal bed frame on which the President was lying disturbed the instrument, resulting in static. Garfield's surgeons, led by self-appointed chief physician Doctor Willard Bliss, were skeptical of the device, and ignored Bell's requests to move the President to a bed not fitted with metal springs. Alternatively, although Bell had detected a slight sound on his first test, the bullet may have been lodged too deeply to be detected by the crude apparatus. Bell's own detailed account, presented to the American Association for the Advancement of Science in 1882, differs in several particulars from most of the many and varied versions now in circulation by concluding that extraneous metal was not to blame for failure to locate the bullet. Perplexed by the peculiar results he had obtained during an examination of Garfield, Bell "proceeded to the Executive Mansion the next morning ... to ascertain from the surgeons whether they were perfectly sure that all metal had been removed from the neighborhood of the bed. It was then recollected that underneath the horse-hair mattress on which the President lay was another mattress composed of steel wires. Upon obtaining a duplicate, the mattress was found to consist of a sort of net of woven steel wires, with large meshes. The extent of the [area that produced a response from the detector] having been so small, as compared with the area of the bed, it seemed reasonable to conclude that the steel mattress had produced no detrimental effect." In a footnote, Bell adds, "The death of President Garfield and the subsequent "post-mortem" examination, however, proved that the bullet was at too great a distance from the surface to have affected our apparatus." The March 1906 "Scientific American" article by American pioneer William E. Meacham explained the basic principle of hydrofoils and hydroplanes. Bell considered the invention of the hydroplane a very significant achievement. Based on information gained from that article, he began to sketch concepts of what is now called a hydrofoil boat. Bell and assistant Frederick W. "Casey" Baldwin began hydrofoil experimentation in the summer of 1908 as a possible aid to airplane takeoff from water. Baldwin studied the work of the Italian inventor Enrico Forlanini and began testing models. This led him and Bell to the development of practical hydrofoil watercraft. During Bell's world tour of 1910–11, he and Baldwin met with Forlanini in France. 
They had rides in the Forlanini hydrofoil boat over Lake Maggiore. Baldwin described it as being as smooth as flying. On returning to Baddeck, Bell and Baldwin built a number of initial concepts as experimental models, including the "Dhonnas Beag" (Scottish Gaelic for "little devil"), the first self-propelled Bell-Baldwin hydrofoil. The experimental boats were essentially proof-of-concept prototypes that culminated in the more substantial HD-4, powered by Renault engines. The HD-4 achieved a high top speed, exhibiting rapid acceleration, good stability, and steering, along with the ability to take waves without difficulty. In 1913, Dr. Bell hired Walter Pinaud, a Sydney yacht designer and builder as well as the proprietor of Pinaud's Yacht Yard in Westmount, Nova Scotia, to work on the pontoons of the HD-4. Pinaud soon took over the boatyard at Bell Laboratories on Beinn Bhreagh, Bell's estate near Baddeck, Nova Scotia. Pinaud's experience in boat-building enabled him to make useful design changes to the HD-4. After the First World War, work began again on the HD-4. Bell's report to the U.S. Navy permitted him to obtain two engines in July 1919. On September 9, 1919, the HD-4 set a world marine speed record, one which stood for ten years. In 1891, Bell had begun experiments to develop motor-powered heavier-than-air aircraft. The AEA was first formed after Bell shared his vision of flight with his wife, who advised him to seek "young" help, as Bell was then 60 years old. In 1898, Bell experimented with tetrahedral box kites and wings constructed of multiple compound tetrahedral kites covered in maroon silk. The tetrahedral wings were named "Cygnet" I, II, and III, and were flown both unmanned and manned ("Cygnet I" crashed during a flight carrying Selfridge) in the period from 1907 to 1912. Some of Bell's kites are on display at the Alexander Graham Bell National Historic Site. Bell was a supporter of aerospace engineering research through the Aerial Experiment Association (AEA), officially formed at Baddeck, Nova Scotia, in October 1907 at the suggestion of his wife Mabel and with her financial support after the sale of some of her real estate. The AEA was headed by Bell, and its founding members were four young men: American Glenn H. Curtiss, a motorcycle manufacturer who at the time held the title "world's fastest man", having set a speed record on his self-constructed motor bicycle, who was later awarded the Scientific American Trophy for the first official one-kilometre flight in the Western hemisphere, and who went on to become a world-renowned airplane manufacturer; Lieutenant Thomas Selfridge, an official observer from the U.S. Federal government and one of the few people in the army who believed that aviation was the future; Frederick W. Baldwin, the first Canadian and first British subject to pilot a public flight, in Hammondsport, New York; and J. A. D. McCurdy (both Baldwin and McCurdy were new engineering graduates from the University of Toronto). The AEA's work progressed to heavier-than-air machines, applying their knowledge of kites to gliders. Moving to Hammondsport, the group then designed and built the "Red Wing", framed in bamboo and covered in red silk and powered by a small air-cooled engine. On March 12, 1908, over Keuka Lake, the biplane lifted off on the first public flight in North America. 
The innovations that were incorporated into this design included a cockpit enclosure and tail rudder (later variations on the original design would add ailerons as a means of control). One of the AEA's inventions, a practical wingtip form of the aileron, was to become a standard component on all aircraft. The "White Wing" and "June Bug" were to follow, and by the end of 1908, over 150 flights without mishap had been accomplished. However, the AEA had depleted its initial reserves and only a $15,000 grant from Mrs. Bell allowed it to continue with experiments. Lt. Selfridge had also become the first person killed in a powered heavier-than-air flight in a crash of the Wright Flyer at Fort Myer, Virginia, on September 17, 1908. Their final aircraft design, the "Silver Dart", embodied all of the advancements found in the earlier machines. On February 23, 1909, Bell was present as the "Silver Dart", flown by J. A. D. McCurdy from the frozen ice of Bras d'Or, made the first aircraft flight in Canada. Bell had worried that the flight was too dangerous and had arranged for a doctor to be on hand. With the successful flight, the AEA disbanded and the "Silver Dart" would revert to Baldwin and McCurdy, who founded the Canadian Aerodrome Company and would later demonstrate the aircraft to the Canadian Army. Bell was connected with the eugenics movement in the United States. In his lecture "Memoir upon the formation of a deaf variety of the human race", presented to the National Academy of Sciences on November 13, 1883 (the year of his election as a Member of the National Academy of Sciences), he noted that congenitally deaf parents were more likely to produce deaf children and tentatively suggested that couples where both parties were deaf should not marry. However, it was his hobby of livestock breeding which led to his appointment to biologist David Starr Jordan's Committee on Eugenics, under the auspices of the American Breeders' Association. The committee unequivocally extended the principles of selective breeding to humans. Honors and tributes flowed to Bell in increasing numbers as his invention became ubiquitous and his personal fame grew. Bell received numerous honorary degrees from colleges and universities, to the point that the requests almost became burdensome. During his life, he also received dozens of major awards, medals, and other tributes. These included statuary monuments to both him and the new form of communication his telephone created, among them the Bell Telephone Memorial erected in his honor in "Alexander Graham Bell Gardens" in Brantford, Ontario, in 1917. A large number of Bell's writings, personal correspondence, notebooks, papers, and other documents reside at both the United States Library of Congress Manuscript Division (as the "Alexander Graham Bell Family Papers") and the Alexander Graham Bell Institute, Cape Breton University, Nova Scotia; major portions are available for online viewing. A number of historic sites and other marks commemorate Bell in North America and Europe, including the first telephone companies in the United States and Canada. In 1880, Bell received the Volta Prize from the French government, with a purse of 50,000 French francs, for the invention of the telephone. Among the luminaries who judged were Victor Hugo and Alexandre Dumas fils. 
The Volta Prize was conceived by Napoleon III in 1852 and named in honor of Alessandro Volta, with Bell becoming the second recipient of the grand prize in its history. Since Bell was becoming increasingly affluent, he used his prize money to create endowment funds (the "Volta Fund") and institutions in and around the United States capital of Washington, D.C. These included the prestigious Volta Laboratory Association (1880), also known as the Volta Laboratory and as the Alexander Graham Bell Laboratory, which eventually led to the Volta Bureau (1887), a center for studies on deafness that is still in operation in Georgetown, Washington, D.C. The Volta Laboratory became an experimental facility devoted to scientific discovery, and the very next year it improved Edison's phonograph by substituting wax for tinfoil as the recording medium and incising the recording rather than indenting it, key upgrades that Edison himself later adopted. The laboratory was also the site where Bell and his associate Charles Sumner Tainter invented his "proudest achievement", the photophone, the "optical telephone" which presaged fibre-optic telecommunications, while the Volta Bureau would later evolve into the Alexander Graham Bell Association for the Deaf and Hard of Hearing (the AG Bell), a leading center for the research and pedagogy of deafness. In partnership with Gardiner Greene Hubbard, Bell helped establish the publication "Science" during the early 1880s. In 1898, Bell was elected as the second president of the National Geographic Society, serving until 1903, and was primarily responsible for the extensive use of illustrations, including photography, in the magazine. He also served for many years as a Regent of the Smithsonian Institution (1898–1922). The French government conferred on him the decoration of the Légion d'honneur (Legion of Honor); the Royal Society of Arts in London awarded him the Albert Medal in 1902; the University of Würzburg, Bavaria, granted him a PhD; and he was awarded the Franklin Institute's Elliott Cresson Medal in 1912. He was one of the founders of the American Institute of Electrical Engineers in 1884 and served as its president from 1891 to 1892. Bell was later awarded the AIEE's Edison Medal in 1914 "For meritorious achievement in the invention of the telephone". The "bel" (B) and the smaller "decibel" (dB) are logarithmic units for expressing ratios of power and of field quantities such as sound pressure; they were developed at Bell Labs and named in his honor (a brief numerical sketch follows this paragraph). Since 1976, the IEEE's Alexander Graham Bell Medal has been awarded to honor outstanding contributions in the field of telecommunications. In 1936, the US Patent Office declared Bell first on its list of the country's greatest inventors, leading to the US Post Office issuing a commemorative stamp honoring Bell in 1940 as part of its 'Famous Americans Series'. The First Day of Issue ceremony was held on October 28 in Boston, Massachusetts, the city where Bell spent considerable time on research and working with the deaf. The Bell stamp became very popular and sold out in little time. The stamp became, and remains to this day, the most valuable one of the series. The 150th anniversary of Bell's birth in 1997 was marked by a special issue of commemorative £1 banknotes from the Royal Bank of Scotland. 
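As a brief aside on the unit itself, and as a hedged sketch rather than anything drawn from Bell's own work: the decibel expresses a ratio on a logarithmic scale, with a factor of 10 for power ratios and a factor of 20 for field quantities such as sound pressure, and conventional sound pressure level is measured against a 20 micropascal reference in air; ten decibels make one bel. The example pressure and power values below are arbitrary and chosen only for illustration.

```python
import math

# Decibel arithmetic: sound pressure level (SPL) relative to the
# conventional 20 micropascal reference for sound in air, and a plain
# power ratio in decibels. Example inputs are arbitrary illustrations.

P_REF_PA = 20e-6  # reference sound pressure in pascals

def spl_db(pressure_pa: float) -> float:
    """Return sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(pressure_pa / P_REF_PA)

def power_ratio_db(power: float, reference_power: float) -> float:
    """Return a power ratio expressed in decibels (10 dB = 1 bel)."""
    return 10.0 * math.log10(power / reference_power)

print(spl_db(0.02))              # 0.02 Pa -> 60.0 dB, roughly conversational speech
print(power_ratio_db(2.0, 1.0))  # doubling power adds about 3.01 dB
```

In practice the bel itself is rarely used directly; levels are almost always quoted in decibels.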
The illustrations on the reverse of the banknote include Bell's face in profile, his signature, and objects from Bell's life and career: users of the telephone over the ages; an audio wave signal; a diagram of a telephone receiver; geometric shapes from engineering structures; representations of sign language and the phonetic alphabet; the geese which helped him to understand flight; and the sheep which he studied to understand genetics. Additionally, the Government of Canada honored Bell in 1997 with a C$100 gold coin, in tribute also to the 150th anniversary of his birth, and with a silver dollar coin in 2009 in honor of the 100th anniversary of flight in Canada. That first flight was made by an airplane designed under Dr. Bell's tutelage, named the Silver Dart. Bell's image, and images of many of his inventions, have graced paper money, coinage, and postal stamps in numerous countries worldwide for many decades. Alexander Graham Bell was ranked 57th among the 100 Greatest Britons (2002) in an official BBC nationwide poll, among the Top Ten Greatest Canadians (2004), and among the 100 Greatest Americans (2005). In 2006, Bell was also named as one of the 10 greatest Scottish scientists in history after having been listed in the National Library of Scotland's 'Scottish Science Hall of Fame'. Bell's name is still widely known and used in the names of dozens of educational institutions, corporations, and streets and places around the world. Alexander Graham Bell, who could not complete the university program of his youth, received at least a dozen honorary degrees from academic institutions, including eight honorary LL.D.s (Doctorate of Laws), two Ph.D.s, a D.Sc., and an M.D. Bell died of complications arising from diabetes on August 2, 1922, at his private estate in Cape Breton, Nova Scotia, at age 75. Bell had also been afflicted with pernicious anemia. His last view of the land he had inhabited was by moonlight on his mountain estate at 2:00 a.m. While tending to him after his long illness, Mabel, his wife, whispered, "Don't leave me." By way of reply, Bell signed "no...", lost consciousness, and died shortly after. On learning of Bell's death, the Canadian Prime Minister, Mackenzie King, cabled his condolences to Mrs. Bell. Bell's coffin was constructed of Beinn Bhreagh pine by his laboratory staff, lined with the same red silk fabric used in his tetrahedral kite experiments. To help celebrate his life, his wife asked guests not to wear black (the traditional funeral color) while attending his service, during which soloist Jean MacDonald sang a verse of Robert Louis Stevenson's "Requiem". Upon the conclusion of Bell's funeral, "every phone on the continent of North America was silenced in honor of the man who had given to mankind the means for direct communication at a distance". Alexander Graham Bell was buried atop Beinn Bhreagh mountain, on his estate where he had resided increasingly for the last 35 years of his life, overlooking Bras d'Or Lake. He was survived by his wife Mabel, his two daughters, Elsie May and Marian, and nine of his grandchildren.
https://en.wikipedia.org/wiki?curid=852
Anatolia Anatolia (from Greek Ἀνατολή, "Anatolḗ", 'east' or '[sun]rise'), also known as Asia Minor (Medieval and Modern Greek Μικρὰ Ἀσία, "Mikrà Asía", 'small Asia'), Asian Turkey, the Anatolian peninsula or the Anatolian plateau, is a large peninsula in West Asia and the westernmost protrusion of the Asian continent. It makes up the majority of modern-day Turkey. The region is bounded by the Black Sea to the north, the Mediterranean Sea to the south, the Armenian Highlands to the east and the Aegean Sea to the west. The Sea of Marmara forms a connection between the Black and Aegean seas through the Bosphorus and Dardanelles straits and separates Anatolia from Thrace on the Balkan peninsula of Europe. The eastern border of Anatolia is traditionally held to be a line between the Gulf of Alexandretta and the Black Sea, bounded by the Armenian Highland to the east and Mesopotamia to the southeast. Thus, traditionally Anatolia is the territory that comprises approximately the western two-thirds of the Asian part of Turkey. Today, Anatolia is also often considered to be synonymous with Asian Turkey, which comprises almost the entire country; its eastern and southeastern borders are widely taken to be Turkey's eastern border. By some definitions, the Armenian Highlands lies beyond the boundary of the Anatolian plateau. The official name of this inland region is the Eastern Anatolia Region. The ancient inhabitants of Anatolia spoke the now-extinct Anatolian languages, which were largely replaced by the Greek language starting from classical antiquity and during the Hellenistic, Roman and Byzantine periods. Major Anatolian languages included Hittite, Luwian, and Lydian, among other more poorly attested relatives. The Turkification of Anatolia began under the Seljuk Empire in the late 11th century and continued under the Ottoman Empire between the late 13th and early 20th centuries. However, various non-Turkic languages continue to be spoken by minorities in Anatolia today, including Kurdish, Neo-Aramaic, Armenian, Arabic, Laz, Georgian and Greek. Other ancient peoples in the region included Galatians, Hurrians, Assyrians, Hattians, Cimmerians, as well as Ionian, Dorian and Aeolian Greeks. Traditionally, Anatolia is considered to extend in the east to an indefinite line running from the Gulf of Alexandretta to the Black Sea, coterminous with the Anatolian Plateau. This traditional geographical definition is used, for example, in the latest edition of "Merriam-Webster's Geographical Dictionary". Under this definition, Anatolia is bounded to the east by the Armenian Highlands, and the Euphrates before that river bends to the southeast to enter Mesopotamia. To the southeast, it is bounded by the ranges that separate it from the Orontes valley in Syria and the Mesopotamian plain. Following the Armenian genocide, Ottoman Armenia was renamed "Eastern Anatolia" by the newly established Turkish government. Vazken Davidian terms the expanded use of "Anatolia" to apply to territory formerly referred to as Armenia an "ahistorical imposition", and notes that a growing body of literature is uncomfortable with referring to the Ottoman East as "Eastern Anatolia". The highest mountain in "Eastern Anatolia" (on the Armenian Plateau) is Mount Ararat (5123 m). The Euphrates, Araxes, Karasu and Murat rivers connect the Armenian Plateau to the South Caucasus and the Upper Euphrates Valley. Along with the Çoruh, these rivers are the longest in "Eastern Anatolia". 
The English-language name "Anatolia" derives from the Greek Ἀνατολή ("Anatolḗ"), meaning "the East" or, more literally, "sunrise" (comparable to the Latin-derived terms "levant" and "orient"). The precise reference of this term has varied over time, perhaps originally referring to the Aeolian, Ionian and Dorian colonies on the west coast of Asia Minor. In the Byzantine Empire, the Anatolic Theme (Ἀνατολικόν θέμα, "the Eastern theme") was a "theme" covering the western and central parts of Turkey's present-day Central Anatolia Region, centered around Iconium, but ruled from the city of Amorium. The term "Anatolia", with its "-ia" ending, is probably a Medieval Latin innovation. The modern Turkish form "Anadolu" derives directly from the Greek name Ἀνατολή ("Anatolḗ"). The Russian male name Anatoly, the French Anatole and plain Anatol, all stemming from saints Anatolius of Laodicea (d. 283) and Anatolius of Constantinople (d. 458; the first Patriarch of Constantinople), share the same linguistic origin. The oldest known reference to Anatolia – as "Land of the Hatti" – appears on Mesopotamian cuneiform tablets from the period of the Akkadian Empire (2350–2150 BC). The first recorded name the Greeks used for the Anatolian peninsula, though not particularly popular at the time, was Ἀσία ("Asía"), perhaps from an Akkadian expression for the "sunrise", or possibly echoing the name of the Assuwa league in western Anatolia. The Romans used it as the name of their province, comprising the west of the peninsula plus the nearby Aegean islands. As the name "Asia" broadened its scope to apply to the vaster region east of the Mediterranean, some Greeks in Late Antiquity came to use the name Asia Minor (Μικρὰ Ἀσία, "Mikrà Asía"), meaning "Lesser Asia", to refer to present-day Anatolia, whereas the administration of the Empire preferred the description Ἀνατολή ("Anatolḗ", "the East"). The endonym Ῥωμανία ("Rhōmanía", "the land of the Romans, i.e. the Eastern Roman Empire") was understood as another name for the province by the invading Seljuq Turks, who founded a Sultanate of Rûm in 1077. Thus (land of the) Rûm became another name for Anatolia. By the 12th century, Europeans had started referring to Anatolia as "Turchia". During the era of the Ottoman Empire, mapmakers outside the Empire referred to the mountainous plateau in eastern Anatolia as Armenia. Other contemporary sources called the same area Kurdistan. Geographers have variously used the terms east Anatolian plateau and Armenian plateau to refer to the region, although the territory encompassed by each term largely overlaps with the other. According to archaeologist Lori Khatchadourian, this difference in terminology "primarily result[s] from the shifting political fortunes and cultural trajectories of the region since the nineteenth century." Turkey's First Geography Congress in 1941 created two geographical regions of Turkey to the east of the Gulf of Iskenderun-Black Sea line, named the Eastern Anatolia Region and the Southeastern Anatolia Region, the former largely corresponding to the western part of the Armenian Highlands, the latter to the northern part of the Mesopotamian plain. According to Richard Hovannisian, this changing of toponyms was "necessary to obscure all evidence" of Armenian presence as part of a campaign of genocide denial embarked upon by the newly established Turkish government and what Hovannisian calls its "foreign collaborators". Human habitation in Anatolia dates back to the Paleolithic. 
Neolithic Anatolia has been proposed as the homeland of the Indo-European language family, although linguists tend to favour a later origin in the steppes north of the Black Sea. However, it is clear that the Anatolian languages, the earliest attested branch of Indo-European, have been spoken in Anatolia since at least the 19th century BC. The earliest historical records of Anatolia stem from the southeast of the region and are from the Mesopotamian-based Akkadian Empire during the reign of Sargon of Akkad in the 24th century BC. Scholars generally believe the earliest indigenous populations of Anatolia were the Hattians and Hurrians. The Hattians spoke a language of unclear affiliation, and the Hurrian language belongs to a small family called Hurro-Urartian. These languages are now extinct; relationships with indigenous languages of the Caucasus have been proposed but are not generally accepted. The region was famous for exporting raw materials, and areas of Hattian- and Hurrian-populated southeast Anatolia were colonised by the Akkadians. After the fall of the Akkadian Empire in the mid-21st century BC, the Assyrians, who were the northern branch of the Akkadian people, colonised parts of the region between the 21st and mid-18th centuries BC and claimed its resources, notably silver. One of the numerous cuneiform records, dated to circa the 20th century BC and found in Anatolia at the Assyrian colony of Kanesh, uses an advanced system of trading computations and credit lines. Unlike the Akkadians and their descendants, the Assyrians, whose Anatolian possessions were peripheral to their core lands in Mesopotamia, the Hittites were centred at Hattusa (modern Boğazkale) in north-central Anatolia by the 17th century BC. They were speakers of an Indo-European language, the Hittite language, or "nesili" (the language of Nesa) in Hittite. The Hittites emerged from local ancient cultures that grew in Anatolia, combined with the arrival of Indo-European languages. Attested for the first time in the Assyrian tablets of Nesa around 2000 BC, they conquered Hattusa in the 18th century BC, imposing themselves over Hattian- and Hurrian-speaking populations. According to the widely accepted Kurgan theory on the Proto-Indo-European homeland, however, the Hittites (along with the other Indo-European ancient Anatolians) were themselves relatively recent immigrants to Anatolia from the north. However, they did not necessarily displace the population genetically; they assimilated into the former peoples' culture, preserving the Hittite language. The Hittites adopted the Mesopotamian cuneiform script. In the Late Bronze Age, the Hittite New Kingdom (c. 1650 BC) was founded, becoming an empire in the 14th century BC after the conquest of Kizzuwatna in the south-east and the defeat of the Assuwa league in western Anatolia. The empire reached its height in the 13th century BC, controlling much of Asia Minor, northwestern Syria and northwest upper Mesopotamia. However, the Hittite advance toward the Black Sea coast was halted by the semi-nomadic pastoralist and tribal Kaskians, a non-Indo-European people who had earlier displaced the Palaic-speaking Indo-Europeans. Much of the history of the Hittite Empire concerned war with the rival empires of Egypt, Assyria and the Mitanni. The Egyptians eventually withdrew from the region after failing to gain the upper hand over the Hittites and becoming wary of the power of Assyria, which had destroyed the Mitanni Empire. 
The Assyrians and Hittites were then left to battle over control of eastern and southern Anatolia and colonial territories in Syria. The Assyrians had better success than the Egyptians, annexing much Hittite (and Hurrian) territory in these regions. After 1180 BC, during the Late Bronze Age collapse, the Hittite empire disintegrated into several independent Syro-Hittite states, subsequent to losing much territory to the Middle Assyrian Empire and being finally overrun by the Phrygians, another Indo-European people who are believed to have migrated from the Balkans. The Phrygian expansion into southeast Anatolia was eventually halted by the Assyrians, who controlled that region. Arameans encroached over the borders of south central Anatolia in the century or so after the fall of the Hittite empire, and some of the Syro-Hittite states in this region became an amalgam of Hittites and Arameans. These became known as Syro-Hittite states. Another Indo-European people, the Luwians, rose to prominence in central and western Anatolia circa 2000 BC. Their language belonged to the same linguistic branch as Hittite. The general consensus amongst scholars is that Luwian was spoken across a large area of western Anatolia, including (possibly) Wilusa (Troy), the Seha River Land (to be identified with the Hermos and/or Kaikos valley), and the kingdom of Mira-Kuwaliya with its core territory of the Maeander valley. From the 9th century BC, Luwian regions coalesced into a number of states such as Lydia, Caria and Lycia, all of which had Hellenic influence. From the 10th to late 7th centuries BC, much of Anatolia (particularly the southeastern regions) fell to the Neo-Assyrian Empire, including all of the Syro-Hittite states, Tabal, Kingdom of Commagene, the Cimmerians and Scythians and swathes of Cappadocia. The Neo-Assyrian empire collapsed due to a bitter series of civil wars followed by a combined attack by Medes, Persians, Scythians and their own Babylonian relations. The last Assyrian city to fall was Harran in southeast Anatolia. This city was the birthplace of the last king of Babylon, the Assyrian Nabonidus and his son and regent Belshazzar. Much of the region then fell to the short-lived Iran-based Median Empire, with the Babylonians and Scythians briefly appropriating some territory. From the late 8th century BC, a new wave of Indo-European-speaking raiders entered northern and northeast Anatolia: the Cimmerians and Scythians. The Cimmerians overran Phrygia and the Scythians threatened to do the same to Urartu and Lydia, before both were finally checked by the Assyrians. The north-western coast of Anatolia was inhabited by Greeks of the Achaean/Mycenaean culture from the 20th century BC, related to the Greeks of south eastern Europe and the Aegean. Beginning with the Bronze Age collapse at the end of the 2nd millennium BC, the west coast of Anatolia was settled by Ionian Greeks, usurping the area of the related but earlier Mycenaean Greeks. Over several centuries, numerous Ancient Greek city-states were established on the coasts of Anatolia. Greeks started Western philosophy on the western coast of Anatolia (Pre-Socratic philosophy). In classical antiquity, Anatolia was described by Herodotus and later historians as divided into regions that were diverse in culture, language and religious practices. The northern regions included Bithynia, Paphlagonia and Pontus; to the west were Mysia, Lydia and Caria; and Lycia, Pamphylia and Cilicia belonged to the southern shore. 
There were also several inland regions: Phrygia, Cappadocia, Pisidia and Galatia. Languages spoken included the late surviving Anatolic languages Isaurian and Pisidian, Greek in Western and coastal regions, Phrygian spoken until the 7th century CE, local variants of Thracian in the Northwest, the Galatian variant of Gaulish in Galatia until the 6th century CE, Cappadocian and Armenian in the East, and Kartvelian languages in the Northeast. Anatolia is known as the birthplace of minted coinage (as opposed to unminted coinage, which first appears in Mesopotamia at a much earlier date) as a medium of exchange, some time in the 7th century BC in Lydia. The use of minted coins continued to flourish during the Greek and Roman eras. During the 6th century BC, all of Anatolia was conquered by the Persian Achaemenid Empire, the Persians having usurped the Medes as the dominant dynasty in Iran. In 499 BC, the Ionian city-states on the west coast of Anatolia rebelled against Persian rule. The Ionian Revolt, as it became known, though quelled, initiated the Greco-Persian Wars, which ended in a Greek victory in 449 BC, and the Ionian cities regained their independence. By the Peace of Antalcidas (387 BC), which ended the Corinthian War, Persia regained control over Ionia. In 334 BC, the Macedonian Greek king Alexander the Great conquered the peninsula from the Achaemenid Persian Empire. Alexander's conquest opened up the interior of Asia Minor to Greek settlement and influence. Following the death of Alexander and the breakup of his empire, Anatolia was ruled by a series of Hellenistic kingdoms, such as the Attalids of Pergamum and the Seleucids, the latter controlling most of Anatolia. A period of peaceful Hellenization followed, such that the local Anatolian languages had been supplanted by Greek by the 1st century BC. In 133 BC the last Attalid king bequeathed his kingdom to the Roman Republic, and western and central Anatolia came under Roman control, but Hellenistic culture remained predominant. Further annexations by Rome, in particular of the Kingdom of Pontus by Pompey, brought all of Anatolia under Roman control, except for the eastern frontier with the Parthian Empire, which remained unstable for centuries, causing a series of wars, culminating in the Roman-Parthian Wars. After the division of the Roman Empire, Anatolia became part of the East Roman, or Byzantine Empire. Anatolia was one of the first places where Christianity spread, so that by the 4th century AD, western and central Anatolia were overwhelmingly Christian and Greek-speaking. For the next 600 years, while Imperial possessions in Europe were subjected to barbarian invasions, Anatolia would be the center of the Hellenic world. It was one of the wealthiest and most densely populated places in the Late Roman Empire. Anatolia's wealth grew during the 4th and 5th centuries thanks, in part, to the Pilgrim's Road that ran through the peninsula. Literary evidence about the rural landscape has come down to us from the hagiographies of 6th century Nicholas of Sion and 7th century Theodore of Sykeon. Large urban centers included Ephesus, Pergamum, Sardis and Aphrodisias. Scholars continue to debate the cause of urban decline in the 6th and 7th centuries variously attributing it to the Plague of Justinian (541), and the 7th century Persian incursion and Arab conquest of the Levant. 
In the ninth and tenth centuries, a resurgent Byzantine Empire regained its lost territories, including even long-lost territory such as Armenia and Syria (ancient Aram). In the 10 years following the Battle of Manzikert in 1071, the Seljuk Turks from Central Asia migrated over large areas of Anatolia, with particular concentrations around the northwestern rim. The Turkish language and the Islamic religion were gradually introduced as a result of the Seljuk conquest, and this period marks the start of Anatolia's slow transition from predominantly Christian and Greek-speaking, to predominantly Muslim and Turkish-speaking (although ethnic groups such as Armenians, Greeks, and Assyrians remained numerous and retained Christianity and their native languages). In the following century, the Byzantines managed to reassert their control in western and northern Anatolia. Control of Anatolia was then split between the Byzantine Empire and the Seljuk Sultanate of Rûm, with the Byzantine holdings gradually being reduced. In 1255, the Mongols swept through eastern and central Anatolia, and would remain until 1335. The Ilkhanate garrison was stationed near Ankara. After the decline of the Ilkhanate from 1335 to 1353, the Mongol Empire's legacy in the region was the Uyghur Eretna Dynasty, which was overthrown by Kadi Burhan al-Din in 1381. By the end of the 14th century, most of Anatolia was controlled by various Anatolian beyliks. Smyrna fell in 1330, and the last Byzantine stronghold in Anatolia, Philadelphia, fell in 1390. The Turkmen Beyliks were under the control of the Mongols, at least nominally, through declining Seljuk sultans. The Beyliks did not mint coins in the names of their own leaders while they remained under the suzerainty of the Mongol Ilkhanids. The Osmanli ruler Osman I was the first Turkish ruler who minted coins in his own name, in the 1320s; they bear the legend "Minted by Osman son of Ertugrul". Since the minting of coins was a prerogative accorded in Islamic practice only to a sovereign, it can be considered that the Osmanli, or Ottoman Turks, had become formally independent from the Mongol Khans. Among the Turkish leaders, the Ottomans emerged as a great power under Osman I and his son Orhan I. The Anatolian beyliks were successively absorbed into the rising Ottoman Empire during the 15th century. It is not well understood how the Osmanlı, or Ottoman Turks, came to dominate their neighbours, as the history of medieval Anatolia is still little known. The Ottomans completed the conquest of the peninsula in 1517 with the taking of Halicarnassus (modern Bodrum) from the Knights of Saint John. With the acceleration of the decline of the Ottoman Empire in the early 19th century, and as a result of the expansionist policies of the Russian Empire in the Caucasus, many Muslim nations and groups in that region, mainly Circassians, Tatars, Azeris, Lezgis, Chechens and several Turkic groups, left their homelands and settled in Anatolia. As the Ottoman Empire further shrank in the Balkan regions and then fragmented during the Balkan Wars, much of the non-Christian populations of its former possessions, mainly Balkan Muslims (Bosnian Muslims, Albanians, Turks, Muslim Bulgarians and Greek Muslims such as the Vallahades from Greek Macedonia), were resettled in various parts of Anatolia, mostly in formerly Christian villages throughout Anatolia. 
A continuous reverse migration occurred from the early 19th century, when Greeks from Anatolia, Constantinople and the Pontus area migrated toward the newly independent Kingdom of Greece, and also towards the United States, the southern part of the Russian Empire, Latin America, and the rest of Europe. Following the Russo-Persian Treaty of Turkmenchay (1828) and the incorporation of Eastern Armenia into the Russian Empire, another migration involved the large Armenian population of Anatolia, which recorded significant migration rates from Western Armenia (Eastern Anatolia) toward the Russian Empire, especially toward its newly established Armenian provinces. Anatolia remained multi-ethnic until the early 20th century (see the rise of nationalism under the Ottoman Empire). During World War I, the Armenian Genocide, the Greek genocide (especially in Pontus), and the Assyrian genocide almost entirely removed the ancient indigenous communities of Armenian, Greek, and Assyrian populations in Anatolia and surrounding regions. Following the Greco-Turkish War of 1919–1922, most remaining ethnic Anatolian Greeks were forced out during the 1923 population exchange between Greece and Turkey. Of the remainder, most have left Turkey since then, leaving fewer than 5,000 Greeks in Anatolia today. Since the foundation of the Republic of Turkey in 1923, Anatolia has been within Turkey, its inhabitants being mainly Turks and Kurds (see demographics of Turkey and history of Turkey). Anatolia's terrain is structurally complex. A central massif composed of uplifted blocks and downfolded troughs, covered by recent deposits and giving the appearance of a plateau with rough terrain, is wedged between two folded mountain ranges that converge in the east. True lowland is confined to a few narrow coastal strips along the Aegean, Mediterranean, and Black Sea coasts. Flat or gently sloping land is rare and largely confined to the deltas of the Kızıl River, the coastal plains of Çukurova and the valley floors of the Gediz River and the Büyük Menderes River, as well as some interior high plains in Anatolia, mainly around Lake Tuz (Salt Lake) and the Konya Basin ("Konya Ovasi"). There are two mountain ranges in southern Anatolia: the Taurus and the Zagros mountains. Anatolia has a varied range of climates. The central plateau is characterized by a continental climate, with hot summers and cold snowy winters. The south and west coasts enjoy a typical Mediterranean climate, with mild rainy winters and warm dry summers. The Black Sea and Marmara coasts have a temperate oceanic climate, with cool foggy summers and much rainfall throughout the year. There are diverse plant and animal communities. The mountains and coastal plain of northern Anatolia experience a humid and mild climate. There are temperate broadleaf, mixed and coniferous forests. The central and eastern plateau, with its drier continental climate, has deciduous forests and forest steppes. Western and southern Anatolia, which have a Mediterranean climate, contain Mediterranean forests, woodlands, and scrub ecoregions. Almost 80% of the people currently residing in Anatolia are Turks. Kurds (Kurmanjis and Zazas) constitute a major community in southeastern Anatolia, and are the largest ethnic minority. Abkhazians, Albanians, Arabs, Arameans, Armenians, Assyrians, Azerbaijanis, Bosnian Muslims, Circassians, Gagauz, Georgians, Serbs, Greeks, Hemshin, Jews, Laz, Levantines, Pomaks, and a number of other ethnic groups also live in Anatolia in smaller numbers. 
Bamia is a traditional Anatolian stew prepared with lamb, okra, onion, and tomatoes as its primary ingredients.
https://en.wikipedia.org/wiki?curid=854
Apple Inc. Apple Inc. is an American multinational technology company headquartered in Cupertino, California, that designs, develops, and sells consumer electronics, computer software, and online services. It is considered one of the Big Tech companies, alongside Amazon, Google, Microsoft and Facebook. The company's hardware products include the iPhone smartphone, the iPad tablet computer, the Mac personal computer, the iPod portable media player, the Apple Watch smartwatch, the Apple TV digital media player, the AirPods wireless earbuds and the HomePod smart speaker. Apple's software includes the macOS, iOS, iPadOS, watchOS, and tvOS operating systems, the iTunes media player, the Safari web browser, the Shazam music identifier, and the iLife and iWork creativity and productivity suites, as well as professional applications like Final Cut Pro, Logic Pro, and Xcode. Its online services include the iTunes Store, the iOS App Store, the Mac App Store, Apple Music, Apple TV+, iMessage, and iCloud. Other services include the Apple Store, Genius Bar, AppleCare, Apple Pay, Apple Pay Cash, and Apple Card. Apple was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne in April 1976 to develop and sell Wozniak's Apple I personal computer, though Wayne sold his share back within 12 days. It was incorporated as Apple Computer, Inc., in January 1977, and sales of its computers, including the Apple II, grew quickly. Within a few years, Jobs and Wozniak had hired a staff of computer designers and had a production line. Apple went public in 1980 to instant financial success. Over the next few years, Apple shipped new computers featuring innovative graphical user interfaces, such as the original Macintosh in 1984, and Apple's marketing advertisements for its products received widespread critical acclaim. However, the high price of its products and limited application library caused problems, as did power struggles between executives. In 1985, Wozniak departed Apple amicably and remained an honorary employee, while Jobs and others resigned to found NeXT. As the market for personal computers expanded and evolved through the 1990s, Apple lost market share to the lower-priced duopoly of Microsoft Windows on Intel PC clones. The board recruited CEO Gil Amelio for what would be a 500-day charge to rehabilitate the financially troubled company, reshaping it with layoffs, executive restructuring, and a renewed product focus. In 1997, he led Apple to buy NeXT, resolving the company's failed operating system strategy and bringing Jobs back. Jobs gradually regained leadership, becoming CEO in 2000. Apple swiftly returned to profitability under the revitalizing "Think different" campaign, as Jobs rebuilt Apple's status by launching the iMac in 1998, opening the retail chain of Apple Stores in 2001, and acquiring numerous companies to broaden the software portfolio. In January 2007, Jobs renamed the company Apple Inc., reflecting its shifted focus toward consumer electronics, and launched the iPhone to great critical acclaim and financial success. In August 2011, Jobs resigned as CEO due to health complications, and Tim Cook became the new CEO. Two months later, Jobs died, marking the end of an era for the company. In June 2019, Jony Ive, Apple's chief design officer, left the company to start his own firm, but stated he would work with Apple as its primary client. Apple is well known for its size and revenues. Its worldwide annual revenue totaled $265 billion for the 2018 fiscal year. 
Apple is the world's largest technology company by revenue and one of the world's most valuable companies. It is also the world's third-largest mobile phone manufacturer after Samsung and Huawei. In August 2018, Apple became the first publicly traded U.S. company to be valued at over $1 trillion. The company employs 137,000 full-time employees and maintains 510 retail stores in 25 countries. It operates the iTunes Store, which is the world's largest music retailer. More than 1.5 billion Apple products are actively in use worldwide. The company also has a high level of brand loyalty and is ranked as the world's most valuable brand. However, Apple receives significant criticism regarding the labor practices of its contractors, its environmental practices, and unethical business practices, including anti-competitive behavior, as well as the origins of source materials. Apple Computer Company was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne as a business partnership. The company's first product was the Apple I, a computer designed and hand-built entirely by Wozniak, and first shown to the public at the Homebrew Computer Club. The Apple I was sold as a motherboard (with CPU, RAM, and basic textual-video chips), a base kit that would not be considered a complete personal computer today. The Apple I went on sale in July 1976 at a market price of $666.66. Wozniak later said he had no idea about the relation between the number and the mark of the beast, and that he came up with the price because he liked "repeating digits". Apple Computer, Inc. was incorporated on January 3, 1977, without Wayne, who had left and sold his share of the company back to Jobs and Wozniak for $800 only twelve days after having co-founded Apple. Multimillionaire Mike Markkula provided essential business expertise and funding of $250,000 during the incorporation of Apple. During the first five years of operations, revenues grew exponentially, doubling about every four months. Between September 1977 and September 1980, yearly sales grew from $775,000 to $118 million, an average annual growth rate of 533%. The Apple II, also invented by Wozniak, was introduced on April 16, 1977, at the first West Coast Computer Faire. It differed from its major rivals, the TRS-80 and Commodore PET, because of its character cell-based color graphics and open architecture. While early Apple II models used ordinary cassette tapes as storage devices, these were superseded by the introduction of a 5¼-inch floppy disk drive and interface, the Disk II, in 1978. The Apple II was chosen to be the desktop platform for the first "killer application" of the business world: VisiCalc, a spreadsheet program released in 1979. VisiCalc created a business market for the Apple II and gave home users an additional reason to buy an Apple II: compatibility with the office. Before VisiCalc, Apple had been a distant third-place competitor to Commodore and Tandy. By the end of the 1970s, Apple had a staff of computer designers and a production line. The company introduced the Apple III in May 1980 in an attempt to compete with IBM in the business and corporate computing market. Jobs and several Apple employees, including human–computer interface expert Jef Raskin, visited Xerox PARC in December 1979 to see a demonstration of the Xerox Alto. 
Xerox granted Apple engineers three days of access to the PARC facilities in return for the option to buy 100,000 shares (5.6 million split-adjusted shares ) of Apple at the pre-IPO price of $10 a share. Jobs was immediately convinced that all future computers would use a graphical user interface (GUI), and development of a GUI began for the Apple Lisa. In 1982, however, he was pushed from the Lisa team due to infighting. Jobs then took over Wozniak's and Raskin's low-cost-computer project, the Macintosh, and redefined it as a graphical system cheaper and faster than Lisa. In 1983, Lisa became the first personal computer sold to the public with a GUI, but was a commercial failure due to its high price and limited software titles, so in 1985 it would be repurposed as the high end Macintosh and discontinued in its second year. On December 12, 1980, Apple (ticker symbol "AAPL") went public selling 4.6 million shares at $22 per share ($.39 per share when adjusting for stock splits ), generating over $100 million, which was more capital than any IPO since Ford Motor Company in 1956. By the end of the day, the stock rose to $29 per share and 300 millionaires were created. Apple's market cap was $1.778 billion at the end of its first day of trading. In 1984, Apple launched the Macintosh, the first personal computer to be sold without a programming language. Its debut was signified by "1984", a $1.5 million television advertisement directed by Ridley Scott that aired during the third quarter of Super Bowl XVIII on January 22, 1984. This is now hailed as a watershed event for Apple's success and was called a "masterpiece" by CNN and one of the greatest TV advertisements of all time by "TV Guide". Macintosh sales were initially good, but began to taper off dramatically after the first three months due to its high price, slow speed, and limited range of available software. In early 1985, this sales slump triggered a power struggle between Steve Jobs and CEO John Sculley, who had been hired two years earlier by Jobs using the famous line, "Do you want to sell sugar water for the rest of your life or come with me and change the world?" Sculley decided to remove Jobs as the general manager of the Macintosh division, and gained unanimous support from the Apple board of directors. The board of directors instructed Sculley to contain Jobs and his ability to launch expensive forays into untested products. Rather than submit to Sculley's direction, Jobs attempted to oust him from his leadership role at Apple. Informed by Jean-Louis Gassée, Sculley found out that Jobs had been attempting to organize a coup and called an emergency executive meeting at which Apple's executive staff sided with Sculley and stripped Jobs of all operational duties. Jobs resigned from Apple in September 1985 and took a number of Apple employees with him to found NeXT Inc. Wozniak had also quit his active employment at Apple earlier in 1985 to pursue other ventures, expressing his frustration with Apple's treatment of the Apple II division and stating that the company had "been going in the wrong direction for the last five years". Despite Wozniak's grievances, he left the company amicably and both Jobs and Wozniak remained Apple shareholders. Wozniak continues to represent the company at events or in interviews, receiving a stipend estimated to be $120,000 per year for this role. 
The outlook on Macintosh improved with the introduction of the LaserWriter, the first reasonably priced PostScript laser printer, and PageMaker, an early desktop publishing application released in July 1985. It has been suggested that the combination of Macintosh, LaserWriter, and PageMaker was responsible for the creation of the desktop publishing market. After the departures of Jobs and Wozniak, the Macintosh product line underwent a steady change of focus to higher price points, the so-called "high-right policy" named for the position on a chart of price vs. profits. Jobs had argued that the company should produce products aimed at the consumer market and had pushed for a $1,000 price for the Macintosh, a target Apple was unable to meet. Newer models selling at higher price points offered higher profit margins, and appeared to have no effect on total sales as power users snapped up every increase in power. Although some worried about pricing themselves out of the market, the high-right policy was in full force by the mid-1980s, notably due to Jean-Louis Gassée's mantra of "fifty-five or die", referring to the 55% profit margins of the Macintosh II. Selling Macintosh at such high profit margins was only possible because of its dominant position in the desktop publishing market. This policy began to backfire in the last years of the decade as new desktop publishing programs appeared on PC clones that offered some or much of the same functionality of the Macintosh but at far lower price points. The company lost its monopoly in this market and had already estranged many of its original consumer customers, who could no longer afford its high-priced products. The Christmas season of 1989 was the first in the company's history with declining sales, which led to a 20% drop in Apple's stock price. During this period, the relationship between Sculley and Gassée deteriorated, leading Sculley to effectively demote Gassée in January 1990 by appointing Michael Spindler as the chief operating officer. Gassée left the company later that year. In October 1990, Apple introduced three lower-cost models, the Macintosh Classic, Macintosh LC, and Macintosh IIsi, all of which saw significant sales due to pent-up demand. In 1991, Apple introduced the PowerBook, replacing the "luggable" Macintosh Portable with a design that set the current shape for almost all modern laptops. The same year, Apple introduced System 7, a major upgrade to the operating system which added color to the interface and introduced new networking capabilities. It remained the architectural basis for the Classic Mac OS. The success of the PowerBook and other products brought increasing revenue. For a time, Apple was doing extremely well, introducing fresh new products and generating increasing profits in the process. The magazine "MacAddict" named the period between 1989 and 1991 as the "first golden age" of the Macintosh. Apple believed the Apple II series was too expensive to produce and took away sales from the low-end Macintosh. In October 1990, Apple released the Macintosh LC, and began efforts to promote that computer by advising developer technical support staff to recommend developing applications for Macintosh rather than Apple II, and authorizing salespersons to direct consumers towards Macintosh and away from Apple II. The Apple IIe was discontinued in 1993. The success of Apple's lower-cost consumer models, especially the LC, also led to the cannibalization of its higher-priced machines. 
To address this, management introduced several new brands, selling largely identical machines at different price points aimed at different markets. These were the high-end Quadra, the mid-range Centris line, and the consumer-marketed Performa series. This led to significant market confusion, as customers did not understand the difference between models. Apple also experimented with a number of other unsuccessful consumer targeted products during the 1990s, including digital cameras, portable CD audio players, speakers, video consoles, the eWorld online service, and TV appliances. Enormous resources were also invested in the problem-plagued Newton division based on John Sculley's unrealistic market forecasts. Ultimately, none of these products helped and Apple's market share and stock prices continued to slide. Throughout this period, Microsoft continued to gain market share with Windows by focusing on delivering software to cheap commodity personal computers, while Apple was delivering a richly engineered but expensive experience. Apple relied on high profit margins and never developed a clear response; instead, they sued Microsoft for using a GUI similar to the Apple Lisa in "Apple Computer, Inc. v. Microsoft Corp." The lawsuit dragged on for years before it was finally dismissed. At this time, a series of major product flops and missed deadlines sullied Apple's reputation, and Sculley was replaced as CEO by Michael Spindler. By the late 1980s, Apple was developing alternative platforms to System 6, such as A/UX and Pink. The System 6 platform itself was outdated because it was not originally built for multitasking. By the 1990s, Apple was facing competition from OS/2 and UNIX vendors such as Sun Microsystems. System 6 and 7 would need to be replaced by a new platform or reworked to run on modern hardware. In 1994, Apple, IBM, and Motorola formed the AIM alliance with the goal of creating a new computing platform (the PowerPC Reference Platform), which would use IBM and Motorola hardware coupled with Apple software. The AIM alliance hoped that PReP's performance and Apple's software would leave the PC far behind and thus counter Microsoft's monopoly. The same year, Apple introduced the Power Macintosh, the first of many Apple computers to use Motorola's PowerPC processor. In 1996, Spindler was replaced by Gil Amelio as CEO. Hired for his reputation as a corporate rehabilitator, Amelio made deep changes, including extensive layoffs and cost-cutting. After numerous failed attempts to modernize Mac OS, first with the Pink project from 1988 and later with Copland from 1994, Apple in 1997 purchased NeXT for its NeXTSTEP operating system and to bring Steve Jobs back. Apple was only weeks away from bankruptcy when Jobs returned. The NeXT acquisition was finalized on February 9, 1997, bringing Jobs back to Apple as an advisor. On July 9, 1997, Amelio was ousted by the board of directors after overseeing a three-year record-low stock price and crippling financial losses. Jobs acted as the interim CEO and began restructuring the company's product line; it was during this period that he identified the design talent of Jonathan Ive, and the pair worked collaboratively to rebuild Apple's status. At the August 1997 Macworld Expo in Boston, Jobs announced that Apple would join Microsoft to release new versions of Microsoft Office for the Macintosh, and that Microsoft had made a $150 million investment in non-voting Apple stock. 
On November 10, 1997, Apple introduced the Apple Store website, which was tied to a new build-to-order manufacturing strategy. On August 15, 1998, Apple introduced a new all-in-one computer reminiscent of the Macintosh 128K: the iMac. The iMac design team was led by Ive, who would later design the iPod and the iPhone. The iMac featured modern technology and a unique design, and sold almost 800,000 units in its first five months. During this period, Apple completed numerous acquisitions to create a portfolio of digital production software for both professionals and consumers. In 1998, Apple purchased Macromedia's Key Grip software project, signaling an expansion into the digital video editing market. The sale was an outcome of Macromedia's decision to solely focus on web development software. The product, still unfinished at the time of the sale, was renamed "Final Cut Pro" when it was launched on the retail market in April 1999. The development of Key Grip also led to Apple's release of the consumer video-editing product iMovie in October 1999. Next, Apple successfully acquired the German company Astarte, which had developed DVD authoring technology, as well as Astarte's corresponding products and engineering team in April 2000. Astarte's digital tool DVDirector was subsequently transformed into the professional-oriented DVD Studio Pro software product. Apple then employed the same technology to create iDVD for the consumer market. In July 2001, Apple acquired Spruce Technologies, a PC DVD authoring platform, to incorporate their technology into Apple's expanding portfolio of digital video projects. SoundJam MP, released by Casady & Greene in 1998, was renamed "iTunes" when Apple purchased it in 2000. The primary developers of the MP3 player and music library software moved to Apple as part of the acquisition, and simplified SoundJam's user interface, added the ability to burn CDs, and removed its recording feature and skin support. SoundJam was Apple's second choice for the core of Apple's music software project, originally codenamed iMusic, behind Panic's Audion. Apple was not able to set up a meeting with Panic in time to be fully considered as the latter was in the middle of similar negotiations with AOL. In 2002, Apple purchased Nothing Real for their advanced digital compositing application Shake, as well as Emagic for the music productivity application Logic. The purchase of Emagic made Apple the first computer manufacturer to own a music software company. The acquisition was followed by the development of Apple's consumer-level GarageBand application. The release of iPhoto in the same year completed the iLife suite. Mac OS X, based on NeXT's NeXTSTEP, OPENSTEP, and BSD Unix, was released on March 24, 2001, after several years of development. Aimed at consumers and professionals alike, Mac OS X aimed to combine the stability, reliability, and security of Unix with the ease of use afforded by an overhauled user interface. To aid users in migrating from Mac OS 9, the new operating system allowed the use of OS 9 applications within Mac OS X via the Classic Environment. On May 19, 2001, Apple opened its first official eponymous retail stores in Virginia and California. On October 23 of the same year, Apple debuted the iPod portable digital audio player. The product, which was first sold on November 10, 2001, was phenomenally successful with over 100 million units sold within six years. In 2003, Apple's iTunes Store was introduced. 
The service offered online music downloads for $0.99 a song and integration with the iPod. The iTunes Store quickly became the market leader in online music services, with over five billion downloads by June 19, 2008. Two years later, the iTunes Store was the world's largest music retailer. At the Worldwide Developers Conference keynote address on June 6, 2005, Jobs announced that Apple would begin producing Intel-based Mac computers in 2006. On January 10, 2006, the new MacBook Pro and iMac became the first Apple computers to use Intel's Core Duo CPU. By August 7, 2006, Apple made the transition to Intel chips for the entire Mac product line—over one year sooner than announced. The Power Mac, iBook, and PowerBook brands were retired during the transition; the Mac Pro, MacBook, and MacBook Pro became their respective successors. On April 29, 2009, "The Wall Street Journal" reported that Apple was building its own team of engineers to design microchips. Apple also introduced Boot Camp in 2006 to help users install Windows XP or Windows Vista on their Intel Macs alongside Mac OS X. Apple's success during this period was evident in its stock price. Between early 2003 and 2006, the price of Apple's stock increased more than tenfold, from around $6 per share (split-adjusted) to over $80. When Apple surpassed Dell's market cap in January 2006, Jobs sent an email to Apple employees saying Dell's CEO Michael Dell should eat his words. Nine years prior, Dell had said that if he ran Apple he would "shut it down and give the money back to the shareholders". Although Apple's market share in computers had grown, it remained far behind its competitor Microsoft Windows, accounting for about 8% of desktops and laptops in the US. Since 2001, Apple's design team has progressively abandoned the use of translucent colored plastics first used in the iMac G3. This design change began with the titanium-made PowerBook and was followed by the iBook's white polycarbonate structure and the flat-panel iMac. During his keynote speech at the Macworld Expo on January 9, 2007, Jobs announced that Apple Computer, Inc. would thereafter be known as "Apple Inc.", because the company had shifted its emphasis from computers to consumer electronics. This event also saw the announcement of the iPhone and the Apple TV. The company sold 270,000 iPhone units during the first 30 hours of sales, and the device was called "a game changer for the industry". Apple would achieve widespread success with its iPhone, iPod Touch, and iPad products, which introduced innovations in mobile phones, portable music players, and personal computers respectively. Furthermore, by early 2007, 800,000 Final Cut Pro users were registered. In an article posted on Apple's website on February 6, 2007, Jobs wrote that Apple would be willing to sell music on the iTunes Store without digital rights management (DRM), thereby allowing tracks to be played on third-party players, if record labels would agree to drop the technology. On April 2, 2007, Apple and EMI jointly announced the removal of DRM technology from EMI's catalog in the iTunes Store, effective in May 2007. Other record labels eventually followed suit and Apple published a press release in January 2009 to announce that all songs on the iTunes Store are available without their FairPlay DRM. In July 2008, Apple launched the App Store to sell third-party applications for the iPhone and iPod Touch. 
Within a month, the store sold 60 million applications and registered an average daily revenue of $1 million, with Jobs speculating in August 2008 that the App Store could become a billion-dollar business for Apple. By October 2008, Apple was the third-largest mobile handset supplier in the world due to the popularity of the iPhone. On December 16, 2008, Apple announced that 2009 would be the last year the corporation would attend the Macworld Expo, after more than 20 years of attendance, and that senior vice president of Worldwide Product Marketing Phil Schiller would deliver the 2009 keynote address in lieu of the expected Jobs. The official press release explained that Apple was "scaling back" on trade shows in general, including Macworld Tokyo and the Apple Expo in Paris, France, primarily because the enormous successes of the Apple Retail Stores and website had rendered trade shows a minor promotional channel. On January 14, 2009, Jobs announced in an internal memo that he would be taking a six-month medical leave of absence from Apple until the end of June 2009 and would spend the time focusing on his health. In the email, Jobs stated that "the curiosity over my personal health continues to be a distraction not only for me and my family, but everyone else at Apple as well", and explained that the break would allow the company "to focus on delivering extraordinary products". Though Jobs was absent, Apple recorded its best non-holiday quarter (Q1 FY 2009) during the recession with revenue of $8.16 billion and profit of $1.21 billion. After years of speculation and multiple rumored "leaks", Apple unveiled a large screen, tablet-like media device known as the iPad on January 27, 2010. The iPad ran the same touch-based operating system as the iPhone, and all iPhone apps were compatible with the iPad. This gave the iPad a large app catalog on launch, though having very little development time before the release. Later that year on April 3, 2010, the iPad was launched in the US. It sold more than 300,000 units on its first day, and 500,000 by the end of the first week. In May of the same year, Apple's market cap exceeded that of competitor Microsoft for the first time since 1989. In June 2010, Apple released the iPhone 4, which introduced video calling, multitasking, and a new uninsulated stainless steel design that acted as the phone's antenna. Later that year, Apple again refreshed its iPod line of MP3 players by introducing a multi-touch iPod Nano, an iPod Touch with FaceTime, and an iPod Shuffle that brought back the clickwheel buttons of earlier generations. It also introduced the smaller, cheaper second generation Apple TV which allowed renting of movies and shows. In October 2010, Apple shares hit an all-time high, eclipsing $300 (~$43 split adjusted). Later that month, Apple updated the MacBook Air laptop, iLife suite of applications, and unveiled Mac OS X Lion, the last version with the name Mac OS X. On January 6, 2011, the company opened its Mac App Store, a digital software distribution platform similar to the iOS App Store. On January 17, 2011, Jobs announced in an internal Apple memo that he would take another medical leave of absence for an indefinite period to allow him to focus on his health. Chief Operating Officer Tim Cook assumed Jobs's day-to-day operations at Apple, although Jobs would still remain "involved in major strategic decisions". Apple became the most valuable consumer-facing brand in the world. 
In June 2011, Jobs surprisingly took the stage and unveiled iCloud, an online storage and syncing service for music, photos, files, and software which replaced MobileMe, Apple's previous attempt at content syncing. This would be the last product launch Jobs would attend before his death. Alongside peer entities such as Atari and Cisco Systems, Apple was featured in the documentary "Something Ventured", which premiered in 2011 and explored the three-decade era that led to the establishment and dominance of Silicon Valley. It has been argued that Apple has achieved such efficiency in its supply chain that the company operates as a monopsony (one buyer with many sellers) and can dictate terms to its suppliers. In July 2011, due to the American debt-ceiling crisis, Apple's financial reserves were briefly larger than those of the U.S. Government. On August 24, 2011, Jobs resigned his position as CEO of Apple. He was replaced by Cook and Jobs became Apple's chairman. Apple did not have a chairman at the time and instead had two co-lead directors, Andrea Jung and Arthur D. Levinson, who continued with those titles until Levinson replaced Jobs as chairman of the board in November after Jobs' death. On October 5, 2011, Steve Jobs died, marking the end of an era for Apple. The first major product announcement by Apple following Jobs's passing occurred on January 19, 2012, when Apple's Phil Schiller introduced iBooks Textbooks for iOS and iBook Author for Mac OS X in New York City. Jobs had stated in his biography that he wanted to reinvent the textbook industry and education. From 2011 to 2012, Apple released the iPhone 4S and iPhone 5, which featured improved cameras, an intelligent software assistant named Siri, and cloud-synced data with iCloud; the third and fourth generation iPads, which featured Retina displays; and the iPad Mini, which featured a 7.9-inch screen in contrast to the iPad's 9.7-inch screen. These launches were successful, with the iPhone 5 (released September 21, 2012) becoming Apple's biggest iPhone launch with over two million pre-orders and sales of three million iPads in three days following the launch of the iPad Mini and fourth generation iPad (released November 3, 2012). Apple also released a third-generation 13-inch MacBook Pro with a Retina display and new iMac and Mac Mini computers. On August 20, 2012, Apple's rising stock price increased the company's market capitalization to a world-record $624 billion. This beat the non-inflation-adjusted record for market capitalization set by Microsoft in 1999. On August 24, 2012, a US jury ruled that Samsung should pay Apple $1.05 billion (£665m) in damages in an intellectual property lawsuit. Samsung appealed the damages award, which the Court reduced by $450 million. The Court further granted Samsung's request for a new trial. On November 10, 2012, Apple confirmed a global settlement that would dismiss all lawsuits between Apple and HTC up to that date, in favor of a ten-year license agreement for current and future patents between the two companies. It is predicted that Apple will make $280 million a year from this deal with HTC. A previously confidential email written by Jobs a year before his death was presented during the proceedings of the "Apple Inc. v. Samsung Electronics Co." lawsuits and became publicly available in early April 2014. With a subject line that reads "Top 100 – A," the email was sent only to the company's 100 most senior employees and outlines Jobs's vision of Apple Inc.'s future under 10 subheadings. 
Notably, Jobs declares a "Holy War with Google" for 2011 and schedules a "new campus" for 2015. In March 2013, Apple filed a patent for an augmented reality (AR) system that can identify objects in a live video stream and present information corresponding to these objects through a computer-generated information layer overlaid on top of the real-world image. The company also made several high-profile hiring decisions in 2013. On July 2, 2013, Apple recruited Paul Deneve, the Belgian president and CEO of Yves Saint Laurent, as a vice president reporting directly to Tim Cook. A mid-October 2013 announcement revealed that Burberry CEO Angela Ahrendts would join Apple as a senior vice president in mid-2014. Ahrendts had overseen Burberry's digital strategy for almost eight years and, during her tenure, sales increased to about $3.2 billion and the share price more than tripled. She resigned from Apple in 2019. Alongside Google vice-president Vint Cerf and AT&T CEO Randall Stephenson, Cook attended a closed-door summit held by President Obama on August 8, 2013, in regard to government surveillance and the Internet in the wake of the Edward Snowden NSA incident. On February 4, 2014, Cook met with Abdullah Gül, the President of Turkey, in Ankara to discuss the company's involvement in the Fatih project. In the first quarter of 2014, Apple reported sales of 51 million iPhones and 26 million iPads, setting all-time quarterly sales records. It also experienced a significant year-over-year increase in Mac sales, contrasted with a significant drop in iPod sales. In May 2014, the company confirmed its intent to acquire Dr. Dre and Jimmy Iovine's audio company Beats Electronics—producer of the "Beats by Dr. Dre" line of headphones and speaker products, and operator of the music streaming service Beats Music—for $3 billion, and to sell their products through Apple's retail outlets and resellers. Iovine believed that Beats had always "belonged" with Apple, as the company modeled itself after Apple's "unmatched ability to marry culture and technology." The acquisition was the largest purchase in Apple's history. Apple was at the top of Interbrand's annual Best Global Brands report for six consecutive years, from 2013 to 2018, most recently with a valuation of $214.48 billion. In January 2016, it was announced that one billion Apple devices were in active use worldwide. On May 12, 2016, Apple invested $1 billion in DiDi, a Chinese transportation network company. "The Information" reported in October 2016 that Apple had taken a board seat in Didi Chuxing, a move that James Vincent of "The Verge" speculated could be a strategic company decision by Apple to get closer to the automobile industry, particularly Didi Chuxing's reported interest in self-driving cars. On June 6, 2016, Fortune released its Fortune 500 list of companies ranked by revenue. Based on fiscal year 2015 revenue of $233 billion, Apple was the top technology company on the list and ranked third overall, up two spots from the previous year. On April 6, 2017, Apple launched Clips, an app that allows iPad and iPhone users to make and edit short videos with text, graphics, and effects. The app provides a way to produce short videos to share with other users on the Messages app, Instagram, Facebook, and other social networks. Apple also introduced Live Titles for Clips, which allows users to add live animated captions and titles using their voice. 
In May 2017, Apple refreshed two of its website designs. Its public relations "Apple Press Info" website was changed to an "Apple Newsroom" site, featuring a greater emphasis on imagery and therefore lower information density, and combining press releases, news items, and photos. Its "Apple Leadership" overview of company executives was also refreshed, adding a simpler layout with a prominent header image and two-column text fields. "9to5Mac" noted the design similarities to several of Apple's redesigned apps in iOS 10, particularly its Apple Music and News software. In June 2017, Apple announced the HomePod, its smart speaker intended to compete against Sonos, Google Home, and Amazon Echo. Towards the end of the year, "TechCrunch" reported that Apple was acquiring Shazam, a company specializing in music, TV, film and advertising recognition. The acquisition was confirmed a few days later, reportedly costing Apple $400 million, with media reports noting that the purchase looked like a move by Apple to get data and tools to bolster its Apple Music streaming service. The purchase was approved by the EU in September 2018. Also in June 2017, Apple appointed Jamie Erlicht and Zack Van Amburg to head the newly formed worldwide video unit. In November 2017, Apple announced it was branching out into original scripted programming: a drama series starring Jennifer Aniston and Reese Witherspoon, and a reboot of the anthology series Amazing Stories with Steven Spielberg. In June 2018, Apple signed the Writers Guild of America's minimum basic agreement and signed Oprah Winfrey to a multi-year content partnership. Additional partnerships for original series include Sesame Workshop and DHX Media and its subsidiary Peanuts Worldwide, as well as a partnership with A24 to create original films. Apple has ordered twenty-one television series and one film, with five further series in development. On June 5, 2018, Apple deprecated OpenGL and OpenGL ES across all of its operating systems and urged developers to use Metal instead (a brief illustrative sketch of this transition follows below). In August 2018, Apple purchased Akonia Holographics for its augmented reality goggle lenses. On January 29, 2019, Apple reported its first decline in revenues and profits in a decade. On February 14, 2019, Apple acquired DataTiger for its digital marketing technology, and that same month it bought the conversational computing company PullString (formerly ToyTalk). On July 25, 2019, Apple and Intel announced an agreement for Apple to acquire the smartphone modem business of Intel Mobile Communications for US$1 billion. On March 30, 2020, Apple acquired Dark Sky, a hyperlocal weather app, for an undisclosed sum; the standalone app would be shut down at the end of 2021. On April 3, 2020, Apple acquired Voysis, a Dublin-based company focused on AI digital voice technology, for an undisclosed sum. On May 14, 2020, Apple acquired NextVR, a virtual reality company based in Newport Beach, California. During its annual WWDC keynote speech on June 22, 2020, Apple announced it would transition away from Intel processors to processors developed in-house. The announcement was expected by industry analysts, and it has been noted that products featuring Apple's processors would allow for big increases in performance over the then-current Intel-based Mac models. Apple sells a variety of computer accessories for Macs, including the Thunderbolt Display, Magic Mouse, Magic Trackpad, Magic Keyboard, the AirPort wireless networking products, and Time Capsule. 
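To make the OpenGL-to-Metal transition mentioned above concrete, the following is a minimal sketch rather than Apple sample code: it assumes a Swift target on macOS or iOS with the Metal framework available, and shows the basic setup a developer performs instead of creating an OpenGL context, namely obtaining the system's default GPU device and a command queue, then submitting an empty command buffer.

    import Metal

    // Acquire the default GPU. On systems without Metal support this returns nil,
    // which is one reason OpenGL remained available as a fallback before its deprecation.
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("Metal is not supported on this machine")
    }

    // A command queue takes over the role an OpenGL context played for submitting GPU work.
    guard let commandQueue = device.makeCommandQueue(),
          let commandBuffer = commandQueue.makeCommandBuffer() else {
        fatalError("Could not create a Metal command queue or command buffer")
    }

    // Committing the (empty) command buffer submits it to the GPU; real code would
    // first encode render or compute passes into it.
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()
    print("Metal device in use: \(device.name)")

In a real renderer the command buffer would carry render or compute command encoders; the point here is only that device and command-queue creation is the Metal analogue of the context setup that OpenGL code performed.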
On October 23, 2001, Apple introduced the iPod digital music player. Several updated models have since been introduced, and the iPod brand is now the market leader in portable music players by a significant margin. More than 390 million units have been shipped. Apple has partnered with Nike to offer the Nike+iPod Sports Kit, enabling runners to synchronize and monitor their runs with iTunes and the Nike+ website. In late July 2017, Apple discontinued its iPod Nano and iPod Shuffle models, leaving only the iPod Touch available for purchase. At the Macworld Conference & Expo in January 2007, Steve Jobs introduced the long-anticipated iPhone, a convergence of an Internet-enabled smartphone and iPod. The first-generation iPhone was released on June 29, 2007, for $499 (4 GB) and $599 (8 GB) with an AT&T contract. On February 5, 2008, it was updated to have 16 GB of memory, in addition to the 8 GB and 4 GB models. It combined a 2.5G quad-band GSM and EDGE cellular phone with features found in handheld devices, running a scaled-down version of OS X (dubbed iPhone OS after the launch and later renamed iOS), with various Mac OS X applications such as Safari and Mail. It also included web-based and Dashboard apps such as Google Maps and Weather. The iPhone featured a touchscreen display, Bluetooth, and Wi-Fi (both "b" and "g"). A second version, the iPhone 3G, was released on July 11, 2008, with a reduced price of $199 for the 8 GB model and $299 for the 16 GB model. This version added support for 3G networking and assisted GPS navigation. The flat silver back and large antenna square of the original model were eliminated in favor of a glossy, curved black or white back. Software capabilities were improved with the release of the App Store, which provided iPhone-compatible applications to download. On April 24, 2009, the App Store surpassed one billion downloads. On June 8, 2009, Apple announced the iPhone 3GS. It provided an incremental update to the device, including faster internal components, support for faster 3G speeds, video recording capability, and voice control. At the Worldwide Developers Conference (WWDC) on June 7, 2010, Apple announced the redesigned iPhone 4. It featured a 960 × 640 display, the Apple A4 processor, a gyroscope for enhanced gaming, a 5 MP camera with LED flash, a front-facing VGA camera, and FaceTime video calling. Shortly after its release, reception issues were discovered by consumers, due to the stainless steel band around the edge of the device, which also serves as the phone's cellular signal and Wi-Fi antenna. The issue was addressed with a "Bumper Case" distributed by Apple for free to all owners for a few months. In June 2011, Apple overtook Nokia to become the world's biggest smartphone maker by volume. On October 4, 2011, Apple unveiled the iPhone 4S, which was first released on October 14, 2011. It features the Apple A5 processor and Siri voice assistant technology, the latter of which Apple had acquired in 2010 from SRI International's Artificial Intelligence Center. It also features an updated 8 MP camera with new optics. Apple introduced a new accessibility feature, Made for iPhone hearing aids, with the iPhone 4S; these hearing aids feature Live Listen, which can help the user hear a conversation in a noisy room or hear someone speaking across the room. Apple sold 4 million iPhone 4S phones in the first three days of availability. On September 12, 2012, Apple introduced the iPhone 5. 
It has a 4-inch display, 4G LTE connectivity, and the upgraded Apple A6 chip, among several other improvements. Two million iPhones were sold in the first twenty-four hours of pre-ordering and over five million handsets were sold in the first three days of its launch. Upon the launch of the iPhone 5S and iPhone 5C, Apple set a new record for first-weekend smartphone sales by selling over nine million devices in the first three days after launch. The release of the iPhone 5S and 5C was the first time that Apple simultaneously launched two models. A patent filed in July 2013 revealed the development of a new iPhone battery system that uses location data in combination with data on the user's habits to moderate the handsets' power settings accordingly. Apple has been working towards a power management system that would provide features such as the ability of the iPhone to estimate how long a user will be away from a power source, in order to modify energy usage, and a detection function that adjusts the charging rate to best suit the type of power source being used. In a March 2014 interview, Apple designer Jonathan Ive used the iPhone as an example of Apple's ethos of creating high-quality, life-changing products, explaining that the phones are comparatively expensive due to the intensive effort used to make them. On September 9, 2014, Apple introduced the iPhone 6 alongside the iPhone 6 Plus, both of which have screen sizes over 4 inches. One year later, Apple introduced the iPhone 6S and iPhone 6S Plus, which added a new technology called 3D Touch and increased the rear camera to 12 MP and the FaceTime camera to 5 MP. On March 21, 2016, Apple introduced the first-generation iPhone SE, which has the 4-inch screen size last used with the 5S and nearly the same internal hardware as the 6S. In July 2016, Apple announced that one billion iPhones had been sold. On September 7, 2016, Apple introduced the iPhone 7 and the iPhone 7 Plus, which feature improved system and graphics performance, add water resistance, a new rear dual-camera system on the 7 Plus model, and, controversially, remove the 3.5 mm headphone jack. On September 12, 2017, Apple introduced the iPhone 8 and iPhone 8 Plus, standing as evolutionary updates to its previous phones with a faster processor, improved display technology, upgraded camera systems and wireless charging. The company also announced the iPhone X, which radically changed the hardware of the iPhone lineup, removing the home button in favor of facial recognition technology and featuring a near bezel-less design along with wireless charging. On September 12, 2018, Apple introduced the iPhone XS, iPhone XS Max and iPhone XR. The iPhone XS and iPhone XS Max feature Super Retina displays, a faster and improved dual-camera system that offers breakthrough photo and video features, the first 7-nanometer chip in a smartphone (the A12 Bionic chip with a next-generation Neural Engine), faster Face ID, wider stereo sound, and Dual SIM support on an iPhone for the first time. The iPhone XR comes in an all-screen glass and aluminium design with a 6.1-inch Liquid Retina LCD, the A12 Bionic chip with a next-generation Neural Engine, the TrueDepth camera system, Face ID, and an advanced camera system that creates dramatic portraits using a single camera lens. On September 10, 2019, Apple introduced the iPhone 11, iPhone 11 Pro, and the iPhone 11 Pro Max. 
The iPhone 11 features the same Liquid Retina LCD display used in 2018's iPhone XR. Overall, the iPhone 11 retains the same glass and aluminum design as the iPhone XR while adding new features such as an Ultra Wide 12 MP camera, a battery that lasts one hour longer than the iPhone XR's, and an IP68 rating for water and dust resistance. The iPhone 11 Pro and iPhone 11 Pro Max feature an all-new textured matte glass and stainless steel design and a triple-camera setup that includes Ultra Wide, Wide and Telephoto cameras. The iPhone 11 Pro series' battery life is capable of lasting up to five hours longer than that of the iPhone XS and XS Max. The iPhone 11 Pro and Pro Max also feature a new Super Retina XDR OLED display that is capable of a screen brightness of 800 nits. All new iPhones announced at Apple's September 2019 event feature an A13 Bionic chip with a third-generation Neural Engine, an Apple U1 chip, spatial audio playback, a low-light photo mode and an improved Face ID system. On April 15, 2020, Apple announced a new second-generation iPhone SE. It replicates the iPhone 8 design, with a 4.7-inch screen, sizable bezels on the top and bottom, and a home button with Touch ID, yet features the A13 Bionic chip and a 12 MP rear wide camera, similar to the iPhone 11 lineup. On January 27, 2010, Apple introduced its much-anticipated media tablet, the iPad. It offers multi-touch interaction with multimedia formats including newspapers, e-books, photos, videos, music, word processing documents, video games, and most existing iPhone apps using a 9.7-inch screen. It also includes a mobile version of Safari for web browsing, as well as access to the App Store, iTunes Library, iBookstore, Contacts, and Notes. Content is downloadable via Wi-Fi and optional 3G service or synced through the user's computer. AT&T was initially the sole U.S. provider of 3G wireless access for the iPad. On March 2, 2011, Apple introduced the iPad 2 with a faster processor and a camera on the front and back. It also added support for optional 3G service provided by Verizon in addition to AT&T. The availability of the iPad 2 was initially limited as a result of a devastating earthquake and tsunami in Japan in March 2011. The third-generation iPad was released on March 7, 2012, and marketed as "the new iPad". It added LTE service from AT&T or Verizon, an upgraded A5X processor, and Retina display. The dimensions and form factor remained relatively unchanged, with the new iPad being a fraction thicker and heavier than the previous version and featuring minor positioning changes. On October 23, 2012, Apple's fourth-generation iPad came out, marketed as the "iPad with Retina display". It added the upgraded A6X processor and replaced the traditional 30-pin dock connector with the all-digital Lightning connector. The iPad Mini was also introduced. It featured a reduced 7.9-inch display and much of the same internal specifications as the iPad 2. On October 22, 2013, Apple introduced the iPad Air and the iPad Mini with Retina Display, both featuring a new 64-bit Apple A7 processor. The iPad Air 2 was unveiled on October 16, 2014. It added better graphics and central processing, a camera burst mode, and other minor updates. The iPad Mini 3 was unveiled at the same time. Since its launch, iPad users have downloaded over three billion apps. The total number of App Store downloads is over 100 billion. 
On September 9, 2015, Apple announced the iPad Pro, an iPad with a 12.9-inch display that supports two new accessories, the Smart Keyboard and Apple Pencil. An updated IPad Mini 4 was announced at the same time. A 9.7-inch iPad Pro was announced on March 21, 2016. On June 5, 2017, Apple announced a new iPad Pro with a 10.5-inch display to replace the 9.7 inch model and an updated 12.9-inch model. The original Apple Watch smartwatch was announced by Tim Cook on September 9, 2014, being introduced as a product with health and fitness-tracking. It was released on April 24, 2015. The second generation of Apple Watch, Apple Watch Series 2, was released in September 2016, featuring greater water resistance, a faster processor, and brighter display. It was also released alongside a cheaper Series 1. On September 12, 2017, Apple introduced the Apple Watch Series 3 featuring LTE cellular connectivity, giving the wearable independence from an iPhone except for the setup process. On September 12, 2018, Apple introduced the Apple Watch Series 4, featuring new display, electrocardiogram, and fall detection. On September 10, 2019, Apple introduced the Apple Watch Series 5, featuring a new magnetometer, a faster processor, and a new always-on display. The Series 4 was discontinued. At the 2007 Macworld conference, Jobs demonstrated the Apple TV (Jobs accidentally referred to the device as "iTV", its codename, while on stage), a set-top video device intended to bridge the sale of content from iTunes with high-definition televisions. The device, running a variant of Mac OS X, links up to a user's TV and syncs over the wireless or wired network with one computer's iTunes library and can stream content from an additional four. The Apple TV originally incorporated a 40 GB hard drive for storage, included outputs for HDMI and component video, and played video at a maximum resolution of 720p. On May 30, 2007, a 160 GB hard disk drive was released alongside the existing 40 GB model. A software update released on January 15, 2008, allowed media to be purchased directly from the Apple TV. In September 2009, Apple discontinued the original 40 GB Apple TV but continued to produce and sell the 160 GB Apple TV. On September 1, 2010, Apple released a completely redesigned Apple TV running on an iOS variant and discontinued the older model, which ran on a Mac OS X variant. The new device is one-fourth the size, runs quieter, and replaces the need for a hard drive with media streaming from any iTunes library on the network along with 8 GB of flash memory to cache downloaded media. Like the iPad and the iPhone, Apple TV runs on an A4 processor. The memory included in the device is half of that in the iPhone 4 at 256 MB; the same as the iPad, iPhone 3GS, third and fourth-generation iPod Touch. It has HDMI out as the only video output source. Features include access to the iTunes Store to rent movies and TV shows (purchasing has been discontinued), streaming from internet video sources, including YouTube and Netflix, and media streaming from an iTunes library. Apple also reduced the price of the device to $99. A third generation of the device was introduced at an Apple event on March 7, 2012, with new features such as higher resolution (1080p) and a new user interface. At the September 9, 2015, event, Apple unveiled an overhauled Apple TV, which now runs a subsequent variant of iOS called tvOS, and contains 32 GB or 64 GB of NAND Flash to store games, programs, and to cache the current media playing. 
The release also coincided with the opening of a separate Apple TV App Store and a new Siri Remote with a glass touchpad, gyroscope, and microphone. On December 12, 2016, Apple released a new iOS and tvOS media player app called TV to replace the existing "Videos" iOS application. At the September 12, 2017, event, Apple released a new 4K Apple TV with the same form factor as the fourth-generation model. The 4K model is powered by the in-house-designed A10X SoC that also powers the second-generation iPad Pro, and adds support for high dynamic range. On March 25, 2019, Apple announced that Apple TV+, its over-the-top subscription video-on-demand web television service, would arrive in the fall of 2019. TV+ features exclusive original shows, movies, and documentaries. Apple also announced an update to the TV app with a new "Channels" feature, and said that the TV app would expand to macOS, numerous smart television models, Roku devices, and Amazon Fire TV devices later in 2019. Apple's first smart speaker, the HomePod, was released on February 9, 2018, after being delayed from its initial December 2017 release. It features seven tweeters in the base, a four-inch woofer in the top, and six microphones for voice control and acoustic optimization. On September 12, 2018, Apple announced that the HomePod would gain new features (search by lyrics, multiple timers, making and receiving phone calls, Find My iPhone, and Siri Shortcuts) as well as additional Siri languages. In 2019, Apple, Google, Amazon, and the Zigbee Alliance announced a partnership to make smart home products work together. Apple develops its own operating systems to run on its devices, including macOS for Mac personal computers, iOS for its iPhone, iPad and iPod Touch smartphones and tablets, watchOS for its Apple Watch smartwatches, and tvOS for its Apple TV digital media player. For iOS and macOS, Apple also develops its own software titles, including Pages for writing, Numbers for spreadsheets, and Keynote for presentations, as part of its iWork productivity suite. For macOS, it also offers iMovie and Final Cut Pro X for video editing, and GarageBand and Logic Pro X for music creation. Apple's range of server software includes the operating system macOS Server; Apple Remote Desktop, a remote systems management application; and Xsan, a storage area network file system. Apple also offers online services with iCloud, which provides cloud storage and synchronization for a wide range of user data, including documents, photos, music, device backups, and application data, and Apple Music, its music and video streaming service. According to the "Sydney Morning Herald", Apple wanted to start producing an electric car with autonomous driving as soon as 2020, and has made efforts to recruit battery development engineers and other electric automobile engineers from A123 Systems, LG Chem, Samsung Electronics, Panasonic, Toshiba, Johnson Controls and Tesla Motors. According to Steve Jobs, the company's name was inspired by his visit to an apple farm while on a fruitarian diet. Jobs thought the name "Apple" was "fun, spirited and not intimidating". Apple's first logo, designed by Ron Wayne, depicts Sir Isaac Newton sitting under an apple tree. It was almost immediately replaced by Rob Janoff's "rainbow Apple", the now-familiar rainbow-colored silhouette of an apple with a bite taken out of it. Janoff presented Jobs with several different monochromatic themes for the "bitten" logo, and Jobs immediately took a liking to it. 
However, Jobs insisted that the logo be colorized to humanize the company. The logo was designed with a bite so that it would not be confused with a cherry. The colored stripes were conceived to make the logo more accessible, and to represent the fact that the Apple II could generate graphics in color. This logo is often erroneously referred to as a tribute to Alan Turing, with the bite mark a reference to his method of suicide. Both Janoff and Apple deny any homage to Turing in the design of the logo. On August 27, 1999 (the year following the introduction of the iMac G3), Apple officially dropped the rainbow scheme and began to use monochromatic logos nearly identical in shape to the previous rainbow incarnation. An Aqua-themed version of the monochrome logo was used from 1998 to 2003, and a glass-themed version was used from 2007 to 2013. Steve Jobs and Steve Wozniak were Beatles fans, but Apple Inc. had name and logo trademark issues with Apple Corps Ltd., a multimedia company started by the Beatles in 1968. This resulted in a series of lawsuits and tension between the two companies. These issues ended with the settling of their lawsuit in 2007. Apple's first slogan, "Byte into an Apple", was coined in the late 1970s. From 1997 to 2002, the slogan "Think Different" was used in advertising campaigns, and is still closely associated with Apple. Apple also has slogans for specific product lines; for example, "iThink, therefore iMac" was used in 1998 to promote the iMac, and "Say hello to iPhone" has been used in iPhone advertisements. "Hello" was also used to introduce the original Macintosh, Newton, iMac ("hello (again)"), and iPod. From the introduction of the Macintosh in 1984 with the "1984" Super Bowl advertisement to the more modern Get a Mac adverts, Apple has been recognized for its efforts towards effective advertising and marketing for its products. However, claims made by later campaigns were criticized, particularly the 2005 Power Mac ads. Apple's product advertisements gained considerable attention as a result of their eye-catching graphics and catchy tunes. Musicians who benefited from an improved profile as a result of their songs being included on Apple advertisements include Canadian singer Feist with the song "1234" and Yael Naïm with the song "New Soul". Apple owns a YouTube channel where it releases advertisements, tips, and introductions for its devices. Semiotics is the study of how meaning is derived from symbols and signs, and it provides major insight for understanding brand management and brand loyalty. Ferdinand de Saussure, a Swiss linguist and semiotician, created a semiotic model that identifies two parts of a sign: the signified and the signifier. The signifier is the perceptual component that we physically see, and the signified is the concept which the sign refers to. In Saussure's model, the sign results from the association of a sound or object (the signifier) with a concept (the signified). In his model, the signified and signifier are "as inseparable as two sides of a piece of paper". The second popular semiotic model is the Peircean model. Charles Sanders Peirce was a logician. His model, like Saussure's, involves the relationship between the elements of signs and objects. However, the Peircean model adds that whoever is decoding the sign must have some previous understanding or knowledge about the transmitted message. 
Peirce's model can be represented as a triangle linking the representamen (the sign itself), the object (what the sign represents), and the interpretant (the effect the sign produces). The symbolic representation that a brand carries can affect how a consumer "recalls, internalizes, and relates" to the performance of a company. There is plenty of evidence to show that a company can easily fail if it does not keep track of how its brand changes with the media culture. Semiotic research can be used to help a company relate to its customers' culture over time and help its brand stand out in competitive markets. The first two Apple logos are drastically different from each other; however, they both share the sign of an apple. In the original logo designed by Ronald Wayne, Sir Isaac Newton is seen sitting under the famous apple tree, with an apple hanging above him, just before his insight into gravity. Analyzed with Saussure's model, the apple image is the signifier, and the signified is discovery, innovation, and the notion of thought. It was quickly realized that the original logo was too complicated and intellectual for the needed purpose. The company's mission was, and still is, to simplify technology for everyday life, and a fun and clever logo that spoke to computer-savvy people was needed. In 1977, Rob Janoff created the iconic rainbow apple symbol that is still recognized today. The logo has a double meaning and differs from the many serious corporate logos in existence at the time. Apple Inc. is well known for being an innovative company that challenges the status quo and established standards. Again using Saussure's semiotic model, the signifier is an apple with a bite taken out of it. Because Apple is seen as a challenger in the industry, the most common reading of the signified is the forbidden fruit of the Biblical Garden of Eden: the bitten apple recalls the tree of knowledge, symbolizing Apple as a rebellious young company ready to challenge the world, and the promise of knowledge that an entire culture of Apple users may gain from the product. The semiotics of the bite and the color of the logo can also be viewed from a technological standpoint: the bite (the signifier) plays on the computer storage unit, the byte (the signified). The rainbow coloring of the logo conveyed the message that the company's computers could produce color images. Steve Jobs argued that color was crucial for "humanizing the company" at that time. The only thing to change in the logo since 1977 has been its color. In 1998, a monochromatic logo was introduced with the release of the first iMac, the first Mac not to carry the iconic rainbow-colored apple since the logo's creation roughly 20 years earlier. The new look represented a new era for Apple: the logo's shape had become untouchable, and Apple's message was that it is better to be different. Apple customers gained a reputation for devotion and loyalty early in the company's history. In 1984, "BYTE" stated that: Apple evangelists were actively engaged by the company at one time, but this was after the phenomenon had already been firmly established. Apple evangelist Guy Kawasaki has called the brand fanaticism "something that was stumbled upon," while Ive explained in 2014 that "People have an incredibly personal relationship" with Apple's products. 
Apple Store openings and new product releases can draw crowds of hundreds, with some waiting in line as much as a day before the opening. The opening of New York City's Fifth Avenue "Cube" store in 2006 became the setting of a marriage proposal, and had visitors from Europe who flew in for the event. In June 2017, a newlywed couple took their wedding photos inside the then-recently opened Orchard Road Apple Store in Singapore. The high level of brand loyalty has been criticized and ridiculed, with critics applying the epithet "Apple fanboy" and mocking the lengthy lines before a product launch. An internal memo leaked in 2015 suggested the company planned to discourage long lines and direct customers to purchase its products on its website. "Fortune" magazine named Apple the most admired company in the United States in 2008, and in the world from 2008 to 2012. On September 30, 2013, Apple surpassed Coca-Cola to become the world's most valuable brand in the Omnicom Group's "Best Global Brands" report. Boston Consulting Group has ranked Apple as the world's most innovative brand every year since 2005. "The New York Times" in 1985 stated that "Apple above all else is a marketing company". John Sculley agreed, telling "The Guardian" newspaper in 1997 that "People talk about technology, but Apple was a marketing company. It was the marketing company of the decade." Research in 2002 by NetRatings indicated that the average Apple consumer was usually more affluent and better educated than other PC company consumers. The research indicated that this correlation could stem from the fact that on average Apple Inc. products were more expensive than other PC products. In response to a query about the devotion of loyal Apple consumers, Jonathan Ive responded: The Apple website home page has been used to commemorate, or pay tribute to, milestones and events outside of Apple's product offerings, including: Apple Inc.'s world corporate headquarters are located in the middle of Silicon Valley, at 1–6 Infinite Loop, Cupertino, California. This Apple campus has six buildings that total and was built in 1993 by Sobrato Development Cos. Apple has a satellite campus in neighboring Sunnyvale, California, where it houses a testing and research laboratory. AppleInsider claimed in March 2014 that Apple has a top-secret facility for development of the SG5 electric vehicle project codenamed "Titan" under the shell company name SixtyEight Research. In 2006, Apple announced its intention to build a second campus in Cupertino about east of the current campus and next to Interstate 280. The new campus building has been designed by Norman Foster. The Cupertino City Council approved the proposed "spaceship" design campus on October 15, 2013, after a 2011 presentation by Jobs detailing the architectural design of the new building and its environs. The new campus is planned to house up to 13,000 employees in one central, four-storied, circular building surrounded by extensive landscaping. It will feature a café with seating for 3,000 people and parking underground as well as in a parking structure. The 2.8 million square foot facility will also include Jobs's original designs for a fitness center and a corporate auditorium. Apple has expanded its campuses in Austin, Texas, concurrently with building Apple Park in Cupertino. The expansion consists of two locations, with one having of workspace, and the other . Apple will invest $1 billion to build the North Austin campus. 
At the biggest location, 6,000 employees work on technical support, manage Apple's network of suppliers to fulfill product shipments, aid in maintaining iTunes Store and App Store, handle economy, and continuously update Apple Maps with new data. At its smaller campus, 500 engineers work on next-generation processor chips to run in future Apple products. Apple's headquarters for Europe, the Middle East and Africa (EMEA) are located in Cork in the south of Ireland. The facility, which opened in 1980, is Apple's first location outside of the United States. Apple Sales International, which deals with all of Apple's international sales outside of the US, is located at Apple's campus in Cork along with Apple Distribution International, which similarly deals with Apple's international distribution network. On April 20, 2012, Apple added 500 new jobs at its European headquarters, increasing the total workforce from around 2,800 to 3,300 employees. The company will build a new office block on its Hollyhill Campus to accommodate the additional staff. Its United Kingdom headquarters is at Stockley Park on the outskirts of London. In February 2015, Apple opened its new 180,000-square-foot headquarters in Herzliya, Israel, designed to accommodate approximately 800 employees. This is Apple's third office located within Israel; the first, also in Herzliya, was obtained as part of the Anobit acquisition, and the other is a research center in Haifa. In December 2015, Apple bought the 70,000-square-foot manufacturing facility in North San Jose previously used by Maxim Integrated in an $18.2 million deal. The first Apple Stores were originally opened as two locations in May 2001 by then-CEO Steve Jobs, after years of unsuccessful attempts at store-within-a-store concepts. Seeing a need for improved retail presentation of the company's products, he began an effort in 1997 to revamp the retail program and improve the company's relationship with consumers, and hired Ron Johnson in 2000. Jobs relaunched Apple's online store in 1997, and opened the first two physical stores in 2001. The media initially speculated that Apple would fail, but its stores were highly successful, surpassing the sales numbers of competing nearby stores and within three years reaching US$1 billion in annual sales, becoming the fastest retailer in history to do so. Over the years, Apple has expanded the number of retail locations and its geographical coverage, with 499 stores across 22 countries worldwide. Strong product sales have placed Apple among the top-tier retail stores, with sales over $16 billion globally in 2011. In May 2016, Angela Ahrendts, Apple's then-Senior Vice President of Retail, unveiled a significantly redesigned Apple Store in Union Square, San Francisco, featuring large glass doors for the entry, open spaces, and rebranded rooms. In addition to purchasing products, consumers can get advice and help from "Creative Pros" – individuals with specialized knowledge of creative arts; get product support in a tree-lined Genius Grove; and attend sessions, conferences and community events, with Ahrendts commenting that the goal is to make Apple Stores into "town squares", a place where people naturally meet up and spend time. The new design will be applied to all Apple Stores worldwide, a process that has seen stores temporarily relocate or close. Many Apple Stores are located inside shopping malls, but Apple has built several stand-alone "flagship" stores in high-profile locations. 
It has been granted design patents and received architectural awards for its stores' designs and construction, specifically for its use of glass staircases and cubes. The success of Apple Stores has had significant influence over other consumer electronics retailers, who have lost traffic, control and profits due to a perceived higher quality of service and products at Apple Stores. Apple's notable brand loyalty among consumers causes long lines of hundreds of people at new Apple Store openings or product releases. Due to the popularity of the brand, Apple receives a large number of job applications, many of which come from young workers. Although Apple Store employees receive above-average pay, are offered money toward education and health care, and receive product discounts, there are limited or no paths of career advancement. A May 2016 report featuring an anonymous retail employee highlighted a hostile work environment with harassment from customers, intense internal criticism, and a lack of significant bonuses for securing major business contracts. Due to the COVID-19 pandemic, Apple closed its stores outside China until March 27, 2020. Despite the stores being closed, hourly workers continue to be paid. Workers across the company are allowed to work remotely if their jobs permit it. On March 24, 2020, in a memo, Senior Vice President of People and Retail Deirdre O'Brien announced that some of the company's retail stores were expected to reopen at the beginning of April. Apple is one of several highly successful companies founded in the 1970s that bucked the traditional notions of corporate culture. Jobs often walked around the office barefoot even after Apple became a Fortune 500 company. By the time of the "1984" television advertisement, Apple's informal culture had become a key trait that differentiated it from its competitors. According to a 2011 report in "Fortune," this has resulted in a corporate culture more akin to a startup than to a multinational corporation. As the company has grown and been led by a series of differently opinionated chief executives, it has arguably lost some of its original character. Nonetheless, it has maintained a reputation for fostering individuality and excellence that reliably attracts talented workers, particularly after Jobs returned to the company. Numerous Apple employees have stated that projects without Jobs's involvement often took longer than projects with it. To recognize the best of its employees, Apple created the Apple Fellows program, which awards individuals who make extraordinary technical or leadership contributions to personal computing while at the company. The Apple Fellowship has so far been awarded to individuals including Bill Atkinson, Steve Capps, Rod Holt, Alan Kay, Guy Kawasaki, Al Alcorn, Don Norman, Rich Page, and Steve Wozniak. At Apple, employees are intended to be specialists who are not exposed to functions outside their area of expertise. Jobs saw this as a means of having "best-in-class" employees in every role. For instance, Ron Johnson—Senior Vice President of Retail Operations until November 1, 2011—was responsible for site selection, in-store service, and store layout, yet had no control of the inventory in his stores. That inventory was instead managed by Tim Cook, who had a background in supply-chain management. Apple is known for strictly enforcing accountability. Each project has a "directly responsible individual" or "DRI" in Apple jargon. 
As an example, when iOS senior vice president Scott Forstall refused to sign Apple's official apology for numerous errors in the redesigned Maps app, he was forced to resign. Unlike other major U.S. companies, Apple provides a relatively simple compensation policy for executives that does not include perks enjoyed by other CEOs like country club fees or private use of company aircraft. The company typically grants stock options to executives every other year. In 2015, Apple had 110,000 full-time employees. This increased to 116,000 full-time employees the next year, a notable hiring decrease, largely due to its first revenue decline. Apple does not specify how many of its employees work in retail, though its 2014 SEC filing put the number at approximately half of its employee base. In September 2017, Apple announced that it had over 123,000 full-time employees. Apple has a strong culture of corporate secrecy, and has an anti-leak Global Security team that recruits from the National Security Agency, the Federal Bureau of Investigation, and the United States Secret Service. In December 2017, Glassdoor said Apple was the 48th best place to work, having originally entered at rank 19 in 2009, peaking at rank 10 in 2012, and falling down the ranks in subsequent years. An editorial article in "The Verge" in September 2016 by technology journalist Thomas Ricker explored some of the public's perceived lack of innovation at Apple in recent years, specifically stating that Samsung has "matched and even surpassed Apple in terms of smartphone industrial design" and citing the belief that Apple is incapable of producing another breakthrough moment in technology with its products. He goes on to write that the criticism focuses on individual pieces of hardware rather than the ecosystem as a whole, stating "Yes, iteration is boring. But it's also how Apple does business. [...] It enters a new market and then refines and refines and continues refining until it yields a success". He acknowledges that people are wishing for the "excitement of revolution", but argues that people want "the comfort that comes with harmony". Furthermore, he writes that "a device is only the starting point of an experience that will ultimately be ruled by the ecosystem in which it was spawned", referring to how decent hardware products can still fail without a proper ecosystem (specifically mentioning that Walkman did not have an ecosystem to keep users from leaving once something better came along), but how Apple devices in different hardware segments are able to communicate and cooperate through the iCloud cloud service with features including Universal Clipboard (in which text copied on one device can be pasted on a different device) as well as inter-connected device functionality including Auto Unlock (in which an Apple Watch can unlock a Mac in close proximity). He argues that Apple's ecosystem is its greatest innovation. "The Wall Street Journal" reported in June 2017 that Apple's increased reliance on Siri, its virtual personal assistant, has raised questions about how much Apple can actually accomplish in terms of functionality. Whereas Google and Amazon make use of big data and analyze customer information to personalize results, Apple has a strong pro-privacy stance, intentionally not retaining user data. "Siri is a textbook of leading on something in tech and then losing an edge despite having all the money and the talent and sitting in Silicon Valley", Holger Mueller, a technology analyst, told the "Journal". 
The report further claims that development on Siri has suffered due to team members and executives leaving the company for competitors, a lack of ambitious goals, and shifting strategies. Though it switched Siri's functions to machine learning and algorithms, which dramatically cut its error rate, the company reportedly still failed to anticipate the popularity of Amazon's Echo, which features the Alexa personal assistant. Improvements to Siri stalled, executives clashed, and there were disagreements over the restrictions imposed on third-party app interactions. While Apple acquired an England-based startup specializing in conversational assistants, Google's Assistant had already become capable of helping users select Wi-Fi networks by voice, and Siri was lagging in functionality. In December 2017, two articles from "The Verge" and "ZDNet" debated what had been a particularly devastating week for Apple's macOS and iOS software platforms. The former had experienced a severe security vulnerability, in which Macs running the then-latest macOS High Sierra software were vulnerable to a bug that let anyone gain administrator privileges by entering "root" as the username in system prompts, leaving the password field empty, and clicking "unlock" twice, gaining full access. The bug was publicly disclosed on Twitter, rather than through proper bug bounty programs. Apple released a security fix within a day and issued an apology, stating that "regrettably we stumbled" with regard to the security of the latest updates. After installing the security patch, however, file sharing was broken for users, with Apple releasing a support document with instructions to separately fix that issue. Though Apple publicly stated the promise of "auditing our development processes to help prevent this from happening again", users who installed the security update while running the older 10.13.0 version of the High Sierra operating system rather than the then-newest 10.13.1 release found that the "root" security vulnerability was re-introduced and persisted even after they fully updated their systems. On iOS, a date bug caused iOS devices that received local app notifications at 12:15am on December 2, 2017 to repeatedly restart. Users were advised to turn off notifications for their apps. Apple quickly released an update, issued overnight Cupertino, California time and outside of its usual software release window; one of the update's headline features had to be delayed for a few days. The combined problems of the week on both macOS and iOS caused "The Verge"'s Tom Warren to call it a "nightmare" for Apple's software engineers and to describe it as a significant lapse in Apple's ability to protect its more than 1 billion devices. "ZDNet"'s Adrian Kingsley-Hughes wrote that "it's hard to not come away from the last week with the feeling that Apple is slipping". Kingsley-Hughes also concluded his piece by referencing an earlier article, in which he wrote that "As much as I don't want to bring up the tired old 'Apple wouldn't have done this under Steve Jobs's watch' trope, a lot of what's happening at Apple lately is different from what they came to expect under Jobs. Not to say that things didn't go wrong under his watch, but product announcements and launches felt a lot tighter for sure, as did the overall quality of what Apple was releasing." 
He did, however, also acknowledge that such failures "may indeed have happened" with Jobs in charge, though returning to the previous praise for his demands of quality, stating "it's almost guaranteed that given his personality that heads would have rolled, which limits future failures". The company's manufacturing, procurement, and logistics enable it to execute massive product launches without having to maintain large, profit-sapping inventories. In 2011, Apple's profit margins were 40 percent, compared with between 10 and 20 percent for most other hardware companies. Cook's catchphrase to describe his focus on the company's operational arm is: "Nobody wants to buy sour milk". During the Mac's early history Apple generally refused to adopt prevailing industry standards for hardware, instead creating their own. This trend was largely reversed in the late 1990s, beginning with Apple's adoption of the PCI bus in the 7500/8500/9500 Power Macs. Apple has since joined the industry standards groups to influence the future direction of technology standards such as USB, AGP, HyperTransport, Wi-Fi, NVMe, PCIe and others in its products. FireWire is an Apple-originated standard that was widely adopted across the industry after it was standardized as IEEE 1394 and is a legally mandated port in all Cable TV boxes in the United States. Apple has gradually expanded its efforts in getting its products into the Indian market. In July 2012, during a conference call with investors, CEO Tim Cook said that he "[loves] India", but that Apple saw larger opportunities outside the region. India's requirement that 30% of products sold be manufactured in the country was described as "really adds cost to getting product to market". In October 2013, Indian Apple executives unveiled a plan for selling devices through instalment plans and store-within-a-store concepts, in an effort to expand further into the market. The news followed Cook's acknowledgment of the country in July when sales results showed that iPhone sales in India grew 400% during the second quarter of 2013. In March 2016, "The Times of India" reported that Apple had sought permission from the Indian government to sell refurbished iPhones in the country. However, two months later, the application was rejected, citing official country policy. In May 2016, Apple opened an iOS app development center in Bangalore and a maps development office for 4,000 staff in Hyderabad. In February 2017, Apple once again requested permission to sell used iPhones in the country. The same month, "Bloomberg" reported that Apple was close to receiving permission to open its first retail store in the country. In March, "The Wall Street Journal" reported that Apple would begin manufacturing iPhone models in India "over the next two months", and in May, the "Journal" wrote that an Apple manufacturer had begun production of iPhone SE in the country, while Apple told "CNBC" that the manufacturing was for a "small number" of units. Reuters reported in December 2017, that Apple and the Indian government were clashing over planned increases to import taxes for components used in mobile phone production, with Apple having engaged in talks with government officials to try to delay the plans, but the Indian government sticking to its policies of no exemptions to its "Make in India" initiative. 
The import tax increases went into effect a few days later, with Apple being hurt the most out of all phone manufacturers, having nine out of ten of its phones imported into the country, whereas main smartphone competitor Samsung produces almost all of its devices locally. In April 2019, Apple initiated manufacturing of iPhone 7 at its Bengaluru facility, keeping in mind demand from local customers even as it seeks more incentives from the government of India. At the beginning of 2020, Tim Cook announced that Apple planned to open its first physical outlet in India in 2021, while an online store was to be launched by the end of the year. In May 2017, the company announced a $1 billion funding project for "advanced manufacturing" in the United States, and subsequently invested $200 million in Corning Inc., a manufacturer of toughened Gorilla Glass technology used in its iPhone devices. The following December, Apple's chief operating officer, Jeff Williams, told "CNBC" that the "$1 billion" amount was "absolutely not" the final limit on its spending, elaborating that "We're not thinking in terms of a fund limit. ... We're thinking about, where are the opportunities across the U.S. to help nurture companies that are making the advanced technology — and the advanced manufacturing that goes with that — that quite frankly is essential to our innovation". The company advertised its products as being made in America until the late 1990s; however, as a result of outsourcing initiatives in the 2000s, almost all of its manufacturing is now handled abroad. According to a report by "The New York Times", Apple insiders "believe the vast scale of overseas factories, as well as the flexibility, diligence and industrial skills of foreign workers, have so outpaced their American counterparts that "Made in the U.S.A." is no longer a viable option for most Apple products". In 2006, the "Mail on Sunday" reported on the working conditions of the Chinese factories where contract manufacturers Foxconn and Inventec produced the iPod. The article stated that one complex of factories that assembled the iPod and other items had over 200,000 workers living and working within it. Employees regularly worked more than 60 hours per week and made around $100 per month. A little over half of the workers' earnings was required to pay for rent and food from the company. Apple immediately launched an investigation after the 2006 media report, and worked with its manufacturers to ensure acceptable working conditions. In 2007, Apple started yearly audits of all its suppliers regarding workers' rights, slowly raising standards and pruning suppliers that did not comply. Yearly progress reports have been published since 2008. In 2011, Apple admitted that its suppliers' child labor practices in China had worsened. The Foxconn suicides occurred between January and November 2010, when 18 Foxconn (Chinese: 富士康) employees attempted suicide, resulting in 14 deaths—the company was the world's largest contract electronics manufacturer, for clients including Apple, at the time. The suicides drew media attention, and employment practices at Foxconn were investigated by Apple. Apple issued a public statement about the suicides, and company spokesperson Steven Dowling said: The statement was released after the results from the company's probe into its suppliers' labor practices were published in early 2010. 
Foxconn was not specifically named in the report, but Apple identified a series of serious violations of labor laws, including Apple's own rules, and found that child labor existed in a number of factories. Apple committed to the implementation of changes following the suicides. Also in 2010, workers in China planned to sue iPhone contractors over poisoning by a cleaner used to clean LCD screens. One worker claimed that he and his coworkers had not been informed of possible occupational illnesses. After a high suicide rate in a Foxconn facility in China making iPads and iPhones, albeit a lower rate than that of China as a whole, workers were forced to sign a legally binding document guaranteeing that they would not kill themselves. Workers in factories producing Apple products have also been exposed to n-hexane, a neurotoxin that is a cheaper alternative to alcohol for cleaning the products. A 2014 BBC investigation found excessive hours and other problems persisted, despite Apple's promise to reform factory practice after the 2010 Foxconn suicides. The Pegatron factory was once again the subject of review, as reporters gained access to the working conditions inside through recruitment as employees. While the BBC maintained that the experiences of its reporters showed that labor violations had continued since 2010, Apple publicly disagreed with the BBC and stated: "We are aware of no other company doing as much as Apple to ensure fair and safe working conditions". In December 2014, the Institute for Global Labour and Human Rights published a report which documented inhumane conditions for the 15,000 workers at a Zhen Ding Technology factory in Shenzhen, China, which serves as a major supplier of circuit boards for Apple's iPhone and iPad. According to the report, workers are pressured into 65-hour work weeks, which leaves them so exhausted that they often sleep during lunch breaks. They are also made to reside in "primitive, dark and filthy dorms" where they sleep "on plywood, with six to ten workers in each crowded room." Omnipresent security personnel also routinely harass and beat the workers. In 2019, there were reports stating that some of Foxconn's managers had used rejected parts to build iPhones, and that Apple was investigating the issue. Apple Energy, LLC is a wholly owned subsidiary of Apple Inc. that sells solar energy. Apple's solar farms in California and Nevada have been declared to provide 217.9 megawatts of solar generation capacity. In addition to the company's solar energy production, Apple has received regulatory approval to construct a landfill gas energy plant in North Carolina. Apple will use the methane emissions to generate electricity. Apple's North Carolina data center is already powered entirely with energy from renewable sources. Following a Greenpeace protest, Apple released a statement on April 17, 2012, committing to ending its use of coal and shifting to 100% renewable clean energy. By 2013, Apple was using 100% renewable energy to power its data centers. Overall, 75% of the company's power came from clean renewable sources. In 2010, Climate Counts, a nonprofit organization dedicated to directing consumers toward the greenest companies, gave Apple a score of 52 points out of a possible 100, which placed Apple in its top category, "Striding". 
This was an increase from May 2008, when Climate Counts only gave Apple 11 points out of 100, which placed the company last among electronics companies, at which time Climate Counts also labeled Apple with a "stuck icon", adding that Apple at the time was "a choice to avoid for the climate-conscious consumer". In May 2015, Greenpeace evaluated the state of the Green Internet and commended Apple on its environmental practices, saying, "Apple's commitment to renewable energy has helped set a new bar for the industry, illustrating in very concrete terms that a 100% renewable Internet is within its reach, and providing several models of intervention for other companies that want to build a sustainable Internet." Apple states that 100% of its U.S. operations run on renewable energy, 100% of Apple's data centers run on renewable energy and 93% of Apple's global operations run on renewable energy. However, the facilities are connected to the local grid, which usually contains a mix of fossil and renewable sources, so Apple carbon offsets its electricity use. The Electronic Product Environmental Assessment Tool (EPEAT) allows consumers to see the effect a product has on the environment. Each product receives a Gold, Silver, or Bronze rank depending on its efficiency and sustainability. Every Apple tablet, notebook, desktop computer, and display that EPEAT ranks achieves a Gold rating, the highest possible. Although Apple's data centers recycle water 35 times, the increased activity in retail, corporate and data centers also increased the amount of water used in 2015. During an event on March 21, 2016, Apple provided a status update on its environmental initiative to be 100% renewable in all of its worldwide operations. Lisa P. Jackson, Apple's vice president of Environment, Policy and Social Initiatives, who reports directly to CEO Tim Cook, announced that 93% of Apple's worldwide operations were powered with renewable energy. Also featured were the company's efforts to use sustainable paper in its product packaging; 99% of all paper used by Apple in its product packaging comes from post-consumer recycled paper or sustainably managed forests, as the company continues its move to all-paper packaging for all of its products. Apple, working in partnership with the Conservation Fund, has preserved 36,000 acres of working forests in Maine and North Carolina. Another partnership announced is with the World Wildlife Fund to preserve up to of forests in China. Featured was the company's installation of a 40 MW solar power plant in the Sichuan province of China that was tailor-made to coexist with the indigenous yaks that eat hay produced on the land, by raising the panels several feet off the ground so the yaks and their feed would be unharmed while grazing beneath the array. This installation alone compensates for more than all of the energy used in Apple's stores and offices in the whole of China, negating the company's energy carbon footprint in the country. In Singapore, Apple has worked with the Singaporean government to cover the rooftops of 800 buildings in the city-state with solar panels, allowing Apple's Singapore operations to run on 100% renewable energy. Apple also introduced Liam, an advanced robotic disassembler and sorter designed by Apple engineers in California specifically to recycle outdated or broken iPhones, reusing and recycling parts from traded-in products. 
Apple announced on August 16, 2016, that Lens Technology, one of its major suppliers in China, has committed to power all its glass production for Apple with 100 percent renewable energy by 2018. The commitment is a large step in Apple's efforts to help manufacturers lower their carbon footprint in China. Apple also announced that all 14 of its final assembly sites in China are now compliant with UL's Zero Waste to Landfill validation. The standard, which started in January 2015, certifies that all manufacturing waste is reused, recycled, composted, or converted into energy (when necessary). Since the program began, nearly 140,000 metric tons of waste have been diverted from landfills. Following further campaigns by Greenpeace, in 2008, Apple became the first electronics manufacturer to fully eliminate all polyvinyl chloride (PVC) and brominated flame retardants (BFRs) in its complete product line. In June 2007, Apple began replacing the cold cathode fluorescent lamp (CCFL) backlit LCD displays in its computers with mercury-free LED-backlit LCD displays and arsenic-free glass, starting with the upgraded MacBook Pro. Apple offers comprehensive and transparent information about the CO2e emissions, materials, and electrical usage of every product it currently produces or has sold in the past (for which it has enough data to produce a report) in the portfolio on its homepage, allowing consumers to make informed purchasing decisions on the products it offers for sale. In June 2009, Apple's iPhone 3GS was free of PVC, arsenic, and BFRs. All Apple products now have mercury-free LED-backlit LCD displays, arsenic-free glass, and non-PVC cables. All Apple products have EPEAT Gold status and beat the latest Energy Star guidelines in each product's respective regulatory category. In November 2011, Apple was featured in Greenpeace's Guide to Greener Electronics, which ranks electronics manufacturers on sustainability, climate and energy policy, and how "green" their products are. The company ranked fourth of fifteen electronics companies (moving up five places from the previous year) with a score of 4.6/10. Greenpeace praises Apple's sustainability, noting that the company exceeded its 70% global recycling goal in 2010. It continues to score well on the products rating, with all Apple products now being free of PVC plastic and BFRs. However, the guide criticizes Apple on the Energy criteria for not seeking external verification of its greenhouse gas emissions data and for not setting out any targets to reduce emissions. In January 2012, Apple requested that its cable maker, Volex, begin producing halogen-free USB and power cables. In February 2016, Apple issued a US$1.5 billion green bond (climate bond), the first ever of its kind by a U.S. tech company. The green bond proceeds are dedicated to the financing of environmental projects. Apple is the world's largest information technology company by revenue, the world's largest technology company by total assets, and the world's second-largest mobile phone manufacturer after Samsung. In its fiscal year ending in September 2011, Apple Inc. reported a total of $108 billion in annual revenues—a significant increase from its 2010 revenues of $65 billion—and nearly $82 billion in cash reserves. On March 19, 2012, Apple announced plans for a $2.65-per-share dividend beginning in the fourth quarter of 2012, per approval by its board of directors. The company's worldwide annual revenue in 2013 totaled $170 billion. 
In May 2013, Apple entered the top ten of the "Fortune" 500 list of companies for the first time, rising 11 places above its 2012 ranking to take the sixth position. Apple has around US$234 billion of cash and marketable securities, of which 90% is located outside the United States for tax purposes. Apple amassed 65% of all profits made by the eight largest worldwide smartphone manufacturers in quarter one of 2014, according to a report by Canaccord Genuity. In the first quarter of 2015, the company garnered 92% of all earnings. On April 30, 2017, "The Wall Street Journal" reported that Apple had cash reserves of $250 billion, officially confirmed by Apple as specifically $256.8 billion a few days later. Apple was the largest publicly traded corporation in the world by market capitalization. On August 2, 2018, Apple became the first publicly traded U.S. company to reach a $1 trillion market value. Apple was ranked No. 4 on the 2018 "Fortune" 500 rankings of the largest United States corporations by total revenue. Apple has created subsidiaries in low-tax places such as Ireland, the Netherlands, Luxembourg, and the British Virgin Islands to cut the taxes it pays around the world. According to "The New York Times," in the 1980s Apple was among the first tech companies to designate overseas salespeople in high-tax countries in a manner that allowed the company to sell on behalf of low-tax subsidiaries on other continents, sidestepping income taxes. In the late 1980s, Apple was a pioneer of an accounting technique known as the "Double Irish with a Dutch sandwich," which reduces taxes by routing profits through Irish subsidiaries and the Netherlands and then to the Caribbean. British Conservative Party Member of Parliament Charlie Elphicke published research on October 30, 2012, which showed that some multinational companies, including Apple Inc., were making billions of pounds of profit in the UK, but were paying an effective tax rate to the UK Treasury of only 3 percent, well below standard corporation tax. He followed this research by calling on the Chancellor of the Exchequer George Osborne to force these multinationals, which also included Google and The Coca-Cola Company, to state the effective rate of tax they pay on their UK revenues. Elphicke also said that government contracts should be withheld from multinationals who do not pay their fair share of UK tax. Apple Inc. claims to be the single largest taxpayer to the Department of the Treasury of the United States of America, with an effective tax rate of approximately 26% as of the second quarter of the Apple fiscal year 2016. In an interview with the German newspaper FAZ in October 2017, Tim Cook stated that Apple is the biggest taxpayer worldwide. In 2015, Reuters reported that Apple had earnings abroad of $54.4 billion which were untaxed by the IRS of the United States. Under U.S. tax law governed by the IRC, corporations don't pay income tax on overseas profits unless the profits are repatriated into the United States; as such, Apple argues that, to benefit its shareholders, it will leave the money overseas until a repatriation holiday or comprehensive tax reform takes place in the United States. On July 12, 2016, the Central Statistics Office of Ireland announced that 2015 Irish GDP had grown by 26.3%, and 2015 Irish GNP had grown by 18.7%. The figures attracted international scorn, and were labelled by Nobel Prize-winning economist Paul Krugman as "leprechaun economics". 
It was not until 2018 that Irish economists could definitively prove that the 2015 growth was due to Apple restructuring its controversial double Irish subsidiaries (Apple Sales International), which Apple converted into a new Irish capital allowances for intangible assets tax scheme (which expires in January 2020). The affair required the Central Bank of Ireland to create a new measure of Irish economic growth, Modified GNI*, to replace Irish GDP, given the distortion of Apple's tax schemes. Irish GDP is 143% of Irish Modified GNI*. On August 30, 2016, after a two-year investigation, the EU Competition Commissioner concluded Apple received "illegal State aid" from Ireland. The EU ordered Apple to pay 13 billion euros ($14.5 billion), plus interest, in unpaid Irish taxes for 2004–2014. It is the largest tax fine in history. The Commission found that Apple had benefitted from a private Irish Revenue Commissioners tax ruling regarding its double Irish tax structure, Apple Sales International (ASI). Instead of using two companies for its double Irish structure, Apple was given a ruling to split ASI into two internal "branches". The Chancellor of Austria, Christian Kern, put this decision into perspective by stating that "every Viennese cafe, every sausage stand pays more tax in Austria than a multinational corporation". Apple agreed to start paying €13 billion in back taxes to the Irish government; the repayments will be held in an escrow account while Apple and the Irish government continue their appeals in EU courts. The following individuals sit on the board of Apple Inc. The management of Apple Inc. includes: Apple has been a participant in various legal proceedings and claims since it began operation. In particular, Apple is known for and promotes itself as actively and aggressively enforcing its intellectual property interests. Some litigation examples include "Apple v. Samsung", "Apple v. Microsoft", "Motorola Mobility v. Apple Inc.", and "Apple Corps v. Apple Computer". Apple has also had to defend itself on numerous occasions against charges of violating intellectual property rights. Most of these suits have been dismissed in the courts as having been brought by shell companies known as patent trolls, with no evidence of actual use of the patents in question. On December 21, 2016, Nokia announced that it had filed a suit against Apple in the U.S. and Germany, claiming that the latter's products infringe on Nokia's patents. Most recently, in November 2017, the United States International Trade Commission announced an investigation into allegations of patent infringement in regards to Apple's remote desktop technology; Aqua Connect, a company that builds remote desktop software, has claimed that Apple infringed on two of its patents. Apple has a notable pro-privacy stance, actively making privacy-conscious features and settings part of its conferences, promotional campaigns, and public image. With its iOS 8 mobile operating system in 2014, the company started encrypting all contents of iOS devices through users' passcodes, making it impossible at the time for the company to provide customer data to law enforcement requests seeking such information. With the rise in popularity of cloud storage solutions, Apple began using a technique in 2016 that performs deep learning scans for facial data in photos on the user's local device and encrypts the content before uploading it to Apple's iCloud storage system. 
It also introduced "differential privacy", a way to collect crowdsourced data from many users, while keeping individual users anonymous, in a system that "Wired" described as "trying to learn as much as possible about a group while learning as little as possible about any individual in it". Users are explicitly asked if they want to participate, and can actively opt in or opt out. However, Apple aids law enforcement in criminal investigations by providing iCloud backups of users' devices, and the company's commitment to privacy has been questioned by its efforts to promote biometric authentication technology in its newer iPhone models, which don't have the same level of constitutional privacy as a passcode in the United States. Apple is a partner of (PRODUCT)RED, a fundraising campaign for AIDS charity. In November 2014, Apple arranged for all App Store revenue in a two-week period to go to the fundraiser, generating more than US$20 million, and in March 2017, it released an iPhone 7 with a red color finish. Apple contributes financially to fundraisers in times of natural disasters. In November 2012, it donated $2.5 million to the American Red Cross to aid relief efforts after Hurricane Sandy, and in 2017 it donated $5 million to relief efforts for both Hurricane Irma and Hurricane Harvey, as well as for the 2017 Central Mexico earthquake. The company has also used its iTunes platform to encourage donations, including helping the American Red Cross in the aftermath of the 2010 Haiti earthquake, followed by similar campaigns in the aftermath of the 2011 Japan earthquake, Typhoon Haiyan in the Philippines in November 2013, and the European migrant crisis in September 2015. Apple emphasizes that it does not take any processing or other fees for iTunes donations, sending 100% of the payments directly to relief efforts, though it also acknowledges that the Red Cross does not receive any personal information on the users donating and that the payments may not be tax deductible. On April 14, 2016, Apple and the World Wide Fund for Nature (WWF) announced that they have engaged in a partnership to "help protect life on our planet." Apple released a special page in the iTunes App Store, Apps for Earth. In the arrangement, Apple committed that, through April 24, WWF would receive 100% of the proceeds from the participating applications in the App Store, via both purchases of paid apps and in-app purchases. Apple and WWF's Apps for Earth campaign raised more than $8 million in total proceeds to support WWF's conservation work. WWF announced the results at WWDC 2016 in San Francisco. During the COVID-19 pandemic, Apple's CEO Cook announced that the company would be donating "millions" of masks to health workers in the United States and Europe. Apple has been criticized for alleged unethical business practices such as anti-competitive behavior, rash litigation, dubious tax tactics, production methods involving the use of sweatshop labor, customer service issues involving allegedly misleading warranties and insufficient data security, and its products' environmental footprint. Critics have claimed that Apple products combine stolen and/or purchased designs that Apple claims are its original creations. It has been criticized for its alleged collaboration with the U.S. surveillance program PRISM. 
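The "differential privacy" idea mentioned above is often explained through the classic randomized response technique, in which each device adds calibrated random noise to its own report before anything leaves the device, so the collector can estimate population-level statistics without learning any individual's true answer. The following Python sketch is purely illustrative and is not Apple's implementation (Apple's deployed system uses more elaborate mechanisms); the function names, the 0.75 truth probability, and the simulated data are hypothetical assumptions chosen for the example.

import random

def randomized_response(true_value: bool, p_truth: float = 0.75) -> bool:
    # Report the true answer with probability p_truth; otherwise report a fair coin flip.
    # Any single report is therefore deniable, which is what protects the individual.
    if random.random() < p_truth:
        return true_value
    return random.random() < 0.5

def estimate_rate(reports, p_truth: float = 0.75) -> float:
    # Invert the noise: observed_rate = p_truth * true_rate + (1 - p_truth) * 0.5
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

if __name__ == "__main__":
    random.seed(42)
    true_rate = 0.30  # hypothetical fraction of users with some attribute
    population = [random.random() < true_rate for _ in range(100_000)]
    reports = [randomized_response(v) for v in population]
    print(f"True rate:      {sum(population) / len(population):.3f}")
    print(f"Estimated rate: {estimate_rate(reports):.3f}")

With enough reports the estimate converges on the true population rate even though no individual report can be trusted, which is the aggregate-versus-individual trade-off the "Wired" description above alludes to.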
Apple's issues regarding music over the years include those with the European Union regarding iTunes, trouble over updating the Spotify app on Apple devices, and collusion with record labels. Apple has faced scrutiny for its tax practices, including using a Double Irish Arrangement to reduce the amount of taxes it pays. A 2013 US Senate report claimed that Apple had not paid corporate taxes for five years due to its deals with the Irish government. In 2016, the European Union ordered Apple to pay a fine for its actions. In 2018–19, Apple faced criticism for its failure to approve NVIDIA web drivers for GPUs installed in legacy Mac Pro machines (up to the mid-2012 5,1 models) running macOS Mojave 10.14. Without Apple-approved NVIDIA web drivers, Apple users are faced with replacing their NVIDIA cards with a supported competing brand, such as AMD Radeon cards from the list recommended by Apple. In June 2019, Apple issued a recall for its 2015 MacBook Pro Retina 15" affecting 432,000 units after reports of batteries catching fire. The recall was criticized as waiting times for replacements were up to 3 weeks and the company did not provide alternative replacements or repair options. Ireland's Data Protection Commission also launched a privacy investigation to examine whether Apple complied with the EU's GDPR law, following an inquiry into how the company processes personal data with targeted ads on its platform. In July 2019, following a campaign by the "right to repair" movement challenging Apple's tech repair restrictions on devices, the FTC held a workshop to establish the framework of a future nationwide Right to Repair rule. The movement argues Apple is preventing consumers from legitimately fixing their devices at local repair shops, which is having a negative impact on consumers. The United States Department of Justice also began a review of big tech firms in 2019 to establish whether they could be unlawfully stifling competition in a broad antitrust probe. In December 2019, a report found that the iPhone 11 Pro continues tracking location and collecting user data even after users have disabled location services. In response, an Apple engineer said the Location Services icon "appears for system services that do not have a switch in settings." In January 2020, US President Donald Trump criticized Apple for refusing to unlock two iPhones belonging to a Saudi national, Mohammed Saeed Alshamrani, who shot and killed three American sailors and injured eight others at a naval air base in Pensacola, Florida. The Pensacola shooting was declared an "act of terrorism" by the FBI, but Apple declined to unlock the phones, citing its data privacy policy. On March 16, 2020, France fined Apple €1.1 billion for colluding with two wholesalers to stifle competition and keep prices high by handicapping independent resellers. The arrangement created aligned prices for Apple products such as iPads and personal computers for about half the French retail market. According to the French regulators, the abuses occurred between 2005 and 2017, but were first discovered after a complaint by an independent reseller, eBizcuss, in 2012.
https://en.wikipedia.org/wiki?curid=856
Aberdeenshire Aberdeenshire is one of the 32 council areas of Scotland. It takes its name from the County of Aberdeen, which has substantially different boundaries. The Aberdeenshire council area includes all of the area of the historic counties of Aberdeenshire and Kincardineshire (except the area making up the City of Aberdeen), as well as part of Banffshire. The county boundaries are officially used for a few purposes, namely land registration and lieutenancy. Aberdeenshire Council is headquartered at Woodhill House, in Aberdeen, making it the only Scottish council whose headquarters are located outside its jurisdiction. Aberdeen itself forms a different council area (Aberdeen City). Aberdeenshire borders Angus and Perth and Kinross to the south, Highland and Moray to the west and Aberdeen City to the east. Traditionally, it has been economically dependent upon the primary sector (agriculture, fishing, and forestry) and related processing industries. Over the last 40 years, the development of the oil and gas industry and associated service sector has broadened Aberdeenshire's economic base, and contributed to a rapid population growth of some 50% since 1975. Its land represents 8% of Scotland's overall territory. It covers an area of . Aberdeenshire has a rich prehistoric and historic heritage. It is the locus of a large number of Neolithic and Bronze Age archaeological sites, including Longman Hill, Kempstone Hill, Catto Long Barrow and Cairn Lee. The area was settled in the Bronze Age by the Beaker culture, who arrived from the south around 2000–1800 BC. Stone circles and cairns were constructed predominantly in this era. In the Iron Age, hill forts were built. Around the 1st century AD, the Taexali people, who have left little history, were believed to have resided along the coast. The Picts were the next documented inhabitants of the area, and were present there no later than 800–900 AD. The Romans were also in the area during this period, as they left signs at Kintore. Christianity influenced the inhabitants early on, and there were Celtic monasteries at Old Deer and Monymusk. Since medieval times there have been a number of traditional paths that crossed the Mounth (a spur of mountainous land that extends from the higher inland range to the North Sea slightly north of Stonehaven) through present-day Aberdeenshire from the Scottish Lowlands to the Highlands. Some of the best-known and historically important trackways are the Causey Mounth and Elsick Mounth. Aberdeenshire played an important role in the fighting between the Scottish clans. Clan MacBeth and the Clan Canmore were two of the larger clans. Macbeth fell at Lumphanan in 1057. During the Anglo-Norman penetration, other families arrived, such as the House of Balliol, Clan Bruce, and Clan Cumming (Comyn). When the fighting amongst these newcomers resulted in the Scottish Wars of Independence, the English king Edward I traveled across the area twice, in 1296 and 1303. In 1307, Robert the Bruce was victorious near Inverurie. Along with his victory came new families, namely the Forbeses and the Gordons. These new families set the stage for the upcoming rivalries during the 14th and 15th centuries. This rivalry grew worse during and after the Protestant Reformation, when religion was another reason for conflict between the clans. The Gordon family adhered to Catholicism and the Forbeses to Protestantism. Aberdeenshire was the historic seat of the clan Dempster. 
Three universities were founded in the area prior to the 17th century, King's College in Old Aberdeen (1494), Marischal College in Aberdeen (1593), and the University of Fraserburgh (1597). After the end of the Revolution of 1688, an extended peaceful period was interrupted only by such fleeting events such as the Rising of 1715 and the Rising of 1745. The latter resulted in the end of the ascendancy of Episcopalianism and the feudal power of landowners. An era began of increased agricultural and industrial progress. During the 17th century, Aberdeenshire was the location of more fighting, centered on the Marquess of Montrose and the English Civil Wars. This period also saw increased wealth due to the increase in trade with Germany, Poland, and the Low Countries. The present council area is named after the historic county of Aberdeenshire, which has different boundaries and was abandoned as an administrative area in 1975 under the Local Government (Scotland) Act 1973. It was replaced by Grampian Regional Council and five district councils: Banff and Buchan, Gordon, Kincardine and Deeside, Moray and the City of Aberdeen. Local government functions were shared between the two levels. In 1996, under the Local Government etc (Scotland) Act 1994, the Banff and Buchan district, Gordon district and Kincardine and Deeside district were merged to form the present Aberdeenshire council area. Moray and the City of Aberdeen were made their own council areas. The present Aberdeenshire council area consists of all of the historic counties of Aberdeenshire and Kincardineshire (except the area of those two counties making up the City of Aberdeen), as well as northeast portions of Banffshire. The population of the council area has risen over 50% since 1971 to approximately , representing 4.7% of Scotland's total. Aberdeenshire's population has increased by 9.1% since 2001, while Scotland's total population grew by 3.8%. The census lists a relatively high proportion of under 16s and slightly fewer people of working-age compared with the Scottish average. Aberdeenshire is one of the most homogeneous/indigenous regions of the UK. In 2011 82.2% of residents identified as 'White Scottish', followed by 12.3% who are 'White British', whilst ethnic minorities constitute only 0.9% of the population. The largest ethnic minority group are Asian Scottish/British at 0.8%. In addition to the English language, 48.8% of residents reported being able to speak and understand the Scots language. The fourteen biggest settlements in Aberdeenshire (with 2011 population estimates) are: Aberdeenshire's Gross Domestic Product (GDP) is estimated at £3,496m (2011), representing 5.2% of the Scottish total. Aberdeenshire's economy is closely linked to Aberdeen City's (GDP £7,906m) and in 2011 the region as a whole was calculated to contribute 16.8% of Scotland's GDP. Between 2012 and 2014 the combined Aberdeenshire and Aberdeen City economic forecast GDP growth rate is 8.6%, the highest growth rate of any local council area in the UK and above the Scottish rate of 4.8%. A significant proportion of Aberdeenshire's working residents commute to Aberdeen City for work, varying from 11.5% from Fraserburgh to 65% from Westhill. Average Gross Weekly Earnings (for full-time employees employed in work places in Aberdeenshire in 2011) are £572.60. This is lower than the Scottish average by £2.10 and a fall of 2.6% on the 2010 figure. 
The average gross weekly pay of people resident in Aberdeenshire is much higher, at £741.90, as many people commute out of Aberdeenshire, principally into Aberdeen City. Total employment (excluding farm data) in Aberdeenshire is estimated at 93,700 employees (Business Register and Employment Survey 2009). The majority of employees work within the service sector, predominantly in public administration, education and health. Almost 19% of employment is within the public sector. Aberdeenshire's economy remains closely linked to Aberdeen City's and the North Sea oil industry, with many employees in oil-related jobs. The average monthly unemployment (claimant count) rate for Aberdeenshire in 2011 was 1.5%. This is lower than the average rates for Aberdeen City (2.3%), Scotland (4.2%) and the UK (3.8%). The council has 70 councillors, elected in 19 multi-member wards by single transferable vote. The 2017 elections resulted in the following representation: The overall political composition of the council, following subsequent defections and by-elections, is as follows: The Council's Revenue Budget for 2012/13 totals approximately £548 million. The Education, Learning and Leisure Service takes the largest share of the budget (52.3%), followed by Housing and Social Work (24.3%), Infrastructure Services (15.9%), Joint Boards (such as Fire and Police) and miscellaneous services (7.9%) and Trading Activities (0.4%). 21.5% of the revenue is raised locally through the Council Tax. Average Band D Council Tax is £1,141 (2012/13), no change on the previous year. The current chief executive of the Council is Jim Savege and the elected Council Leader is Jim Gifford. Aberdeenshire also has a Provost, who is Councillor Bill Howatson. The council has devolved power to six area committees: Banff and Buchan; Buchan; Formartine; Garioch; Marr; and Kincardine and Mearns. Each area committee takes decisions on local issues such as planning applications, and the split is meant to reflect the diverse circumstances of each area. The following significant structures or places are within Aberdeenshire: There are numerous rivers and burns in Aberdeenshire, including Cowie Water, Carron Water, Burn of Muchalls, River Dee, River Don, River Ury, River Ythan, Water of Feugh, Burn of Myrehouse, Laeca Burn and Luther Water. Numerous bays and estuaries are found along the seacoast of Aberdeenshire, including Banff Bay, Ythan Estuary, Stonehaven Bay and Thornyhive Bay. Aberdeenshire is in the rain shadow of the Grampians, so it has a generally dry climate, with portions of the coast receiving of moisture annually. Summers are mild and winters are typically cold in Aberdeenshire; coastal temperatures are moderated by the North Sea such that coastal areas are typically cooler in the summer and warmer in winter than inland locations. Coastal areas are also subject to haar, or coastal fog.
https://en.wikipedia.org/wiki?curid=857
American Civil War The American Civil War (also known by other names) was a civil war in the United States from 1861 to 1865, fought between northern states loyal to the Union and southern states that had seceded from the Union to form the Confederate States of America. The civil war began primarily as a result of the long-standing controversy over the enslavement of black people. War broke out in April 1861 when secessionist forces attacked Fort Sumter in South Carolina just over a month after Abraham Lincoln had been inaugurated as the President of the United States. The loyalists of the Union in the North, which also included some geographically western and southern states, proclaimed support for the Constitution. They faced secessionists of the Confederate States in the South, who advocated for states' rights to uphold slavery. Of the 34 U.S. states in February 1861, seven Southern slave-holding states were declared by their state governments to have seceded from the country, and the Confederate States of America was organized in rebellion against the U.S. constitutional government. The Confederacy grew to control at least a majority of territory in eleven states, and it claimed the additional states of Kentucky and Missouri by assertions from native secessionists fleeing Union authority. These states were given full representation in the Confederate Congress throughout the Civil War. The two remaining slave-holding states, Delaware and Maryland, were invited to join the Confederacy, but nothing substantial developed due to intervention by federal troops. The Confederate states were never diplomatically recognized as a joint entity by the government of the United States, nor by that of any foreign country. The states that remained loyal to the U.S. were known as the Union. The Union and the Confederacy quickly raised volunteer and conscription armies that fought mostly in the South for four years. Intense combat left between 620,000 and 750,000 people dead. The Civil War remains the deadliest military conflict in American history, and accounted for more American military deaths than all other wars combined until the Vietnam War. The war effectively ended on April 9, 1865, when Confederate General Robert E. Lee surrendered to Union General Ulysses S. Grant at the Battle of Appomattox Court House. Confederate generals throughout the Southern states followed suit, the last surrender on land occurring June 23. Much of the South's infrastructure was destroyed, especially its railroads. The Confederacy collapsed, slavery was abolished, and four million black slaves were freed. The war is one of the most studied and written about episodes in U.S. history. In the 1860 presidential election, Republicans, led by Abraham Lincoln, supported banning slavery in all the U.S. territories (parts of the U.S. that are not states). The Southern states viewed this as a violation of their constitutional rights, and as the first step in a grander Republican plan to eventually abolish slavery. The three pro-Union candidates together received an overwhelming 82% majority of the votes cast nationally: Republican Lincoln's votes centered in the north, Democrat Stephen A. Douglas' votes were distributed nationally and Constitutional Unionist John Bell's votes centered in Tennessee, Kentucky, and Virginia. The Republican Party, dominant in the North, secured a plurality of the popular votes and a majority of the electoral votes nationally; thus Lincoln was elected president. 
He was the first Republican Party candidate to win the presidency. The South was outraged, and before his inauguration, seven slave states with cotton-based economies declared secession and formed the Confederacy. The first six to declare secession had the highest proportions of slaves in their populations, with an average of 49 percent. Of those states whose legislatures resolved for secession, the first seven voted with split majorities for unionist candidates Douglas and Bell (Georgia with 51% and Louisiana with 55%), or with sizable minorities for those unionists (Alabama with 46%, Mississippi with 40%, Florida with 38%, Texas with 25%, and South Carolina, which cast Electoral College votes without a popular vote for president). Eight remaining slave states continued to reject calls for secession. Outgoing Democratic President James Buchanan and the incoming Republicans rejected secession as illegal. Lincoln's March 4, 1861, inaugural address declared that his administration would not initiate a civil war. Speaking directly to the "Southern States", he attempted to calm their fears of any threats to slavery, reaffirming, "I have no purpose, directly or indirectly to interfere with the institution of slavery in the United States where it exists. I believe I have no lawful right to do so, and I have no inclination to do so." After Confederate forces seized numerous federal forts within territory claimed by the Confederacy, efforts at compromise failed and both sides prepared for war. The Confederates assumed that European countries were so dependent on "King Cotton" that they would intervene, but none did, and none recognized the new Confederate States of America. Hostilities began on April 12, 1861, when Confederate forces fired upon Fort Sumter. While in the Western Theater the Union made significant permanent gains, in the Eastern Theater, the battle was inconclusive during 1861–1862. In September 1862, Lincoln issued the Emancipation Proclamation, which made ending slavery a war goal. To the west, the Union destroyed the Confederate river navy by summer of 1862, then much of its western armies, and seized New Orleans. The successful 1863 Union siege of Vicksburg split the Confederacy in two at the Mississippi River. In 1863, Robert E. Lee's Confederate incursion north ended at the Battle of Gettysburg. Western successes led to Ulysses S. Grant's command of all Union armies in 1864. Inflicting an ever-tightening naval blockade of Confederate ports, the Union marshaled resources and manpower to attack the Confederacy from all directions, leading to the fall of Atlanta to William Tecumseh Sherman and his march to the sea. The last significant battles raged around the Siege of Petersburg. Lee's escape attempt ended with his surrender at Appomattox Court House, on April 9, 1865. While the military war was coming to an end, the political reintegration of the nation was to take another 12 years, known as the Reconstruction era. The American Civil War was among the earliest industrial wars. Railroads, the telegraph, steamships and iron-clad ships, and mass-produced weapons were employed extensively. The mobilization of civilian factories, mines, shipyards, banks, transportation, and food supplies all foreshadowed the impact of industrialization in World War I, World War II, and subsequent conflicts. It remains the deadliest war in American history. From 1861 to 1865, it is estimated that 620,000 to 750,000 soldiers died, along with an undetermined number of civilians. 
By one estimate, the war claimed the lives of 10 percent of all Northern men 20–45 years old, and 30 percent of all Southern white men aged 18–40. The causes of secession were complex and have been controversial since the war began, but most academic scholars identify slavery as a central cause of the war. James C. Bradford wrote that the issue has been further complicated by historical revisionists, who have tried to offer a variety of reasons for the war. Slavery was the central source of escalating political tension in the 1850s. The Republican Party was determined to prevent any spread of slavery, and many Southern leaders had threatened secession if the Republican candidate, Lincoln, won the 1860 election. After Lincoln won, many Southern leaders felt that disunion was their only option, fearing that the loss of representation would hamper their ability to promote pro-slavery acts and policies. Slavery was a major cause of disunion. Although there were opposing views even in the Union States, most Northern soldiers were mostly indifferent on the subject of slavery, while Confederates fought the war mainly to protect a Southern society of which slavery was an integral part. From the anti-slavery perspective, the issue was primarily about whether the system of slavery was an anachronistic evil that was incompatible with republicanism. The strategy of the anti-slavery forces was containment—to stop the expansion and thus put slavery on a path to gradual extinction. The slave-holding interests in the South denounced this strategy as infringing upon their Constitutional rights. Southern whites believed that the emancipation of slaves would destroy the South's economy, due to the large amount of capital invested in slaves and fears of integrating the ex-slave black population. In particular, Southerners feared a repeat of "the horrors of Santo Domingo", in which nearly all white people – including men, women, children, and even many sympathetic to abolition – were killed after the successful slave revolt in Haiti. Historian Thomas Fleming points to the historical phrase "a disease in the public mind" used by critics of this idea, and proposes it contributed to the segregation in the Jim Crow era following emancipation. These fears were exacerbated by the 1859 attempt of John Brown to instigate an armed slave rebellion in the South. Slavery was illegal in much of the North, having been outlawed in the late 18th and early 19th centuries. It was also fading in the border states and Southern cities, but it was expanding in the highly profitable cotton districts of the rural South and Southwest. Subsequent writers on the American Civil War looked to several factors explaining the geographic divide. Between 1803 and 1854, the United States achieved a vast expansion of territory through purchase, negotiation, and conquest. At first, the new states carved out of these territories entering the union were apportioned equally between slave and free states. Pro- and anti-slavery forces collided over the territories west of the Mississippi. With the conquest of northern Mexico west to California in 1848, slaveholding interests looked forward to expanding into these lands and perhaps Cuba and Central America as well. Northern "free soil" interests vigorously sought to curtail any further expansion of slave territory. The Compromise of 1850 over California balanced a free-soil state with stronger fugitive slave laws for a political settlement after four years of strife in the 1840s. 
But the states admitted following California were all free: Minnesota (1858), Oregon (1859), and Kansas (1861). In the Southern states the question of the territorial expansion of slavery westward again became explosive. Both the South and the North drew the same conclusion: "The power to decide the question of slavery for the territories was the power to determine the future of slavery itself." By 1860, four doctrines had emerged to answer the question of federal control in the territories, and they all claimed they were sanctioned by the Constitution, implicitly or explicitly. The first of these "conservative" theories, represented by the Constitutional Union Party, argued that the Missouri Compromise apportionment of territory north for free soil and south for slavery should become a Constitutional mandate. The Crittenden Compromise of 1860 was an expression of this view. The second doctrine of Congressional preeminence, championed by Abraham Lincoln and the Republican Party, insisted that the Constitution did not bind legislators to a policy of balance—that slavery could be excluded in a territory as it was done in the Northwest Ordinance of 1787 at the discretion of Congress; thus Congress could restrict human bondage, but never establish it. The Wilmot Proviso announced this position in 1846. Senator Stephen A. Douglas proclaimed the doctrine of territorial or "popular" sovereignty—which asserted that the settlers in a territory had the same rights as states in the Union to establish or disestablish slavery as a purely local matter. The Kansas–Nebraska Act of 1854 legislated this doctrine. In the Kansas Territory, years of pro and anti-slavery violence and political conflict erupted; the congressional House of Representatives voted to admit Kansas as a free state in early 1860, but its admission did not pass the Senate until January 1861, after the departure of Southern senators. The fourth theory was advocated by Mississippi Senator Jefferson Davis, one of state sovereignty ("states' rights"), also known as the "Calhoun doctrine", named after the South Carolinian political theorist and statesman John C. Calhoun. Rejecting the arguments for federal authority or self-government, state sovereignty would empower states to promote the expansion of slavery as part of the federal union under the U.S. Constitution. "States' rights" was an ideology formulated and applied as a means of advancing slave state interests through federal authority. As historian Thomas L. Krannawitter points out, the "Southern demand for federal slave protection represented a demand for an unprecedented expansion of federal power." These four doctrines comprised the dominant ideologies presented to the American public on the matters of slavery, the territories, and the U.S. Constitution before the 1860 presidential election. The South argued that just as each state had decided to join the Union, a state had the right to secede—leave the Union—at any time. Northerners (including President Buchanan) rejected that notion as opposed to the will of the Founding Fathers, who said they were setting up a perpetual union. Historian James McPherson writes concerning states' rights and other non-slavery explanations: Sectionalism resulted from the different economies, social structure, customs, and political values of the North and South. 
Regional tensions came to a head during the War of 1812, resulting in the Hartford Convention, which manifested Northern dissatisfaction with a foreign trade embargo that affected the industrial North disproportionately, the Three-Fifths Compromise, dilution of Northern power by new states, and a succession of Southern presidents. Sectionalism increased steadily between 1800 and 1860 as the North, which phased slavery out of existence, industrialized, urbanized, and built prosperous farms, while the deep South concentrated on plantation agriculture based on slave labor, together with subsistence agriculture for poor whites. In the 1840s and 1850s, the issue of accepting slavery (in the guise of rejecting slave-owning bishops and missionaries) split the nation's largest religious denominations (the Methodist, Baptist, and Presbyterian churches) into separate Northern and Southern denominations. Historians have debated whether economic differences between the mainly industrial North and the mainly agricultural South helped cause the war. Most historians now disagree with the economic determinism of historian Charles A. Beard in the 1920s, and emphasize that Northern and Southern economies were largely complementary. While socially different, the sections economically benefited each other. Slave owners preferred low-cost manual labor with no mechanization. Northern manufacturing interests supported tariffs and protectionism while Southern planters demanded free trade. The Democrats in Congress, controlled by Southerners, wrote the tariff laws in the 1830s, 1840s, and 1850s, and kept reducing rates so that the 1857 rates were the lowest since 1816. The Republicans called for an increase in tariffs in the 1860 election. The increases were only enacted in 1861 after Southerners resigned their seats in Congress. The tariff issue was a Northern grievance. However, neo-Confederate writers have claimed it as a Southern grievance. In 1860–61 none of the groups that proposed compromises to head off secession raised the tariff issue. Pamphleteers North and South rarely mentioned the tariff. Nationalism was a powerful force in the early 19th century, with famous spokesmen such as Andrew Jackson and Daniel Webster. While practically all Northerners supported the Union, Southerners were split between those loyal to the entire United States (called "Unionists") and those loyal primarily to the Southern region and then the Confederacy. Perceived insults to Southern collective honor included the enormous popularity of "Uncle Tom's Cabin" (1852) and the actions of abolitionist John Brown in trying to incite a slave rebellion in 1859. While the South moved towards a Southern nationalism, leaders in the North were also becoming more nationally minded, and they rejected any notion of splitting the Union. The Republican national electoral platform of 1860 warned that Republicans regarded disunion as treason and would not tolerate it. The South ignored the warnings; Southerners did not realize how ardently the North would fight to hold the Union together. The election of Abraham Lincoln in November 1860 was the final trigger for secession. Efforts at compromise, including the Corwin Amendment and the Crittenden Compromise, failed. Southern leaders feared that Lincoln would stop the expansion of slavery and put it on a course toward extinction. 
The slave states, which had already become a minority in the House of Representatives, were now facing a future as a perpetual minority in the Senate and Electoral College against an increasingly powerful North. Before Lincoln took office in March 1861, seven slave states had declared their secession and joined to form the Confederacy. According to Lincoln, the American people had shown that they had been successful in "establishing" and "administering" a republic, but a third challenge faced the nation, "maintaining" a republic based on the people's vote against an attempt to overthrow it. The election of Lincoln provoked the legislature of South Carolina to call a state convention to consider secession. Before the war, South Carolina did more than any other Southern state to advance the notion that a state had the right to nullify federal laws, and even to secede from the United States. The convention summoned unanimously voted to secede on December 20, 1860, and adopted the "Declaration of the Immediate Causes Which Induce and Justify the Secession of South Carolina from the Federal Union". It argued for states' rights for slave owners in the South, but contained a complaint about states' rights in the North in the form of opposition to the Fugitive Slave Act, claiming that Northern states were not fulfilling their federal obligations under the Constitution. The "cotton states" of Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas followed suit, seceding in January and February 1861. Among the ordinances of secession passed by the individual states, those of three—Texas, Alabama, and Virginia—specifically mentioned the plight of the "slaveholding states" at the hands of Northern abolitionists. The rest make no mention of the slavery issue and are often brief announcements of the dissolution of ties by the legislatures. However, at least four states—South Carolina, Mississippi, Georgia, and Texas—also passed lengthy and detailed explanations of their causes for secession, all of which laid the blame squarely on the movement to abolish slavery and that movement's influence over the politics of the Northern states. The Southern states believed slaveholding was a constitutional right because of the Fugitive Slave Clause of the Constitution. These states agreed to form a new federal government, the Confederate States of America, on February 4, 1861. They took control of federal forts and other properties within their boundaries with little resistance from outgoing President James Buchanan, whose term ended on March 4, 1861. Buchanan said that the Dred Scott decision was proof that the South had no reason for secession, and that the Union "was intended to be perpetual", but that "The power by force of arms to compel a State to remain in the Union" was not among the "enumerated powers granted to Congress". One-quarter of the U.S. Army—the entire garrison in Texas—was surrendered in February 1861 to state forces by its commanding general, David E. Twiggs, who then joined the Confederacy. As Southerners resigned their seats in the Senate and the House, Republicans were able to pass projects that had been blocked by Southern senators before the war. These included the Morrill Tariff, land grant colleges (the Morrill Act), a Homestead Act, a transcontinental railroad (the Pacific Railroad Acts), the National Bank Act, the authorization of United States Notes by the Legal Tender Act of 1862, and the ending of slavery in the District of Columbia. 
The Revenue Act of 1861 introduced the income tax to help finance the war. On December 18, 1860, the Crittenden Compromise was proposed to re-establish the Missouri Compromise line by constitutionally banning slavery in territories to the north of the line while guaranteeing it to the south. The adoption of this compromise likely would have prevented the secession of every Southern state apart from South Carolina, but Lincoln and the Republicans rejected it. It was then proposed to hold a national referendum on the compromise. The Republicans again rejected the idea, although a majority of both Northerners and Southerners would likely have voted in favor of it. A pre-war February Peace Conference of 1861 met in Washington, proposing a solution similar to that of the Crittenden compromise, it was rejected by Congress. The Republicans proposed an alternative compromise to not interfere with slavery where it existed but the South regarded it as insufficient. Nonetheless, the remaining eight slave states rejected pleas to join the Confederacy following a two-to-one no-vote in Virginia's First Secessionist Convention on April 4, 1861. On March 4, 1861, Abraham Lincoln was sworn in as president. In his inaugural address, he argued that the Constitution was a "more perfect union" than the earlier Articles of Confederation and Perpetual Union, that it was a binding contract, and called any secession "legally void". He had no intent to invade Southern states, nor did he intend to end slavery where it existed, but said that he would use force to maintain possession of Federal property. The government would make no move to recover post offices, and if resisted, mail delivery would end at state lines. Where popular conditions did not allow peaceful enforcement of Federal law, U.S. marshals and judges would be withdrawn. No mention was made of bullion lost from U.S. mints in Louisiana, Georgia, and North Carolina. He stated that it would be U.S. policy to only collect import duties at its ports; there could be no serious injury to the South to justify the armed revolution during his administration. His speech closed with a plea for restoration of the bonds of union, famously calling on "the mystic chords of memory" binding the two regions. The South sent delegations to Washington and offered to pay for the federal properties and enter into a peace treaty with the United States. Lincoln rejected any negotiations with Confederate agents because he claimed the Confederacy was not a legitimate government, and that making any treaty with it would be tantamount to recognition of it as a sovereign government. Secretary of State William Seward, who at the time saw himself as the real governor or "prime minister" behind the throne of the inexperienced Lincoln, engaged in unauthorized and indirect negotiations that failed. President Lincoln was determined to hold all remaining Union-occupied forts in the Confederacy: Fort Monroe in Virginia, Fort Pickens, Fort Jefferson and Fort Taylor in Florida, and Fort Sumter – located at the cockpit of secession in Charleston, South Carolina. Fort Sumter is located in the middle of the harbor of Charleston, South Carolina. Its garrison had recently moved there to avoid incidents with local militias in the streets of the city. Lincoln told its commander, Maj. Anderson to hold on until fired upon. Confederate president Jefferson Davis ordered the surrender of the fort. Anderson gave a conditional reply that the Confederate government rejected, and Davis ordered General P. G. 
T. Beauregard to attack the fort before a relief expedition could arrive. He bombarded Fort Sumter on April 12–13, forcing its capitulation. The attack on Fort Sumter rallied the North to the defense of American nationalism. Historian Allan Nevins underscored the significance of the event: Union leaders incorrectly assumed that only a minority of Southerners were in favor of secession and that there were large numbers of southern Unionists that could be counted on. Had Northerners realized that most Southerners favored secession, they might have hesitated at attempting the enormous task of conquering a united South. Lincoln called on all the states to send forces to recapture the fort and other federal properties. The scale of the rebellion appeared to be small, so he called for only 75,000 volunteers for 90 days. The governor of Massachusetts had state regiments on trains headed south the next day. In western Missouri, local secessionists seized Liberty Arsenal. On May 3, 1861, Lincoln called for an additional 42,000 volunteers for a period of three years. Four states in the middle and upper South had repeatedly rejected Confederate overtures, but now Virginia, Tennessee, Arkansas, and North Carolina refused to send forces against their neighbors, declared their secession, and joined the Confederacy. To reward Virginia, the Confederate capital was moved to Richmond. Maryland, Delaware, Missouri, and Kentucky were slave states that were opposed to both secession and coercing the South. West Virginia then joined them as an additional border state after it separated from Virginia and became a state of the Union in 1863. Maryland's territory surrounded the United States' capital of Washington, D.C., and could cut it off from the North. It had numerous anti-Lincoln officials who tolerated anti-army rioting in Baltimore and the burning of bridges, both aimed at hindering the passage of troops to the South. Maryland's legislature voted overwhelmingly (53–13) to stay in the Union, but also rejected hostilities with its southern neighbors, voting to close Maryland's rail lines to prevent them from being used for war. Lincoln responded by establishing martial law and unilaterally suspending habeas corpus in Maryland, along with sending in militia units from the North. Lincoln rapidly took control of Maryland and the District of Columbia by seizing many prominent figures, including arresting 1/3 of the members of the Maryland General Assembly on the day it reconvened. All were held without trial, ignoring a ruling by the Chief Justice of the U.S. Supreme Court Roger Taney, a Maryland native, that only Congress (and not the president) could suspend habeas corpus (Ex parte Merryman). Federal troops imprisoned a prominent Baltimore newspaper editor, Frank Key Howard, Francis Scott Key's grandson, after he criticized Lincoln in an editorial for ignoring the Supreme Court Chief Justice's ruling. In Missouri, an elected convention on secession voted decisively to remain within the Union. When pro-Confederate Governor Claiborne F. Jackson called out the state militia, it was attacked by federal forces under General Nathaniel Lyon, who chased the governor and the rest of the State Guard to the southwestern corner of the state ("see also": Missouri secession). In the resulting vacuum, the convention on secession reconvened and took power as the Unionist provisional government of Missouri. Kentucky did not secede; for a time, it declared itself neutral. 
When Confederate forces entered the state in September 1861, neutrality ended and the state reaffirmed its Union status while trying to maintain slavery. During a brief invasion by Confederate forces in 1861, Confederate sympathizers organized a secession convention, formed the shadow Confederate Government of Kentucky, inaugurated a governor, and gained recognition from the Confederacy. Its jurisdiction extended only as far as the Confederate battle lines in the Commonwealth, and the shadow government went into exile for good after October 1862. After Virginia's secession, a Unionist government in Wheeling asked 48 counties to vote on an ordinance to create a new state on October 24, 1861. On a voter turnout of 34 percent, the statehood bill was approved by 96 percent of those voting. The inclusion of 24 secessionist counties in the state and the ensuing guerrilla war engaged about 40,000 Federal troops for much of the war. Congress admitted West Virginia to the Union on June 20, 1863. West Virginia provided about 20,000–22,000 soldiers to both the Confederacy and the Union. A Unionist secession attempt occurred in East Tennessee, but was suppressed by the Confederacy, which arrested over 3,000 men suspected of being loyal to the Union. They were held without trial. The Civil War was a contest marked by the ferocity and frequency of battle. Over four years, 237 named battles were fought, as were many more minor actions and skirmishes, which were often characterized by their bitter intensity and high casualties. In his book "The American Civil War", John Keegan writes that "The American Civil War was to prove one of the most ferocious wars ever fought". In many cases, without geographic objectives, the only target for each side was the enemy's soldier. As the first seven states began organizing a Confederacy in Montgomery, the entire U.S. Army numbered 16,000. However, Northern governors had begun to mobilize their militias. The Confederate Congress authorized the new nation to raise up to 100,000 troops, sent by governors, as early as February. By May, Jefferson Davis was pushing for 100,000 men under arms for one year or the duration, and that was answered in kind by the U.S. Congress. In the first year of the war, both sides had far more volunteers than they could effectively train and equip. After the initial enthusiasm faded, reliance on the cohort of young men who came of age every year and wanted to join was not enough. Both sides used a draft law—conscription—as a device to encourage or force volunteering; relatively few were drafted and served. The Confederacy passed a draft law in April 1862 for young men aged 18 to 35; overseers of slaves, government officials, and clergymen were exempt. The U.S. Congress followed in July, authorizing a militia draft within a state when it could not meet its quota with volunteers. European immigrants joined the Union Army in large numbers, including 177,000 born in Germany and 144,000 born in Ireland. When the Emancipation Proclamation went into effect in January 1863, ex-slaves were energetically recruited by the states and used to meet the state quotas. States and local communities offered higher and higher cash bonuses for white volunteers. Congress tightened the law in March 1863. Men selected in the draft could provide substitutes or, until mid-1864, pay commutation money. Many eligibles pooled their money to cover the cost of anyone drafted. Families used the substitute provision to select which man should go into the army and which should stay home. 
There was much evasion and overt resistance to the draft, especially in Catholic areas. The draft riot in New York City in July 1863 involved Irish immigrants who had been signed up as citizens to swell the vote of the city's Democratic political machine, not realizing it made them liable for the draft. Of the 168,649 men procured for the Union through the draft, 117,986 were substitutes, leaving only 50,663 who had their services conscripted. In both the North and South, the draft laws were highly unpopular. In the North, some 120,000 men evaded conscription, many of them fleeing to Canada, and another 280,000 soldiers deserted during the war. At least 100,000 Southerners deserted, or about 10 percent; Southern desertion was unusually high, in part, because the highly localized Southern identity meant that many Southern men had little investment in the outcome of the war, with individual soldiers caring more about the fate of their local area than any grand ideal. In the North, "bounty jumpers" enlisted to get the generous bonus, deserted, then went back to a second recruiting station under a different name to sign up again for a second bonus; 141 were caught and executed. From a tiny frontier force in 1860, the Union and Confederate armies had grown into the "largest and most efficient armies in the world" within a few years. European observers at the time dismissed them as amateur and unprofessional, but British historian John Keegan concluded that each outmatched the French, Prussian and Russian armies of the time, and but for the Atlantic, would have threatened any of them with defeat. The number of women who served as soldiers during the war is estimated at between 400 and 750, although an accurate count is impossible because the women had to disguise themselves as men. Women also served on the Union hospital ship "Red Rover" and nursed Union and Confederate troops at field hospitals. Mary Edwards Walker, the only woman to ever receive the Medal of Honor, served in the Union Army and was given the medal for her efforts to treat the wounded during the war. Her name was deleted from the Army Medal of Honor Roll in 1917 (along with over 900 other, male MOH recipients); however, it was restored in 1977. Perman and Taylor (2010) write that historians are of two minds on why millions of men seemed so eager to fight, suffer and die over four years: At the start of the civil war, a system of paroles operated. Captives agreed not to fight until they were officially exchanged. Meanwhile, they were held in camps run by their army. They were paid, but they were not allowed to perform any military duties. The system of exchanges collapsed in 1863 when the Confederacy refused to exchange black prisoners. After that, about 56,000 of the 409,000 POWs died in prisons during the war, accounting for nearly 10 percent of the conflict's fatalities. The small U.S. Navy of 1861 was rapidly enlarged to 6,000 officers and 45,000 men in 1865, with 671 vessels, having a tonnage of 510,396. Its mission was to blockade Confederate ports, take control of the river system, defend against Confederate raiders on the high seas, and be ready for a possible war with the British Royal Navy. Meanwhile, the main riverine war was fought in the West, where a series of major rivers gave access to the Confederate heartland. The U.S. Navy eventually gained control of the Red, Tennessee, Cumberland, Mississippi, and Ohio rivers. 
In the East, the Navy supplied and moved army forces about and occasionally shelled Confederate installations. The Civil War occurred during the early stages of the industrial revolution. Many naval innovations emerged during this time, most notably the advent of the ironclad warship. It began when the Confederacy, knowing it had to meet or match the Union's naval superiority, responded to the Union blockade by building or converting more than 130 vessels, including twenty-six ironclads and floating batteries. Only half of these saw active service. Many were equipped with ram bows, creating "ram fever" among Union squadrons wherever they threatened. But in the face of overwhelming Union superiority and the Union's ironclad warships, they were unsuccessful. In addition to ocean-going warships coming up the Mississippi, the Union Navy used timberclads, tinclads, and armored gunboats. Shipyards at Cairo, Illinois, and St. Louis built new boats or modified steamboats for action. The Confederacy experimented with the submarine "Hunley", which did not work satisfactorily, and with building an ironclad ship, the "Virginia", which was based on rebuilding the sunken Union ship "Merrimack". On its first foray on March 8, 1862, "Virginia" inflicted significant damage to the Union's wooden fleet, but the next day the first Union ironclad, the "Monitor", arrived to challenge it in the Chesapeake Bay. The resulting three-hour Battle of Hampton Roads was a draw, but it proved that ironclads were effective warships. Not long after the battle, the Confederacy was forced to scuttle the "Virginia" to prevent its capture, while the Union built many copies of the "Monitor". Lacking the technology and infrastructure to build effective warships, the Confederacy attempted to obtain warships from Britain. By early 1861, General Winfield Scott had devised the Anaconda Plan to win the war with as little bloodshed as possible. Scott argued that a Union blockade of the main ports would weaken the Confederate economy. Lincoln adopted parts of the plan, but he overruled Scott's caution about 90-day volunteers. Public opinion, however, demanded an immediate attack by the army to capture Richmond. In April 1861, Lincoln announced the Union blockade of all Southern ports; commercial ships could not get insurance and regular traffic ended. The South blundered in embargoing cotton exports in 1861 before the blockade was effective; by the time they realized the mistake, it was too late. "King Cotton" was dead, as the South could export less than 10 percent of its cotton. The blockade shut down the ten Confederate seaports with railheads that moved almost all the cotton, especially New Orleans, Mobile, and Charleston. By June 1861, warships were stationed off the principal Southern ports, and a year later nearly 300 ships were in service. British investors built small, fast, steam-driven blockade runners that traded arms and luxuries brought in from Britain through Bermuda, Cuba, and the Bahamas in return for high-priced cotton. Many of the ships were designed for speed and were so small that only a small amount of cotton went out. When the Union Navy seized a blockade runner, the ship and cargo were condemned as a prize of war and sold, with the proceeds given to the Navy sailors; the captured crewmen were mostly British, and they were released. The Southern economy nearly collapsed during the war. 
There were multiple reasons for this: the severe deterioration of food supplies, especially in cities, the failure of Southern railroads, the loss of control of the main rivers, foraging by Northern armies, and the seizure of animals and crops by Confederate armies. Most historians agree that the blockade was a major factor in ruining the Confederate economy; however, Wise argues that the blockade runners provided just enough of a lifeline to allow Lee to continue fighting for additional months, thanks to fresh supplies of 400,000 rifles, lead, blankets, and boots that the homefront economy could no longer supply. Surdam argues that the blockade was a powerful weapon that eventually ruined the Southern economy, at the cost of few lives in combat. Practically, the entire Confederate cotton crop was useless (although it was sold to Union traders), costing the Confederacy its main source of income. Critical imports were scarce and the coastal trade was largely ended as well. The measure of the blockade's success was not the few ships that slipped through, but the thousands that never tried it. Merchant ships owned in Europe could not get insurance and were too slow to evade the blockade, so they stopped calling at Confederate ports. To fight an offensive war, the Confederacy purchased ships from Britain, converted them to warships, and raided American merchant ships in the Atlantic and Pacific oceans. Insurance rates skyrocketed and the American flag virtually disappeared from international waters. However, the same ships were reflagged with European flags and continued unmolested. After the war, the U.S. demanded that Britain pay for the damage done, and Britain paid the U.S. $15 million in 1871. Although the Confederacy hoped that Britain and France would join them against the Union, this was never likely, and so they instead tried to bring Britain and France in as mediators. The Union, under Lincoln and Secretary of State William H. Seward worked to block this, and threatened war if any country officially recognized the existence of the Confederate States of America. In 1861, Southerners voluntarily embargoed cotton shipments, hoping to start an economic depression in Europe that would force Britain to enter the war to get cotton, but this did not work. Worse, Europe developed other cotton suppliers, which they found superior, hindering the South's recovery after the war. Cotton diplomacy proved a failure as Europe had a surplus of cotton, while the 1860–62 crop failures in Europe made the North's grain exports of critical importance. It also helped to turn European opinion further away from the Confederacy. It was said that "King Corn was more powerful than King Cotton", as U.S. grain went from a quarter of the British import trade to almost half. When Britain did face a cotton shortage, it was temporary, being replaced by increased cultivation in Egypt and India. Meanwhile, the war created employment for arms makers, ironworkers, and British ships to transport weapons. Lincoln's administration failed to appeal to European public opinion. Diplomats explained that the United States was not committed to the ending of slavery, and instead repeated legalistic arguments about the unconstitutionality of secession. Confederate representatives, on the other hand, were much more successful by ignoring slavery and instead focusing on their struggle for liberty, their commitment to free trade, and the essential role of cotton in the European economy. 
The European aristocracy was "absolutely gleeful in pronouncing the American debacle as proof that the entire experiment in popular government had failed. European government leaders welcomed the fragmentation of the ascendant American Republic." U.S. minister to Britain Charles Francis Adams proved particularly adept and convinced Britain not to boldly challenge the blockade. The Confederacy purchased several warships from commercial shipbuilders in Britain, including the "Alabama", the "Florida", and the "Shenandoah", among others. The most famous, the "Alabama", did considerable damage and led to serious postwar disputes. However, public opinion against slavery created a political liability for politicians in Britain, where the antislavery movement was powerful. War loomed in late 1861 between the U.S. and Britain over the "Trent" affair, involving the U.S. Navy's boarding of the British mail ship "Trent" and seizure of two Confederate diplomats. However, London and Washington were able to smooth over the problem after Lincoln released the two. In 1862, the British considered mediation between North and South, though even such an offer would have risked war with the United States. British Prime Minister Lord Palmerston reportedly read "Uncle Tom's Cabin" three times when deciding on this. The Union victory in the Battle of Antietam caused the British to delay this decision. The Emancipation Proclamation over time would reinforce the political liability of supporting the Confederacy. Despite its sympathy for the Confederacy, France was ultimately deterred from war with the Union by its own seizure of Mexico. Confederate offers late in the war to end slavery in return for diplomatic recognition were not seriously considered by London or Paris. After 1863, the Polish revolt against Russia further distracted the European powers, and ensured that they would remain neutral. The Eastern theater refers to the military operations east of the Appalachian Mountains, including the states of Virginia, West Virginia, Maryland, and Pennsylvania, the District of Columbia, and the coastal fortifications and seaports of North Carolina. Maj. Gen. George B. McClellan took command of the Union Army of the Potomac on July 26 (he was briefly general-in-chief of all the Union armies, but was subsequently relieved of that post in favor of Maj. Gen. Henry W. Halleck), and the war began in earnest in 1862. The 1862 Union strategy called for simultaneous advances along four axes: The primary Confederate force in the Eastern theater was the Army of Northern Virginia. The Army originated as the (Confederate) Army of the Potomac, which was organized on June 20, 1861, from all operational forces in northern Virginia. On July 20 and 21, the Army of the Shenandoah and forces from the District of Harpers Ferry were added. Units from the Army of the Northwest were merged into the Army of the Potomac between March 14 and May 17, 1862. The Army of the Potomac was renamed "Army of Northern Virginia" on March 14. The Army of the Peninsula was merged into it on April 12, 1862. When Virginia declared its secession in April 1861, Robert E. Lee chose to follow his home state, despite his desire for the country to remain intact and an offer of a senior Union command. Lee's biographer, Douglas S. Freeman, asserts that the army received its final name from Lee when he issued orders assuming command on June 1, 1862. However, Freeman does admit that Lee corresponded with Brigadier General Joseph E. 
Johnston, his predecessor in army command, before that date and referred to Johnston's command as the Army of Northern Virginia. Part of the confusion results from the fact that Johnston commanded the Department of Northern Virginia (as of October 22, 1861) and the name Army of Northern Virginia can be seen as an informal consequence of its parent department's name. Jefferson Davis and Johnston did not adopt the name, but it is clear that the organization of units as of March 14 was the same organization that Lee received on June 1, and thus it is generally referred to today as the Army of Northern Virginia, even if that is correct only in retrospect. On July 4 at Harper's Ferry, Colonel Thomas J. Jackson assigned Jeb Stuart to command all the cavalry companies of the Army of the Shenandoah. Stuart eventually commanded the Army of Northern Virginia's cavalry. In one of the first highly visible battles, in July 1861, a march by Union troops under the command of Maj. Gen. Irvin McDowell on the Confederate forces led by Gen. P. G. T. Beauregard near Washington was repulsed at the First Battle of Bull Run (also known as First Manassas). The Union had the upper hand at first, nearly pushing Confederate forces holding a defensive position into a rout, but Confederate reinforcements under Joseph E. Johnston arrived from the Shenandoah Valley by railroad, and the course of the battle quickly changed. A brigade of Virginians under the relatively unknown brigadier general from the Virginia Military Institute, Thomas J. Jackson, stood its ground, which resulted in Jackson receiving his famous nickname, "Stonewall". Upon the strong urging of President Lincoln to begin offensive operations, McClellan attacked Virginia in the spring of 1862 by way of the peninsula between the York River and James River, southeast of Richmond. McClellan's army reached the gates of Richmond in the Peninsula Campaign. Also in the spring of 1862, in the Shenandoah Valley, Stonewall Jackson led his Valley Campaign. Employing audacity and rapid, unpredictable movements on interior lines, Jackson's 17,000 men marched 646 miles (1,040 km) in 48 days and won several minor battles as they successfully engaged three Union armies (52,000 men), including those of Nathaniel P. Banks and John C. Fremont, preventing them from reinforcing the Union offensive against Richmond. The swiftness of Jackson's men earned them the nickname of "foot cavalry". Johnston halted McClellan's advance at the Battle of Seven Pines, but he was wounded in the battle, and Robert E. Lee assumed his position of command. General Lee and top subordinates James Longstreet and Stonewall Jackson defeated McClellan in the Seven Days Battles and forced his retreat. The Northern Virginia Campaign, which included the Second Battle of Bull Run, ended in yet another victory for the South. McClellan resisted General-in-Chief Halleck's orders to send reinforcements to John Pope's Union Army of Virginia, which made it easier for Lee's Confederates to defeat twice the number of combined enemy troops. Emboldened by Second Bull Run, the Confederacy made its first invasion of the North with the Maryland Campaign. General Lee led 45,000 men of the Army of Northern Virginia across the Potomac River into Maryland on September 5. Lincoln then restored Pope's troops to McClellan. McClellan and Lee fought at the Battle of Antietam near Sharpsburg, Maryland, on September 17, 1862, the bloodiest single day in United States military history. 
Lee's army, checked at last, returned to Virginia before McClellan could destroy it. Antietam is considered a Union victory because it halted Lee's invasion of the North and provided an opportunity for Lincoln to announce his Emancipation Proclamation. When the cautious McClellan failed to follow up on Antietam, he was replaced by Maj. Gen. Ambrose Burnside. Burnside was soon defeated at the Battle of Fredericksburg on December 13, 1862, when more than 12,000 Union soldiers were killed or wounded during repeated futile frontal assaults against Marye's Heights. After the battle, Burnside was replaced by Maj. Gen. Joseph Hooker. Hooker, too, proved unable to defeat Lee's army; despite outnumbering the Confederates by more than two to one, his Chancellorsville Campaign proved ineffective and he was humiliated in the Battle of Chancellorsville in May 1863. Chancellorsville is known as Lee's "perfect battle" because his risky decision to divide his army in the presence of a much larger enemy force resulted in a significant Confederate victory. Gen. Stonewall Jackson was shot in the arm by accidental friendly fire during the battle and subsequently died of complications. Lee famously said: "He has lost his left arm, but I have lost my right arm." The fiercest fighting of the battle—and the second bloodiest day of the Civil War—occurred on May 3 as Lee launched multiple attacks against the Union position at Chancellorsville. That same day, John Sedgwick advanced across the Rappahannock River, defeated the small Confederate force at Marye's Heights in the Second Battle of Fredericksburg, and then moved to the west. The Confederates fought a successful delaying action at the Battle of Salem Church. Gen. Hooker was replaced by Maj. Gen. George Meade during Lee's second invasion of the North, in June. Meade defeated Lee at the Battle of Gettysburg (July 1 to 3, 1863). This was the bloodiest battle of the war, and has been called the war's turning point. Pickett's Charge on July 3 is often considered the high-water mark of the Confederacy because it signaled the collapse of serious Confederate threats of victory. Lee's army suffered 28,000 casualties (versus Meade's 23,000). However, Lincoln was angry that Meade failed to intercept Lee's retreat. The Western theater refers to military operations between the Appalachian Mountains and the Mississippi River, including the states of Alabama, Georgia, Florida, Mississippi, North Carolina, Kentucky, South Carolina and Tennessee, as well as parts of Louisiana. The primary Union forces in the Western theater were the Army of the Tennessee and the Army of the Cumberland, named for the two rivers, the Tennessee River and the Cumberland River. After Meade's inconclusive fall campaign, Lincoln turned to the Western Theater for new leadership. At the same time, the Confederate stronghold of Vicksburg surrendered, giving the Union control of the Mississippi River, permanently isolating the western Confederacy, and producing the new leader Lincoln needed, Ulysses S. Grant. The primary Confederate force in the Western theater was the Army of Tennessee. The army was formed on November 20, 1862, when General Braxton Bragg renamed the former Army of Mississippi. While the Confederate forces had numerous successes in the Eastern Theater, they were defeated many times in the West. The Union's key strategist and tactician in the West was Ulysses S. 
Grant, who won victories at Forts Henry (February 6, 1862) and Donelson (February 11 to 16, 1862), earning him the nickname of "Unconditional Surrender" Grant and giving the Union control of the Tennessee and Cumberland Rivers. Nathan Bedford Forrest rallied nearly 4,000 Confederate troops and led them to escape across the Cumberland. Nashville and central Tennessee thus fell to the Union, leading to attrition of local food supplies and livestock and a breakdown in social organization. Leonidas Polk's invasion of Columbus ended Kentucky's policy of neutrality and turned it against the Confederacy. Grant used river transport and Andrew Foote's gunboats of the Western Flotilla to threaten the Confederacy's "Gibraltar of the West" at Columbus, Kentucky. Although rebuffed at Belmont, Grant cut off Columbus. The Confederates, lacking their own gunboats, were forced to retreat, and the Union took control of western Kentucky and opened Tennessee in March 1862. At the Battle of Shiloh (Pittsburg Landing), in Tennessee in April 1862, the Confederates made a surprise attack that pushed Union forces against the river as night fell. Overnight, the Navy landed additional reinforcements, and Grant counter-attacked. Grant and the Union won a decisive victory—the first battle with the high casualty rates that would repeat over and over. The Confederates lost Albert Sidney Johnston, considered their finest general before the emergence of Lee. One of the early Union objectives in the war was the capture of the Mississippi River, to cut the Confederacy in half. The Mississippi River was opened to Union traffic to the southern border of Tennessee with the taking of Island No. 10 and New Madrid, Missouri, and then Memphis, Tennessee. In April 1862, the Union Navy captured New Orleans. "The key to the river was New Orleans, the South's largest port [and] greatest industrial center." U.S. Naval forces under Farragut ran past Confederate defenses south of New Orleans. Confederate forces abandoned the city, giving the Union a critical anchor in the deep South, which allowed Union forces to begin moving up the Mississippi. Memphis fell to Union forces on June 6, 1862, and became a key base for further advances south along the Mississippi River. Only the fortress city of Vicksburg, Mississippi, prevented Union control of the entire river. Bragg's second invasion of Kentucky in the Confederate Heartland Offensive included initial successes such as Kirby Smith's triumph at the Battle of Richmond and the capture of the Kentucky capital of Frankfort on September 3, 1862. However, the campaign ended with a meaningless victory over Maj. Gen. Don Carlos Buell at the Battle of Perryville. Bragg was forced to end his attempt at invading Kentucky and retreat due to lack of logistical support and lack of infantry recruits for the Confederacy in that state. Bragg was narrowly defeated by Maj. Gen. William Rosecrans at the Battle of Stones River in Tennessee, the culmination of the Stones River Campaign. Naval forces assisted Grant in the long, complex Vicksburg Campaign that resulted in the Confederates surrendering at the Battle of Vicksburg in July 1863, which cemented Union control of the Mississippi River and is considered one of the turning points of the war. The one clear Confederate victory in the West was the Battle of Chickamauga. After Rosecrans's successful Tullahoma Campaign, Bragg, reinforced by Lt. Gen. 
James Longstreet's corps (from Lee's army in the east), defeated Rosecrans, despite the heroic defensive stand of Maj. Gen. George Henry Thomas. Rosecrans retreated to Chattanooga, which Bragg then besieged in the Chattanooga Campaign. Grant marched to the relief of Rosecrans and defeated Bragg at the Third Battle of Chattanooga, eventually causing Longstreet to abandon his Knoxville Campaign and driving Confederate forces out of Tennessee and opening a route to Atlanta and the heart of the Confederacy. The Trans-Mississippi theater refers to military operations west of the Mississippi River, not including the areas bordering the Pacific Ocean. The first battle of the Trans-Mississippi theater was the Battle of Wilson's Creek. The Confederates were driven from Missouri early in the war as a result of the Battle of Pea Ridge. Extensive guerrilla warfare characterized the trans-Mississippi region, as the Confederacy lacked the troops and the logistics to support regular armies that could challenge Union control. Roving Confederate bands such as Quantrill's Raiders terrorized the countryside, striking both military installations and civilian settlements. The "Sons of Liberty" and "Order of the American Knights" attacked pro-Union people, elected officeholders, and unarmed uniformed soldiers. These partisans could not be entirely driven out of the state of Missouri until an entire regular Union infantry division was engaged. By 1864, these violent activities harmed the nationwide anti-war movement organizing against the re-election of Lincoln. Missouri not only stayed in the Union but Lincoln took 70 percent of the vote for re-election. Numerous small-scale military actions south and west of Missouri sought to control Indian Territory and New Mexico Territory for the Union. The Battle of Glorieta Pass was the decisive battle of the New Mexico Campaign. The Union repulsed Confederate incursions into New Mexico in 1862, and the exiled Arizona government withdrew into Texas. In the Indian Territory, civil war broke out within tribes. About 12,000 Indian warriors fought for the Confederacy and smaller numbers for the Union. The most prominent Cherokee was Brigadier General Stand Watie, the last Confederate general to surrender. After the fall of Vicksburg in July 1863, General Kirby Smith in Texas was informed by Jefferson Davis that he could expect no further help from east of the Mississippi River. Although he lacked resources to beat Union armies, he built up a formidable arsenal at Tyler, along with his own Kirby Smithdom economy, a virtual "independent fiefdom" in Texas, including railroad construction and international smuggling. The Union, in turn, did not directly engage him. Its 1864 Red River Campaign to take Shreveport, Louisiana, was a failure and Texas remained in Confederate hands throughout the war. The Lower Seaboard theater refers to military and naval operations that occurred near the coastal areas of the Southeast (Alabama, Florida, Louisiana, Mississippi, South Carolina, and Texas) as well as the southern part of the Mississippi River (Port Hudson and south). Union Naval activities were dictated by the Anaconda Plan. One of the earliest battles of the war was fought at Port Royal Sound, south of Charleston. Much of the war along the South Carolina coast concentrated on capturing Charleston. In attempting to capture Charleston, the Union military tried two approaches, by land over James or Morris Islands or through the harbor. 
However, the Confederates were able to drive back each Union attack. One of the most famous of the land attacks was the Second Battle of Fort Wagner, in which the 54th Massachusetts Infantry took part. The Federals suffered a serious defeat in this battle, losing 1,500 men while the Confederates lost only 175. Fort Pulaski on the Georgia coast was an early target for the Union navy. Following the capture of Port Royal, an expedition was organized with engineer troops under the command of Captain Quincy A. Gillmore, forcing a Confederate surrender. The Union army occupied the fort for the rest of the war after repairing. In April 1862, a Union naval task force commanded by Commander David D. Porter attacked Forts Jackson and St. Philip, which guarded the river approach to New Orleans from the south. While part of the fleet bombarded the forts, other vessels forced a break in the obstructions in the river and enabled the rest of the fleet to steam upriver to the city. A Union army force commanded by Major General Benjamin Butler landed near the forts and forced their surrender. Butler's controversial command of New Orleans earned him the nickname "Beast". The following year, the Union Army of the Gulf commanded by Major General Nathaniel P. Banks laid siege to Port Hudson for nearly eight weeks, the longest siege in US military history. The Confederates attempted to defend with the Bayou Teche Campaign, but surrendered after Vicksburg. These two surrenders gave the Union control over the entire Mississippi. Several small skirmishes were fought in Florida, but no major battles. The biggest was the Battle of Olustee in early 1864. The Pacific Coast theater refers to military operations on the Pacific Ocean and in the states and Territories west of the Continental Divide. At the beginning of 1864, Lincoln made Grant commander of all Union armies. Grant made his headquarters with the Army of the Potomac and put Maj. Gen. William Tecumseh Sherman in command of most of the western armies. Grant understood the concept of total war and believed, along with Lincoln and Sherman, that only the utter defeat of Confederate forces and their economic base would end the war. This was total war not in killing civilians but rather in taking provisions and forage and destroying homes, farms, and railroads, that Grant said "would otherwise have gone to the support of secession and rebellion. This policy I believe exercised a material influence in hastening the end." Grant devised a coordinated strategy that would strike at the entire Confederacy from multiple directions. Generals George Meade and Benjamin Butler were ordered to move against Lee near Richmond, General Franz Sigel (and later Philip Sheridan) were to attack the Shenandoah Valley, General Sherman was to capture Atlanta and march to the sea (the Atlantic Ocean), Generals George Crook and William W. Averell were to operate against railroad supply lines in West Virginia, and Maj. Gen. Nathaniel P. Banks was to capture Mobile, Alabama. Grant's army set out on the Overland Campaign intending to draw Lee into a defense of Richmond, where they would attempt to pin down and destroy the Confederate army. The Union army first attempted to maneuver past Lee and fought several battles, notably at the Wilderness, Spotsylvania, and Cold Harbor. These battles resulted in heavy losses on both sides and forced Lee's Confederates to fall back repeatedly. At the Battle of Yellow Tavern, the Confederates lost Jeb Stuart. 
An attempt to outflank Lee from the south failed under Butler, who was trapped inside the Bermuda Hundred river bend. Each battle resulted in setbacks for the Union that mirrored what they had suffered under prior generals, though unlike those prior generals, Grant fought on rather than retreat. Grant was tenacious and kept pressing Lee's Army of Northern Virginia back to Richmond. While Lee was preparing for an attack on Richmond, Grant unexpectedly turned south to cross the James River and began the protracted Siege of Petersburg, where the two armies engaged in trench warfare for over nine months. Grant finally found a commander, General Philip Sheridan, aggressive enough to prevail in the Valley Campaigns of 1864. Union forces under Franz Sigel had initially been repelled at the Battle of New Market by former U.S. vice president and Confederate Gen. John C. Breckinridge. The Battle of New Market was the Confederacy's last major victory of the war and included a charge by teenage VMI cadets. Sheridan defeated Maj. Gen. Jubal A. Early in a series of battles, including a final decisive defeat at the Battle of Cedar Creek. Sheridan then proceeded to destroy the agricultural base of the Shenandoah Valley, a strategy similar to the tactics Sherman later employed in Georgia. ("The Peacemakers" by George Peter Alexander Healy portrays Sherman, Grant, Lincoln, and Porter discussing plans for the last weeks of the Civil War aboard the steamer "River Queen" in March 1865.) Meanwhile, Sherman maneuvered from Chattanooga to Atlanta, defeating Confederate Generals Joseph E. Johnston and John Bell Hood along the way. The fall of Atlanta on September 2, 1864, guaranteed the reelection of Lincoln as president. Hood left the Atlanta area to swing around and menace Sherman's supply lines and invade Tennessee in the Franklin–Nashville Campaign. Union Maj. Gen. John Schofield defeated Hood at the Battle of Franklin, and George H. Thomas dealt Hood a massive defeat at the Battle of Nashville, effectively destroying Hood's army. 
Leaving Atlanta, and his base of supplies, Sherman's army marched with an unknown destination, laying waste to about 20 percent of the farms in Georgia in his "March to the Sea". He reached the Atlantic Ocean at Savannah, Georgia, in December 1864. Sherman's army was followed by thousands of freed slaves; there were no major battles along the March. Sherman turned north through South Carolina and North Carolina to approach the Confederate Virginia lines from the south, increasing the pressure on Lee's army. Lee's army, thinned by desertion and casualties, was now much smaller than Grant's. One last Confederate attempt to break the Union hold on Petersburg failed at the decisive Battle of Five Forks (sometimes called "the Waterloo of the Confederacy") on April 1. This meant that the Union now controlled the entire perimeter surrounding Richmond-Petersburg, completely cutting it off from the Confederacy. Realizing that the capital was now lost, Lee decided to evacuate his army. The Confederate capital fell to the Union XXV Corps, composed of black troops. The remaining Confederate units fled west after a defeat at Sayler's Creek. Initially, Lee did not intend to surrender but planned to regroup at the village of Appomattox Court House, where supplies were to be waiting and then continue the war. Grant chased Lee and got in front of him so that when Lee's army reached Appomattox Court House, they were surrounded. After an initial battle, Lee decided that the fight was now hopeless, and surrendered his Army of Northern Virginia on April 9, 1865, at the McLean House. In an untraditional gesture and as a sign of Grant's respect and anticipation of peacefully restoring Confederate states to the Union, Lee was permitted to keep his sword and his horse, Traveller. On April 14, 1865, President Lincoln was shot by John Wilkes Booth, a Southern sympathizer. Lincoln died early the next morning and Lincoln's vice president, Andrew Johnson, became the 17th president. Meanwhile, Confederate forces across the South surrendered as news of Lee's surrender reached them. On April 26, 1865, the same day Boston Corbett killed Booth at a tobacco barn, General Joseph E. Johnston surrendered nearly 90,000 men of the Army of Tennessee to Major General William Tecumseh Sherman at the Bennett Place near present-day Durham, North Carolina. It proved to be the largest surrender of Confederate forces. On May 4th, all remaining Confederate forces in Alabama and Mississippi surrendered. President Johnson officially declared an end to the insurrection on May 9, 1865; Confederate president, Jefferson Davis, was captured the following day. On June 2, Kirby Smith officially surrendered his troops in the Trans-Mississippi Department. On June 23, Cherokee leader Stand Watie became the last Confederate general to surrender his forces. The causes of the war, the reasons for its outcome, and even the name of the war itself are subjects of lingering contention today. The North and West grew rich while the once-rich South became poor for a century. The national political power of the slaveowners and rich Southerners ended. Historians are less sure about the results of the postwar Reconstruction, especially regarding the second-class citizenship of the Freedmen and their poverty. Historians have debated whether the Confederacy could have won the war. Most scholars, including James McPherson, argue that Confederate victory was at least possible. 
McPherson argues that the North's advantage in population and resources made Northern victory likely but not guaranteed. He also argues that if the Confederacy had fought using unconventional tactics, they would have more easily been able to hold out long enough to exhaust the Union. Confederates did not need to invade and hold enemy territory to win but only needed to fight a defensive war to convince the North that the cost of winning was too high. The North needed to conquer and hold vast stretches of enemy territory and defeat Confederate armies to win. Lincoln was not a military dictator and could continue to fight the war only as long as the American public supported a continuation of the war. The Confederacy sought to win independence by out-lasting Lincoln; however, after Atlanta fell and Lincoln defeated McClellan in the election of 1864, all hope for a political victory for the South ended. At that point, Lincoln had secured the support of the Republicans, War Democrats, the border states, emancipated slaves, and the neutrality of Britain and France. By defeating the Democrats and McClellan, he also defeated the Copperheads and their peace platform. Many scholars argue that the Union held an insurmountable long-term advantage over the Confederacy in industrial strength and population. Confederate actions, they argue, only delayed defeat. Civil War historian Shelby Foote expressed this view succinctly: "I think that the North fought that war with one hand behind its back ... If there had been more Southern victories, and a lot more, the North simply would have brought that other hand out from behind its back. I don't think the South ever had a chance to win that War." A minority view among historians is that the Confederacy lost because, as E. Merton Coulter put it, "people did not will hard enough and long enough to win." According to Charles H. Wilson, in "The Collapse of the Confederacy", "internal conflict should figure prominently in any explanation of Confederate defeat." Marxist historian Armstead Robinson agrees, pointing to class conflict in the Confederate army between the slave owners and the larger number of non-owners. He argues that the non-owner soldiers grew embittered about fighting to preserve slavery and fought less enthusiastically. He attributes the major Confederate defeats in 1863 at Vicksburg and Missionary Ridge to this class conflict. However, most historians reject the argument. James M. McPherson, after reading thousands of letters written by Confederate soldiers, found strong patriotism that continued to the end; they truly believed they were fighting for freedom and liberty. Even as the Confederacy was visibly collapsing in 1864–65, he says most Confederate soldiers were fighting hard. Historian Gary Gallagher cites General Sherman who in early 1864 commented, "The devils seem to have a determination that cannot but be admired." Despite their loss of slaves and wealth, with starvation looming, Sherman continued, "yet I see no sign of let-up—some few deserters—plenty tired of war, but the masses determined to fight it out." Also important were Lincoln's eloquence in rationalizing the national purpose and his skill in keeping the border states committed to the Union cause. The Emancipation Proclamation was an effective use of the President's war powers. The Confederate government failed in its attempt to get Europe involved in the war militarily, particularly Britain and France. 
Southern leaders needed to get European powers to help break up the blockade the Union had created around the Southern ports and cities. Lincoln's naval blockade was 95 percent effective at stopping trade goods; as a result, imports and exports to the South declined significantly. The abundance of European cotton and Britain's hostility to the institution of slavery, along with Lincoln's Atlantic and Gulf of Mexico naval blockades, severely decreased any chance that either Britain or France would enter the war. Historian Don Doyle has argued that the Union victory had a major impact on the course of world history. The Union victory energized popular democratic forces. A Confederate victory, on the other hand, would have meant a new birth of slavery, not freedom. Historian Fergus Bordewich, following Doyle, makes a similar argument. Scholars have debated what the effects of the war were on political and economic power in the South. The prevailing view is that the southern planter elite retained its powerful position in the South. However, a 2017 study challenges this, noting that while some Southern elites retained their economic status, the turmoil of the 1860s created greater opportunities for economic mobility in the South than in the North. The war resulted in at least 1,030,000 casualties (3 percent of the population), including about 620,000 soldier deaths (two-thirds by disease) and 50,000 civilians. Binghamton University historian J. David Hacker believes the number of soldier deaths was approximately 750,000, 20 percent higher than traditionally estimated, and possibly as high as 850,000. The war accounted for more American deaths than all other U.S. wars combined. Based on 1860 census figures, 8 percent of all white men aged 13 to 43 died in the war, including 6 percent in the North and 18 percent in the South. About 56,000 soldiers died in prison camps during the war. An estimated 60,000 men lost limbs in the war. Union army dead amounted to 15 percent of the over two million who served. In addition there were 4,523 deaths in the Navy (2,112 in battle) and 460 in the Marines (148 in battle). Black troops made up 10 percent of the Union death toll; they amounted to 15 percent of disease deaths but less than 3 percent of those killed in battle. Losses among African Americans were high. In the last year and a half, and from all reported casualties, approximately 20 percent of all African Americans enrolled in the military lost their lives during the Civil War. Notably, their mortality rate was significantly higher than that of white soldiers. Confederate records compiled by historian William F. Fox list 74,524 killed and died of wounds and 59,292 died of disease. Including Confederate estimates of battle losses where no records exist would bring the Confederate death toll to 94,000 killed and died of wounds. Fox complained, however, that records were incomplete, especially during the last year of the war, and that battlefield reports likely under-counted deaths (many men counted as wounded in battlefield reports subsequently died of their wounds). Thomas L. Livermore, using Fox's data, put the number of Confederate non-combat deaths at 166,000, using the official estimate of Union deaths from disease and accidents and a comparison of Union and Confederate enlistment records, for a total of 260,000 deaths. However, this excludes the 30,000 deaths of Confederate troops in prisons, which would raise the minimum number of deaths to 290,000. 
The United States National Park Service uses the following figures in its official tally of war losses: Union, 853,838; Confederate, 914,660. While the figures of 360,000 army deaths for the Union and 260,000 for the Confederacy remain commonly cited, they are incomplete. In addition to many Confederate records being missing, partly as a result of Confederate widows not reporting deaths due to being ineligible for benefits, both armies only counted troops who died during their service and not the tens of thousands who died of wounds or diseases after being discharged. This often happened only a few days or weeks later. Francis Amasa Walker, superintendent of the 1870 census, used census and surgeon general data to estimate a minimum of 500,000 Union military deaths and 350,000 Confederate military deaths, for a total death toll of 850,000 soldiers. While Walker's estimates were originally dismissed because of the 1870 census's undercounting, it was later found that the census was only off by 6.5% and that the data Walker used would be roughly accurate. Analyzing the number of dead by using census data to calculate the deviation of the death rate of men of fighting age from the norm suggests that at least 627,000 and at most 888,000, but most likely 761,000 soldiers, died in the war. This would break down to approximately 350,000 Confederate and 411,000 Union military deaths, going by the proportion of Union to Confederate battle losses. Deaths among former slaves have proven much harder to estimate, due to the lack of reliable census data at the time, though they were known to be considerable, as former slaves were set free or escaped in massive numbers in an area where the Union army did not have sufficient shelter, doctors, or food for them. University of Connecticut Professor James Downs states that tens to hundreds of thousands of slaves died during the war from disease, starvation, or exposure and that if these deaths are counted in the war's total, the death toll would exceed 1 million. Losses were far higher than during the recent defeat of Mexico, which saw roughly thirteen thousand American deaths, including fewer than two thousand killed in battle, between 1846 and 1848. One reason for the high number of battle deaths during the war was the continued use of tactics similar to those of the Napoleonic Wars at the turn of the century, such as charging. With the advent of more accurate rifled barrels, Minié balls, and (near the end of the war for the Union army) repeating firearms such as the Spencer Repeating Rifle and the Henry Repeating Rifle, soldiers were mowed down when standing in lines in the open. This led to the adoption of trench warfare, a style of fighting that defined much of World War I. The wealth amassed in slaves and slavery for the Confederacy's 3.5 million blacks effectively ended when Union armies arrived; they were nearly all freed by the Emancipation Proclamation. Slaves in the border states and those located in some former Confederate territory occupied before the Emancipation Proclamation were freed by state action or (on December 6, 1865) by the Thirteenth Amendment. The war destroyed much of the wealth that had existed in the South. All accumulated investment in Confederate bonds was forfeit; most banks and railroads were bankrupt. The income per person in the South dropped to less than 40 percent of that of the North, a condition that lasted until well into the 20th century. Southern influence in the U.S. 
federal government, previously considerable, was greatly diminished until the latter half of the 20th century. The full restoration of the Union was the work of a highly contentious postwar era known as Reconstruction. During the Reconstruction era, national unity was slowly restored, the national government expanded its power, and civil and political rights were granted to freed black slaves through amendments to the Constitution and federal legislation. While not all Southerners saw themselves as fighting to preserve slavery, most of the officers and over a third of the rank and file in Lee's army had close family ties to slavery. To Northerners, in contrast, the motivation was primarily to preserve the Union, not to abolish slavery. Abraham Lincoln consistently made preserving the Union the central goal of the war, though he increasingly saw slavery as a crucial issue and made ending it an additional goal. Lincoln's decision to issue the Emancipation Proclamation angered both Peace Democrats ("Copperheads") and War Democrats, but energized most Republicans. By warning that free blacks would flood the North, Democrats made gains in the 1862 elections, but they did not gain control of Congress. The Republicans' counterargument that slavery was the mainstay of the enemy steadily gained support, with the Democrats losing decisively in the 1863 elections in the northern state of Ohio when they tried to resurrect anti-black sentiment. The Emancipation Proclamation enabled African-Americans, both free blacks and escaped slaves, to join the Union Army. About 190,000 volunteered, further enhancing the numerical advantage the Union armies enjoyed over the Confederates, who did not dare emulate the equivalent manpower source for fear of fundamentally undermining the legitimacy of slavery. During the Civil War, sentiment concerning slaves, enslavement and emancipation in the United States was divided. In 1861, Lincoln worried that premature attempts at emancipation would mean the loss of the border states, and that "to lose Kentucky is nearly the same as to lose the whole game." Copperheads and some War Democrats opposed emancipation, although the latter eventually accepted it as part of total war needed to save the Union. At first, Lincoln reversed attempts at emancipation by Secretary of War Simon Cameron and Generals John C. Frémont (in Missouri) and David Hunter (in South Carolina, Georgia and Florida) to keep the loyalty of the border states and the War Democrats. Lincoln warned the border states that a more radical type of emancipation would happen if his gradual plan based on compensated emancipation and voluntary colonization was rejected. But only the District of Columbia accepted Lincoln's gradual plan, which was enacted by Congress. When Lincoln told his cabinet about his proposed emancipation proclamation, Seward advised Lincoln to wait for a victory before issuing it, as to do otherwise would seem like "our last shriek on the retreat". Lincoln laid the groundwork for public support in an open letter published in abolitionist Horace Greeley's newspaper. In September 1862, the Battle of Antietam provided this opportunity, and the subsequent War Governors' Conference added support for the proclamation. Lincoln issued his preliminary Emancipation Proclamation on September 22, 1862, and his final Emancipation Proclamation on January 1, 1863. In his letter to Albert G. Hodges, Lincoln explained his belief that "If slavery is not wrong, nothing is wrong ... 
And yet I have never understood that the Presidency conferred upon me an unrestricted right to act officially upon this judgment and feeling ... I claim not to have controlled events, but confess plainly that events have controlled me." Lincoln's moderate approach succeeded in inducing border states, War Democrats and emancipated slaves to fight for the Union. The Union-controlled border states (Kentucky, Missouri, Maryland, Delaware and West Virginia) and Union-controlled regions around New Orleans, Norfolk and elsewhere, were not covered by the Emancipation Proclamation. All abolished slavery on their own, except Kentucky and Delaware. Since the Emancipation Proclamation was based on the President's war powers, it only included territory held by Confederates at the time. However, the Proclamation became a symbol of the Union's growing commitment to add emancipation to the Union's definition of liberty. The Emancipation Proclamation greatly reduced the Confederacy's hope of getting aid from Britain or France. By late 1864, Lincoln was playing a leading role in getting Congress to vote for the Thirteenth Amendment, which made emancipation universal and permanent. In "Texas v. White", the United States Supreme Court ruled that Texas had remained a state ever since it first joined the Union, despite claims that it joined the Confederate States; the court further held that the Constitution did not permit states to unilaterally secede from the United States, and that the ordinances of secession, and all the acts of the legislatures within seceding states intended to give effect to such ordinances, were "absolutely null", under the constitution. The war had utterly devastated the South, and posed serious questions of how the South would be re-integrated to the Union. Reconstruction began during the war, with the Emancipation Proclamation of January 1, 1863, and it continued until 1877. It comprised multiple complex methods to resolve the outstanding issues of the war's aftermath, the most important of which were the three "Reconstruction Amendments" to the Constitution: the 13th outlawing slavery (1865), the 14th guaranteeing citizenship to slaves (1868) and the 15th ensuring voting rights to slaves (1870). From the Union perspective, the goals of Reconstruction were to consolidate the Union victory on the battlefield by reuniting the Union; to guarantee a "republican form of government" for the ex-Confederate states; and to permanently end slavery—and prevent semi-slavery status. President Johnson took a lenient approach and saw the achievement of the main war goals as realized in 1865, when each ex-rebel state repudiated secession and ratified the Thirteenth Amendment. Radical Republicans demanded proof that Confederate nationalism was dead and that the slaves were truly free. They came to the fore after the 1866 elections and undid much of Johnson's work. In 1872 the "Liberal Republicans" argued that the war goals had been achieved and that Reconstruction should end. They ran a presidential ticket in 1872 but were decisively defeated. In 1874, Democrats, primarily Southern, took control of Congress and opposed any more reconstruction. The Compromise of 1877 closed with a national consensus that the Civil War had finally ended. With the withdrawal of federal troops, however, whites retook control of every Southern legislature; the Jim Crow period of disenfranchisement and legal segregation was ushered in. The Civil War would have a huge impact on American politics in the years to come. 
Many veterans on both sides were subsequently elected to political office, including five U.S. Presidents: General Ulysses Grant, Rutherford B. Hayes, James Garfield, Benjamin Harrison, and William McKinley. The Civil War is one of the central events in American collective memory. There are innumerable statues, commemorations, books and archival collections. The memory includes the home front, military affairs, the treatment of soldiers, both living and dead, in the war's aftermath, depictions of the war in literature and art, evaluations of heroes and villains, and considerations of the moral and political lessons of the war. The last theme includes moral evaluations of racism and slavery, heroism in combat and heroism behind the lines, and the issues of democracy and minority rights, as well as the notion of an "Empire of Liberty" influencing the world. Professional historians have paid much more attention to the causes of the war than to the war itself. Military history has largely developed outside academia, leading to a proliferation of studies by non-scholars who nevertheless are familiar with the primary sources and pay close attention to battles and campaigns, and who write for the general public, rather than the scholarly community. Bruce Catton and Shelby Foote are among the best-known writers. Practically every major figure in the war, both North and South, has had a serious biographical study. Memory of the war in the white South crystallized in the myth of the "Lost Cause": that the Confederate cause was a just and heroic one. The myth shaped regional identity and race relations for generations. Alan T. Nolan notes that the Lost Cause was expressly "a rationalization, a cover-up to vindicate the name and fame" of those in rebellion. Some claims revolve around the insignificance of slavery; some appeals highlight cultural differences between North and South; the military conflict by Confederate actors is idealized; in any case, secession was said to be lawful. Nolan argues that the adoption of the Lost Cause perspective facilitated the reunification of the North and the South while excusing the "virulent racism" of the 19th century, sacrificing African-American progress to a white man's reunification. He also deems the Lost Cause "a caricature of the truth. This caricature wholly misrepresents and distorts the facts of the matter" in every instance. The Lost Cause myth was formalized by Charles A. Beard and Mary R. Beard, whose "The Rise of American Civilization" (1927) spawned "Beardian historiography". The Beards downplayed slavery, abolitionism, and issues of morality. Though this interpretation was abandoned by the Beards in the 1940s, and by historians by the 1950s, Beardian themes still echo among Lost Cause writers. The first efforts at Civil War battlefield preservation and memorialization came during the war itself with the establishment of National Cemeteries at Gettysburg, Mill Springs and Chattanooga. Soldiers began erecting markers on battlefields beginning with the First Battle of Bull Run in July 1861, but the oldest surviving monument is the Hazen Brigade Monument near Murfreesboro, Tennessee, built in the summer of 1863 by soldiers in Union Col. William B. Hazen's brigade to mark the spot where they buried their dead following the Battle of Stones River. 
In the 1890s, the United States government established five Civil War battlefield parks under the jurisdiction of the War Department, beginning with the creation of the Chickamauga and Chattanooga National Military Park in Tennessee and the Antietam National Battlefield in Maryland in 1890. The Shiloh National Military Park was established in 1894, followed by the Gettysburg National Military Park in 1895 and Vicksburg National Military Park in 1899. In 1933, these five parks and other national monuments were transferred to the jurisdiction of the National Park Service. The modern Civil War battlefield preservation movement began in 1987 with the founding of the Association for the Preservation of Civil War Sites (APCWS), a grassroots organization created by Civil War historians and others to preserve battlefield land by acquiring it. In 1991, the original Civil War Trust was created in the mold of the Statue of Liberty/Ellis Island Foundation, but failed to attract corporate donors and soon helped manage the disbursement of U.S. Mint Civil War commemorative coin revenues designated for battlefield preservation. Although the two non-profit organizations joined forces on several battlefield acquisitions, ongoing conflicts prompted the boards of both organizations to facilitate a merger, which happened in 1999 with the creation of the Civil War Preservation Trust. In 2011, the organization was renamed, again becoming the Civil War Trust. After expanding its mission in 2014 to include battlefields of the Revolutionary War and War of 1812, the non-profit became the American Battlefield Trust in May 2018, operating with two divisions, the Civil War Trust and the Revolutionary War Trust. From 1987 through May 2018, the Trust and its predecessor organizations, along with their partners, preserved 49,893 acres of battlefield land through acquisition of property or conservation easements at more than 130 battlefields in 24 states. The five major Civil War battlefield parks operated by the National Park Service (Gettysburg, Antietam, Shiloh, Chickamauga/Chattanooga and Vicksburg) had a combined 3.1 million visitors in 2018, down 70% from 10.2 million in 1970. Attendance at Gettysburg in 2018 was 950,000, a decline of 86% since 1970. The American Civil War has been commemorated in many capacities ranging from the reenactment of battles to statues and memorial halls erected, to films being produced, to stamps and coins with Civil War themes being issued, all of which helped to shape public memory. This varied advent occurred in greater proportions on the 100th and 150th anniversary. Hollywood's take on the war has been especially influential in shaping public memory, as seen in such film classics as "Birth of a Nation" (1915), "Gone with the Wind" (1939), and more recently "Lincoln" (2012). Ken Burns's PBS television series "The Civil War" (1990) is especially well remembered, though criticized for its historiography. Numerous technological innovations during the Civil War had a great impact on 19th-century science. The Civil War was one of the earliest examples of an "industrial war", in which technological might is used to achieve military supremacy in a war. New inventions, such as the train and telegraph, delivered soldiers, supplies and messages at a time when horses were considered to be the fastest way to travel. It was also in this war when countries first used aerial warfare, in the form of reconnaissance balloons, to a significant effect. 
It saw the first action involving steam-powered ironclad warships in naval warfare history. Repeating firearms such as the Henry rifle, Spencer rifle, Colt revolving rifle, and Triplett & Scott carbine first appeared during the Civil War; they were a revolutionary invention that would soon replace muzzle-loading and single-shot firearms in warfare. The war also saw the first appearance of rapid-firing weapons and machine guns such as the Agar gun and the Gatling gun. The Civil War is one of the most studied events in American history, and the collection of cultural works around it is enormous. This section gives an abbreviated overview of the most notable works.
https://en.wikipedia.org/wiki?curid=863
Andy Warhol Andy Warhol (born Andrew Warhola; August 6, 1928 – February 22, 1987) was an American artist, film director, and producer who was a leading figure in the visual art movement known as pop art. His works explore the relationship between artistic expression, advertising, and celebrity culture that flourished by the 1960s, and span a variety of media, including painting, silkscreening, photography, film, and sculpture. Some of his best-known works include the silkscreen paintings "Campbell's Soup Cans" (1962) and "Marilyn Diptych" (1962), the experimental film "Chelsea Girls" (1966), and the multimedia events known as the "Exploding Plastic Inevitable" (1966–67). Born and raised in Pittsburgh, Warhol initially pursued a successful career as a commercial illustrator. After exhibiting his work in several galleries in the late 1950s, he began to receive recognition as an influential and controversial artist. His New York studio, The Factory, became a well-known gathering place that brought together distinguished intellectuals, drag queens, playwrights, Bohemian street people, Hollywood celebrities, and wealthy patrons. He promoted a collection of personalities known as Warhol superstars, and is credited with inspiring the widely used expression "15 minutes of fame". In the late 1960s he managed and produced the experimental rock band The Velvet Underground and founded "Interview" magazine. He authored numerous books, including "The Philosophy of Andy Warhol". He lived openly as a gay man before the gay liberation movement. After gallbladder surgery, Warhol died of cardiac arrhythmia in February 1987 at the age of 58. Warhol has been the subject of numerous retrospective exhibitions, books, and feature and documentary films. The Andy Warhol Museum in his native city of Pittsburgh, which holds an extensive permanent collection of art and archives, is the largest museum in the United States dedicated to a single artist. Many of his creations are very collectible and highly valuable. The highest price ever paid for a Warhol painting is US$105 million for a 1963 canvas titled "Silver Car Crash (Double Disaster)"; his works include some of the most expensive paintings ever sold. A 2009 article in "The Economist" described Warhol as the "bellwether of the art market". Warhol was born on August 6, 1928, in Pittsburgh, Pennsylvania. He was the fourth child of Ondrej Warhola (Americanized as Andrew Warhola, Sr., 1889–1942) and Julia ("née" Zavacká, 1892–1972), whose first child was born in their homeland of Austria-Hungary and died before their move to the U.S. His parents were working-class Lemko emigrants from Mikó, Austria-Hungary (now called Miková, located in today's northeastern Slovakia). Warhol's father emigrated to the United States in 1914, and his mother joined him in 1921, after the death of Warhol's grandparents. Warhol's father worked in a coal mine. The family lived at 55 Beelen Street and later at 3252 Dawson Street in the Oakland neighborhood of Pittsburgh. The family was Ruthenian Catholic and attended St. John Chrysostom Byzantine Catholic Church. Andy Warhol had two elder brothers—Pavol (Paul), the eldest, was born before the family emigrated; Ján was born in Pittsburgh. Pavol's son, James Warhola, became a successful children's book illustrator. In third grade, Warhol had Sydenham's chorea (also known as St. 
Vitus' Dance), the nervous system disease that causes involuntary movements of the extremities, which is believed to be a complication of scarlet fever and which causes skin pigmentation blotchiness. At times when he was confined to bed, he drew, listened to the radio and collected pictures of movie stars around his bed. Warhol later described this period as very important in the development of his personality, skill-set and preferences. When Warhol was 13, his father died in an accident. As a teenager, Warhol graduated from Schenley High School in 1945. Also as a teen, Warhol won a Scholastic Art and Writing Award. After graduating from high school, he intended to study art education at the University of Pittsburgh in the hope of becoming an art teacher, but his plans changed and he enrolled in the Carnegie Institute of Technology, now Carnegie Mellon University in Pittsburgh, where he studied commercial art. During his time there, Warhol joined the campus Modern Dance Club and Beaux Arts Society. He also served as art director of the student art magazine, "Cano", illustrating a cover in 1948 and a full-page interior illustration in 1949. These are believed to be his first two published artworks. Warhol earned a Bachelor of Fine Arts in pictorial design in 1949. Later that year, he moved to New York City and began a career in magazine illustration and advertising. Warhol's early career was dedicated to commercial and advertising art, where his first commission had been to draw shoes for "Glamour" magazine in the late 1940s. In the 1950s, Warhol worked as a designer for shoe manufacturer Israel Miller. American photographer John Coplans recalled that Warhol's "whimsical" ink drawings of shoe advertisements figured in some of his earliest showings at the Bodley Gallery in New York. Warhol was an early adopter of the silk screen printmaking process as a technique for making paintings. A young Warhol was taught silk screen printmaking techniques by Max Arthur Cohn at his graphic arts business in Manhattan. While working in the shoe industry, Warhol developed his "blotted line" technique, applying ink to paper and then blotting the ink while still wet, which was akin to a printmaking process on the most rudimentary scale. His use of tracing paper and ink allowed him to repeat the basic image and also to create endless variations on the theme, a method that prefigures his 1960s silk-screen canvas. In one of his books, Warhol writes: "When you do something exactly wrong, you always turn up something." Warhol habitually used the expedient of tracing photographs projected with an epidiascope. Using prints by Edward Wallowitch, his 'first boyfriend', the photographs would undergo a subtle transformation during Warhol's often cursory tracing of contours and hatching of shadows. Warhol used Wallowitch's photograph "Young Man Smoking a Cigarette" (c. 1956) for a 1958 design for a book cover he submitted to Simon and Schuster for the Walter Ross pulp novel "The Immortal", and later used others for his dollar bill series, and for "Big Campbell's Soup Can with Can Opener (Vegetable)" of 1962, which initiated Warhol's most sustained motif, the soup can. With the rapid expansion of the record industry, RCA Records hired Warhol, along with another freelance artist, Sid Maurer, to design album covers and promotional materials. He began exhibiting his work during the 1950s. 
He held exhibitions at the Hugo Gallery and the Bodley Gallery in New York City; in California, his first West Coast gallery exhibition was on July 9, 1962, in the Ferus Gallery of Los Angeles with Campbell's Soup Cans. The exhibition marked his West Coast debut of pop art. Andy Warhol's first New York solo pop art exhibition was hosted at Eleanor Ward's Stable Gallery November 6–24, 1962. The exhibit included the works "Marilyn Diptych", "100 Soup Cans", "100 Coke Bottles", and "100 Dollar Bills". At the Stable Gallery exhibit, the artist met for the first time poet John Giorno, who would star in Warhol's first film, "Sleep", in 1963. It was during the 1960s that Warhol began to make paintings of iconic American objects such as dollar bills, mushroom clouds, electric chairs, Campbell's Soup Cans, Coca-Cola bottles, celebrities such as Marilyn Monroe, Elvis Presley, Marlon Brando, Troy Donahue, Muhammad Ali, and Elizabeth Taylor, as well as newspaper headlines or photographs of police dogs attacking African-American protesters during the Birmingham campaign in the civil rights movement. During these years, he founded his studio, "The Factory", and gathered about him a wide range of artists, writers, musicians, and underground celebrities. His work became popular and controversial. Coca-Cola in particular appealed to Warhol as a subject. New York City's Museum of Modern Art hosted a symposium on pop art in December 1962 during which artists such as Warhol were attacked for "capitulating" to consumerism. Critics were scandalized by Warhol's open embrace of market culture. This symposium set the tone for Warhol's reception. A pivotal event was the 1964 exhibit "The American Supermarket", a show held in Paul Bianchini's Upper East Side gallery. The show was presented as a typical U.S. small supermarket environment, except that everything in it—from the produce, canned goods, meat, posters on the wall, etc.—was created by six prominent pop artists of the time, among them the controversial (and like-minded) Billy Apple, Mary Inman, and Robert Watts. Warhol's painting of a can of Campbell's soup cost $1,500 while each autographed can sold for $6. The exhibit was one of the first mass events that directly confronted the general public with both pop art and the perennial question of what art is. As an advertisement illustrator in the 1950s, Warhol used assistants to increase his productivity. Collaboration would remain a defining (and controversial) aspect of his working methods throughout his career; this was particularly true in the 1960s. One of the most important collaborators during this period was Gerard Malanga. Malanga assisted the artist with the production of silkscreens, films, sculpture, and other works at "The Factory", Warhol's aluminum foil-and-silver-paint-lined studio on 47th Street (later moved to Broadway). Other members of Warhol's Factory crowd included Freddie Herko, Ondine, Ronald Tavel, Mary Woronov, Billy Name, and Brigid Berlin (from whom he apparently got the idea to tape-record his phone conversations). During the 1960s, Warhol also groomed a retinue of bohemian and counterculture eccentrics upon whom he bestowed the designation "Superstars", including Nico, Joe Dallesandro, Edie Sedgwick, Viva, Ultra Violet, Holly Woodlawn, Jackie Curtis, and Candy Darling. These people all participated in the Factory films, and some—like Berlin—remained friends with Warhol until his death. 
Important figures in the New York underground art/cinema world, such as writer John Giorno and film-maker Jack Smith, also appear in Warhol films (many premiering at the New Andy Warhol Garrick Theatre and 55th Street Playhouse) of the 1960s, revealing Warhol's connections to a diverse range of artistic scenes during this time. Less well known was his support and collaboration with several teenagers during this era, who would achieve prominence later in life including writer David Dalton, photographer Stephen Shore and artist Bibbe Hansen (mother of pop musician Beck). On June 3, 1968, radical feminist writer Valerie Solanas shot Warhol and Mario Amaya, art critic and curator, at Warhol's studio. Before the shooting, Solanas had been a marginal figure in the Factory scene. She authored in 1967 the "S.C.U.M. Manifesto", a separatist feminist tract that advocated the elimination of men; and appeared in the 1968 Warhol film "I, a Man". Earlier on the day of the attack, Solanas had been turned away from the Factory after asking for the return of a script she had given to Warhol. The script had apparently been misplaced. Amaya received only minor injuries and was released from the hospital later the same day. Warhol was seriously wounded by the attack and barely survived: surgeons opened his chest and massaged his heart to help stimulate its movement again. He suffered physical effects for the rest of his life, including being required to wear a surgical corset. The shooting had a profound effect on Warhol's life and art. Solanas was arrested the day after the assault, after turning herself in to police. By way of explanation, she said that Warhol "had too much control over my life." She was subsequently diagnosed with paranoid schizophrenia and eventually sentenced to three years under the control of the Department of Corrections. After the shooting the Factory scene heavily increased its security, and for many the "Factory 60s" ended. Warhol had this to say about the attack: "Before I was shot, I always thought that I was more half-there than all-there—I always suspected that I was watching TV instead of living life. People sometimes say that the way things happen in movies is unreal, but actually it's the way things happen in life that's unreal. The movies make emotions look so strong and real, whereas when things really do happen to you, it's like watching television—you don't feel anything. Right when I was being shot and ever since, I knew that I was watching television. The channels switch, but it's all television." Compared to the success and scandal of Warhol's work in the 1960s, the 1970s were a much quieter decade, as he became more entrepreneurial. According to Bob Colacello, Warhol devoted much of his time to rounding up new, rich patrons for portrait commissions—including Shah of Iran Mohammad Reza Pahlavi, his wife Empress Farah Pahlavi, his sister Princess Ashraf Pahlavi, Mick Jagger, Liza Minnelli, John Lennon, Diana Ross, and Brigitte Bardot. Warhol's famous portrait of Chinese Communist leader Mao Zedong was created in 1973. He also founded, with Gerard Malanga, "Interview" magazine, and published "The Philosophy of Andy Warhol" (1975). An idea expressed in the book: "Making money is art, and working is art and good business is the best art." Warhol socialized at various nightspots in New York City, including Max's Kansas City and, later in the 1970s, Studio 54. He was generally regarded as quiet, shy, and a meticulous observer. 
Art critic Robert Hughes called him "the white mole of Union Square." In 1979, along with his longtime friend Stuart Pivar, Warhol founded the New York Academy of Art. Warhol had a re-emergence of critical and financial success in the 1980s, partially due to his affiliation and friendships with a number of prolific younger artists, who were dominating the "bull market" of 1980s New York art: Jean-Michel Basquiat, Julian Schnabel, David Salle and other so-called Neo-Expressionists, as well as members of the Transavantgarde movement in Europe, including Francesco Clemente and Enzo Cucchi. Before the 1984 Sarajevo Winter Olympics, he teamed with 15 other artists, including David Hockney and Cy Twombly, and contributed a Speed Skater print to the Art and Sport collection. The Speed Skater was used for the official Sarajevo Winter Olympics poster. By this time, graffiti artist Fab Five Freddy paid homage to Warhol when he painted an entire train with Campbell soup cans. This was instrumental in Freddy becoming involved in the underground NYC art scene and becoming an affiliate of Basquiat. By this period, Warhol was being criticized for becoming merely a "business artist". In 1979, reviewers disliked his exhibits of portraits of 1970s personalities and celebrities, calling them superficial, facile and commercial, with no depth or indication of the significance of the subjects. They also criticized his 1980 exhibit of 10 portraits at the Jewish Museum in Manhattan, entitled "Jewish Geniuses", which Warhol—who was uninterested in Judaism and Jews—had described in his diary as "They're going to sell." In hindsight, however, some critics have come to view Warhol's superficiality and commerciality as "the most brilliant mirror of our times," contending that "Warhol had captured something irresistible about the zeitgeist of American culture in the 1970s." Warhol also had an appreciation for intense Hollywood glamour. He once said: "I love Los Angeles. I love Hollywood. They're so beautiful. Everything's plastic, but I love plastic. I want to be plastic." In 1984 Vanity Fair commissioned Warhol to produce a portrait of Prince, in order to accompany an article that celebrated the success of "Purple Rain" and its accompanying movie. Referencing the many celebrity portraits produced by Warhol across his career, "Orange" "Prince (1984)" was created using a similar composition to the Marilyn "Flavors" series from 1962, among some of Warhol's very first celebrity portraits. Prince is depicted in a pop color palette commonly used by Warhol, in bright orange with highlights of bright green and blue. The facial features and hair are screen-printed in black over the orange background. In the "Andy Warhol Diaries", Warhol recorded how excited he was to see Prince and Billy Idol together at a party in the mid 1980s, and he compared them to the Hollywood movie stars of the 1950s and 1960s who also inspired his portraits: "...seeing these two glamour boys, its like "boys" are the new Hollywood glamour girls, like Jean Harlow and Marilyn Monroe". Warhol died in Manhattan at 6:32 a.m. on February 22, 1987, at age 58. According to news reports, he had been making a good recovery from gallbladder surgery at New York Hospital before dying in his sleep from a sudden post-operative irregular heartbeat. Prior to his diagnosis and operation, Warhol delayed having his recurring gallbladder problems checked, as he was afraid to enter hospitals and see doctors. 
His family sued the hospital for inadequate care, saying that the arrhythmia was caused by improper care and water intoxication. The malpractice case was quickly settled out of court; Warhol's family received an undisclosed sum of money. Shortly before Warhol's death, doctors expected Warhol to survive the surgery, though a re-evaluation of the case about thirty years after his death showed many indications that Warhol's surgery was in fact riskier than originally thought. It was widely reported at the time that Warhol died of a "routine" surgery, though when considering factors such as his age, a family history of gallbladder problems, his previous gunshot wound, and his medical state in the weeks leading up to the procedure, the potential risk of death following the surgery appeared to have been significant. Warhol's brothers took his body back to Pittsburgh, where an open-coffin wake was held at the Thomas P. Kunsak Funeral Home. The solid bronze casket had gold-plated rails and white upholstery. Warhol was dressed in a black cashmere suit, a paisley tie, a platinum wig, and sunglasses. He was laid out holding a small prayer book and a red rose. The funeral liturgy was held at the Holy Ghost Byzantine Catholic Church on Pittsburgh's North Side. The eulogy was given by Monsignor Peter Tay. Yoko Ono and John Richardson were speakers. The coffin was covered with white roses and asparagus ferns. After the liturgy, the coffin was driven to St. John the Baptist Byzantine Catholic Cemetery in Bethel Park, a south suburb of Pittsburgh. At the grave, the priest said a brief prayer and sprinkled holy water on the casket. Before the coffin was lowered, Paige Powell dropped a copy of "Interview" magazine, an "Interview" T-shirt, and a bottle of the Estee Lauder perfume "Beautiful" into the grave. Warhol was buried next to his mother and father. A memorial service was held in Manhattan for Warhol on April 1, 1987, at St. Patrick's Cathedral, New York. By the beginning of the 1960s, pop art was an experimental form that several artists were independently adopting; some of these pioneers, such as Roy Lichtenstein, would later become synonymous with the movement. Warhol, who would become famous as the "Pope of Pop", turned to this new style, where popular subjects could be part of the artist's palette. His early paintings show images taken from cartoons and advertisements, hand-painted with paint drips. Marilyn Monroe was a pop art painting that Warhol had done and it was very popular. Those drips emulated the style of successful abstract expressionists (such as Willem de Kooning). Warhol's first pop art paintings were displayed in April 1961, serving as the backdrop for New York Department Store Bonwit Teller's window display. This was the same stage his Pop Art contemporaries Jasper Johns, James Rosenquist and Robert Rauschenberg had also once graced. It was the gallerist Muriel Latow who came up with the ideas for both the soup cans and Warhol's dollar paintings. On November 23, 1961, Warhol wrote Latow a check for $50 which, according to the 2009 Warhol biography, "Pop, The Genius of Warhol", was payment for coming up with the idea of the soup cans as subject matter. For his first major exhibition, Warhol painted his famous cans of Campbell's soup, which he claimed to have had for lunch for most of his life. A 1964 "Large Campbell's Soup Can" was sold in a 2007 Sotheby's auction to a South American collector for £5.1 million ($7.4 million). He loved celebrities, so he painted them as well. 
From these beginnings he developed his later style and subjects. Instead of working on a signature subject matter, as he started out to do, he worked more and more on a signature style, slowly eliminating the handmade from the artistic process. Warhol frequently used silk-screening; his later drawings were traced from slide projections. At the height of his fame as a painter, Warhol had several assistants who produced his silk-screen multiples, following his directions to make different versions and variations. In 1979, Warhol was commissioned by BMW to paint a Group-4 race version of the then "elite supercar" BMW M1 for the fourth installment in the BMW Art Car Project. It was reported at the time that, unlike the three artists before him, Warhol opted to paint directly onto the automobile himself instead of letting technicians transfer his scale-model design to the car. It was indicated that Warhol spent only a total of 23 minutes to paint the entire car. Warhol produced both comic and serious works; his subject could be a soup can or an electric chair. Warhol used the same techniques—silkscreens, reproduced serially, and often painted with bright colors—whether he painted celebrities, everyday objects, or images of suicide, car crashes, and disasters, as in the 1962–63 "Death and Disaster" series. The "Death and Disaster" paintings included "Red Car Crash", "Purple Jumping Man", and "Orange Disaster." One of these paintings, the diptych "Silver Car Crash", became his highest-priced work when it sold at Sotheby's Contemporary Art Auction on Wednesday, November 13, 2013, for $105.4 million. Some of Warhol's work, as well as his own personality, has been described as being Keatonesque. Warhol has been described as playing dumb to the media. He sometimes refused to explain his work. He has suggested that all one needs to know about his work is "already there 'on the surface'." His Rorschach inkblots are intended as pop comments on art and what art could be. His cow wallpaper (literally, wallpaper with a cow motif) and his oxidation paintings (canvases prepared with copper paint that was then oxidized with urine) are also noteworthy in this context. Equally noteworthy is the way these works—and their means of production—mirrored the atmosphere at Andy's New York "Factory". Biographer Bob Colacello has provided further details on Andy's "piss paintings". Warhol's first portrait of "Basquiat" (1982) is a black photo-silkscreen over an oxidized copper "piss painting". After many years of silkscreen, oxidation, photography, etc., Warhol returned to painting with a brush in hand in a series of more than 50 large collaborative works done with Jean-Michel Basquiat between 1984 and 1986. Despite negative criticism when these were first shown, Warhol called some of them "masterpieces," and they were influential for his later work. Andy Warhol was commissioned in 1984 by collector and gallerist Alexander Iolas to produce work based on Leonardo da Vinci's "The Last Supper" for an exhibition at the old refectory of the Palazzo delle Stelline in Milan, opposite from the Santa Maria delle Grazie where Leonardo da Vinci's mural can be seen. Warhol exceeded the demands of the commission and produced nearly 100 variations on the theme, mostly silkscreens and paintings, and among them a collaborative sculpture with Basquiat, the "Ten Punching Bags (Last Supper)". The Milan exhibition, which opened in January 1987 with a set of 22 silk-screens, was the last exhibition for both the artist and the gallerist. 
The series of "The Last Supper" was seen by some as "arguably his greatest," but by others as "wishy-washy, religiose" and "spiritless." It is the largest series of religious-themed works by any U.S. artist. Artist Maurizio Cattelan describes that it is difficult to separate daily encounters from the art of Andy Warhol: "That's probably the greatest thing about Warhol: the way he penetrated and summarized our world, to the point that distinguishing between him and our everyday life is basically impossible, and in any case useless." Warhol was an inspiration towards Cattelan's magazine and photography compilations, such as "Permanent Food, Charley", and "Toilet Paper". In the period just before his death, Warhol was working on "Cars", a series of paintings for Mercedes-Benz. A self-portrait by Andy Warhol (1963–64), which sold in New York at the May Post-War and Contemporary evening sale in Christie's, fetched $38.4 million. On May 9, 2012, his classic painting "Double Elvis (Ferus Type)" sold at auction at Sotheby's in New York for US$33 million. With commission, the sale price totaled US$37,042,500, short of the $50 million that Sotheby's had predicted the painting might bring. The piece (silkscreen ink and spray paint on canvas) shows Elvis Presley in a gunslinger pose. It was first exhibited in 1963 at the Ferus Gallery in Los Angeles. Warhol made 22 versions of the "Double Elvis", nine of which are held in museums. In November 2013, his "Silver Car Crash (Double Disaster)" diptych sold at Sotheby's Contemporary Art Auction for $105.4 million, a new record for the pop artist (pre-auction estimates were at $80 million). Created in 1963, this work had rarely been seen in public in the previous years. In November 2014, "Triple Elvis" sold for $81.9m (£51.9m) at an auction in New York. Warhol worked across a wide range of media—painting, photography, drawing, and sculpture. In addition, he was a highly prolific filmmaker. Between 1963 and 1968, he made more than 60 films, plus some 500 short black-and-white "screen test" portraits of Factory visitors. One of his most famous films, "Sleep", monitors poet John Giorno sleeping for six hours. The 35-minute film "Blow Job" is one continuous shot of the face of DeVeren Bookwalter supposedly receiving oral sex from filmmaker Willard Maas, although the camera never tilts down to see this. Another, "Empire" (1964), consists of eight hours of footage of the Empire State Building in New York City at dusk. The film "Eat" consists of a man eating a mushroom for 45 minutes. Warhol attended the 1962 premiere of the static composition by LaMonte Young called "Trio for Strings" and subsequently created his famous series of static films including "Kiss", "Eat", and "Sleep" (for which Young initially was commissioned to provide music). Uwe Husslein cites filmmaker Jonas Mekas, who accompanied Warhol to the Trio premiere, and who claims Warhol's static films were directly inspired by the performance. "Batman Dracula" is a 1964 film that was produced and directed by Warhol, without the permission of DC Comics. It was screened only at his art exhibits. A fan of the "Batman" series, Warhol's movie was an "homage" to the series, and is considered the first appearance of a blatantly campy Batman. The film was until recently thought to have been lost, until scenes from the picture were shown at some length in the 2006 documentary "Jack Smith and the Destruction of Atlantis". 
Warhol's 1965 film "Vinyl" is an adaptation of Anthony Burgess' popular dystopian novel "A Clockwork Orange". Others record improvised encounters between Factory regulars such as Brigid Berlin, Viva, Edie Sedgwick, Candy Darling, Holly Woodlawn, Ondine, Nico, and Jackie Curtis. Legendary underground artist Jack Smith appears in the film "Camp". His most popular and critically successful film was "Chelsea Girls" (1966). The film was highly innovative in that it consisted of two 16 mm-films being projected simultaneously, with two different stories being shown in tandem. From the projection booth, the sound would be raised for one film to elucidate that "story" while it was lowered for the other. The multiplication of images evoked Warhol's seminal silk-screen works of the early 1960s. Warhol was a fan of filmmaker Radley Metzger film work and commented that Metzger's film, "The Lickerish Quartet", was "an outrageously kinky masterpiece". "Blue Movie"—a film in which Warhol superstar Viva makes love in bed with Louis Waldon, another Warhol superstar—was Warhol's last film as director. The film, a seminal film in the Golden Age of Porn, was, at the time, controversial for its frank approach to a sexual encounter. "Blue Movie" was publicly screened in New York City in 2005, for the first time in more than 30 years. In the wake of the 1968 shooting, a reclusive Warhol relinquished his personal involvement in filmmaking. His acolyte and assistant director, Paul Morrissey, took over the film-making chores for the Factory collective, steering Warhol-branded cinema towards more mainstream, narrative-based, B-movie exploitation fare with "Flesh", "Trash", and "Heat". All of these films, including the later "Andy Warhol's Dracula" and "Andy Warhol's Frankenstein", were far more mainstream than anything Warhol as a director had attempted. These latter "Warhol" films starred Joe Dallesandro—more of a Morrissey star than a true Warhol superstar. In the early 1970s, most of the films directed by Warhol were pulled out of circulation by Warhol and the people around him who ran his business. After Warhol's death, the films were slowly restored by the Whitney Museum and are occasionally projected at museums and film festivals. Few of the Warhol-directed films are available on video or DVD. In the mid-1960s, Warhol adopted the band the Velvet Underground, making them a crucial element of the Exploding Plastic Inevitable multimedia performance art show. Warhol, with Paul Morrissey, acted as the band's manager, introducing them to Nico (who would perform with the band at Warhol's request). While managing The Velvet Underground, Andy would have them dressed in all black to perform in front of movies that he was also presenting. In 1966 he "produced" their first album "The Velvet Underground & Nico", as well as providing its album art. His actual participation in the album's production amounted to simply paying for the studio time. After the band's first album, Warhol and band leader Lou Reed started to disagree more about the direction the band should take, and their artistic friendship ended. In 1989, after Warhol's death, Reed and John Cale re-united for the first time since 1972 to write, perform, record and release the concept album "Songs for Drella", a tribute to Warhol. 
In October 2019, an audio tape of publicly unknown music by Reed, based on Warhol's 1975 book, "The Philosophy of Andy Warhol: From A to B and Back Again", was reported to have been discovered in an archive at the Andy Warhol Museum in Pittsburgh. Warhol designed many album covers for various artists, starting with the photographic cover of John Wallowitch's debut album, "This Is John Wallowitch!!!" (1964). He designed the cover art for The Rolling Stones' albums "Sticky Fingers" (1971) and "Love You Live" (1977), and the John Cale albums "The Academy in Peril" (1972) and "Honi Soit" (1981). One of Warhol's last works was a portrait of Aretha Franklin for the cover of her 1986 gold album "Aretha", which was done in the style of the "Reigning Queens" series he had completed the year before. Warhol strongly influenced the new wave/punk rock band Devo, as well as David Bowie. Bowie recorded a song called "Andy Warhol" for his 1971 album "Hunky Dory". In 1968, Lou Reed wrote the song "Andy's Chest" about Valerie Solanas, the woman who shot Warhol. He recorded it with the Velvet Underground, and this version was released on the "VU" album in 1985. Bowie would later play Warhol in the 1996 movie "Basquiat"; Bowie recalled how meeting Warhol in real life helped him in the role, and recounted his early meetings with him. The band Triumph also wrote a song about Andy Warhol, "Stranger In A Strange Land", from their 1984 album "Thunder Seven". Beginning in the early 1950s, Warhol produced several unbound portfolios of his work. The first of several bound self-published books by Warhol was "25 Cats Name Sam and One Blue Pussy", printed in 1954 by Seymour Berlin on Arches brand watermarked paper using his blotted line technique for the lithographs. The original edition was limited to 190 numbered, hand-colored copies, using Dr. Martin's ink washes. Most of these were given by Warhol as gifts to clients and friends. Copy No. 4, inscribed "Jerry" on the front cover and given to Geraldine Stutz, was used for a facsimile printing in 1987, and the original was auctioned in May 2006 for US$35,000 by Doyle New York. Warhol self-published several other books as well. His book "A La Recherche du Shoe Perdu" (1955) marked his "transition from commercial to gallery artist". (The title is a play on words by Warhol on the title of French author Marcel Proust's "À la recherche du temps perdu".) After gaining fame, Warhol "wrote" several books that were commercially published. He also created the fashion magazine "Interview", which is still published today. The loopy title script on the cover is thought to be either his own handwriting or that of his mother, Julia Warhola, who would often do text work for his early commercial pieces. Although Andy Warhol is best known for his paintings and films, he authored works in many different media. He founded the gossip magazine "Interview", a stage for celebrities he "endorsed" and a business staffed by his friends. He collaborated with others on all of his books (some of which were written with Pat Hackett). He adopted the young painter Jean-Michel Basquiat and the band The Velvet Underground, presenting them to the public as his latest interest and collaborating with them. One might even say that he produced people (as in the Warholian "Superstar" and the Warholian portrait). 
He endorsed products, appeared in commercials, and made frequent celebrity guest appearances on television shows and in films (he appeared in everything from "Love Boat" to "Saturday Night Live" and the Richard Pryor movie "Dynamite Chicken"). In this respect Warhol was a fan of "Art Business" and "Business Art"—he, in fact, wrote about his interest in thinking about art as business in "The Philosophy of Andy Warhol from A to B and Back Again". Warhol was homosexual. In 1980, he told an interviewer that he was still a virgin. Biographer Bob Colacello, who was present at the interview, felt it was probably true and that what little sex he had was probably "a mixture of voyeurism and masturbation—to use [Andy's] word "abstract"". Warhol's assertion of virginity would seem to be contradicted by his hospital treatment in 1960 for condylomata, a sexually transmitted disease. It has also been contradicted by his lovers, including Warhol muse BillyBoy, who has said they had sex to orgasm: "When he wasn't being Andy Warhol and when you were just alone with him he was an incredibly generous and very kind person. What seduced me was the Andy Warhol who I saw alone. In fact when I was with him in public he kind of got on my nerves...I'd say: 'You're just obnoxious, I can't bear you.'" Billy Name also denied that Warhol was only a voyeur, saying: "He was the essence of sexuality. It permeated everything. Andy exuded it, along with his great artistic creativity...It brought a joy to the whole art world in New York." "But his personality was so vulnerable that it became a defense to put up the blank front." Warhol's lovers included John Giorno, Billy Name, Charles Lisanby, and Jon Gould. His boyfriend of 12 years was Jed Johnson, whom he met in 1968, and who later achieved fame as an interior designer. The fact that Warhol's homosexuality influenced his work and shaped his relationship to the art world is a major subject of scholarship on the artist and is an issue that Warhol himself addressed in interviews, in conversation with his contemporaries, and in his publications (e.g., "Popism: The Warhol Sixties"). Throughout his career, Warhol produced erotic photography and drawings of male nudes. Many of his most famous works (portraits of Liza Minnelli, Judy Garland, and Elizabeth Taylor, and films such as "Blow Job", "My Hustler" and "Lonesome Cowboys") draw from gay underground culture or openly explore the complexity of sexuality and desire. As has been addressed by a range of scholars, many of his films premiered in gay porn theaters, including the New Andy Warhol Garrick Theatre and 55th Street Playhouse, in the late 1960s. The first works that Warhol submitted to a fine art gallery, homoerotic drawings of male nudes, were rejected for being too openly gay. In "Popism", furthermore, the artist recalls a conversation with the filmmaker Emile de Antonio about the difficulty Warhol had being accepted socially by the then-more-famous (but closeted) gay artists Jasper Johns and Robert Rauschenberg. De Antonio explained that Warhol was "too swish and that upsets them." In response to this, Warhol writes, "There was nothing I could say to that. It was all too true. So I decided I just wasn't going to care, because those were all the things that I didn't want to change anyway, that I didn't think I 'should' want to change ... Other people could change their attitudes but not me". 
In exploring Warhol's biography, many turn to this period—the late 1950s and early 1960s—as a key moment in the development of his persona. Some have suggested that his frequent refusal to comment on his work, to speak about himself (confining himself in interviews to responses like "Um, no" and "Um, yes", and often allowing others to speak for him)—and even the evolution of his pop style—can be traced to the years when Warhol was first dismissed by the inner circles of the New York art world. Warhol was a practicing Ruthenian Catholic. He regularly volunteered at homeless shelters in New York City, particularly during the busier times of the year, and described himself as a religious person. Many of Warhol's later works depicted religious subjects, including two series, "Details of Renaissance Paintings" (1984) and "The Last Supper" (1986). In addition, a body of religious-themed works was found posthumously in his estate. During his life, Warhol regularly attended Liturgy, and the priest at Warhol's church, Saint Vincent Ferrer, said that the artist went there almost daily, although he was not observed taking Communion or going to Confession and sat or knelt in the pews at the back. The priest thought he was afraid of being recognized; Warhol said he was self-conscious about being seen in a Roman Rite church crossing himself "in the Orthodox way" (right to left instead of the reverse). His art is noticeably influenced by the Eastern Christian tradition which was so evident in his places of worship. Warhol's brother has described the artist as "really religious, but he didn't want people to know about that because [it was] private". Despite the private nature of his faith, in Warhol's eulogy John Richardson depicted it as devout: "To my certain knowledge, he was responsible for at least one conversion. He took considerable pride in financing his nephew's studies for the priesthood". Warhol was an avid collector. His friends referred to his numerous collections, which filled not only his four-story townhouse, but also a nearby storage unit, as "Andy's Stuff." The true extent of his collections was not discovered until after his death, when The Andy Warhol Museum in Pittsburgh took in 641 boxes of his "Stuff." Warhol's collections included a Coca-Cola memorabilia sign, and 19th century paintings along with airplane menus, unpaid invoices, pizza dough, pornographic pulp novels, newspapers, stamps, supermarket flyers, and cookie jars, among other eccentricities. It also included significant works of art, such as George Bellows's "Miss Bentham". One of his main collections was his wigs. Warhol owned more than 40 and felt very protective of his hairpieces, which were sewn by a New York wig-maker from hair imported from Italy. In 1985 a girl snatched Warhol's wig off his head. It was later discovered in Warhol's diary entry for that day that he wrote: "I don't know what held me back from pushing her over the balcony." In 1960, he had bought a drawing of a light bulb by Jasper Johns. Another item found in Warhol's boxes at the museum in Pittsburgh was a mummified human foot from Ancient Egypt. The curator of anthropology at Carnegie Museum of Natural History felt that Warhol most likely found it at a flea market. Among Warhol's early collectors and influential supporters were Emily and Burton Tremaine. 
Of the more than 15 artworks they purchased, "Marilyn Diptych" (now at Tate Modern, London) and "A Boy for Meg" (now at the National Gallery of Art in Washington, DC) were acquired directly out of Warhol's studio in 1962. One Christmas, Warhol left a small "Head of Marilyn Monroe" by the Tremaines' door at their New York apartment in gratitude for their support and encouragement. Warhol's will dictated that his entire estate—with the exception of a few modest legacies to family members—would go to create a foundation dedicated to the "advancement of the visual arts". Warhol had so many possessions that it took Sotheby's nine days to auction his estate after his death; the auction grossed more than US$20 million. In 1987, in accordance with Warhol's will, the Andy Warhol Foundation for the Visual Arts was established. The foundation serves as the estate of Andy Warhol, but also has a mission "to foster innovative artistic expression and the creative process" and is "focused primarily on supporting work of a challenging and often experimental nature." The Artists Rights Society is the U.S. copyright representative for the Andy Warhol Foundation for the Visual Arts for all Warhol works with the exception of Warhol film stills. The U.S. copyright representative for Warhol film stills is the Warhol Museum in Pittsburgh. Additionally, the Andy Warhol Foundation for the Visual Arts has agreements in place for its image archive. All digital images of Warhol are exclusively managed by Corbis, while all transparency images of Warhol are managed by Art Resource. The Andy Warhol Foundation released its "20th Anniversary Annual Report" as a three-volume set in 2007: Vol. I, 1987–2007; Vol. II, Grants & Exhibitions; and Vol. III, Legacy Program. The Foundation remains one of the largest grant-giving organizations for the visual arts in the U.S. Many of Warhol's works and possessions are on display at The Andy Warhol Museum in Pittsburgh. The foundation donated more than 3,000 works of art to the museum. Warhol appeared as himself in the film "Cocaine Cowboys" (1979) and in the film "Tootsie" (1982). After his death, Warhol was portrayed by Crispin Glover in Oliver Stone's film "The Doors" (1991), by David Bowie in Julian Schnabel's film "Basquiat" (1996), and by Jared Harris in Mary Harron's film "I Shot Andy Warhol" (1996). Warhol appears as a character in Michael Daugherty's opera "Jackie O" (1997). Actor Mark Bringleson makes a brief cameo as Warhol in "" (1997). Many films by avant-garde cineast Jonas Mekas have captured moments of Warhol's life. Sean Gregory Sullivan depicted Warhol in the film "54" (1998). Guy Pearce portrayed Warhol in the film "Factory Girl" (2007) about Edie Sedgwick's life. Actor Greg Travis portrays Warhol in a brief scene from the film "Watchmen" (2009). In the movie "Highway to Hell", a group of Andy Warhols are part of the "Good Intentions Paving Company", where good-intentioned souls are ground into pavement. In the film "Men in Black 3" (2012), Andy Warhol turns out to really be undercover MIB Agent W (played by Bill Hader). Warhol is throwing a party at The Factory in 1969, where he is visited by MIB Agents K and J (J from the future). Agent W is desperate to end his undercover job ("I'm so out of ideas I'm painting soup cans and bananas, for Christ sakes!", "You gotta fake my death, okay? I can't listen to sitar music anymore.", and "I can't tell the girls from the boys."). 
Andy Warhol (portrayed by Tom Meeten) is one of the main characters of the 2012 British television show "Noel Fielding's Luxury Comedy". The character is portrayed as having robot-like mannerisms. In the 2017 feature "The Billionaire Boys Club", Cary Elwes portrays Warhol in a film based on the true story of Ron Levin (portrayed by Kevin Spacey), a friend of Warhol's who was murdered in 1986. In September 2016, it was announced that Jared Leto would portray the title character in "Warhol", an upcoming American biographical drama film produced by Michael De Luca and written by Terence Winter, based on the book "Warhol: The Biography" by Victor Bockris. Warhol appeared as a recurring character in the TV series "Vinyl", played by John Cameron Mitchell. Warhol was portrayed by Evan Peters in the "" episode "". The episode depicts the attempted assassination of Warhol by Valerie Solanas (Lena Dunham). In early 1969, Andy Warhol was commissioned by Braniff International to appear in two television commercials to promote the luxury airline's new "When You Got It - Flaunt It" campaign. The campaign was created by Braniff's new advertising agency, Lois Holland Calloway, which was led by famed advertiser George Lois, creator of a celebrated series of "Esquire" magazine covers. The first series of commercials paired unlikely people who shared only the fact that they both flew Braniff Airways. Warhol was paired with boxing legend Sonny Liston. The odd commercial worked, as did the others, which featured unlikely fellow travelers such as painter Salvador Dalí and baseball legend Whitey Ford. Two additional commercials for Braniff were created that featured famous people boarding a Braniff jet and being greeted by a Braniff hostess while espousing their fondness for flying Braniff. Warhol was featured in the first of these commercials, which were also produced by Lois and released in the summer of 1969. Lois has incorrectly stated that he was commissioned by Braniff in 1967; at that time, Madison Avenue advertising doyenne Mary Wells Lawrence, who was married to Braniff's charismatic chairman and president Harding Lawrence, was still representing the Dallas-based carrier. Lois's agency succeeded Wells Rich Greene on December 1, 1968. The rights to Warhol's films for Braniff and his signed contracts are owned by a private trust and administered by the Braniff Airways Foundation in Dallas, Texas. A biography of Andy Warhol written by art critic Blake Gopnik was published in 2020 under the title "Warhol". In 2002, the U.S. Postal Service issued an 18-cent stamp commemorating Warhol. Designed by Richard Sheaff of Scottsdale, Arizona, the stamp was unveiled at a ceremony at The Andy Warhol Museum and features Warhol's painting "Self-Portrait, 1964". In March 2011, a chrome statue of Andy Warhol and his Polaroid camera was unveiled at Union Square in New York City.
https://en.wikipedia.org/wiki?curid=864
American Film Institute The American Film Institute (AFI) is an American film organization that educates filmmakers and honors the heritage of the motion picture arts in the United States. AFI is supported by private funding and public membership fees. The institute is composed of leaders from the film, entertainment, business, and academic communities. A board of trustees chaired by Kathleen Kennedy and a board of directors chaired by Robert A. Daly guide the organization, which is led by its president and CEO, film historian Bob Gazzale. Prior leaders were founding director George Stevens, Jr. (from the organization's inception in 1967 until 1980) and Jean Picker Firstenberg (from 1980 to 2007). The American Film Institute was founded through a 1965 presidential mandate, announced in the Rose Garden of the White House by Lyndon B. Johnson, to establish a national arts organization to preserve the legacy of American film heritage, educate the next generation of filmmakers, and honor the artists and their work. Two years later, in 1967, AFI was established, supported by the National Endowment for the Arts, the Motion Picture Association of America and the Ford Foundation. The original 22-member Board of Trustees included actor Gregory Peck as chairman and actor Sidney Poitier as vice-chairman, as well as director Francis Ford Coppola, film historian Arthur Schlesinger, Jr., lobbyist Jack Valenti, and other representatives from the arts and academia. The institute established a training program for filmmakers known then as the Center for Advanced Film Studies. Also created in the early years were a repertory film exhibition program at the Kennedy Center for the Performing Arts and the AFI Catalog of Feature Films, a scholarly source for American film history. The institute moved to its current eight-acre Hollywood campus in 1981. The film training program grew into the AFI Conservatory, an accredited graduate school. AFI moved its presentation of first-run and auteur films from the Kennedy Center to the historic AFI Silver Theatre and Cultural Center, which hosts the AFI DOCS film festival, making AFI the largest nonprofit film exhibitor in the world. AFI educates audiences and recognizes artistic excellence through its awards programs and 10 Top 10 Lists. In 2017, then-aspiring filmmaker Ilana Bar-Din Giannini claimed that the AFI expelled her after she accused Dezso Magyar of sexually harassing her in the early 1980s. AFI operates a range of educational and cultural programs, described below. In 1969, the institute established the AFI Conservatory for Advanced Film Studies at Greystone, the Doheny Mansion in Beverly Hills, California. The first class included filmmakers Terrence Malick, Caleb Deschanel, and Paul Schrader. That program grew into the AFI Conservatory, an accredited graduate film school located in the hills above Hollywood, California, providing training in six filmmaking disciplines: cinematography, directing, editing, producing, production design, and screenwriting. Mirroring a professional production environment, Fellows collaborate to make more films than any other graduate-level program. Admission to AFI Conservatory is highly selective, with a maximum of 140 graduates per year. In 2013, Emmy and Oscar-winning director, producer, and screenwriter James L. Brooks ("As Good as It Gets", "Broadcast News", "Terms of Endearment") joined as the artistic director of the AFI Conservatory, where he provides leadership for the film program. 
Brooks' role as artistic director continues a rich legacy at the AFI Conservatory that includes Daniel Petrie, Jr., Robert Wise, and Frank Pierson. Award-winning director Bob Mandel served as dean of the AFI Conservatory for nine years. Jan Schuette took over as dean in 2014 and served until 2017. Film producer Richard Gladstein was dean from 2017 until 2019, when Susan Ruskin was appointed. AFI Conservatory's alumni have careers in film, television, and on the web. They have been recognized with all of the major industry awards—Academy Award, Emmy Award, guild awards, and the Tony Award. Among the alumni of AFI are Andrea Arnold ("Red Road", "Fish Tank"), Darren Aronofsky ("Requiem for a Dream", "Black Swan"), Carl Colpaert ("Gas Food Lodging", "Hurlyburly", "Swimming with Sharks"), Doug Ellin ("Entourage"), Todd Field ("In the Bedroom", "Little Children"), Jack Fisk ("Badlands", "Days of Heaven", "There Will Be Blood"), Carl Franklin ("One False Move", "Devil in a Blue Dress", "House of Cards"), Patty Jenkins ("Monster", "Wonder Woman"), Janusz Kamiński ("Lincoln", "Schindler's List", "Saving Private Ryan"), Matthew Libatique ("Noah", "Black Swan"), David Lynch ("Mulholland Drive", "Blue Velvet"), Terrence Malick ("Days of Heaven", "The Thin Red Line", "The Tree of Life"), Victor Nuñez ("Ruby in Paradise", "Ulee's Gold"), Wally Pfister ("Memento", "The Dark Knight", "Inception"), Robert Richardson ("Platoon", "JFK", "Django Unchained"), Ari Aster ("Hereditary", "Midsommar"), and many others. The AFI Catalog, started in 1968, is a web-based filmographic database. A research tool for film historians, the catalog consists of entries on more than 60,000 feature films and 17,000 short films produced from 1893 to 2011, as well as AFI Awards Outstanding Movies of the Year from 2000 through 2010. Early print copies of this catalog may also be found at local libraries. Created in 2000, the AFI Awards honor the ten outstanding films ("Movies of the Year") and ten outstanding television programs ("TV Programs of the Year"). The awards are a non-competitive acknowledgment of excellence. The awards are announced in December, and a private luncheon for award honorees takes place the following January. The AFI 100 Years... series, which ran from 1998 to 2008 and created jury-selected lists of America's best movies in categories such as Musicals, Laughs and Thrills, prompted new generations to experience classic American films. The juries consisted of over 1,500 artists, scholars, critics, and historians. "Citizen Kane" was voted the greatest American film twice. AFI operates two film festivals: AFI Fest in Los Angeles, and AFI Docs (formerly known as Silverdocs) in Silver Spring, Maryland, and Washington, D.C. AFI Fest is the American Film Institute's annual celebration of artistic excellence. It is a showcase for the best festival films of the year and an opportunity for master filmmakers and emerging artists to come together with audiences in the movie capital of the world. It is the only festival of its stature that is free to the public. The Academy of Motion Picture Arts and Sciences recognizes AFI Fest as a qualifying festival for the Short Films category for the annual Academy Awards. The festival has paid tribute to numerous influential filmmakers and artists over the years, including Agnès Varda, Pedro Almodóvar and David Lynch as guest artistic directors, and has screened scores of films that have produced Oscar nominations and wins. The American Film Market (AFM) is the market partner of AFI Fest. 
Audi is the festival's presenting sponsor. Additional sponsors include American Airlines and Stella Artois. Held annually in June, AFI Docs (formerly Silverdocs) is a documentary festival in Washington, D.C. The festival attracts over 27,000 documentary enthusiasts. The AFI Silver Theatre and Cultural Center is a moving image exhibition, education, and cultural center located in Silver Spring, Maryland. Anchored by the restoration of noted architect John Eberson's historic 1938 Silver Theatre, it features 32,000 square feet of new construction housing two stadium theatres, office and meeting space, and reception and exhibit areas. The AFI Silver Theatre and Cultural Center presents film and video programming, augmented by filmmaker interviews, panels, discussions, and musical performances. The Directing Workshop for Women is a training program committed to educating and mentoring participants in an effort to increase the number of women working professionally in screen directing. In this tuition-free program, each participant is required to complete a short film by the end of the year-long program. Alumnae of the program include Maya Angelou, Anne Bancroft, Dyan Cannon, Ellen Burstyn, Jennifer Getzinger, Lesli Linka Glatter, and Nancy Malone. AFI released a set of hour-long programs reviewing the careers of acclaimed directors. The Directors Series content was copyrighted in 1997 by Media Entertainment Inc. and The American Film Institute, and the VHS and DVDs were released between 1999 and 2001 on Winstar TV and Video, with each program devoted to a different featured director.
https://en.wikipedia.org/wiki?curid=869
Akira Kurosawa Akira Kurosawa ( "Kurosawa Akira"; March 23, 1910 – September 6, 1998) was a Japanese film director and screenwriter, who directed 30 films in a career spanning 57 years. He is regarded as one of the most important and influential filmmakers in the history of cinema. Kurosawa entered the Japanese film industry in 1936, following a brief stint as a painter. After years of working on numerous films as an assistant director and scriptwriter, he made his debut as a director during World War II with the popular action film "Sanshiro Sugata" (a.k.a. "Judo Saga"). After the war, the critically acclaimed "Drunken Angel" (1948), in which Kurosawa cast then-unknown actor Toshiro Mifune in a starring role, cemented the director's reputation as one of the most important young filmmakers in Japan. The two men would go on to collaborate on another 15 films. "Rashomon", which premiered in Tokyo, became the surprise winner of the Golden Lion at the 1951 Venice Film Festival. The commercial and critical success of that film opened up Western film markets for the first time to the products of the Japanese film industry, which in turn led to international recognition for other Japanese filmmakers. Kurosawa directed approximately one film per year throughout the 1950s and early 1960s, including a number of highly regarded (and often adapted) films, such as "Ikiru" (1952), "Seven Samurai" (1954) and "Yojimbo" (1961). After the 1960s he became much less prolific; even so, his later work—including his final two epics, "Kagemusha" (1980) and "Ran" (1985)—continued to win awards, though more often abroad than in Japan. In 1990, he accepted the Academy Award for Lifetime Achievement. Posthumously, he was named "Asian of the Century" in the "Arts, Literature, and Culture" category by "AsianWeek" magazine and CNN, cited there as being among the five people who most prominently contributed to the improvement of Asia in the 20th century. His career has been honored by many retrospectives, critical studies and biographies in both print and video, and by releases in many consumer media formats. Kurosawa was born on March 23, 1910, in Ōimachi in the Ōmori district of Tokyo. His father Isamu (1864–1948), a member of a samurai family from Akita Prefecture, worked as the director of the Army's Physical Education Institute's lower secondary school, while his mother Shima (1870–1952) came from a merchant's family living in Osaka. Akira was the eighth and youngest child of the moderately wealthy family, with two of his siblings already grown up at the time of his birth and one deceased, leaving Kurosawa to grow up with three sisters and a brother. In addition to promoting physical exercise, Isamu Kurosawa was open to Western traditions and considered theater and motion pictures to have educational merit. He encouraged his children to watch films; young Akira viewed his first movies at the age of six. An important formative influence was his elementary school teacher Mr. Tachikawa, whose progressive educational practices ignited in his young pupil first a love of drawing and then an interest in education in general. During this time, the boy also studied calligraphy and Kendo swordsmanship. Another major childhood influence was Heigo Kurosawa, Akira's older brother by four years. In the aftermath of the Great Kantō earthquake of 1923, Heigo took the 13-year-old Akira to view the devastation. 
When the younger brother wanted to look away from the human corpses and animal carcasses scattered everywhere, Heigo forbade him to do so, encouraging Akira instead to face his fears by confronting them directly. Some commentators have suggested that this incident would influence Kurosawa's later artistic career, as the director was seldom hesitant to confront unpleasant truths in his work. Heigo was academically gifted, but soon after failing to secure a place in Tokyo's foremost high school, he began to detach himself from the rest of the family, preferring to concentrate on his interest in foreign literature. In the late 1920s, Heigo became a benshi (silent film narrator) for Tokyo theaters showing foreign films and quickly made a name for himself. Akira, who at this point planned to become a painter, moved in with him, and the two brothers became inseparable. With Heigo's guidance, Akira devoured not only films but also theater and circus performances, while exhibiting his paintings and working for the left-wing Proletarian Artists' League. However, he was never able to make a living with his art, and, as he began to perceive most of the proletarian movement as "putting unfulfilled political ideals directly onto the canvas", he lost his enthusiasm for painting. With the increasing production of talking pictures in the early 1930s, film narrators like Heigo began to lose work, and Akira moved back in with his parents. In July 1933, Heigo committed suicide. Kurosawa has commented on the lasting sense of loss he felt at his brother's death and the chapter of his autobiography ("Something Like an Autobiography") that describes it—written nearly half a century after the event—is titled, "A Story I Don't Want to Tell". Only four months later, Kurosawa's eldest brother also died, leaving Akira, at age 23, the only one of the Kurosawa brothers still living, together with his three surviving sisters. In 1935, the new film studio Photo Chemical Laboratories, known as P.C.L. (which later became the major studio Toho), advertised for assistant directors. Although he had demonstrated no previous interest in film as a profession, Kurosawa submitted the required essay, which asked applicants to discuss the fundamental deficiencies of Japanese films and find ways to overcome them. His half-mocking view was that if the deficiencies were fundamental, there was no way to correct them. Kurosawa's essay earned him a call to take the follow-up exams, and director Kajirō Yamamoto, who was among the examiners, took a liking to Kurosawa and insisted that the studio hire him. The 25-year-old Kurosawa joined P.C.L. in February 1936. During his five years as an assistant director, Kurosawa worked under numerous directors, but by far the most important figure in his development was Yamamoto. Of his 24 films as A.D., he worked on 17 under Yamamoto, many of them comedies featuring the popular actor Ken'ichi Enomoto, known as "Enoken". Yamamoto nurtured Kurosawa's talent, promoting him directly from third assistant director to chief assistant director after a year. Kurosawa's responsibilities increased, and he worked at tasks ranging from stage construction and film development to location scouting, script polishing, rehearsals, lighting, dubbing, editing, and second-unit directing. In the last of Kurosawa's films as an assistant director for Yamamoto, "Horse" ("Uma", 1941), Kurosawa took over most of the production, as his mentor was occupied with the shooting of another film. 
Yamamoto advised Kurosawa that a good director needed to master screenwriting. Kurosawa soon realized that the potential earnings from his scripts were much higher than what he was paid as an assistant director. He later wrote or co-wrote all his films, and frequently penned screenplays for other directors such as Satsuo Yamamoto's film, "A Triumph of Wings" ("Tsubasa no gaika", 1942). This outside scriptwriting would serve Kurosawa as a lucrative sideline lasting well into the 1960s, long after he became famous. In the two years following the release of "Horse" in 1941, Kurosawa searched for a story he could use to launch his directing career. Towards the end of 1942, about a year after the Japanese attack on Pearl Harbor, novelist Tsuneo Tomita published his Musashi Miyamoto-inspired judo novel, "Sanshiro Sugata", the advertisements for which intrigued Kurosawa. He bought the book on its publication day, devoured it in one sitting, and immediately asked Toho to secure the film rights. Kurosawa's initial instinct proved correct as, within a few days, three other major Japanese studios also offered to buy the rights. Toho prevailed, and Kurosawa began pre-production on his debut work as director. Shooting of "Sanshiro Sugata" began on location in Yokohama in December 1942. Production proceeded smoothly, but getting the completed film past the censors was an entirely different matter. The censorship office considered the work to be objectionably "British-American" by the standards of wartime Japan, and it was only through the intervention of director Yasujirō Ozu, who championed the film, that "Sanshiro Sugata" was finally accepted for release on March 25, 1943. (Kurosawa had just turned 33.) The movie became both a critical and commercial success. Nevertheless, the censorship office would later decide to cut out some 18 minutes of footage, much of which is now considered lost. He next turned to the subject of wartime female factory workers in "The Most Beautiful", a propaganda film which he shot in a semi-documentary style in early 1944. To coax realistic performances from his actresses, the director had them live in a real factory during the shoot, eat the factory food and call each other by their character names. He would use similar methods with his performers throughout his career. During production, the actress playing the leader of the factory workers, Yōko Yaguchi, was chosen by her colleagues to present their demands to the director. She and Kurosawa were constantly at loggerheads, and it was through these arguments that the two, paradoxically, became close. They married on May 21, 1945, with Yaguchi two months pregnant (she never resumed her acting career), and the couple would remain together until her death in 1985. They had two children, both surviving Kurosawa : a son, Hisao, born December 20, 1945, who served as producer on some of his father's last projects, and Kazuko, a daughter, born April 29, 1954, who became a costume designer. Shortly before his marriage, Kurosawa was pressured by the studio against his will to direct a sequel to his debut film. The often blatantly propagandistic "Sanshiro Sugata Part II", which premiered in May 1945, is generally considered one of his weakest pictures. Kurosawa decided to write the script for a film that would be both censor-friendly and less expensive to produce. 
"The Men Who Tread on the Tiger's Tail", based on the Kabuki play "Kanjinchō" and starring the comedian Enoken, with whom Kurosawa had often worked during his assistant director days, was completed in September 1945. By this time, Japan had surrendered and the occupation of Japan had begun. The new American censors interpreted the values allegedly promoted in the picture as overly "feudal" and banned the work. It was not released until 1952, the year another Kurosawa film, "Ikiru", was also released. Ironically, while in production, the film had already been savaged by Japanese wartime censors as too Western and "democratic" (they particularly disliked the comic porter played by Enoken), so the movie most probably would not have seen the light of day even if the war had continued beyond its completion. After the war, Kurosawa, influenced by the democratic ideals of the Occupation, sought to make films that would establish a new respect towards the individual and the self. The first such film, "No Regrets for Our Youth" (1946), inspired by both the 1933 Takigawa incident and the Hotsumi Ozaki wartime spy case, criticized Japan's prewar regime for its political oppression. Atypically for the director, the heroic central character is a woman, Yukie (Setsuko Hara), who, born into upper-middle-class privilege, comes to question her values in a time of political crisis. The original script had to be extensively rewritten and, because of its controversial theme and gender of its protagonist, the completed work divided critics. Nevertheless, it managed to win the approval of audiences, who turned variations on the film's title into a postwar catchphrase. His next film, "One Wonderful Sunday" premiered in July 1947 to mixed reviews. It is a relatively uncomplicated and sentimental love story dealing with an impoverished postwar couple trying to enjoy, within the devastation of postwar Tokyo, their one weekly day off. The movie bears the influence of Frank Capra, D. W. Griffith and F. W. Murnau, each of whom was among Kurosawa's favorite directors. Another film released in 1947 with Kurosawa's involvement was the action-adventure thriller, "Snow Trail", directed by Senkichi Taniguchi from Kurosawa's screenplay. It marked the debut of the intense young actor Toshiro Mifune. It was Kurosawa who, with his mentor Yamamoto, had intervened to persuade Toho to sign Mifune, during an audition in which the young man greatly impressed Kurosawa, but managed to alienate most of the other judges. "Drunken Angel" is often considered the director's first major work. Although the script, like all of Kurosawa's occupation-era works, had to go through rewrites due to American censorship, Kurosawa felt that this was the first film in which he was able to express himself freely. A gritty story of a doctor who tries to save a gangster (yakuza) with tuberculosis, it was also the director's first film with Toshiro Mifune, who would proceed to play a major role in all but one ("Ikiru") of the director's next 16 films. While Mifune was not cast as the protagonist in "Drunken Angel", his explosive performance as the gangster so dominates the drama that he shifted the focus from the title character, the alcoholic doctor played by Takashi Shimura, who had already appeared in several Kurosawa movies. 
However, Kurosawa did not want to smother the young actor's immense vitality, and Mifune's rebellious character electrified audiences in much the way that Marlon Brando's defiant stance would startle American film audiences a few years later. The film premiered in Tokyo in April 1948 to rave reviews and was chosen by the prestigious Kinema Junpo critics poll as the best film of its year, the first of three Kurosawa movies to be so honored. Kurosawa, with producer Sōjirō Motoki and fellow directors and friends Kajiro Yamamoto, Mikio Naruse and Senkichi Taniguchi, formed a new independent production unit called Film Art Association (Eiga Geijutsu Kyōkai). For this organization's debut work, and first film for Daiei studios, Kurosawa turned to a contemporary play by Kazuo Kikuta and, together with Taniguchi, adapted it for the screen. "The Quiet Duel" starred Toshiro Mifune as an idealistic young doctor struggling with syphilis, a deliberate attempt by Kurosawa to break the actor away from being typecast as gangsters. Released in March 1949, it was a box office success, but is generally considered one of the director's lesser achievements. His second film of 1949, also produced by Film Art Association and released by Shintoho, was "Stray Dog". It is a detective movie (perhaps the first important Japanese film in that genre) that explores the mood of Japan during its painful postwar recovery through the story of a young detective, played by Mifune, and his fixation on the recovery of his handgun, which was stolen by a penniless war veteran who proceeds to use it to rob and murder. Adapted from an unpublished novel by Kurosawa in the style of a favorite writer of his, Georges Simenon, it was the director's first collaboration with screenwriter Ryuzo Kikushima, who would later help to script eight other Kurosawa films. A famous, virtually wordless sequence, lasting over eight minutes, shows the detective, disguised as an impoverished veteran, wandering the streets in search of the gun thief; it employed actual documentary footage of war-ravaged Tokyo neighborhoods shot by Kurosawa's friend, Ishirō Honda, the future director of "Godzilla". The film is considered a precursor to the contemporary police procedural and buddy cop film genres. "Scandal", released by Shochiku in April 1950, was inspired by the director's personal experiences with, and anger towards, Japanese yellow journalism. The work is an ambitious mixture of courtroom drama and social problem film about free speech and personal responsibility, but even Kurosawa regarded the finished product as dramatically unfocused and unsatisfactory, and almost all critics agree.
https://en.wikipedia.org/wiki?curid=872
Ancient Egypt Ancient Egypt was a civilization of ancient North Africa, concentrated along the lower reaches of the Nile River, situated in the place that is now the country Egypt. Ancient Egyptian civilization followed prehistoric Egypt and coalesced around 3100BC (according to conventional Egyptian chronology) with the political unification of Upper and Lower Egypt under Menes (often identified with Narmer). The history of ancient Egypt occurred as a series of stable kingdoms, separated by periods of relative instability known as Intermediate Periods: the Old Kingdom of the Early Bronze Age, the Middle Kingdom of the Middle Bronze Age and the New Kingdom of the Late Bronze Age. Egypt reached the pinnacle of its power in the New Kingdom, ruling much of Nubia and a sizable portion of the Near East, after which it entered a period of slow decline. During the course of its history Egypt was invaded or conquered by a number of foreign powers, including the Hyksos, the Libyans, the Nubians, the Assyrians, the Achaemenid Persians, and the Macedonians under the command of Alexander the Great. The Greek Ptolemaic Kingdom, formed in the aftermath of Alexander's death, ruled Egypt until 30BC, when, under Cleopatra, it fell to the Roman Empire and became a Roman province. The success of ancient Egyptian civilization came partly from its ability to adapt to the conditions of the Nile River valley for agriculture. The predictable flooding and controlled irrigation of the fertile valley produced surplus crops, which supported a more dense population, and social development and culture. With resources to spare, the administration sponsored mineral exploitation of the valley and surrounding desert regions, the early development of an independent writing system, the organization of collective construction and agricultural projects, trade with surrounding regions, and a military intended to assert Egyptian dominance. Motivating and organizing these activities was a bureaucracy of elite scribes, religious leaders, and administrators under the control of a pharaoh, who ensured the cooperation and unity of the Egyptian people in the context of an elaborate system of religious beliefs. The many achievements of the ancient Egyptians include the quarrying, surveying and construction techniques that supported the building of monumental pyramids, temples, and obelisks; a system of mathematics, a practical and effective system of medicine, irrigation systems and agricultural production techniques, the first known planked boats, Egyptian faience and glass technology, new forms of literature, and the earliest known peace treaty, made with the Hittites. Ancient Egypt has left a lasting legacy. Its art and architecture were widely copied, and its antiquities carried off to far corners of the world. Its monumental ruins have inspired the imaginations of travelers and writers for centuries. A new-found respect for antiquities and excavations in the early modern period by Europeans and Egyptians led to the scientific investigation of Egyptian civilization and a greater appreciation of its cultural legacy. The Nile has been the lifeline of its region for much of human history. The fertile floodplain of the Nile gave humans the opportunity to develop a settled agricultural economy and a more sophisticated, centralized society that became a cornerstone in the history of human civilization. Nomadic modern human hunter-gatherers began living in the Nile valley through the end of the Middle Pleistocene some 120,000 years ago. 
By the late Paleolithic period, the arid climate of Northern Africa became increasingly hot and dry, forcing the populations of the area to concentrate along the river region. In Predynastic and Early Dynastic times, the Egyptian climate was much less arid than it is today. Large regions of Egypt were covered in treed savanna and traversed by herds of grazing ungulates. Foliage and fauna were far more prolific in all environs and the Nile region supported large populations of waterfowl. Hunting would have been common for Egyptians, and this is also the period when many animals were first domesticated. By about 5500 BC, small tribes living in the Nile valley had developed into a series of cultures demonstrating firm control of agriculture and animal husbandry, and identifiable by their pottery and personal items, such as combs, bracelets, and beads. The largest of these early cultures in upper (Southern) Egypt was the Badarian culture, which probably originated in the Western Desert; it was known for its high quality ceramics, stone tools, and its use of copper. The Badari was followed by the Naqada culture: the Amratian (Naqada I), the Gerzeh (Naqada II), and Semainean (Naqada III). These brought a number of technological improvements. As early as the Naqada I Period, predynastic Egyptians imported obsidian from Ethiopia, used to shape blades and other objects from flakes. In Naqada II times, early evidence exists of contact with the Near East, particularly Canaan and the Byblos coast. Over a period of about 1,000 years, the Naqada culture developed from a few small farming communities into a powerful civilization whose leaders were in complete control of the people and resources of the Nile valley. Establishing a power center at Nekhen (in Greek, Hierakonpolis), and later at Abydos, Naqada III leaders expanded their control of Egypt northwards along the Nile. They also traded with Nubia to the south, the oases of the western desert to the west, and the cultures of the eastern Mediterranean and Near East to the east, initiating a period of Egypt-Mesopotamia relations. The Naqada culture manufactured a diverse selection of material goods, reflective of the increasing power and wealth of the elite, as well as societal personal-use items, which included combs, small statuary, painted pottery, high quality decorative stone vases, cosmetic palettes, and jewelry made of gold, lapis, and ivory. They also developed a ceramic glaze known as faience, which was used well into the Roman Period to decorate cups, amulets, and figurines. During the last predynastic phase, the Naqada culture began using written symbols that eventually were developed into a full system of hieroglyphs for writing the ancient Egyptian language. The Early Dynastic Period was approximately contemporary to the early Sumerian-Akkadian civilisation of Mesopotamia and of ancient Elam. The third-centuryBC Egyptian priest Manetho grouped the long line of kings from Menes to his own time into 30 dynasties, a system still used today. He began his official history with the king named "Meni" (or "Menes" in Greek) who was believed to have united the two kingdoms of Upper and Lower Egypt. The transition to a unified state happened more gradually than ancient Egyptian writers represented, and there is no contemporary record of Menes. Some scholars now believe, however, that the mythical Menes may have been the king Narmer, who is depicted wearing royal regalia on the ceremonial "Narmer Palette," in a symbolic act of unification. 
In the Early Dynastic Period, which began about 3000BC, the first of the Dynastic kings solidified control over lower Egypt by establishing a capital at Memphis, from which he could control the labour force and agriculture of the fertile delta region, as well as the lucrative and critical trade routes to the Levant. The increasing power and wealth of the kings during the early dynastic period was reflected in their elaborate mastaba tombs and mortuary cult structures at Abydos, which were used to celebrate the deified king after his death. The strong institution of kingship developed by the kings served to legitimize state control over the land, labour, and resources that were essential to the survival and growth of ancient Egyptian civilization. Major advances in architecture, art, and technology were made during the Old Kingdom, fueled by the increased agricultural productivity and resulting population, made possible by a well-developed central administration. Some of ancient Egypt's crowning achievements, the Giza pyramids and Great Sphinx, were constructed during the Old Kingdom. Under the direction of the vizier, state officials collected taxes, coordinated irrigation projects to improve crop yield, drafted peasants to work on construction projects, and established a justice system to maintain peace and order. With the rising importance of central administration in Egypt a new class of educated scribes and officials arose who were granted estates by the king in payment for their services. Kings also made land grants to their mortuary cults and local temples, to ensure that these institutions had the resources to worship the king after his death. Scholars believe that five centuries of these practices slowly eroded the economic vitality of Egypt, and that the economy could no longer afford to support a large centralized administration. As the power of the kings diminished, regional governors called nomarchs began to challenge the supremacy of the office of king. This, coupled with severe droughts between 2200 and 2150BC, is believed to have caused the country to enter the 140-year period of famine and strife known as the First Intermediate Period. After Egypt's central government collapsed at the end of the Old Kingdom, the administration could no longer support or stabilize the country's economy. Regional governors could not rely on the king for help in times of crisis, and the ensuing food shortages and political disputes escalated into famines and small-scale civil wars. Yet despite difficult problems, local leaders, owing no tribute to the king, used their new-found independence to establish a thriving culture in the provinces. Once in control of their own resources, the provinces became economically richer—which was demonstrated by larger and better burials among all social classes. In bursts of creativity, provincial artisans adopted and adapted cultural motifs formerly restricted to the royalty of the Old Kingdom, and scribes developed literary styles that expressed the optimism and originality of the period. Free from their loyalties to the king, local rulers began competing with each other for territorial control and political power. By 2160BC, rulers in Herakleopolis controlled Lower Egypt in the north, while a rival clan based in Thebes, the Intef family, took control of Upper Egypt in the south. As the Intefs grew in power and expanded their control northward, a clash between the two rival dynasties became inevitable. 
Around 2055BC the northern Theban forces under Nebhepetre Mentuhotep II finally defeated the Herakleopolitan rulers, reuniting the Two Lands. They inaugurated a period of economic and cultural renaissance known as the Middle Kingdom. The kings of the Middle Kingdom restored the country's stability and prosperity, thereby stimulating a resurgence of art, literature, and monumental building projects. Mentuhotep II and his Eleventh Dynasty successors ruled from Thebes, but the vizier Amenemhat I, upon assuming the kingship at the beginning of the Twelfth Dynasty around 1985BC, shifted the kingdom's capital to the city of Itjtawy, located in Faiyum. From Itjtawy, the kings of the Twelfth Dynasty undertook a far-sighted land reclamation and irrigation scheme to increase agricultural output in the region. Moreover, the military reconquered territory in Nubia that was rich in quarries and gold mines, while laborers built a defensive structure in the Eastern Delta, called the "Walls-of-the-Ruler", to defend against foreign attack. With the kings having secured the country militarily and politically and with vast agricultural and mineral wealth at their disposal, the nation's population, arts, and religion flourished. In contrast to elitist Old Kingdom attitudes towards the gods, the Middle Kingdom displayed an increase in expressions of personal piety. Middle Kingdom literature featured sophisticated themes and characters written in a confident, eloquent style. The relief and portrait sculpture of the period captured subtle, individual details that reached new heights of technical sophistication. The last great ruler of the Middle Kingdom, Amenemhat III, allowed Semitic-speaking Canaanite settlers from the Near East into the Delta region to provide a sufficient labour force for his especially active mining and building campaigns. These ambitious building and mining activities, however, combined with severe Nile floods later in his reign, strained the economy and precipitated the slow decline into the Second Intermediate Period during the later Thirteenth and Fourteenth dynasties. During this decline, the Canaanite settlers began to assume greater control of the Delta region, eventually coming to power in Egypt as the Hyksos. Around 1785BC, as the power of the Middle Kingdom kings weakened, a Western Asian people called the Hyksos, who had already settled in the Delta, seized control of Egypt and established their capital at Avaris, forcing the former central government to retreat to Thebes. The king was treated as a vassal and expected to pay tribute. The Hyksos ("foreign rulers") retained Egyptian models of government and identified as kings, thereby integrating Egyptian elements into their culture. They and other invaders introduced new tools of warfare into Egypt, most notably the composite bow and the horse-drawn chariot. After retreating south, the native Theban kings found themselves trapped between the Canaanite Hyksos ruling the north and the Hyksos' Nubian allies, the Kushites, to the south. After years of vassalage, Thebes gathered enough strength to challenge the Hyksos in a conflict that lasted more than 30 years, until 1555BC. The kings Seqenenre Tao II and Kamose were ultimately able to defeat the Nubians to the south of Egypt, but failed to defeat the Hyksos. That task fell to Kamose's successor, Ahmose I, who successfully waged a series of campaigns that permanently eradicated the Hyksos' presence in Egypt. 
He established a new dynasty and, in the New Kingdom that followed, the military became a central priority for the kings, who sought to expand Egypt's borders and attempted to gain mastery of the Near East. The New Kingdom pharaohs established a period of unprecedented prosperity by securing their borders and strengthening diplomatic ties with their neighbours, including the Mitanni Empire, Assyria, and Canaan. Military campaigns waged under Tuthmosis I and his grandson Tuthmosis III extended the influence of the pharaohs to the largest empire Egypt had ever seen. Beginning with Merneptah the rulers of Egypt adopted the title of pharaoh. Between their reigns, Hatshepsut, a queen who established herself as pharaoh, launched many building projects, including restoration of temples damaged by the Hyksos, and sent trading expeditions to Punt and the Sinai. When Tuthmosis III died in 1425BC, Egypt had an empire extending from Niya in north west Syria to the Fourth Cataract of the Nile in Nubia, cementing loyalties and opening access to critical imports such as bronze and wood. The New Kingdom pharaohs began a large-scale building campaign to promote the god Amun, whose growing cult was based in Karnak. They also constructed monuments to glorify their own achievements, both real and imagined. The Karnak temple is the largest Egyptian temple ever built. Around 1350BC, the stability of the New Kingdom was threatened when Amenhotep IV ascended the throne and instituted a series of radical and chaotic reforms. Changing his name to Akhenaten, he touted the previously obscure sun deity Aten as the supreme deity, suppressed the worship of most other deities, and moved the capital to the new city of Akhetaten (modern-day Amarna). He was devoted to his new religion and artistic style. After his death, the cult of the Aten was quickly abandoned and the traditional religious order restored. The subsequent pharaohs, Tutankhamun, Ay, and Horemheb, worked to erase all mention of Akhenaten's heresy, now known as the Amarna Period. Around 1279BC, Ramesses II, also known as Ramesses the Great, ascended the throne, and went on to build more temples, erect more statues and obelisks, and sire more children than any other pharaoh in history. A bold military leader, Ramesses II led his army against the Hittites in the Battle of Kadesh (in modern Syria) and, after fighting to a stalemate, finally agreed to the first recorded peace treaty, around 1258BC. Egypt's wealth, however, made it a tempting target for invasion, particularly by the Libyan Berbers to the west, and the Sea Peoples, a conjectured confederation of seafarers from the Aegean Sea. Initially, the military was able to repel these invasions, but Egypt eventually lost control of its remaining territories in southern Canaan, much of it falling to the Assyrians. The effects of external threats were exacerbated by internal problems such as corruption, tomb robbery, and civil unrest. After regaining their power, the high priests at the temple of Amun in Thebes accumulated vast tracts of land and wealth, and their expanded power splintered the country during the Third Intermediate Period. Following the death of Ramesses XI in 1078BC, Smendes assumed authority over the northern part of Egypt, ruling from the city of Tanis. The south was effectively controlled by the High Priests of Amun at Thebes, who recognized Smendes in name only. During this time, Libyans had been settling in the western delta, and chieftains of these settlers began increasing their autonomy. 
Libyan princes took control of the delta under Shoshenq I in 945BC, founding the so-called Libyan or Bubastite dynasty that would rule for some 200 years. Shoshenq also gained control of southern Egypt by placing his family members in important priestly positions. Libyan control began to erode as a rival dynasty in the delta arose in Leontopolis, and Kushites threatened from the south. Around 727BC the Kushite king Piye invaded northward, seizing control of Thebes and eventually the Delta. Egypt's far-reaching prestige declined considerably toward the end of the Third Intermediate Period. Its foreign allies had fallen under the Assyrian sphere of influence, and by 700BC war between the two states became inevitable. Between 671 and 667BC the Assyrians began the Assyrian conquest of Egypt. The reigns of both Taharqa and his successor, Tanutamun, were filled with constant conflict with the Assyrians, against whom Egypt enjoyed several victories. Ultimately, the Assyrians pushed the Kushites back into Nubia, occupied Memphis, and sacked the temples of Thebes. The Assyrians left control of Egypt to a series of vassals who became known as the Saite kings of the Twenty-Sixth Dynasty. By 653BC, the Saite king Psamtik I was able to oust the Assyrians with the help of Greek mercenaries, who were recruited to form Egypt's first navy. Greek influence expanded greatly as the city-state of Naukratis became the home of Greeks in the Nile Delta. The Saite kings based in the new capital of Sais witnessed a brief but spirited resurgence in the economy and culture, but in 525BC, the powerful Persians, led by Cambyses II, began their conquest of Egypt, eventually capturing the pharaoh Psamtik III at the battle of Pelusium. Cambyses II then assumed the formal title of pharaoh, but ruled Egypt from Iran, leaving Egypt under the control of a satrapy. A few successful revolts against the Persians marked the 5th centuryBC, but Egypt was never able to permanently overthrow the Persians. Following its annexation by Persia, Egypt was joined with Cyprus and Phoenicia in the sixth satrapy of the Achaemenid Persian Empire. This first period of Persian rule over Egypt, also known as the Twenty-Seventh dynasty, ended in 402BC, when Egypt regained independence under a series of native dynasties. The last of these dynasties, the Thirtieth, proved to be the last native royal house of ancient Egypt, ending with the kingship of Nectanebo II. A brief restoration of Persian rule, sometimes known as the Thirty-First Dynasty, began in 343BC, but shortly after, in 332BC, the Persian ruler Mazaces handed Egypt over to Alexander the Great without a fight. In 332BC, Alexander the Great conquered Egypt with little resistance from the Persians and was welcomed by the Egyptians as a deliverer. The administration established by Alexander's successors, the Macedonian Ptolemaic Kingdom, was based on an Egyptian model and based in the new capital city of Alexandria. The city showcased the power and prestige of Hellenistic rule, and became a seat of learning and culture, centered at the famous Library of Alexandria. The Lighthouse of Alexandria lit the way for the many ships that kept trade flowing through the city—as the Ptolemies made commerce and revenue-generating enterprises, such as papyrus manufacturing, their top priority. Hellenistic culture did not supplant native Egyptian culture, as the Ptolemies supported time-honored traditions in an effort to secure the loyalty of the populace. 
They built new temples in Egyptian style, supported traditional cults, and portrayed themselves as pharaohs. Some traditions merged, as Greek and Egyptian gods were syncretized into composite deities, such as Serapis, and classical Greek forms of sculpture influenced traditional Egyptian motifs. Despite their efforts to appease the Egyptians, the Ptolemies were challenged by native rebellion, bitter family rivalries, and the powerful mob of Alexandria that formed after the death of Ptolemy IV. In addition, as Rome relied more heavily on imports of grain from Egypt, the Romans took great interest in the political situation in the country. Continued Egyptian revolts, ambitious politicians, and powerful opponents from the Near East made this situation unstable, leading Rome to send forces to secure the country as a province of its empire. Egypt became a province of the Roman Empire in 30BC, following the defeat of Marc Antony and Ptolemaic Queen Cleopatra VII by Octavian (later Emperor Augustus) in the Battle of Actium. The Romans relied heavily on grain shipments from Egypt, and the Roman army, under the control of a prefect appointed by the Emperor, quelled rebellions, strictly enforced the collection of heavy taxes, and prevented attacks by bandits, which had become a notorious problem during the period. Alexandria became an increasingly important center on the trade route with the orient, as exotic luxuries were in high demand in Rome. Although the Romans had a more hostile attitude than the Greeks towards the Egyptians, some traditions such as mummification and worship of the traditional gods continued. The art of mummy portraiture flourished, and some Roman emperors had themselves depicted as pharaohs, though not to the extent that the Ptolemies had. The former lived outside Egypt and did not perform the ceremonial functions of Egyptian kingship. Local administration became Roman in style and closed to native Egyptians. From the mid-first century AD, Christianity took root in Egypt and it was originally seen as another cult that could be accepted. However, it was an uncompromising religion that sought to win converts from Egyptian Religion and Greco-Roman religion and threatened popular religious traditions. This led to the persecution of converts to Christianity, culminating in the great purges of Diocletian starting in 303, but eventually Christianity won out. In 391 the Christian Emperor Theodosius introduced legislation that banned pagan rites and closed temples. Alexandria became the scene of great anti-pagan riots with public and private religious imagery destroyed. As a consequence, Egypt's native religious culture was continually in decline. While the native population continued to speak their language, the ability to read hieroglyphic writing slowly disappeared as the role of the Egyptian temple priests and priestesses diminished. The temples themselves were sometimes converted to churches or abandoned to the desert. In the fourth century, as the Roman Empire divided, Egypt found itself in the Eastern Empire with its capital at Constantinople. In the waning years of the Empire, Egypt fell to the Sasanian Persian army (618–628 AD), was recaptured by the Roman Emperor Heraclius (629–639 AD), and then was finally captured by Muslim Rashidun army in 639–641 AD, ending Roman rule. The pharaoh was the absolute monarch of the country and, at least in theory, wielded complete control of the land and its resources. 
The king was the supreme military commander and head of the government, who relied on a bureaucracy of officials to manage his affairs. In charge of the administration was his second in command, the vizier, who acted as the king's representative and coordinated land surveys, the treasury, building projects, the legal system, and the archives. At a regional level, the country was divided into as many as 42 administrative regions called nomes, each governed by a nomarch, who was accountable to the vizier for his jurisdiction. The temples formed the backbone of the economy. Not only were they houses of worship, but they were also responsible for collecting and storing the kingdom's wealth in a system of granaries and treasuries administered by overseers, who redistributed grain and goods. Much of the economy was centrally organized and strictly controlled. Although the ancient Egyptians did not use coinage until the Late Period, they did use a type of money-barter system, with standard sacks of grain and the "deben", a weight of roughly 91 grams (3 oz) of copper or silver, forming a common denominator. Workers were paid in grain; a simple laborer might earn 5 sacks (200 kg or 400 lb) of grain per month, while a foreman might earn 7 sacks (250 kg or 550 lb). Prices were fixed across the country and recorded in lists to facilitate trading; for example, a shirt cost five copper deben, while a cow cost 140 deben. Grain could be traded for other goods, according to the fixed price list. During the fifth century BC, coined money was introduced into Egypt from abroad. At first the coins were used as standardized pieces of precious metal rather than true money, but in the following centuries international traders came to rely on coinage. Egyptian society was highly stratified, and social status was expressly displayed. Farmers made up the bulk of the population, but agricultural produce was owned directly by the state, temple, or noble family that owned the land. Farmers were also subject to a labor tax and were required to work on irrigation or construction projects in a corvée system. Artists and craftsmen were of higher status than farmers, but they were also under state control, working in the shops attached to the temples and paid directly from the state treasury. Scribes and officials formed the upper class in ancient Egypt, known as the "white kilt class" in reference to the bleached linen garments that served as a mark of their rank. The upper class prominently displayed their social status in art and literature. Below the nobility were the priests, physicians, and engineers with specialized training in their field. It is unclear whether slavery as understood today existed in ancient Egypt; opinions differ among authors. The ancient Egyptians viewed men and women, including people from all social classes, as essentially equal under the law, and even the lowliest peasant was entitled to petition the vizier and his court for redress. Although slaves were mostly used as indentured servants, they were able to buy and sell their servitude, work their way to freedom or nobility, and were usually treated by doctors in the workplace. Both men and women had the right to own and sell property, make contracts, marry and divorce, receive inheritance, and pursue legal disputes in court. Married couples could own property jointly and protect themselves from divorce by agreeing to marriage contracts, which stipulated the financial obligations of the husband to his wife and children should the marriage end. 
Compared with their counterparts in ancient Greece, Rome, and even more modern places around the world, ancient Egyptian women had a greater range of personal choices and opportunities for achievement. Women such as Hatshepsut and Cleopatra VII even became pharaohs, while others wielded power as Divine Wives of Amun. Despite these freedoms, ancient Egyptian women did not often take part in official roles in the administration, served only secondary roles in the temples, and were not as likely to be as educated as men. The head of the legal system was officially the pharaoh, who was responsible for enacting laws, delivering justice, and maintaining law and order, a concept the ancient Egyptians referred to as Ma'at. Although no legal codes from ancient Egypt survive, court documents show that Egyptian law was based on a common-sense view of right and wrong that emphasized reaching agreements and resolving conflicts rather than strictly adhering to a complicated set of statutes. Local councils of elders, known as "Kenbet" in the New Kingdom, were responsible for ruling in court cases involving small claims and minor disputes. More serious cases involving murder, major land transactions, and tomb robbery were referred to the "Great Kenbet", over which the vizier or pharaoh presided. Plaintiffs and defendants were expected to represent themselves and were required to swear an oath that they had told the truth. In some cases, the state took on both the role of prosecutor and judge, and it could torture the accused with beatings to obtain a confession and the names of any co-conspirators. Whether the charges were trivial or serious, court scribes documented the complaint, testimony, and verdict of the case for future reference. Punishment for minor crimes involved either imposition of fines, beatings, facial mutilation, or exile, depending on the severity of the offense. Serious crimes such as murder and tomb robbery were punished by execution, carried out by decapitation, drowning, or impaling the criminal on a stake. Punishment could also be extended to the criminal's family. Beginning in the New Kingdom, oracles played a major role in the legal system, dispensing justice in both civil and criminal cases. The procedure was to ask the god a "yes" or "no" question concerning the right or wrong of an issue. The god, carried by a number of priests, rendered judgment by choosing one or the other, moving forward or backward, or pointing to one of the answers written on a piece of papyrus or an ostracon. A combination of favorable geographical features contributed to the success of ancient Egyptian culture, the most important of which was the rich fertile soil resulting from annual inundations of the Nile River. The ancient Egyptians were thus able to produce an abundance of food, allowing the population to devote more time and resources to cultural, technological, and artistic pursuits. Land management was crucial in ancient Egypt because taxes were assessed based on the amount of land a person owned. Farming in Egypt was dependent on the cycle of the Nile River. The Egyptians recognized three seasons: "Akhet" (flooding), "Peret" (planting), and "Shemu" (harvesting). The flooding season lasted from June to September, depositing on the river's banks a layer of mineral-rich silt ideal for growing crops. After the floodwaters had receded, the growing season lasted from October to February. Farmers plowed and planted seeds in the fields, which were irrigated with ditches and canals. 
Egypt received little rainfall, so farmers relied on the Nile to water their crops. From March to May, farmers used sickles to harvest their crops, which were then threshed with a flail to separate the straw from the grain. Winnowing removed the chaff from the grain, and the grain was then ground into flour, brewed to make beer, or stored for later use. The ancient Egyptians cultivated emmer and barley, and several other cereal grains, all of which were used to make the two main food staples of bread and beer. Flax plants, uprooted before they started flowering, were grown for the fibers of their stems. These fibers were split along their length and spun into thread, which was used to weave sheets of linen and to make clothing. Papyrus growing on the banks of the Nile River was used to make paper. Vegetables and fruits were grown in garden plots, close to habitations and on higher ground, and had to be watered by hand. Vegetables included leeks, garlic, melons, squashes, pulses, lettuce, and other crops, in addition to grapes that were made into wine. The Egyptians believed that a balanced relationship between people and animals was an essential element of the cosmic order; thus humans, animals and plants were believed to be members of a single whole. Animals, both domesticated and wild, were therefore a critical source of spirituality, companionship, and sustenance to the ancient Egyptians. Cattle were the most important livestock; the administration collected taxes on livestock in regular censuses, and the size of a herd reflected the prestige and importance of the estate or temple that owned them. In addition to cattle, the ancient Egyptians kept sheep, goats, and pigs. Poultry, such as ducks, geese, and pigeons, were captured in nets and bred on farms, where they were force-fed with dough to fatten them. The Nile provided a plentiful source of fish. Bees were also domesticated from at least the Old Kingdom, and provided both honey and wax. The ancient Egyptians used donkeys and oxen as beasts of burden, and they were responsible for plowing the fields and trampling seed into the soil. The slaughter of a fattened ox was also a central part of an offering ritual. Horses were introduced by the Hyksos in the Second Intermediate Period. Camels, although known from the New Kingdom, were not used as beasts of burden until the Late Period. There is also evidence to suggest that elephants were briefly utilized in the Late Period but largely abandoned due to lack of grazing land. Dogs, cats, and monkeys were common family pets, while more exotic pets imported from the heart of Africa, such as Sub-Saharan African lions, were reserved for royalty. Herodotus observed that the Egyptians were the only people to keep their animals with them in their houses. During the Late Period, the worship of the gods in their animal form was extremely popular, such as the cat goddess Bastet and the ibis god Thoth, and these animals were kept in large numbers for the purpose of ritual sacrifice. Egypt is rich in building and decorative stone, copper and lead ores, gold, and semiprecious stones. These natural resources allowed the ancient Egyptians to build monuments, sculpt statues, make tools, and fashion jewelry. Embalmers used salts from the Wadi Natrun for mummification, which also provided the gypsum needed to make plaster. 
Ore-bearing rock formations were found in distant, inhospitable wadis in the eastern desert and the Sinai, requiring large, state-controlled expeditions to obtain natural resources found there. There were extensive gold mines in Nubia, and one of the first maps known is of a gold mine in this region. The Wadi Hammamat was a notable source of granite, greywacke, and gold. Flint was the first mineral collected and used to make tools, and flint handaxes are the earliest pieces of evidence of habitation in the Nile valley. Nodules of the mineral were carefully flaked to make blades and arrowheads of moderate hardness and durability even after copper was adopted for this purpose. Ancient Egyptians were among the first to use minerals such as sulfur as cosmetic substances. The Egyptians worked deposits of the lead ore galena at Gebel Rosas to make net sinkers, plumb bobs, and small figurines. Copper was the most important metal for toolmaking in ancient Egypt and was smelted in furnaces from malachite ore mined in the Sinai. Workers collected gold by washing the nuggets out of sediment in alluvial deposits, or by the more labor-intensive process of grinding and washing gold-bearing quartzite. Iron deposits found in upper Egypt were utilized in the Late Period. High-quality building stones were abundant in Egypt; the ancient Egyptians quarried limestone all along the Nile valley, granite from Aswan, and basalt and sandstone from the wadis of the eastern desert. Deposits of decorative stones such as porphyry, greywacke, alabaster, and carnelian dotted the eastern desert and were collected even before the First Dynasty. In the Ptolemaic and Roman Periods, miners worked deposits of emeralds in Wadi Sikait and amethyst in Wadi el-Hudi. The ancient Egyptians engaged in trade with their foreign neighbors to obtain rare, exotic goods not found in Egypt. In the Predynastic Period, they established trade with Nubia to obtain gold and incense. They also established trade with Palestine, as evidenced by Palestinian-style oil jugs found in the burials of the First Dynasty pharaohs. An Egyptian colony stationed in southern Canaan dates to slightly before the First Dynasty. Narmer had Egyptian pottery produced in Canaan and exported back to Egypt. By the Second Dynasty at latest, ancient Egyptian trade with Byblos yielded a critical source of quality timber not found in Egypt. By the Fifth Dynasty, trade with Punt provided gold, aromatic resins, ebony, ivory, and wild animals such as monkeys and baboons. Egypt relied on trade with Anatolia for essential quantities of tin as well as supplementary supplies of copper, both metals being necessary for the manufacture of bronze. The ancient Egyptians prized the blue stone lapis lazuli, which had to be imported from far-away Afghanistan. Egypt's Mediterranean trade partners also included Greece and Crete, which provided, among other goods, supplies of olive oil. In exchange for its luxury imports and raw materials, Egypt mainly exported grain, gold, linen, and papyrus, in addition to other finished goods including glass and stone objects. The Egyptian language is a northern Afro-Asiatic language closely related to the Berber and Semitic languages. It has the second longest known history of any language (after Sumerian), having been written from c. 3200BC to the Middle Ages and remaining as a spoken language for longer. The phases of ancient Egyptian are Old Egyptian, Middle Egyptian (Classical Egyptian), Late Egyptian, Demotic and Coptic. 
Egyptian writings do not show dialect differences before Coptic, but the language was probably spoken in regional dialects around Memphis and later Thebes. Ancient Egyptian was a synthetic language, but it became more analytic later on. Late Egyptian developed prefixal definite and indefinite articles, which replaced the older inflectional suffixes. There was a change from the older verb–subject–object word order to subject–verb–object. The Egyptian hieroglyphic, hieratic, and demotic scripts were eventually replaced by the more phonetic Coptic alphabet. Coptic is still used in the liturgy of the Coptic Orthodox Church, and traces of it are found in modern Egyptian Arabic. Ancient Egyptian has 25 consonants similar to those of other Afro-Asiatic languages. These include pharyngeal and emphatic consonants, voiced and voiceless stops, voiceless fricatives, and voiced and voiceless affricates. It has three long and three short vowels, which expanded in Late Egyptian to about nine. The basic word in Egyptian, similar to Semitic and Berber, is a triliteral or biliteral root of consonants and semiconsonants. Suffixes are added to form words. The verb conjugation corresponds to the person. For example, the triconsonantal skeleton "sḏm" is the semantic core of the word 'hear'; its basic conjugation is "sḏm.f", 'he hears'. If the subject is a noun, suffixes are not added to the verb: "sḏm ḥmt", 'the woman hears'. Adjectives are derived from nouns through a process that Egyptologists call "nisbation" because of its similarity with Arabic. The word order is predicate–subject in verbal and adjectival sentences, and subject–predicate in nominal and adverbial sentences. The subject can be moved to the beginning of sentences if it is long and is followed by a resumptive pronoun. Verbs and nouns are negated by the particle "n", but "nn" is used for adverbial and adjectival sentences. Stress falls on the ultimate or penultimate syllable, which can be open (CV) or closed (CVC). Hieroglyphic writing dates from c. 3000BC, and is composed of hundreds of symbols. A hieroglyph can represent a word, a sound, or a silent determinative; and the same symbol can serve different purposes in different contexts. Hieroglyphs were a formal script, used on stone monuments and in tombs, that could be as detailed as individual works of art. In day-to-day writing, scribes used a cursive form of writing, called hieratic, which was quicker and easier. While formal hieroglyphs may be read in rows or columns in either direction (though typically written from right to left), hieratic was always written from right to left, usually in horizontal rows. A new form of writing, Demotic, became the prevalent writing style, and it is this form of writing—along with formal hieroglyphs—that accompanies the Greek text on the Rosetta Stone. Around the first century AD, the Coptic alphabet started to be used alongside the Demotic script. Coptic is a modified Greek alphabet with the addition of some Demotic signs. Although formal hieroglyphs were used in a ceremonial role until the fourth century, towards the end only a small handful of priests could still read them. As the traditional religious establishments were disbanded, knowledge of hieroglyphic writing was mostly lost. Attempts to decipher the hieroglyphs date to the Byzantine and Islamic periods in Egypt, but only in the 1820s, after the discovery of the Rosetta Stone and years of research by Thomas Young and Jean-François Champollion, were hieroglyphs substantially deciphered. 
Writing first appeared in association with kingship on labels and tags for items found in royal tombs. It was primarily an occupation of the scribes, who worked out of the "Per Ankh" institution or the House of Life. The latter comprised offices, libraries (called House of Books), laboratories and observatories. Some of the best-known pieces of ancient Egyptian literature, such as the Pyramid and Coffin Texts, were written in Classical Egyptian, which continued to be the language of writing until about 1300BC. Late Egyptian was spoken from the New Kingdom onward and is represented in Ramesside administrative documents, love poetry and tales, as well as in Demotic and Coptic texts. During this period, the tradition of writing had evolved into the tomb autobiography, such as those of Harkhuf and Weni. The genre known as "Sebayt" ("instructions") was developed to communicate teachings and guidance from famous nobles; the Ipuwer papyrus, a poem of lamentations describing natural disasters and social upheaval, is a famous example. The Story of Sinuhe, written in Middle Egyptian, might be the classic of Egyptian literature. Also written at this time was the Westcar Papyrus, a set of stories told to Khufu by his sons relating the marvels performed by priests. The Instruction of Amenemope is considered a masterpiece of Near Eastern literature. Towards the end of the New Kingdom, the vernacular language was more often employed to write popular pieces like the Story of Wenamun and the Instruction of Any. The former tells the story of a noble who is robbed on his way to buy cedar from Lebanon and of his struggle to return to Egypt. From about 700BC, narrative stories and instructions, such as the popular Instructions of Onchsheshonqy, as well as personal and business documents were written in the demotic script and phase of Egyptian. Many stories written in demotic during the Greco-Roman period were set in previous historical eras, when Egypt was an independent nation ruled by great pharaohs such as Ramesses II. Most ancient Egyptians were farmers tied to the land. Their dwellings were restricted to immediate family members, and were constructed of mud-brick designed to remain cool in the heat of the day. Each home had a kitchen with an open roof, which contained a grindstone for milling grain and a small oven for baking the bread. Walls were painted white and could be covered with dyed linen wall hangings. Floors were covered with reed mats, while wooden stools, beds raised from the floor and individual tables comprised the furniture. The ancient Egyptians placed a great value on hygiene and appearance. Most bathed in the Nile and used a pasty soap made from animal fat and chalk. Men shaved their entire bodies for cleanliness; perfumes and aromatic ointments covered bad odors and soothed skin. Clothing was made from simple linen sheets that were bleached white, and both men and women of the upper classes wore wigs, jewelry, and cosmetics. Children went without clothing until maturity, at about age 12, and at this age males were circumcised and had their heads shaved. Mothers were responsible for taking care of the children, while the father provided the family's income. Music and dance were popular entertainments for those who could afford them. Early instruments included flutes and harps, while instruments similar to trumpets, oboes, and pipes developed later and became popular. In the New Kingdom, the Egyptians played on bells, cymbals, tambourines, drums, and imported lutes and lyres from Asia. 
The sistrum was a rattle-like musical instrument that was especially important in religious ceremonies. The ancient Egyptians enjoyed a variety of leisure activities, including games and music. Senet, a board game where pieces moved according to random chance, was particularly popular from the earliest times; another similar game was mehen, which had a circular gaming board. “Hounds and Jackals” also known as 58 holes is another example of board games played in ancient Egypt. The first complete set of this game was discovered from a Theban tomb of the Egyptian pharaoh Amenemhat IV that dates to the 13th Dynasty. Juggling and ball games were popular with children, and wrestling is also documented in a tomb at Beni Hasan. The wealthy members of ancient Egyptian society enjoyed hunting and boating as well. The excavation of the workers' village of Deir el-Medina has resulted in one of the most thoroughly documented accounts of community life in the ancient world, which spans almost four hundred years. There is no comparable site in which the organization, social interactions, and working and living conditions of a community have been studied in such detail. Egyptian cuisine remained remarkably stable over time; indeed, the cuisine of modern Egypt retains some striking similarities to the cuisine of the ancients. The staple diet consisted of bread and beer, supplemented with vegetables such as onions and garlic, and fruit such as dates and figs. Wine and meat were enjoyed by all on feast days while the upper classes indulged on a more regular basis. Fish, meat, and fowl could be salted or dried, and could be cooked in stews or roasted on a grill. The architecture of ancient Egypt includes some of the most famous structures in the world: the Great Pyramids of Giza and the temples at Thebes. Building projects were organized and funded by the state for religious and commemorative purposes, but also to reinforce the wide-ranging power of the pharaoh. The ancient Egyptians were skilled builders; using only simple but effective tools and sighting instruments, architects could build large stone structures with great accuracy and precision that is still envied today. The domestic dwellings of elite and ordinary Egyptians alike were constructed from perishable materials such as mud bricks and wood, and have not survived. Peasants lived in simple homes, while the palaces of the elite and the pharaoh were more elaborate structures. A few surviving New Kingdom palaces, such as those in Malkata and Amarna, show richly decorated walls and floors with scenes of people, birds, water pools, deities and geometric designs. Important structures such as temples and tombs that were intended to last forever were constructed of stone instead of mud bricks. The architectural elements used in the world's first large-scale stone building, Djoser's mortuary complex, include post and lintel supports in the papyrus and lotus motif. The earliest preserved ancient Egyptian temples, such as those at Giza, consist of single, enclosed halls with roof slabs supported by columns. In the New Kingdom, architects added the pylon, the open courtyard, and the enclosed hypostyle hall to the front of the temple's sanctuary, a style that was standard until the Greco-Roman period. The earliest and most popular tomb architecture in the Old Kingdom was the mastaba, a flat-roofed rectangular structure of mudbrick or stone built over an underground burial chamber. 
The step pyramid of Djoser is a series of stone mastabas stacked on top of each other. Pyramids were built during the Old and Middle Kingdoms, but most later rulers abandoned them in favor of less conspicuous rock-cut tombs. The use of the pyramid form continued in private tomb chapels of the New Kingdom and in the royal pyramids of Nubia. The ancient Egyptians produced art to serve functional purposes. For over 3500 years, artists adhered to artistic forms and iconography that were developed during the Old Kingdom, following a strict set of principles that resisted foreign influence and internal change. These artistic standards—simple lines, shapes, and flat areas of color combined with the characteristic flat projection of figures with no indication of spatial depth—created a sense of order and balance within a composition. Images and text were intimately interwoven on tomb and temple walls, coffins, stelae, and even statues. The Narmer Palette, for example, displays figures that can also be read as hieroglyphs. Because of the rigid rules that governed its highly stylized and symbolic appearance, ancient Egyptian art served its political and religious purposes with precision and clarity. Ancient Egyptian artisans used stone as a medium for carving statues and fine reliefs, but used wood as a cheap and easily carved substitute. Paints were obtained from minerals such as iron ores (red and yellow ochres), copper ores (blue and green), soot or charcoal (black), and limestone (white). Paints could be mixed with gum arabic as a binder and pressed into cakes, which could be moistened with water when needed. Pharaohs used reliefs to record victories in battle, royal decrees, and religious scenes. Common citizens had access to pieces of funerary art, such as shabti statues and books of the dead, which they believed would protect them in the afterlife. During the Middle Kingdom, wooden or clay models depicting scenes from everyday life became popular additions to the tomb. In an attempt to duplicate the activities of the living in the afterlife, these models show laborers, houses, boats, and even military formations that are scale representations of the ideal ancient Egyptian afterlife. Despite the homogeneity of ancient Egyptian art, the styles of particular times and places sometimes reflected changing cultural or political attitudes. After the invasion of the Hyksos in the Second Intermediate Period, Minoan-style frescoes were found in Avaris. The most striking example of a politically driven change in artistic forms comes from the Amarna period, where figures were radically altered to conform to Akhenaten's revolutionary religious ideas. This style, known as Amarna art, was quickly abandoned after Akhenaten's death and replaced by the traditional forms. Beliefs in the divine and in the afterlife were ingrained in ancient Egyptian civilization from its inception; pharaonic rule was based on the divine right of kings. The Egyptian pantheon was populated by gods who had supernatural powers and were called on for help or protection. However, the gods were not always viewed as benevolent, and Egyptians believed they had to be appeased with offerings and prayers. The structure of this pantheon changed continually as new deities were promoted in the hierarchy, but priests made no effort to organize the diverse and sometimes conflicting myths and stories into a coherent system. These various conceptions of divinity were not considered contradictory but rather layers in the multiple facets of reality. 
Gods were worshiped in cult temples administered by priests acting on the king's behalf. At the center of the temple was the cult statue in a shrine. Temples were not places of public worship or congregation, and only on select feast days and celebrations was a shrine carrying the statue of the god brought out for public worship. Normally, the god's domain was sealed off from the outside world and was only accessible to temple officials. Common citizens could worship private statues in their homes, and amulets offered protection against the forces of chaos. After the New Kingdom, the pharaoh's role as a spiritual intermediary was de-emphasized as religious customs shifted to direct worship of the gods. As a result, priests developed a system of oracles to communicate the will of the gods directly to the people. The Egyptians believed that every human being was composed of physical and spiritual parts or "aspects". In addition to the body, each person had a "šwt" (shadow), a "ba" (personality or soul), a "ka" (life-force), and a "name". The heart, rather than the brain, was considered the seat of thoughts and emotions. After death, the spiritual aspects were released from the body and could move at will, but they required the physical remains (or a substitute, such as a statue) as a permanent home. The ultimate goal of the deceased was to rejoin his "ka" and "ba" and become one of the "blessed dead", living on as an "akh", or "effective one". For this to happen, the deceased had to be judged worthy in a trial, in which the heart was weighed against a "feather of truth." If deemed worthy, the deceased could continue their existence on earth in spiritual form. If they were not deemed worthy, their heart was eaten by Ammit the Devourer and they were erased from the Universe. The ancient Egyptians maintained an elaborate set of burial customs that they believed were necessary to ensure immortality after death. These customs involved preserving the body by mummification, performing burial ceremonies, and interring with the body goods the deceased would use in the afterlife. Before the Old Kingdom, bodies buried in desert pits were naturally preserved by desiccation. The arid, desert conditions were a boon throughout the history of ancient Egypt for burials of the poor, who could not afford the elaborate burial preparations available to the elite. Wealthier Egyptians began to bury their dead in stone tombs and use artificial mummification, which involved removing the internal organs, wrapping the body in linen, and burying it in a rectangular stone sarcophagus or wooden coffin. Beginning in the Fourth Dynasty, some parts were preserved separately in canopic jars. By the New Kingdom, the ancient Egyptians had perfected the art of mummification; the best technique took 70 days and involved removing the internal organs, removing the brain through the nose, and desiccating the body in a mixture of salts called natron. The body was then wrapped in linen with protective amulets inserted between layers and placed in a decorated anthropoid coffin. Mummies of the Late Period were also placed in painted cartonnage mummy cases. Actual preservation practices declined during the Ptolemaic and Roman eras, while greater emphasis was placed on the outer appearance of the mummy, which was decorated. Wealthy Egyptians were buried with larger quantities of luxury items, but all burials, regardless of social status, included goods for the deceased. 
Funerary texts were often included in the grave, and, beginning in the New Kingdom, so were shabti statues that were believed to perform manual labor for them in the afterlife. Rituals in which the deceased was magically re-animated accompanied burials. After burial, living relatives were expected to occasionally bring food to the tomb and recite prayers on behalf of the deceased. The ancient Egyptian military was responsible for defending Egypt against foreign invasion, and for maintaining Egypt's domination in the ancient Near East. The military protected mining expeditions to the Sinai during the Old Kingdom and fought civil wars during the First and Second Intermediate Periods. The military was responsible for maintaining fortifications along important trade routes, such as those found at the city of Buhen on the way to Nubia. Forts also were constructed to serve as military bases, such as the fortress at Sile, which was a base of operations for expeditions to the Levant. In the New Kingdom, a series of pharaohs used the standing Egyptian army to attack and conquer Kush and parts of the Levant. Typical military equipment included bows and arrows, spears, and round-topped shields made by stretching animal skin over a wooden frame. In the New Kingdom, the military began using chariots that had earlier been introduced by the Hyksos invaders. Weapons and armor continued to improve after the adoption of bronze: shields were now made from solid wood with a bronze buckle, spears were tipped with a bronze point, and the Khopesh was adopted from Asiatic soldiers. The pharaoh was usually depicted in art and literature riding at the head of the army; it has been suggested that at least a few pharaohs, such as Seqenenre Tao II and his sons, did do so. However, it has also been argued that "kings of this period did not personally act as frontline war leaders, fighting alongside their troops." Soldiers were recruited from the general population, but during, and especially after, the New Kingdom, mercenaries from Nubia, Kush, and Libya were hired to fight for Egypt. In technology, medicine, and mathematics, ancient Egypt achieved a relatively high standard of productivity and sophistication. Traditional empiricism, as evidenced by the Edwin Smith and Ebers papyri (c. 1600BC), is first credited to Egypt. The Egyptians created their own alphabet and decimal system. Even before the Old Kingdom, the ancient Egyptians had developed a glassy material known as faience, which they treated as a type of artificial semi-precious stone. Faience is a non-clay ceramic made of silica, small amounts of lime and soda, and a colorant, typically copper. The material was used to make beads, tiles, figurines, and small wares. Several methods can be used to create faience, but typically production involved application of the powdered materials in the form of a paste over a clay core, which was then fired. By a related technique, the ancient Egyptians produced a pigment known as Egyptian Blue, also called blue frit, which is produced by fusing (or sintering) silica, copper, lime, and an alkali such as natron. The product can be ground up and used as a pigment. The ancient Egyptians could fabricate a wide variety of objects from glass with great skill, but it is not clear whether they developed the process independently. It is also unclear whether they made their own raw glass or merely imported pre-made ingots, which they melted and finished. 
However, they did have technical expertise in making objects, as well as adding trace elements to control the color of the finished glass. A range of colors could be produced, including yellow, red, green, blue, purple, and white, and the glass could be made either transparent or opaque. The medical problems of the ancient Egyptians stemmed directly from their environment. Living and working close to the Nile brought hazards from malaria and debilitating schistosomiasis parasites, which caused liver and intestinal damage. Dangerous wildlife such as crocodiles and hippos also posed a common threat. The lifelong labors of farming and building put stress on the spine and joints, and traumatic injuries from construction and warfare all took a significant toll on the body. The grit and sand from stone-ground flour abraded teeth, leaving them susceptible to abscesses (though caries were rare). The diets of the wealthy were rich in sugars, which promoted periodontal disease. Despite the flattering physiques portrayed on tomb walls, the overweight mummies of many of the upper class show the effects of a life of overindulgence. Adult life expectancy was about 35 for men and 30 for women, but reaching adulthood was difficult as about one-third of the population died in infancy. Ancient Egyptian physicians were renowned in the ancient Near East for their healing skills, and some, such as Imhotep, remained famous long after their deaths. Herodotus remarked that there was a high degree of specialization among Egyptian physicians, with some treating only the head or the stomach, while others were eye-doctors and dentists. Training of physicians took place at the "Per Ankh" or "House of Life" institution, most notably those headquartered in Per-Bastet during the New Kingdom and at Abydos and Saïs in the Late Period. Medical papyri show empirical knowledge of anatomy, injuries, and practical treatments. Wounds were treated by bandaging with raw meat, white linen, sutures, nets, pads, and swabs soaked with honey to prevent infection, while opium, thyme, and belladonna were used to relieve pain. The earliest records of burn treatment describe burn dressings that use the milk from mothers of male babies. Prayers were made to the goddess Isis. Moldy bread, honey, and copper salts were also used to prevent infection from dirt in burns. Garlic and onions were used regularly to promote good health and were thought to relieve asthma symptoms. Ancient Egyptian surgeons stitched wounds, set broken bones, and amputated diseased limbs, but they recognized that some injuries were so serious that they could only make the patient comfortable until death occurred. Early Egyptians knew how to assemble planks of wood into a ship hull and had mastered advanced forms of shipbuilding as early as 3000BC. The Archaeological Institute of America reports that the oldest planked ships known are the Abydos boats. A group of 14 ships discovered at Abydos was constructed of wooden planks "sewn" together. In these vessels, discovered by Egyptologist David O'Connor of New York University, woven straps were found to have been used to lash the planks together, and reeds or grass stuffed between the planks helped to seal the seams. Because the ships are all buried together and near a mortuary belonging to Pharaoh Khasekhemwy, originally they were all thought to have belonged to him, but one of the 14 ships dates to 3000BC, and the associated pottery jars buried with the vessels also suggest earlier dating. 
The ship dating to 3000BC is now thought to have belonged to an earlier pharaoh, perhaps one as early as Hor-Aha. Early Egyptians also knew how to assemble planks of wood with treenails to fasten them together, using pitch for caulking the seams. The "Khufu ship", a vessel sealed into a pit in the Giza pyramid complex at the foot of the Great Pyramid of Giza in the Fourth Dynasty around 2500BC, is a full-size surviving example that may have filled the symbolic function of a solar barque. Early Egyptians also knew how to fasten the planks of this ship together with mortise and tenon joints. Large seagoing ships are known to have been heavily used by the Egyptians in their trade with the city-states of the eastern Mediterranean, especially Byblos (on the coast of modern-day Lebanon), and in several expeditions down the Red Sea to the Land of Punt. In fact, one of the earliest Egyptian words for a seagoing ship is a "Byblos Ship", which originally defined a class of Egyptian seagoing ships used on the Byblos run; however, by the end of the Old Kingdom, the term had come to include large seagoing ships, whatever their destination. In 2011, archaeologists from Italy, the United States, and Egypt excavating a dried-up lagoon known as Mersa Gawasis unearthed traces of an ancient harbor that once launched early voyages, such as Hatshepsut's Punt expedition, onto the open ocean. Some of the site's most evocative evidence for the ancient Egyptians' seafaring prowess includes large ship timbers and hundreds of feet of ropes, made from papyrus, coiled in huge bundles. In 2013, a team of Franco-Egyptian archaeologists discovered what is believed to be the world's oldest port, dating back about 4,500 years to the time of King Cheops, on the Red Sea coast near Wadi el-Jarf (about 110 miles south of Suez). In 1977, an ancient north–south canal dating to the Middle Kingdom of Egypt was discovered extending from Lake Timsah to the Ballah Lakes. It was dated by extrapolating the dates of ancient sites constructed along its course. The earliest attested examples of mathematical calculations date to the predynastic Naqada period, and show a fully developed numeral system. The importance of mathematics to an educated Egyptian is suggested by a New Kingdom fictional letter in which the writer proposes a scholarly competition between himself and another scribe regarding everyday calculation tasks such as accounting of land, labor, and grain. Texts such as the Rhind Mathematical Papyrus and the Moscow Mathematical Papyrus show that the ancient Egyptians could perform the four basic mathematical operations—addition, subtraction, multiplication, and division—use fractions, calculate the areas of rectangles, triangles, and circles, and compute the volumes of boxes, columns, and pyramids. They understood basic concepts of algebra and geometry, and could solve simple sets of simultaneous equations. Mathematical notation was decimal, and based on hieroglyphic signs for each power of ten up to one million. Each of these could be written as many times as necessary to add up to the desired number; so to write the number eighty or eight hundred, the symbol for ten or one hundred was written eight times, respectively. Because their methods of calculation could not handle most fractions with a numerator greater than one, they had to write fractions as the sum of several unit fractions (fractions with a numerator of one). For example, they resolved the fraction "two-fifths" into the sum of "one-third" + "one-fifteenth". 
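The two devices just described, additive power-of-ten notation and decomposition into unit fractions, can be illustrated with a short modern sketch. The function names below are hypothetical, and the greedy decomposition is a later method often attributed to Fibonacci rather than a documented scribal procedure; Egyptian scribes relied instead on precomputed tables such as the 2/n table of the Rhind papyrus.

from fractions import Fraction

def power_of_ten_signs(number):
    # Count how many times each power-of-ten sign would be repeated
    # to write `number` in the additive decimal notation described above.
    counts = {}
    for power in (1_000_000, 100_000, 10_000, 1_000, 100, 10, 1):
        counts[power], number = divmod(number, power)
    return {power: count for power, count in counts.items() if count}

def unit_fractions(frac):
    # Greedily split a fraction into a sum of distinct unit fractions.
    # A modern reconstruction, not necessarily how the scribes worked.
    parts = []
    while frac > 0:
        n = -(-frac.denominator // frac.numerator)  # smallest n with 1/n <= frac
        parts.append(Fraction(1, n))
        frac -= Fraction(1, n)
    return parts

print(power_of_ten_signs(800))         # {100: 8} -> the hundred sign written eight times
print(unit_fractions(Fraction(2, 5)))  # [Fraction(1, 3), Fraction(1, 15)], i.e. 1/3 + 1/15

Running the sketch reproduces the worked example above: two-fifths decomposes into one-third plus one-fifteenth, and the number eight hundred is written by repeating the hundred sign eight times.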
Standard tables of values facilitated such decompositions. Some common fractions, however, were written with special glyphs of their own; the equivalent of the modern two-thirds, for example, had its own sign. Ancient Egyptian mathematicians knew the Pythagorean theorem as an empirical formula. They were aware, for example, that a triangle had a right angle opposite the hypotenuse when its sides were in a 3–4–5 ratio. They were able to estimate the area of a circle by subtracting one-ninth from its diameter and squaring the result, giving an area of (8d/9)² for a circle of diameter d; this is equivalent to using a value of about 3.16 for π and is a reasonable approximation of the formula πr². The golden ratio seems to be reflected in many Egyptian constructions, including the pyramids, but its use may have been an unintended consequence of the ancient Egyptian practice of combining the use of knotted ropes with an intuitive sense of proportion and harmony. In 2017, a team led by Johannes Krause managed the first reliable sequencing of the genomes of 90 mummified individuals from northern Egypt (buried near modern-day Cairo), which constituted "the first reliable data set obtained from ancient Egyptians using high-throughput DNA sequencing methods." While not conclusive, because of the limited time frame and restricted location that the mummies represent, their study nevertheless showed that these ancient Egyptians "closely resembled ancient and modern Near Eastern populations, especially those in the Levant, and had almost no DNA from sub-Saharan Africa. What's more, the genetics of the mummies remained remarkably consistent even as different powers—including Nubians, Greeks, and Romans—conquered the empire." Later, however, something did alter the genomes of Egyptians. Some 15% to 20% of modern Egyptians' DNA reflects sub-Saharan ancestry, but the ancient mummies had only 6–15% sub-Saharan DNA. The authors called for additional research to be undertaken. Other genetic studies show much greater levels of sub-Saharan African ancestry in the current-day populations of southern as opposed to northern Egypt, and anticipate that mummies from southern Egypt would contain greater levels of sub-Saharan African ancestry than Lower Egyptian mummies. The culture and monuments of ancient Egypt have left a lasting legacy on the world. The cult of the goddess Isis, for example, became popular in the Roman Empire, as obelisks and other relics were transported back to Rome. The Romans also imported building materials from Egypt to erect Egyptian-style structures. Early historians such as Herodotus, Strabo, and Diodorus Siculus studied and wrote about the land, which Romans came to view as a place of mystery. During the Middle Ages and the Renaissance, Egyptian pagan culture was in decline after the rise of Christianity and later Islam, but interest in Egyptian antiquity continued in the writings of medieval scholars such as Dhul-Nun al-Misri and al-Maqrizi. In the seventeenth and eighteenth centuries, European travelers and tourists brought back antiquities and wrote stories of their journeys, leading to a wave of Egyptomania across Europe. This renewed interest sent collectors to Egypt, who took, purchased, or were given many important antiquities. Although the European colonial occupation of Egypt destroyed a significant portion of the country's historical legacy, some foreigners left more positive marks. Napoleon, for example, arranged the first studies in Egyptology when he brought some 150 scientists and artists to study and document Egypt's natural history, which was published in the "Description de l'Égypte". 
In the 20th century, the Egyptian Government and archaeologists alike recognized the importance of cultural respect and integrity in excavations. The Supreme Council of Antiquities now approves and oversees all excavations, which are aimed at finding information rather than treasure. The council also supervises museums and monument reconstruction programs designed to preserve the historical legacy of Egypt.
https://en.wikipedia.org/wiki?curid=874
Motor neuron disease Motor neuron diseases or motor neurone diseases (MNDs) are a group of rare neurodegenerative disorders that selectively affect motor neurons, the cells which control voluntary muscles of the body. They include amyotrophic lateral sclerosis (ALS), progressive bulbar palsy (PBP), pseudobulbar palsy, progressive muscular atrophy (PMA), primary lateral sclerosis (PLS), and monomelic amyotrophy (MMA), as well as some rarer variants resembling ALS. Motor neuron diseases affect both children and adults. While each motor neuron disease affects patients differently, they all cause movement-related symptoms, mainly muscle weakness. Most of these diseases seem to occur randomly without known causes, but some forms are inherited. Studies into these inherited forms have led to discoveries of various genes (e.g. "SOD1") that are thought to be important in understanding how the disease occurs. Symptoms of motor neuron diseases can be first seen at birth or can come on slowly later in life. Most of these diseases worsen over time; while some, such as ALS, shorten one's life expectancy, others do not. Currently, there are no approved treatments for the majority of motor neuron disorders, and care is mostly symptomatic. Signs and symptoms depend on the specific disease, but motor neuron diseases typically manifest as a group of movement-related symptoms. They come on slowly, and worsen over the course of more than three months. Various patterns of muscle weakness are seen, and muscle cramps and spasms may occur. One can have difficulty breathing when climbing stairs (exertion), difficulty breathing when lying down (orthopnea), or even respiratory failure if the breathing muscles become involved. Bulbar symptoms, including difficulty speaking (dysarthria), difficulty swallowing (dysphagia), and excessive saliva production (sialorrhea), can also occur. Sensation, or the ability to feel, is typically not affected. Emotional disturbance (e.g. pseudobulbar affect) and cognitive and behavioural changes (e.g. problems in word fluency, decision-making, and memory) are also seen. There can be lower motor neuron findings (e.g. muscle wasting, muscle twitching), upper motor neuron findings (e.g. brisk reflexes, Babinski reflex, Hoffman's reflex, increased muscle tone), or both. Motor neuron diseases are seen both in children and in adults. Those that affect children tend to be inherited or familial, and their symptoms are either present at birth or appear before the child learns to walk. Those that affect adults tend to appear after age 40. The clinical course depends on the specific disease, but most progress or worsen over the course of months. Some are fatal (e.g. ALS), while others are not (e.g. PLS). Various patterns of muscle weakness occur in different motor neuron diseases. Weakness can be symmetric or asymmetric, and it can occur in body parts that are distal, proximal, or both. According to Statland et al., three main weakness patterns are seen in motor neuron diseases. Motor neuron diseases are also on a spectrum in terms of upper and lower motor neuron involvement. Some have just lower or upper motor neuron findings, while others have a mix of both. Lower motor neuron (LMN) findings include muscle atrophy and fasciculations, and upper motor neuron (UMN) findings include hyperreflexia, spasticity, muscle spasm, and abnormal reflexes. Pure upper motor neuron diseases, or those with just UMN findings, include PLS. 
Pure lower motor neuron diseases, or those with just LMN findings, include PMA. Motor neuron diseases with both UMN and LMN findings include both familial and sporadic ALS. Most cases are sporadic and their causes are usually not known. It is thought that environmental, toxic, viral, or genetic factors may be involved. TARDBP (TAR DNA-binding protein 43), also referred to as TDP-43, is a critical component of the non-homologous end joining (NHEJ) enzymatic pathway that repairs DNA double-strand breaks in pluripotent stem cell-derived motor neurons. TDP-43 is rapidly recruited to double-strand breaks, where it acts as a scaffold for the recruitment of the XRCC4-DNA ligase protein complex that then acts to repair double-strand breaks. About 95% of ALS patients show abnormal nucleocytoplasmic localization of TDP-43 in their spinal motor neurons. In TDP-43-depleted human neural stem cell-derived motor neurons, as well as in the spinal cord specimens of sporadic ALS patients, there is significant double-strand break accumulation and reduced levels of NHEJ. In adults, men are more commonly affected than women. Differential diagnosis can be challenging due to the number of overlapping symptoms shared between several motor neuron diseases. Frequently, the diagnosis is based on clinical findings (i.e. LMN vs. UMN signs and symptoms, patterns of weakness), family history of MND, and a variety of tests, many of which are used to rule out disease mimics, which can manifest with identical symptoms. Please refer to individual articles for the diagnostic methods used in each individual motor neuron disease. Motor neuron disease describes a collection of clinical disorders, characterized by progressive muscle weakness and the degeneration of the motor neuron on electrophysiological testing. As discussed above, the term "motor neuron disease" has varying meanings in different countries. Similarly, the literature inconsistently classifies which degenerative motor neuron disorders can be included under the umbrella term "motor neuron disease". The four main types of MND are marked (*) in the table below. All types of MND can be differentiated by two defining characteristics: whether the disease is sporadic or inherited, and whether it involves upper motor neurons, lower motor neurons, or both. Sporadic or acquired MNDs occur in patients with no family history of degenerative motor neuron disease. Inherited or genetic MNDs adhere to one of the following inheritance patterns: autosomal dominant, autosomal recessive, or X-linked. Some disorders, like ALS, can occur sporadically (85%) or can have a genetic cause (15%) with the same clinical symptoms and progression of disease. UMNs are motor neurons that project from the cortex down to the brainstem or spinal cord. LMNs originate in the anterior horns of the spinal cord and synapse on peripheral muscles. Both motor neurons are necessary for the strong contraction of a muscle, but damage to a UMN can be distinguished from damage to a LMN by physical exam. There are no known curative treatments for the majority of motor neuron disorders. Please refer to the articles on individual disorders for more details. The table below lists life expectancy for patients who are diagnosed with MND. Please refer to individual articles for more detail. In the United States, the term "motor neuron disease" is often used to denote amyotrophic lateral sclerosis (Lou Gehrig's disease), the most common disorder in the group. In the United Kingdom, the term is spelled "motor neurone disease" and is frequently used for the entire group, but can also refer specifically to ALS. 
While MND refers to a specific subset of similar diseases, there are numerous other diseases of motor neurons that are referred to collectively as "motor neuron disorders", for instance the diseases belonging to the spinal muscular atrophies group. However, they are not classified as "motor neuron diseases" by the 11th edition of the International Statistical Classification of Diseases and Related Health Problems (ICD-11), which is the definition followed in this article.
https://en.wikipedia.org/wiki?curid=876
Abjad An abjad is a type of writing system in which (in contrast to true alphabets) each symbol or glyph stands for a consonant, in effect leaving it to readers to infer or otherwise supply an appropriate vowel. So-called impure abjads represent vowels with optional diacritics, a limited number of distinct vowel glyphs, or both. The name "abjad" is based on the first four letters of the Arabic alphabet in its original order—corresponding to a, b, j, d—and was introduced to replace the more common terms "consonantary" and "consonantal alphabet" in describing the family of scripts classified as "West Semitic." The name "abjad" is derived from pronouncing the first letters of the Arabic alphabet in its original order. That older ordering of the Arabic letters matched the ordering of the Phoenician, Hebrew and other Semitic proto-alphabets: specifically, aleph, bet, gimel, dalet. According to the formulations of Peter T. Daniels, abjads differ from alphabets in that only consonants, not vowels, are represented among the basic graphemes. Abjads differ from abugidas, another category defined by Daniels, in that in abjads, the vowel sound is "implied" by phonology, and where vowel marks exist for the system, such as nikkud for Hebrew and ḥarakāt for Arabic, their use is optional and not the dominant (or literate) form. Abugidas mark all vowels (other than the "inherent" vowel) with a diacritic, a minor attachment to the letter, or a standalone glyph. Some abugidas use a special symbol to "suppress" the inherent vowel so that the consonant alone can be properly represented. In a syllabary, a grapheme denotes a complete syllable, that is, either a lone vowel sound or a combination of a vowel sound with one or more consonant sounds. The opposition of abjad and alphabet, as formulated by Daniels, has been rejected by some other scholars, because "abjad" is also used as a term not only for the Arabic numeral system but also, most importantly for historical grammatology, for the alphabetic ordering device (i.e. letter order) of the ancient Northwest Semitic scripts, in opposition to the 'South Arabian' order. This double usage has had damaging effects on terminology in general, and especially in (ancient) Semitic philology. The term also suggests that consonantal alphabets, in contrast to, for instance, the Greek alphabet, were not yet true alphabets and not entirely complete, lacking something needed to be a fully working script system. It has also been objected that, as a set of letters, an alphabet is not a mirror of what ought to be present in a language from a phonological point of view; rather, it is the stock of symbols that provides maximum efficiency with the least effort from a semantic point of view. The first abjad to gain widespread usage was the Phoenician abjad. Unlike other contemporary scripts, such as cuneiform and Egyptian hieroglyphs, the Phoenician script consisted of only a few dozen symbols. This made the script easy to learn, and seafaring Phoenician merchants took the script throughout the then-known world. The Phoenician abjad was a radical simplification of phonetic writing, since hieroglyphic writing required the writer to pick a hieroglyph starting with the desired sound in order to write phonetically, much as "man'yōgana" (Chinese characters used solely for their phonetic value) was used to represent Japanese phonetically before the invention of kana. Phoenician gave rise to a number of new writing systems, including the Greek alphabet and Aramaic, a widely used abjad. 
The Greek alphabet evolved into the modern western alphabets, such as Latin and Cyrillic, while Aramaic became the ancestor of many modern abjads and abugidas of Asia. Impure abjads have characters for some vowels, optional vowel diacritics, or both. The term pure abjad refers to scripts entirely lacking in vowel indicators. However, most modern abjads, such as Arabic, Hebrew, Aramaic, and Pahlavi, are "impure" abjads; that is, they also contain symbols for some of the vowel phonemes, although these non-diacritic vowel letters are also used to write certain consonants, particularly approximants that sound similar to long vowels. A "pure" abjad is exemplified (perhaps) by very early forms of ancient Phoenician, though at some point (at least by the 9th century BC) it and most of the contemporary Semitic abjads had begun to overload a few of the consonant symbols with a secondary function as vowel markers, called "matres lectionis". This practice was at first rare and limited in scope but became increasingly common and more developed in later times. In the 9th century BC the Greeks adapted the Phoenician script for use in their own language. The phonetic structure of the Greek language created too many ambiguities when vowels went unrepresented, so the script was modified. They did not need letters for the guttural sounds represented by "aleph", "he", "heth" or "ayin", so these symbols were assigned vocalic values. The letters "waw" and "yod" were also adapted into vowel signs; along with "he", these were already used as "matres lectionis" in Phoenician. The major innovation of Greek was to dedicate these symbols exclusively and unambiguously to vowel sounds that could be combined arbitrarily with consonants (as opposed to syllabaries such as Linear B, which usually have vowel symbols but cannot combine them with consonants to form arbitrary syllables). Abugidas developed along a slightly different route. The basic consonantal symbol was considered to have an inherent "a" vowel sound. Hooks or short lines attached to various parts of the basic letter modify the vowel. In this way, the South Arabian alphabet evolved into the Ge'ez alphabet between the 5th century BC and the 5th century AD. Similarly, around the 3rd century BC, the Brāhmī script developed (from the Aramaic abjad, it has been hypothesized). The other major family of abugidas, Canadian Aboriginal syllabics, was initially developed in the 1840s by missionary and linguist James Evans for the Cree and Ojibwe languages. Evans used features of Devanagari script and Pitman shorthand to create his initial abugida. Later in the 19th century, other missionaries adapted Evans' system to other Canadian aboriginal languages. Canadian syllabics differ from other abugidas in that the vowel is indicated by rotation of the consonantal symbol, with each vowel having a consistent orientation. The abjad form of writing is well-adapted to the morphological structure of the Semitic languages it was developed to write. This is because words in Semitic languages are formed from a root consisting of (usually) three consonants, the vowels being used to indicate inflectional or derived forms. For instance, according to Classical Arabic and Modern Standard Arabic, from the Arabic root "Dh-B-Ḥ" (to slaughter) can be derived the forms dhabaḥa (he slaughtered), dhabaḥta (you (masculine singular) slaughtered), yadhbaḥu (he slaughters), and madhbaḥ (slaughterhouse). 
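This root-and-pattern formation can be sketched schematically. The snippet below is a toy illustration only (the helper name and the vowel templates are modern conveniences; real Arabic morphology also involves affixes, gemination, and sound changes not modelled here), but it shows how the three root consonants stay fixed while the vowels carry the grammatical information:

```python
def interdigitate(root, pattern):
    """Interleave a triconsonantal Semitic root with a vowel/affix pattern.
    'C1', 'C2', 'C3' in the pattern mark the slots for the root consonants."""
    c1, c2, c3 = root
    return pattern.replace("C1", c1).replace("C2", c2).replace("C3", c3)

root = ("dh", "b", "ḥ")                     # the root Dh-B-Ḥ, "to slaughter"
print(interdigitate(root, "C1aC2aC3a"))     # dhabaḥa   ~ "he slaughtered"
print(interdigitate(root, "C1aC2aC3ta"))    # dhabaḥta  ~ "you (m. sg.) slaughtered"
print(interdigitate(root, "yaC1C2aC3u"))    # yadhbaḥu  ~ "he slaughters"
print(interdigitate(root, "maC1C2aC3"))     # madhbaḥ   ~ "slaughterhouse"
```

Because only the consonant slots are stable, a script that records just dh-b-ḥ keeps all of these forms visibly tied to the same root, which is the advantage discussed next.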
In most cases, the absence of full glyphs for vowels makes the common root clearer, allowing readers to guess the meaning of unfamiliar words from familiar roots (especially in conjunction with context clues) and improving word recognition while reading for practiced readers. By contrast, the Arabic and Hebrew scripts sometimes perform the role of true alphabets rather than abjads when used to write certain Indo-European languages, including Kurdish, Bosnian, and Yiddish.
https://en.wikipedia.org/wiki?curid=877
Abugida An abugida (from Ge'ez: አቡጊዳ "’abugida"), or alphasyllabary, is a segmental writing system in which consonant–vowel sequences are written as a unit; each unit is based on a consonant letter, and vowel notation is secondary. This contrasts with a full alphabet, in which vowels have status equal to consonants, and with an abjad, in which vowel marking is absent, partial, or optional (although in less formal contexts, all three types of script may be termed alphabets). The terms also contrast them with a syllabary, in which the symbols cannot be split into separate consonants and vowels. Abugidas include the extensive Brahmic family of scripts of Tibet, South and Southeast Asia, Semitic Ethiopic scripts, and Canadian Aboriginal syllabics. As is the case for syllabaries, the units of the writing system may consist of the representations both of syllables and of consonants. For scripts of the Brahmic family, the term "akshara" is used for the units. "’Äbugida" is an Ethiopian name for the Ge‘ez script, taken from four letters of that script, "ä bu gi da", in much the same way that "abecedary" is derived from Latin "a be ce de", "abjad" is derived from the Arabic "a b j d", and "alphabet" is derived from the names of the two first letters in the Greek alphabet, "alpha" and "beta". "Abugida" as a term in linguistics was proposed by Peter T. Daniels in his 1990 typology of writing systems. As Daniels used the word, an abugida is in contrast with a syllabary, where letters with shared consonants or vowels show no particular resemblance to one another, and also with an alphabet proper, where independent letters are used to denote both consonants and vowels. The term "alphasyllabary" was suggested for the Indic scripts in 1997 by William Bright, following South Asian linguistic usage, to convey the idea that "they share features of both alphabet and syllabary." Abugidas were long considered to be syllabaries, or intermediate between syllabaries and alphabets, and the term "syllabics" is retained in the name of Canadian Aboriginal Syllabics. Other terms that have been used include "neosyllabary" (Février 1959), "pseudo-alphabet" (Householder 1959), "semisyllabary" (Diringer 1968; a word that has other uses) and "syllabic alphabet" (Coulmas 1996; this term is also a synonym for syllabary). The formal definitions given by Daniels and Bright for abugida and alphasyllabary differ; some writing systems are abugidas but not alphasyllabaries, and some are alphasyllabaries but not abugidas. An abugida is defined as "a type of writing system whose basic characters denote consonants followed by a particular vowel, and in which diacritics denote other vowels". (This 'particular vowel' is referred to as the "inherent" or "implicit" vowel, as opposed to the "explicit" vowels marked by the 'diacritics'.) An alphasyllabary is defined as "a type of writing system in which the vowels are denoted by subsidiary symbols not all of which occur in a linear order (with relation to the consonant symbols) that is congruent with their temporal order in speech". Bright did not require that an alphabet explicitly represent all vowels. ʼPhags-pa is an example of an abugida that is not an alphasyllabary, and modern Lao is an example of an alphasyllabary that is not an abugida, for its vowels are always explicit. This description is expressed in terms of an abugida. 
Formally, an alphasyllabary that is not an abugida can be converted to an abugida by adding a purely formal vowel sound that is never used and declaring that to be the inherent vowel of the letters representing consonants. This may formally make the system ambiguous, but in practice this is not a problem, for the interpretation with the never-used inherent vowel sound will always be a wrong interpretation. Note that the actual pronunciation may be complicated by interactions between the sounds apparently written, just as the sounds of the letters in the English words "wan", "gem" and "war" are affected by neighbouring letters. The fundamental principles of an abugida apply to words made up of consonant-vowel (CV) syllables. The syllables are written as linear sequences of the units of the script. Each syllable is either a letter that represents the sound of a consonant and the inherent vowel, or a letter with a modification to indicate the vowel, either by means of diacritics, or by changes in the form of the letter itself. If all modifications are by diacritics and all diacritics follow the direction of the writing of the letters, then the abugida is not an alphasyllabary. However, most languages have words that are more complicated than a sequence of CV syllables, even ignoring tone. The first complication is syllables that consist of just a vowel (V). In some languages this issue does not arise, because every syllable starts with a consonant; this is common in Semitic languages and in the languages of mainland SE Asia. For some languages, a zero consonant letter is used as though every syllable began with a consonant. For other languages, each vowel has a separate letter that is used for each syllable consisting of just the vowel. These letters are known as "independent vowels", and are found in most Indic scripts. These letters may be quite different from the corresponding diacritics, which by contrast are known as "dependent vowels". As a result of the spread of writing systems, independent vowels may be used to represent syllables beginning with a glottal stop, even for non-initial syllables. The next two complications are sequences of consonants before a vowel (CCV) and syllables ending in a consonant (CVC). The simplest solution, which is not always available, is to break with the principle of writing words as a sequence of syllables and use a unit representing just a consonant (C). This unit may be represented in several ways. In a true abugida, the lack of distinctive marking may result from the diachronic loss of the inherent vowel, e.g. by syncope and apocope in Hindi. When not handled by decomposition into C + CV, CCV syllables are handled by combining the two consonants. In the Indic scripts, the earliest method was simply to arrange them vertically, but the two consonants may also merge as conjunct consonant letters, where two or more letters are graphically joined in a ligature, or otherwise change their shapes. Rarely, one of the consonants may be replaced by a gemination mark, as in Gurmukhi. When they are arranged vertically, as in Burmese or Khmer, they are said to be 'stacked'. Often there has been a change to writing the two consonants side by side. In the latter case, the fact of combination may be indicated by a diacritic on one of the consonants or a change in the form of one of the consonants, e.g. the half forms of Devanagari. 
Generally, the reading order is top to bottom or the general reading order of the script, but sometimes the order is reversed. The division of a word into syllables for the purposes of writing does not always accord with the natural phonetics of the language. For example, Brahmic scripts commonly handle a phonetic sequence CVC-CV as CV-CCV or CV-C-CV. However, sometimes phonetic CVC syllables are handled as single units, and the final consonant may be represented in several ways. More complicated unit structures (e.g. CC or CCVC) are handled by combining the various techniques above. There are three principal families of abugidas, depending on whether vowels are indicated by modifying consonants by diacritics, distortion, or orientation. Tāna of the Maldives has dependent vowels and a zero vowel sign, but no inherent vowel. Indic scripts originated in India and spread to Southeast Asia. All surviving Indic scripts are descendants of the Brahmi alphabet. Today they are used in most languages of South Asia (although replaced by Perso-Arabic in Urdu, Kashmiri and some other languages of Pakistan and India), mainland Southeast Asia (Myanmar, Thailand, Laos, and Cambodia), and the Indonesian archipelago (Javanese, Balinese, Sundanese, etc.). The primary division is into North Indic scripts used in Northern India, Nepal, Tibet and Bhutan, and Southern Indic scripts used in South India, Sri Lanka and Southeast Asia. South Indic letter forms are very rounded; North Indic less so, though Odia, Golmol and Litumol of Nepal script are rounded. Most North Indic scripts' full letters incorporate a horizontal line at the top, with Gujarati and Odia as exceptions; South Indic scripts do not. Indic scripts indicate vowels through dependent vowel signs (diacritics) around the consonants, often including a sign that explicitly indicates the lack of a vowel. If a consonant has no vowel sign, this indicates a default vowel. Vowel diacritics may appear above, below, to the left, to the right, or around the consonant. The most widely used Indic script is Devanagari, shared by Hindi, Bhojpuri, Marathi, Konkani, Nepali, and often Sanskrit. A basic letter such as क in Hindi represents a syllable with the default vowel, in this case "ka". In some languages, including Hindi, it becomes a final closing consonant at the end of a word, in this case "k". The inherent vowel may be changed by adding vowel marks (diacritics), producing syllables such as कि "ki," कु "ku," के "ke," को "ko." In many of the Brahmic scripts, a syllable beginning with a cluster is treated as a single character for purposes of vowel marking, so a vowel marker like ि "-i," falling before the character it modifies, may appear several positions before the place where it is pronounced. For example, the game cricket in Hindi is क्रिकेट "cricket"; the diacritic for "i" appears before the consonant cluster "kr", not before the "r". A more unusual example is seen in the Batak alphabet: here the syllable "bim" is written "ba-ma-i-(virama)". That is, the vowel diacritic and the virama are both written after the consonants for the whole syllable. In many abugidas, there is also a diacritic to suppress the inherent vowel, yielding the bare consonant. In Devanagari, क् is "k," and ल् is "l". This is called the "virāma" or "halantam" in Sanskrit. It may be used to form consonant clusters, or to indicate that a consonant occurs at the end of a word. Thus in Sanskrit, a default vowel consonant such as क does not take on a final consonant sound. Instead, it keeps its vowel. 
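In Unicode text these dependent vowel signs are separate combining characters stored after their consonant, even when, like ि, they are displayed to the left of it. The following minimal Python sketch is illustrative only; it simply prints standard Devanagari code points:

```python
# Devanagari code points used below (standard Unicode assignments):
#   क U+0915 KA, ि U+093F vowel sign I, े U+0947 vowel sign E,
#   ् U+094D virama, र U+0930 RA, ट U+091F TTA
ka = "\u0915"                                            # क  "ka"
ki = "\u0915\u093F"                                      # कि "ki": the i-sign is stored after क but rendered to its left
cricket = "\u0915\u094D\u0930\u093F\u0915\u0947\u091F"   # क्रिकेट "cricket"
print(ka, ki, cricket)
print([hex(ord(ch)) for ch in cricket])  # the i-sign (0x93f) follows the cluster in memory
```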
For writing two consonants without a vowel in between, instead of using a diacritic on the first consonant to remove its vowel, another popular method is to use special conjunct forms in which two or more consonant characters are merged to express a cluster, such as Devanagari: क्ल "kla." (Note that some fonts display this as क् followed by ल, rather than forming a conjunct. This expedient is used by ISCII and South Asian scripts of Unicode.) Thus a closed syllable such as "kal" requires two "aksharas" to write. The Róng script used for the Lepcha language goes further than other Indic abugidas, in that a single "akshara" can represent a closed syllable: Not only the vowel, but any final consonant is indicated by a diacritic. For example, the syllable [sok] would be written as something like s̥̽, here with an underring representing the vowel [o] and an overcross representing the final [k]. Most other Indic abugidas can only indicate a very limited set of final consonants with diacritics, if they can indicate any at all. In Ethiopic (where the term "abugida" originates) the diacritics have been fused to the consonants to the point that they must be considered modifications of the form of the letters. Children learn each modification separately, as in a syllabary; nonetheless, the graphic similarities between syllables with the same consonant are readily apparent, unlike the case in a true syllabary. Though now an abugida, the Ge'ez script, until the advent of Christianity ("ca." AD 350), had originally been what would now be termed an "abjad". In the Ge'ez abugida (or "fidel"), the base form of the letter (also known as "fidel") may be altered. For example, ሀ "hä" (base form), ሁ "hu" (with a right-side diacritic that doesn't alter the letter), ሂ "hi" (with a subdiacritic that compresses the consonant, so it is the same height), ህ "hə" (where the letter is modified with a kink in the left arm). In the family known as Canadian Aboriginal syllabics, which was inspired by the Devanagari script of India, vowels are indicated by changing the orientation of the syllabogram. Each vowel has a consistent orientation; for example, Inuktitut ᐱ "pi," ᐳ "pu," ᐸ "pa;" ᑎ "ti," ᑐ "tu," ᑕ "ta". Although there is a vowel inherent in each, all rotations have equal status and none can be identified as basic. Bare consonants are indicated either by separate diacritics, or by superscript versions of the "aksharas"; there is no vowel-killer mark. Consonantal scripts ("abjads") are normally written without indication of many vowels. However, in some contexts like teaching materials or scriptures, Arabic and Hebrew are written with full indication of vowels via diacritic marks ("harakat", "niqqud"), making them effectively alphasyllabaries. The Brahmic and Ethiopic families are thought to have originated from the Semitic abjads by the addition of vowel marks. The Arabic scripts used for Kurdish in Iraq and for Uyghur in Xinjiang, China, as well as the Hebrew script of Yiddish, are fully vowelled, but because the vowels are written with full letters rather than diacritics (with the exception of distinguishing between /a/ and /o/ in the latter) and there are no inherent vowels, these are considered alphabets, not abugidas. The imperial Mongol script called Phagspa was derived from the Tibetan abugida, but all vowels are written in-line rather than as diacritics. However, it retains the features of having an inherent vowel /a/ and having distinct initial vowel letters. 
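In Unicode, the virama described above also doubles as the conjunct-forming character: a consonant + virama + consonant sequence is rendered as a conjunct, a half form, or an explicit क्-plus-letter sequence, depending on the font. Another small, purely illustrative sketch:

```python
VIRAMA = "\u094D"                    # ्  Devanagari sign virama (halant)
k_bare = "\u0915" + VIRAMA           # क्  bare "k", inherent vowel suppressed
kla = "\u0915" + VIRAMA + "\u0932"   # क + ् + ल, rendered क्ल as a conjunct where the font supports it
kal = "\u0915" + "\u0932" + VIRAMA   # कल्  the closed syllable "kal", written with two aksharas
print(k_bare, kla, kal)
```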
Pahawh Hmong is a non-segmental script that indicates syllable onsets and rimes, such as consonant clusters and vowels with final consonants. Because it is not segmental, it cannot be considered an abugida. However, it superficially resembles an abugida with the roles of consonant and vowel reversed. Most syllables are written with two letters in the order rime–onset (typically vowel-consonant), even though they are pronounced as onset-rime (consonant-vowel), rather like the position of the vowel in Devanagari, which is written before the consonant. Pahawh is also unusual in that, while an inherent rime (with mid tone) is unwritten, it also has an inherent onset. For a syllable consisting of both inherent sounds, which requires one or the other of them to be overt, it is the rime that is written. Thus it is the rime (vowel) that is basic to the system. It is difficult to draw a dividing line between abugidas and other segmental scripts. For example, the Meroitic script of ancient Sudan did not indicate an inherent "a" (one symbol stood for both "m" and "ma," for example), and is thus similar to the Brahmic family of abugidas. However, the other vowels were indicated with full letters, not diacritics or modification, so the system was essentially an alphabet that did not bother to write the most common vowel. Several systems of shorthand use diacritics for vowels, but they do not have an inherent vowel, and are thus more similar to Thaana and the Kurdish script than to the Brahmic scripts. The Gabelsberger shorthand system and its derivatives modify the "following" consonant to represent vowels. The Pollard script, which was based on shorthand, also uses diacritics for vowels; the placement of the vowel relative to the consonant indicates tone. Pitman shorthand uses straight strokes and quarter-circle marks in different orientations as the principal "alphabet" of consonants; vowels are shown as light and heavy dots, dashes and other marks in one of three possible positions to indicate the various vowel sounds. However, to increase writing speed, Pitman has rules for "vowel indication" using the positioning or choice of consonant signs so that writing vowel-marks can be dispensed with. As the term "alphasyllabary" suggests, abugidas have been considered an intermediate step between alphabets and syllabaries. Historically, abugidas appear to have evolved from abjads (vowelless alphabets). They contrast with syllabaries, where there is a distinct symbol for each syllable or consonant-vowel combination, and where these have no systematic similarity to each other, and typically develop directly from logographic scripts. Compare the examples above to sets of syllables in the Japanese hiragana syllabary: か "ka", き "ki", く "ku", け "ke", こ "ko" have nothing in common to indicate "k"; while ら "ra", り "ri", る "ru", れ "re", ろ "ro" have neither anything in common for "r", nor anything to indicate that they have the same vowels as the "k" set. Most Indian and Indochinese abugidas appear to have first been developed from abjads with the Kharoṣṭhī and Brāhmī scripts; the abjad in question is usually considered to be the Aramaic one, but while the link between Aramaic and Kharosthi is more or less undisputed, this is not the case with Brahmi. The Kharosthi family does not survive today, but Brahmi's descendants include most of the modern scripts of South and Southeast Asia. Ge'ez derived from a different abjad, the Sabean script of Yemen; the advent of vowels coincided with the introduction of Christianity about AD 350. 
The Ethiopic script is the elaboration of an abjad. The Cree syllabary was invented with full knowledge of the Devanagari system. The Meroitic script was developed from Egyptian hieroglyphs, within which various schemes of 'group writing' had been used for showing vowels.
https://en.wikipedia.org/wiki?curid=878
ABBA ABBA is a Swedish pop supergroup formed in Stockholm in 1972 by Agnetha Fältskog, Björn Ulvaeus, Benny Andersson, and Anni-Frid Lyngstad. The group's name is an acronym of the first letters of their first names. They became one of the most commercially successful acts in the history of popular music, topping the charts worldwide from 1974 to 1982. ABBA won the Eurovision Song Contest 1974, giving Sweden its first triumph in the contest. They are the most successful group to have taken part in the competition. During the band's main active years, it was composed of two married couples: Fältskog and Ulvaeus, and Lyngstad and Andersson. As their popularity grew, their personal lives suffered, which eventually resulted in the collapse of both marriages. The relationship changes were reflected in the group's music, with later compositions featuring darker and more introspective lyrics. After ABBA disbanded, Andersson and Ulvaeus achieved success writing music for the stage, while Lyngstad and Fältskog pursued solo careers. Ten years after their disbanding, a compilation, "ABBA Gold", was released, which became a worldwide bestseller. In 1999, ABBA's music was adapted into the successful musical "Mamma Mia!" that toured worldwide. A film of the same name, released in 2008, became the highest-grossing film in the United Kingdom that year. A sequel, "Mamma Mia! Here We Go Again", was released in 2018. That same year it was announced that the band had recorded two new songs after 35 years of inactivity. Estimates of ABBA's total record sales are over 380 million, making them one of the best-selling music artists of all time. ABBA were the first group from a non-English-speaking country to achieve consistent success in the charts of English-speaking countries, including the United Kingdom, Ireland, Canada, Australia, New Zealand, South Africa and the United States. They had eight consecutive number-one albums in the UK. The group also enjoyed significant success in Latin America, and recorded a collection of their hit songs in Spanish. ABBA were honoured at the 50th anniversary celebration of the Eurovision Song Contest in 2005, when their hit "Waterloo" was chosen as the best song in the competition's history. The group was inducted into the Rock and Roll Hall of Fame in 2010. In 2015, their song "Dancing Queen" was inducted into the Recording Academy's Grammy Hall of Fame. Benny Andersson (born 16 December 1946 in Stockholm, Sweden) became, at age 18, a member of a popular Swedish pop-rock group, the Hep Stars, which performed covers of international hits, amongst other things. The Hep Stars were known as "the Swedish Beatles". They also set up Hep House, their equivalent of Apple Corps. Andersson played the keyboard and eventually started writing original songs for his band, many of which became major hits, including "No Response", which hit number three in 1965, and "Sunny Girl", "Wedding", and "Consolation", all of which hit number one in 1966. Andersson also had a fruitful songwriting collaboration with Lasse Berghagen, with whom he wrote his first Svensktoppen entry, "Sagan om lilla Sofie" ("The Story of Little Sophie") in 1968. Björn Ulvaeus (born 25 April 1945 in Gothenburg, Sweden) also began his musical career at the age of 18 (as a singer and guitarist), when he fronted the Hootenanny Singers, a popular Swedish folk–skiffle group. Ulvaeus started writing English-language songs for his group, and even had a brief solo career alongside. 
The Hootenanny Singers and the Hep Stars sometimes crossed paths while touring. In June 1966, Ulvaeus and Andersson decided to write a song together. Their first attempt was "Isn't It Easy to Say", a song later recorded by the Hep Stars. Stig Anderson was the manager of the Hootenanny Singers and founder of the Polar Music label. He saw potential in the collaboration, and encouraged them to write more. The two also began playing occasionally with each other's bands on stage and on record, although it was not until 1969 that the pair wrote and produced some of their first real hits together: "Ljuva sextital" ("Sweet Sixties"), recorded by Brita Borg, and the Hep Stars' 1969 hit "Speleman" ("Fiddler"). Andersson wrote and submitted the song "Hej, Clown" for Melodifestivalen 1969, the national festival to select the Swedish entry to the Eurovision Song Contest. The song tied for first place, but re-voting relegated Andersson's song to second place. On that occasion Andersson briefly met his future spouse, singer Anni-Frid Lyngstad, who also participated in the contest. A month later, the two had become a couple. As their respective bands began to break up during 1969, Andersson and Ulvaeus teamed up and recorded their first album together in 1970, called "Lycka" ("Happiness"), which included original songs sung by both men. Their partners were often present in the recording studio, and sometimes added backing vocals; Fältskog even co-wrote a song with the two. Ulvaeus still occasionally recorded and performed with the Hootenanny Singers until the middle of 1974, and Andersson took part in producing their records. Agnetha Fältskog (born 5 April 1950 in Jönköping, Sweden) sang with a local dance band headed by Bernt Enghardt, who sent a demo recording of the band to Karl Gerhard Lundkvist. The demo tape featured a song written and sung by Agnetha: "Jag var så kär" ("I Was So in Love"). Lundkvist was so impressed with her voice that he was convinced she would be a star. After going through considerable effort to locate the singer, he arranged for Agnetha to come to Stockholm and to record two of her own songs. This led to Agnetha, at the age of 18, having a number-one record in Sweden with a self-composed song, which later went on to sell over 80,000 copies. She was soon noticed by critics and songwriters as a talented singer/songwriter of schlager-style songs. Fältskog's main inspirations in her early years were singers such as Connie Francis. Along with her own compositions, she recorded covers of foreign hits and performed them on tours in Swedish folkparks. Most of her biggest hits were self-composed, which was quite unusual for a female singer in the 1960s. Agnetha released four solo LPs between 1968 and 1971. She had many successful singles in the Swedish charts. During the filming of a Swedish TV special in May 1969, Fältskog met Ulvaeus and they married on 6 July 1971. Fältskog and Ulvaeus eventually became involved in each other's recording sessions, and soon even Andersson and Lyngstad added backing vocals to Fältskog's third studio album, "Som jag är" ("As I Am") (1970). In 1972, Fältskog starred as Mary Magdalene in the original Swedish production of "Jesus Christ Superstar" and attracted favourable reviews. Between 1967 and 1975, Fältskog released five studio albums. Anni-Frid "Frida" Lyngstad (born 15 November 1945 in Bjørkåsen in Ballangen, Norway) sang from the age of 13 with various dance bands, and worked mainly in a jazz-oriented cabaret style. 
She also formed her own band, the Anni-Frid Four. In the middle of 1967, she won a national talent competition with "En ledig dag" ("A Day Off"), a Swedish version of the bossa nova song "A Day in Portofino", which is included in the EMI compilation "Frida 1967–1972". The first prize was a recording contract with EMI Sweden and the chance to perform live on the most popular TV shows in the country. This TV performance, amongst many others, is included in the 3½-hour documentary "Frida – The DVD". Lyngstad released several schlager-style singles on EMI without much success. When Benny Andersson started to produce her recordings in 1971, she had her first number-one single, "Min egen stad" ("My Own Town"), written by Benny and featuring all the future ABBA members on backing vocals. Lyngstad toured and performed regularly in the folkpark circuit and made appearances on radio and TV. She met Ulvaeus briefly in 1963 during a talent contest, and Fältskog during a TV show in early 1968. Lyngstad linked up with her future bandmates in 1969. On 1 March 1969, she participated in the Melodifestival, where she met Andersson for the first time. A few weeks later they met again during a concert tour in southern Sweden and they soon became a couple. Andersson produced her single "Peter Pan" in September 1969—her first collaboration with Benny & Björn, as they had written the song. Andersson would then produce Lyngstad's debut studio album, "Frida", which was released in March 1971. Lyngstad also played in several revues and cabaret shows in Stockholm between 1969 and 1973. After ABBA formed, she recorded another successful album in 1975, "Frida ensam", which included a Swedish rendition of "Fernando", a hit on the Swedish radio charts before the English version was released. An attempt at combining their talents occurred in April 1970 when the two couples went on holiday together to the island of Cyprus. What started as singing for fun on the beach ended up as an improvised live performance in front of the United Nations soldiers stationed on the island. Andersson and Ulvaeus were at this time recording their first album together, "Lycka", which was to be released in September 1970. Fältskog and Lyngstad added backing vocals on several tracks during June, and the idea of working together led them to launch a stage act, "Festfolket" (which translates from Swedish to "Party People"), on 1 November 1970 in Gothenburg. The cabaret show attracted generally negative reviews, except for the performance of the Andersson and Ulvaeus hit "Hej, gamle man" ("Hello, Old Man")–the first Björn and Benny recording to feature all four. They also performed solo numbers from their respective albums, but the lukewarm reception convinced the foursome to shelve plans for working together for the time being, and each soon concentrated on individual projects again. "Hej, gamle man", a song about an old Salvation Army soldier, became the quartet's first hit. The record was credited to Björn & Benny and reached number five on the sales charts and number one on Svensktoppen, staying on the latter chart (which was not a chart linked to sales or airplay) for 15 weeks. It was during 1971 that the four artists began working together more, adding vocals to the others' recordings. Fältskog, Andersson and Ulvaeus toured together in May, while Lyngstad toured on her own. Frequent recording sessions brought the foursome closer together during the summer. 
After the 1970 release of "Lycka", two more singles credited to "Björn & Benny" were released in Sweden, "Det kan ingen doktor hjälpa" ("No Doctor Can Help with That") and "Tänk om jorden vore ung" ("Imagine If Earth Was Young"), with more prominent vocals by Fältskog and Lyngstad–and moderate chart success. Fältskog and Ulvaeus, now married, started performing together with Andersson on a regular basis at the Swedish folkparks in the middle of 1971. Stig Anderson, founder and owner of Polar Music, was determined to break into the mainstream international market with music by Andersson and Ulvaeus. "One day the pair of you will write a song that becomes a worldwide hit," he predicted. Stig Anderson encouraged Ulvaeus and Andersson to write a song for Melodifestivalen, and after two rejected entries in 1971, Andersson and Ulvaeus submitted their new song "Säg det med en sång" ("Say It with a Song") for the 1972 contest, choosing newcomer Lena Anderson to perform. The song came in third place, encouraging Stig Anderson, and became a hit in Sweden. The first signs of foreign success came as a surprise, as the Andersson and Ulvaeus single "She's My Kind of Girl" was released through Epic Records in Japan in March 1972, giving the duo a Top 10 hit. Two more singles were released in Japan, "En Carousel" ("En Karusell" in Scandinavia, an earlier version of "Merry-Go-Round") and "Love Has Its Ways" (a song they wrote with Kōichi Morita). Ulvaeus and Andersson persevered with their songwriting and experimented with new sounds and vocal arrangements. "People Need Love" was released in June 1972, featuring guest vocals by the women, who were now given much greater prominence. Stig Anderson released it as a single, credited to "Björn & Benny, Agnetha & Anni-Frid". The song peaked at number 17 in the Swedish combined single and album charts, enough to convince them they were on to something. The single also became the first record to chart for the quartet in the United States, where it peaked at number 114 on the "Cashbox" singles chart and number 117 on the "Record World" singles chart. Labeled as "Björn & Benny (with Svenska Flicka)", it was released there through Playboy Records. However, according to Stig Anderson, "People Need Love" could have been a much bigger American hit, but a small label like Playboy Records did not have the distribution resources to meet the demand for the single from retailers and radio programmers. In 1973, the band and their manager Stig Anderson decided to have another try at Melodifestivalen, this time with the song "Ring Ring". The studio sessions were handled by Michael B. Tretow, who experimented with a "wall of sound" production technique that became the basis of a wholly new sound for the group. Stig Anderson arranged an English translation of the lyrics by Neil Sedaka and Phil Cody, and they thought this would be a surefire winner. However, on 10 February 1973, the song came third in Melodifestivalen; thus it never reached the Eurovision Song Contest itself. Nevertheless, the group released their debut studio album, also called "Ring Ring". The album did well and the "Ring Ring" single was a hit in many parts of Europe and also in South Africa. However, Stig Anderson felt that the true breakthrough could only come with a UK or US hit. When Agnetha Fältskog gave birth to her daughter Linda in 1973, she was replaced for a short period by Inger Brundin on a trip to West Germany. 
In 1973, Stig Anderson, tired of unwieldy names, started to refer to the group privately and publicly as ABBA (a palindrome). At first, this was a play on words, as Abba is also the name of a well-known fish-canning company in Sweden, and itself an abbreviation. However, since the fish-canners were unknown outside Sweden, Anderson came to believe the name would work in international markets. A competition to find a suitable name for the group was held in a Gothenburg newspaper, and it was officially announced in the summer that the group were to be known as "ABBA". The group negotiated with the canners for the rights to the name. Fred Bronson reported for "Billboard" that Fältskog told him in a 1988 interview that "[ABBA] had to ask permission and the factory said, 'O.K., as long as you don't make us feel ashamed for what you're doing.'" "ABBA" is an acronym formed from the first letters of each group member's first name: Agnetha, Björn, Benny, and Anni-Frid. The earliest known example of "ABBA" written on paper is on a recording session sheet from the Metronome Studio in Stockholm dated 16 October 1973. This was first written as "Björn, Benny, Agnetha & Frida", but was subsequently crossed out with "ABBA" written in large letters on top. Their official logo, distinct with the backward 'B', was designed by Rune Söderqvist, who designed most of ABBA's record sleeves. The ambigram first appeared on the French compilation album, Golden Double Album, released in May 1976 by Disques Vogue, and would henceforth be used for all official releases. The idea for the official logo came from the German photographer Wolfgang "Bubi" Heilemann during a photo shoot in velvet jumpsuits for the teenage magazine Bravo. In the photo, each ABBA member held a giant initial letter of their first name. After the pictures were taken, Heilemann noticed that Benny Andersson had reversed his letter "B"; this prompted discussions about the mirrored "B", and the members of ABBA agreed on the mirrored letter. From 1976 onward, the first "B" in the logo version of the name was "mirror-image" reversed on the band's promotional material, thus becoming the group's registered trademark. Following their acquisition of the group's catalogue, PolyGram began using variations of the ABBA logo, employing a different font. In 1992, PolyGram added a crown emblem to it for the first release of the "ABBA Gold: Greatest Hits" compilation. After Universal Music purchased PolyGram (and, thus, ABBA's label Polar Music International), control of the group's catalogue returned to Stockholm. Since then, the original logo has been reinstated on all official products. Since the group had entered Melodifestivalen with "Ring Ring" but failed to qualify as the 1973 Swedish entry, Stig Anderson immediately started planning for the 1974 contest. Ulvaeus, Andersson and Stig Anderson believed in the possibilities of using the Eurovision Song Contest as a way to make the music business aware of them as songwriters, as well as of the band itself. In late 1973, they were invited by Swedish television to contribute a song for Melodifestivalen 1974, and from a number of new songs, the upbeat song "Waterloo" was chosen; the group was now inspired by the growing glam rock scene in England. ABBA won their nation's hearts on Swedish television on 9 February 1974, and with this third attempt were far more experienced and better prepared for the Eurovision Song Contest. 
Winning the 1974 Eurovision Song Contest on 6 April 1974 (and singing "Waterloo" in English instead of their native tongue) gave ABBA the chance to tour Europe and perform on major television shows; thus the band saw the "Waterloo" single chart in many European countries. Following their success at the Eurovision Song Contest, ABBA spent an evening of glory partying in the appropriately named first-floor Napoleon suite of The Grand Brighton Hotel. "Waterloo" was ABBA's first number-one single in big markets such as the UK and West Germany. In the United States, the song peaked at number six on the "Billboard" Hot 100 chart, paving the way for their first album and their first trip as a group there. Although only a short promotional visit, it included their first performance on American television, "The Mike Douglas Show". The album "Waterloo" only peaked at number 145 on the "Billboard" 200 chart, but received unanimous high praise from the US critics: "Los Angeles Times" called it "a compelling and fascinating debut album that captures the spirit of mainstream pop quite effectively ... an immensely enjoyable and pleasant project", while "Creem" characterised it as "a perfect blend of exceptional, lovable compositions". ABBA's follow-up single, "Honey, Honey", peaked at number 27 on the US "Billboard" Hot 100, and was a number-two hit in West Germany. However, in the United Kingdom, ABBA's British record label, Epic, decided to re-release a remixed version of "Ring Ring" instead of "Honey, Honey", and a cover version of the latter by Sweet Dreams peaked at number 10. Both records debuted on the UK chart within one week of each other. "Ring Ring" failed to reach the Top 30 in the United Kingdom, fuelling growing speculation that the group was simply a Eurovision one-hit wonder. In November 1974, ABBA embarked on their first European tour, playing dates in Denmark, West Germany and Austria. It was not as successful as the band had hoped, since most of the venues did not sell out. Due to a lack of demand, they were even forced to cancel a few shows, including a sole concert scheduled in Switzerland. The second leg of the tour, which took them through Scandinavia in January 1975, was very different. They played to full houses everywhere and finally got the reception they had aimed for. Live performances continued in the middle of 1975 when ABBA embarked on a fourteen-date open-air tour of Sweden and Finland. Their Stockholm show at the Gröna Lund amusement park had an estimated audience of 19,200. Björn Ulvaeus later said that "If you look at the singles we released straight after Waterloo, we were trying to be more like The Sweet, a semi-glam rock group, which was stupid because we were always a pop group." In late 1974, "So Long" was released as a single in the United Kingdom but it received no airplay from Radio 1 and failed to chart. In the middle of 1975, ABBA released "I Do, I Do, I Do, I Do, I Do", which again received little airplay on Radio 1 but managed to climb the charts to number 38. Later that year, the release of their self-titled third studio album "ABBA" and single "SOS" brought back their chart presence in the UK, where the single hit number six and the album peaked at number 13. "SOS" also became ABBA's second number-one single in Germany and their third in Australia. Success was further solidified with "Mamma Mia" reaching number-one in the United Kingdom, Germany and Australia. 
In the United States, "SOS" peaked at number 10 on the Record World Top 100 singles chart and number 15 on the "Billboard" Hot 100 chart, picking up the BMI Award along the way as one of the most played songs on American radio in 1975. The success of the group in the United States had until that time been limited to single releases. By early 1976, the group already had four Top 30 singles on the US charts, but the album market proved to be tough to crack. The eponymous "ABBA " album generated three American hits, but it only peaked at number 165 on the "Cashbox" album chart and number 174 on the "Billboard" 200 chart. Opinions were voiced, by "Creem" in particular, that in the US ABBA had endured "a very sloppy promotional campaign". Nevertheless, the group enjoyed warm reviews from the American press. "Cashbox" went as far as saying that "there is a recurrent thread of taste and artistry inherent in Abba's marketing, creativity and presentation that makes it almost embarrassing to critique their efforts", while "Creem" wrote: "SOS is surrounded on this LP by so many good tunes that the mind boggles." In Australia, the airing of the music videos for "I Do, I Do, I Do, I Do, I Do" and "Mamma Mia" on the nationally broadcast TV pop show "Countdown" (which premiered in November 1974) saw the band rapidly gain enormous popularity, and "Countdown" become a key promoter of the group via their distinctive music videos. This started an immense interest for ABBA in Australia, resulting in both the single and album holding down the No. 1 positions on the charts for months. In March 1976, the band released the compilation album "Greatest Hits". It became their first UK number-one album, and also took ABBA into the Top 50 on the US album charts for the first time, eventually selling more than a million copies there. Also included on "Greatest Hits" was a new single, "Fernando", which went to number-one in at least thirteen countries worldwide, including the United Kingdom, Germany and Australia, and the single went on to sell over 10 million copies worldwide. In Australia, "Fernando" occupied the top position for a then record breaking 14 weeks (and stayed in the chart for 40 weeks), and was the longest-running chart-topper there for over 40 years until it was overtaken by Ed Sheeran's "Shape of You" in May 2017. It still remains as one of the best-selling singles of all time in Australia. Also in 1976, the group received its first international prize, with "Fernando" being chosen as the "Best Studio Recording of 1975". In the United States, "Fernando" reached the Top 10 of the Cashbox Top 100 singles chart and number 13 on the "Billboard" Hot 100. It topped the "Billboard" Adult Contemporary chart, ABBA's first American number-one single on any chart. At the same time, Germany released a compilation named "The Very Best of ABBA", also becoming a number-one album there whereas the "Greatest Hits" compilation followed a few months later to number-two on the German charts, despite all similarities with "The Very Best" album. The group's fourth studio album, "Arrival", a number-one best-seller in Europe and Australia, represented a new level of accomplishment in both songwriting and studio work, prompting rave reviews from more rock-oriented UK music weeklies such as "Melody Maker" and "New Musical Express", and mostly appreciative notices from US critics. 
Hit after hit flowed from "Arrival": "Money, Money, Money", another number-one in Germany and Australia, and "Knowing Me, Knowing You", ABBA's sixth consecutive German number-one as well as another UK number-one. The real sensation was "Dancing Queen", not only topping the charts in loyal markets such as the UK, Germany and Australia, but also reaching number-one in the United States. In South Africa, ABBA had astounding success with "Fernando", "Dancing Queen" and "Knowing Me, Knowing You" being among the top 20 best-selling singles for 1976–77. In 1977, "Arrival" was nominated for the inaugural BRIT Award in the category "Best International Album of the Year". By this time ABBA were popular in the United Kingdom, most of Western Europe, Australia and New Zealand. In "Frida – The DVD", Lyngstad explains how she and Fältskog developed as singers, as ABBA's recordings grew more complex over the years. The band's popularity in the United States would remain on a comparatively small scale, and "Dancing Queen" became the only "Billboard" Hot 100 number-one single ABBA had there (they did, however, get three more singles to the number-one position on other "Billboard" charts, including "Billboard" Adult Contemporary and Hot Dance Club Play). Nevertheless, "Arrival" finally became a true breakthrough release for ABBA on the US album market where it peaked at number 20 on the "Billboard" 200 chart and was certified gold by RIAA. In January 1977, ABBA embarked on their first major tour. The group's status had changed dramatically and they were clearly regarded as superstars. They opened their much anticipated tour in Oslo, Norway, on 28 January, and mounted a lavishly produced spectacle that included a few scenes from their self-written mini-operetta "The Girl with the Golden Hair". The concert attracted immense media attention from across Europe and Australia. They continued the tour through Western Europe, visiting Gothenburg, Copenhagen, Berlin, Cologne, Amsterdam, Antwerp, Essen, Hanover, and Hamburg and ending with shows in the United Kingdom in Manchester, Birmingham, Glasgow and two sold-out concerts at London's Royal Albert Hall. Tickets for these two shows were available only by mail application and it was later revealed that the box office received 3.5 million requests for tickets, enough to fill the venue 580 times. Along with praise ("ABBA turn out to be amazingly successful at reproducing their records", wrote "Creem"), there were complaints that "ABBA performed slickly...but with a zero personality coming across from a total of 16 people on stage" ("Melody Maker"). One of the Royal Albert Hall concerts was filmed as a reference for the filming of the Australian tour for what became "ABBA: The Movie", though it is not exactly known how much of the concert was filmed. After the European leg of the tour, in March 1977, ABBA played 11 dates in Australia before a total of 160,000 people. The opening concert in Sydney at the Sydney Showground on 3 March to an audience of 20,000 was marred by torrential rain, with Lyngstad slipping on the wet stage during the concert. However, all four members would later recall this concert as the most memorable of their career. Upon their arrival in Melbourne, a civic reception was held at the Melbourne Town Hall and ABBA appeared on the balcony to greet an enthusiastic crowd of 6,000. In Melbourne, the group gave three concerts at the Sidney Myer Music Bowl with 14,500 at each, including the Australian Prime Minister Malcolm Fraser and his family. 
At the first Melbourne concert, an additional 16,000 people gathered outside the fenced-off area to listen to the concert. In Adelaide, the group performed one concert at West Lakes Football Stadium in front of 20,000 people, with another 10,000 listening outside. During the first of five concerts in Perth, there was a bomb scare and everyone had to evacuate the Entertainment Centre. The trip was accompanied by mass hysteria and unprecedented media attention ("Swedish ABBA stirs box-office in Down Under tour...and the media coverage of the quartet rivals that set to cover the upcoming Royal tour of Australia", wrote "Variety"), and is captured on film in "ABBA: The Movie", directed by Lasse Hallström. The Australian tour and its subsequent "ABBA: The Movie" produced some ABBA lore, as well. Fältskog's blonde good looks had long made her the band's "pin-up girl", a role she disdained. During the Australian tour, she performed in a skin-tight white jumpsuit, causing one Australian newspaper to use the headline "Agnetha's bottom tops dull show". When asked about this at a news conference, she replied: "Don't they have bottoms in Australia?" In December 1977, ABBA followed up "Arrival" with the more ambitious fifth album "ABBA: The Album", released to coincide with the debut of "ABBA: The Movie". Although the album was less well received by UK reviewers, it did spawn more worldwide hits: "The Name of the Game" and "Take a Chance on Me", which both topped the UK charts, and peaked at number 12 and number three, respectively, on the "Billboard" Hot 100 chart in the US. Although "Take a Chance on Me" did not top the American charts, it proved to be ABBA's biggest hit single there, selling more copies than "Dancing Queen". "The Album" also included "Thank You for the Music", the B-side of "Eagle" in countries where the latter had been released as a single; it was belatedly released as an A-side single in the United Kingdom and Ireland in 1983. "Thank You for the Music" has become one of the best loved and best known ABBA songs without being released as a single during the group's lifetime. By 1978 ABBA were one of the biggest bands in the world. They converted a vacant cinema into the Polar Music Studio, a state-of-the-art studio in Stockholm. The studio was used by several other bands; notably Genesis' "Duke" and Led Zeppelin's "In Through the Out Door" were recorded there. In May 1978, the group went to the United States for a promotional campaign, performing alongside Andy Gibb on Olivia Newton-John's TV show. Recording sessions for the single "Summer Night City" were an uphill struggle, but upon release the song became another hit for the group. The track would set the stage for ABBA's foray into disco with their next album. On 9 January 1979, the group performed "Chiquitita" at the Music for UNICEF Concert held at the United Nations General Assembly to celebrate UNICEF's Year of the Child. ABBA donated the copyright of this worldwide hit to UNICEF. The single was released the following week, and reached number-one in ten countries. In mid-January 1979, Ulvaeus and Fältskog announced they were getting divorced. The news caused interest from the media and led to speculation about the band's future. ABBA assured the press and their fan base they were continuing their work as a group and that the divorce would not affect them. Nonetheless, the media continued to confront them with this in interviews. 
To escape the media swirl and concentrate on their writing, Andersson and Ulvaeus secretly travelled to Compass Point Studios in Nassau, Bahamas, where for two weeks they prepared their next album's songs. The group's sixth studio album, "Voulez-Vous", was released in April 1979, the title track of which was recorded at the famous Criteria Studios in Miami, Florida, with the assistance of recording engineer Tom Dowd amongst others. The album topped the charts across Europe and in Japan and Mexico, hit the Top 10 in Canada and Australia and the Top 20 in the United States. None of the singles from the album reached number one on the UK charts, but "Chiquitita", "Does Your Mother Know", "Angeleyes" (with "Voulez-Vous", released as a double A-side) and "I Have a Dream" were all UK Top 5 hits. In Canada, "I Have a Dream" became ABBA's second number one on the RPM Adult Contemporary chart (after "Fernando" hit the top previously). Also in 1979, the group released their second compilation album, "Greatest Hits Vol. 2", which featured a brand new track: "Gimme! Gimme! Gimme! (A Man After Midnight)", another number-three hit in both the UK and Germany. In Russia during the late 1970s, the group was paid in oil commodities because of an embargo on the ruble. On 13 September 1979, ABBA began at Northlands Coliseum in Edmonton, Canada, with a full house of 14,000. "The voices of the band, Agnetha's high sauciness combined with round, rich lower tones of Anni-Frid, were excellent...Technically perfect, melodically correct and always in perfect pitch...The soft lower voice of Anni-Frid and the high, edgy vocals of Agnetha were stunning", raved "Edmonton Journal". During the next four weeks they played a total of 17 sold-out dates, 13 in the United States and four in Canada. The last scheduled ABBA concert in the United States in Washington, D.C. was cancelled due to Fältskog's emotional distress suffered during the flight from New York to Boston, when the group's private plane was subjected to extreme weather conditions and was unable to land for an extended period. They appeared at the Boston Music Hall for the performance 90 minutes late. The tour ended with a show in Toronto, Canada at Maple Leaf Gardens before a capacity crowd of 18,000. "ABBA plays with surprising power and volume; but although they are loud, they're also clear, which does justice to the signature vocal sound...Anyone who's been waiting five years to see Abba will be well satisfied", wrote "Record World". On 19 October 1979, the tour resumed in Western Europe where the band played 23 sold-out gigs, including six sold-out nights at London's Wembley Arena. In March 1980, ABBA travelled to Japan where upon their arrival at Narita International Airport, they were besieged by thousands of fans. The group performed eleven concerts to full houses, including six shows at Tokyo's Budokan. This tour was the last "on the road" adventure of their career. In July 1980, ABBA released the single "The Winner Takes It All", the group's eighth UK chart topper (and their first since 1978). The song is widely misunderstood as being written about Ulvaeus and Fältskog's marital tribulations; Ulvaeus wrote the lyrics, but has stated they were not about his own divorce; Fältskog has repeatedly stated she was not the loser in their divorce. In the United States, the single peaked at number-eight on the "Billboard" Hot 100 chart and became ABBA's second "Billboard" Adult Contemporary number-one. 
It was also re-recorded by Andersson and Ulvaeus with a slightly different backing track, by French chanteuse Mireille Mathieu at the end of 1980 – as "Bravo tu as gagné", with French lyrics by Alain Boublil. November the same year saw the release of ABBA's seventh album "Super Trouper", which reflected a certain change in ABBA's style with more prominent use of synthesizers and increasingly personal lyrics. It set a record for the most pre-orders ever received for a UK album after one million copies were ordered before release. The second single from the album, "Super Trouper", also hit number-one in the UK, becoming the group's ninth and final UK chart-topper. Another track from the album, "Lay All Your Love on Me", released in 1981 as a Twelve-inch single only in selected territories, managed to top the "Billboard" Hot Dance Club Play chart and peaked at number-seven on the UK singles chart becoming, at the time, the highest ever charting 12-inch release in UK chart history. Also in 1980, ABBA recorded a compilation of Spanish-language versions of their hits called "Gracias Por La Música". This was released in Spanish-speaking countries as well as in Japan and Australia. The album became a major success, and along with the Spanish version of "Chiquitita", this signalled the group's breakthrough in Latin America. "ABBA Oro: Grandes Éxitos", the Spanish equivalent of "ABBA Gold: Greatest Hits", was released in 1999. In January 1981, Ulvaeus married Lena Källersjö, and manager Stig Anderson celebrated his 50th birthday with a party. For this occasion, ABBA recorded the track "Hovas Vittne" (a pun on the Swedish name for Jehovah's Witness and Anderson's birthplace, Hova) as a tribute to him, and released it only on 200 red vinyl copies, to be distributed to the guests attending the party. This single has become a sought-after collectable. In mid-February 1981, Andersson and Lyngstad announced they were filing for divorce. Information surfaced that their marriage had been an uphill struggle for years, and Benny had already met another woman, Mona Nörklit, whom he married in November 1981. Andersson and Ulvaeus had songwriting sessions in early 1981, and recording sessions began in mid-March. At the end of April, the group recorded a TV special, "Dick Cavett Meets ABBA" with the US talk show host Dick Cavett. "The Visitors", ABBA's eighth and final studio album, showed a songwriting maturity and depth of feeling distinctly lacking from their earlier recordings but still placing the band squarely in the pop genre, with catchy tunes and harmonies. Although not revealed at the time of its release, the album's title track, according to Ulvaeus, refers to the secret meetings held against the approval of totalitarian governments in Soviet-dominated states, while other tracks address topics like failed relationships, the threat of war, aging, and loss of innocence. The album's only major single release, "One of Us", proved to be the last of ABBA's nine number-one singles in Germany, this being in December 1981; and the swansong of their sixteen Top 5 singles on the South African chart. "One of Us" was also ABBA's final Top 3 hit in the UK, reaching number-three on the UK Singles Chart. Although it topped the album charts across most of Europe, including Ireland, the UK and Germany, "The Visitors" was not as commercially successful as its predecessors, showing a commercial decline in previously loyal markets such as France, Australia and Japan. 
A track from the album, "When All Is Said and Done", was released as a single in North America, Australia and New Zealand, and fittingly became ABBA's final Top 40 hit in the US (debuting on the US charts on 31 December 1981), while also reaching the US Adult Contemporary Top 10, and number-four on the RPM Adult Contemporary chart in Canada. The song's lyrics, as with "The Winner Takes It All" and "One of Us", dealt with the painful experience of separating from a long-term partner, though it looked at the trauma more optimistically. With the now publicised story of Andersson and Lyngstad's divorce, speculation about tension within the band increased. Also released in the United States was the title track of "The Visitors", which hit the Top Ten on the "Billboard" Hot Dance Club Play chart. In the spring of 1982, songwriting sessions had started and the group came together for more recordings. Plans were not completely clear, but a new album was discussed and the prospect of a small tour suggested. The recording sessions in May and June 1982 were a struggle, and only three songs were eventually recorded: "You Owe Me One", "I Am the City" and "Just Like That". Andersson and Ulvaeus were not satisfied with the outcome, so the tapes were shelved and the group took a break for the summer. Back in the studio again in early August, the group had changed plans for the rest of the year: they settled for a Christmas release of a double-album compilation of all their past single releases, to be named "The Singles: The First Ten Years". New songwriting and recording sessions took place, and in October and December they released the singles "The Day Before You Came"/"Cassandra" and "Under Attack"/"You Owe Me One", the A-sides of which were included on the compilation album. Neither single made the Top 20 in the United Kingdom, though "The Day Before You Came" became a Top 5 hit in many European countries such as Germany, the Netherlands and Belgium. The album went to number one in the UK and Belgium, Top 5 in the Netherlands and Germany and Top 20 in many other countries. "Under Attack", the group's final release before disbanding, was a Top 5 hit in the Netherlands and Belgium. "I Am the City" and "Just Like That" were left off "The Singles: The First Ten Years" for possible inclusion on the next projected studio album, though this never came to fruition. "I Am the City" was eventually released on the compilation album "More ABBA Gold: More ABBA Hits" in 1993, while parts of "Just Like That" have been recycled in new songs that Andersson and Ulvaeus produced for other artists. A reworked version of the verses ended up in the musical "Chess". The chorus section of "Just Like That" was eventually released on a retrospective box set in 1994, as well as in the "ABBA Undeleted" medley featured on disc 9 of "The Complete Studio Recordings". Despite a number of requests from fans, Ulvaeus and Andersson have still refused to release ABBA's version of "Just Like That" in its entirety, even though the complete version has surfaced on bootlegs. The group travelled to London to promote "The Singles: The First Ten Years" in the first week of November 1982, appearing on "Saturday Superstore" and "The Late, Late Breakfast Show", and also to West Germany in the second week, to perform on "Show Express". 
On 19 November 1982, ABBA appeared for the last time in Sweden on the TV programme Nöjesmaskinen, and on 11 December 1982, they made their last performance ever, transmitted to the UK on Noel Edmonds' "The Late, Late Breakfast Show", through a live link from a TV studio in Stockholm. Andersson and Ulvaeus began collaborating with Tim Rice in early 1983 on writing songs for the musical project "Chess", while Fältskog and Lyngstad both concentrated on international solo careers. While Andersson and Ulvaeus were working on the musical, a further co-operation among the three of them came with the musical "Abbacadabra" that was produced in France for television. It was a children's musical utilising 14 ABBA songs. Alain and Daniel Boublil, who wrote "Les Misérables", had been in touch with Stig Anderson about the project, and the TV musical was aired over Christmas on French TV and later a Dutch version was also broadcast. Boublil previously also wrote the French lyric for Mireille Mathieu's version of "The Winner Takes It All". Lyngstad, who had recently moved to Paris, participated in the French version, and recorded a single, "Belle", a duet with French singer Daniel Balavoine. The song was a cover of ABBA's 1976 instrumental track "Arrival". As the single "Belle" sold well in France, Cameron Mackintosh wanted to stage an English-language version of the show in London, with the French lyrics translated by David Wood and Don Black; Andersson and Ulvaeus got involved in the project, and contributed with one new song, "I Am the Seeker". "Abbacadabra" premiered on 8 December 1983 at the Lyric Hammersmith Theatre in London, to mixed reviews and full houses for eight weeks, closing on 21 January 1984. Lyngstad was also involved in this production, recording "Belle" in English as "Time", a duet with actor and singer B. A. Robertson: the single sold well, and was produced and recorded by Mike Batt. In May 1984, Lyngstad performed "I Have a Dream" with a children's choir at the United Nations Organisation Gala, in Geneva, Switzerland. All four members made their (at the time, final) public appearance as four friends more than as ABBA in January 1986, when they recorded a video of themselves performing an acoustic version of "Tivedshambo" (which was the first song written by their manager Stig Anderson), for a Swedish TV show honouring Anderson on his 55th birthday. The four had not seen each other for more than two years. That same year they also performed privately at another friend's 40th birthday: their old tour manager, Claes af Geijerstam. They sang a self-written song titled "Der Kleine Franz" that was later to resurface in "Chess". Also in 1986, "ABBA Live" was released, featuring selections of live performances from the group's 1977 and 1979 tours. The four members were guests at the 50th birthday of Görel Hanser in 1999. Hanser was a long-time friend of all four, and also former secretary of Stig Anderson. Honouring Görel, ABBA performed a Swedish birthday song "Med En Enkel Tulipan" a cappella. Andersson has on several occasions performed ABBA songs. In June 1992, he and Ulvaeus appeared with U2 at a Stockholm concert, singing the chorus of "Dancing Queen", and a few years later during the final performance of the B & B in Concert in Stockholm, Andersson joined the cast for an encore at the piano. Andersson frequently adds an ABBA song to the playlist when he performs with his BAO band. 
He also played the piano during new recordings of the ABBA songs "Like an Angel Passing Through My Room" with opera singer Anne Sofie von Otter, and "When All Is Said and Done" with Swede Viktoria Tolstoy. In 2002, Andersson and Ulvaeus both performed an a cappella rendition of the first verse of "Fernando" as they accepted their Ivor Novello award in London. Lyngstad performed and recorded an a cappella version of "Dancing Queen" with the Swedish group the Real Group in 1993, and also re-recorded "I Have a Dream" with Swiss singer Dan Daniell in 2003. ABBA never officially announced the end of the group or an indefinite break, but the group was long considered dissolved after their final public performance together in 1982. Their final public performance together as ABBA before their 2016 reunion was on the British TV programme "The Late, Late Breakfast Show" (live from Stockholm) on 11 December 1982. While reminiscing on "The Day Before You Came", Ulvaeus said: "we might have continued for a while longer if that had been a number one". In January 1983, Fältskog started recording sessions for a solo album, as Lyngstad had successfully released her album "Something's Going On" some months earlier. Ulvaeus and Andersson, meanwhile, started songwriting sessions for the musical "Chess". In interviews at the time, Björn and Benny denied that ABBA had split ("Who are we without our ladies? Initials of Brigitte Bardot?"), and throughout 1983 and 1984 Lyngstad and Fältskog repeatedly claimed in interviews that ABBA would come together for a new album. Internal strife between the group and their manager escalated and the band members sold their shares in Polar Music during 1983. Except for a TV appearance in 1986, the foursome did not come together publicly again until they were reunited at the Swedish premiere of the "Mamma Mia!" movie on 4 July 2008. The individual members' solo endeavours shortly before and after their final public performance, coupled with the collapse of both marriages and the lack of significant group activity in the following few years, were widely taken as signs that the group had broken up. In an interview with the "Sunday Telegraph" following the premiere, Ulvaeus and Andersson said that there was nothing that could entice them back on stage again. Ulvaeus said: "We will never appear on stage again. [...] There is simply no motivation to re-group. Money is not a factor and we would like people to remember us as we were. Young, exuberant, full of energy and ambition. I remember Robert Plant saying Led Zeppelin were a cover band now because they cover all their own stuff. I think that hit the nail on the head." However, on 3 January 2011, Fältskog, long considered to be the most reclusive member of the group and a major obstacle to any reunion, raised the possibility of reuniting for a one-off engagement. She admitted that she had not yet brought the idea up with the other three members. In April 2013, she reiterated her hopes for a reunion during an interview with "Die Zeit", stating: "If they ask me, I'll say yes." In a May 2013 interview, Fältskog, aged 63 at the time, stated that an ABBA reunion would never occur: "I think we have to accept that it will not happen, because we are too old and each one of us has their own life. Too many years have gone by since we stopped, and there's really no meaning in putting us together again." 
Fältskog further explained that the band members remained on amicable terms: "It's always nice to see each other now and then and to talk a little and to be a little nostalgic." In an April 2014 interview, Fältskog, when asked whether the band might reunite for a new recording, said: "It's difficult to talk about this because then all the news stories will be: 'ABBA is going to record another song!' But as long as we can sing and play, then why not? I would love to, but it's up to Björn and Benny." In the same year that the members of ABBA went their separate ways, the French production of a "tribute" show (a children's TV musical named "Abbacadabra" using 14 ABBA songs) spawned new interest in the group's music. After receiving little attention during the mid-to-late 1980s, ABBA's music experienced a resurgence in the early 1990s due to the UK synth-pop duo Erasure, who released "Abba-esque", a four-track extended play of ABBA cover versions, which topped several European charts in 1992. When U2 arrived in Stockholm for a concert in June of that year, the band paid homage to ABBA by inviting Björn Ulvaeus and Benny Andersson to join them on stage, playing guitar and keyboards, for a rendition of "Dancing Queen". September 1992 saw the release of "ABBA Gold: Greatest Hits", a new compilation album. The single "Dancing Queen" received radio airplay in the UK in the middle of 1992 to promote the album. The song returned to the Top 20 of the UK singles chart in August that year, this time peaking at number 16. With sales of 30 million, "Gold" is the best-selling ABBA album, as well as one of the best-selling albums worldwide. With sales of 5.5 million copies, it is the second-best-selling album of all time in the UK, after Queen's "Greatest Hits". The enormous interest in the "ABBA Gold: Greatest Hits" compilation saw the release of "More ABBA Gold: More ABBA Hits" in 1993. In 1994, two Australian cult films caught the attention of the world's media, both focusing on admiration for ABBA: "The Adventures of Priscilla, Queen of the Desert" and "Muriel's Wedding". The same year, "Thank You for the Music", a four-disc box set comprising all the group's hits and stand-out album tracks, was released with the involvement of all four members. "By the end of the twentieth century," American critic Chuck Klosterman wrote a decade later, "it was far more contrarian to hate ABBA than to love them." ABBA were soon recognised and embraced by other acts: Evan Dando of the Lemonheads recorded a cover version of "Knowing Me, Knowing You"; Sinéad O'Connor and Boyzone's Stephen Gately have recorded "Chiquitita"; Tanita Tikaram, Blancmange and Steven Wilson paid tribute to "The Day Before You Came". Cliff Richard covered "Lay All Your Love on Me", while Dionne Warwick, Peter Cetera, and Celebrity Skin recorded their versions of "SOS". US alternative-rock musician Marshall Crenshaw has also been known to play a version of "Knowing Me, Knowing You" in concert appearances, while English Latin pop songwriter Richard Daniel Roman has recognised ABBA as a major influence. Swedish metal guitarist Yngwie Malmsteen covered "Gimme! Gimme! Gimme! (A Man After Midnight)" with slightly altered lyrics. Two tribute albums of ABBA cover versions have also been released. "ABBA: A Tribute" coincided with the 25th anniversary celebration and featured 17 songs, some of which were recorded especially for this release. 
Notable tracks include Go West's "One of Us", Army of Lovers' "Hasta Mañana", Information Society's "Lay All Your Love on Me", Erasure's "Take a Chance on Me" (with MC Kinky), and Lyngstad's a cappella duet with the Real Group of "Dancing Queen". A second 12-track album was released in 1999, entitled "ABBAmania", with proceeds going to the Youth Music charity in England. It featured all new cover versions: notable tracks were by Madness ("Money, Money, Money"), Culture Club ("Voulez-Vous"), the Corrs ("The Winner Takes It All"), Steps ("Lay All Your Love on Me", "I Know Him So Well"), and a medley entitled "Thank ABBA for the Music" performed by several artists and featured at the Brit Awards that same year. In 1997, an ABBA tribute group was formed, the ABBA Teens, which was subsequently renamed the A-Teens to allow the group some independence. The group's first album, "The ABBA Generation", consisting solely of ABBA covers reimagined as 1990s pop songs, was a worldwide success, as were subsequent albums. The group disbanded in 2004 due to a gruelling schedule and intentions to go solo. In Sweden, the growing recognition of the legacy of Andersson and Ulvaeus resulted in the 1998 "B & B Concerts", a tribute concert (with Swedish singers who had worked with the songwriters through the years) showcasing not only their ABBA years but also hits from before and after ABBA. The concert was a success, and was ultimately released on CD. It later toured Scandinavia and even went to Beijing in the People's Republic of China for two concerts. In 2000, ABBA was reported to have turned down an offer of approximately one billion US dollars to do a reunion tour consisting of 100 concerts. For the 2004 semi-final of the Eurovision Song Contest, staged in Istanbul 30 years after ABBA had won the contest in Brighton, all four members made cameo appearances in a special comedy video made for the interval act, entitled "Our Last Video Ever". Other well-known stars such as Rik Mayall, Cher and Iron Maiden's Eddie also made appearances in the video. It was not included in the official DVD release of the Eurovision Contest, but was issued as a separate DVD release, retitled "The Last Video" at the request of the former ABBA members. The video was made using puppet models of the members of the band. The video has surpassed 8 million views on YouTube as of November 2019. In 2005, all four members of ABBA appeared at the Stockholm premiere of the musical "Mamma Mia!". On 22 October 2005, at the Eurovision Song Contest's 50th anniversary celebration show, "Waterloo" was chosen as the best song in the competition's history. On 4 July 2008, all four ABBA members were reunited at the Swedish premiere of the film "Mamma Mia!". It was only the second time all of them had appeared together in public since 1986. During the appearance, they re-emphasised that they intended never to officially reunite, citing the opinion of Robert Plant that the re-formed Led Zeppelin was more like a cover band of itself than the original band. Ulvaeus stated that he wanted the band to be remembered as they were during the peak years of their success. The compilation album "ABBA Gold: Greatest Hits", originally released in 1992, returned to number-one in the UK album charts for the fifth time on 3 August 2008. On 14 August 2008, the "Mamma Mia! The Movie" film soundtrack went to number-one on the US "Billboard" charts, ABBA's first US chart-topping album. During the band's heyday the highest album chart position they had ever achieved in America was number 14. 
In November 2008, all eight studio albums, together with a ninth disc of rare tracks, were released as "The Albums". It hit several charts, peaking at number-four in Sweden and reaching the Top 10 in several other European territories. In 2008, Sony Computer Entertainment Europe, in collaboration with Universal Music Group Sweden AB, released "SingStar ABBA" on both the PlayStation 2 and PlayStation 3 games consoles, as part of the SingStar music video games. The PS2 version features 20 ABBA songs, while 25 songs feature on the PS3 version. On 22 January 2009, Fältskog and Lyngstad appeared together on stage to receive the Swedish music award "Rockbjörnen" (for "lifetime achievement"). In an interview, the two women expressed their gratitude for the honorary award and thanked their fans. On 25 November 2009, PRS for Music announced that the British public voted ABBA as the band they would most like to see re-form. On 27 January 2010, ABBAWORLD, a 25-room touring exhibition featuring interactive and audiovisual activities, debuted at Earls Court Exhibition Centre in London. According to the exhibition's website, ABBAWORLD is "approved and fully supported" by the band members. "Mamma Mia" was released as one of the first few non-premium song selections for the online RPG game "Bandmaster". On 17 May 2011, "Gimme! Gimme! Gimme!" was added as a non-premium song selection for the Bandmaster Philippines server. On 15 November 2011, Ubisoft released a dancing game called "ABBA: You Can Dance" for the Wii. In January 2012, Universal Music announced the re-release of ABBA's final album "The Visitors", featuring a previously unheard track, "From a Twinkling Star to a Passing Angel". A book entitled "ABBA: The Official Photo Book" was published in early 2014 to mark the 40th anniversary of the band's Eurovision victory. The book reveals that part of the reason for the band's outrageous costumes was that Swedish tax laws at the time made the cost of garish outfits deductible only if they were not suitable for daily wear. A sequel to the 2008 movie "Mamma Mia!", titled "Mamma Mia! Here We Go Again", was announced in May 2017; the film was released on 20 July 2018. Cher, who appeared in the movie, also released "Dancing Queen", an album full of ABBA covers, in September 2018. In June 2017, a blue plaque commemorating their 1974 Eurovision win was installed outside Brighton Dome. In May 2020, it was announced that ABBA's entire studio discography would be released on coloured vinyl for the first time, in a box set titled "ABBA: The Studio Albums". The initial release sold out in just a few hours. On 20 January 2016, all four members of ABBA made a public appearance at "Mamma Mia! The Party" in Stockholm. On 6 June 2016, the quartet appeared together at a private party at Berns Salonger in Stockholm, which was held to celebrate the 50th anniversary of Andersson and Ulvaeus's first meeting. Fältskog and Lyngstad sang the ABBA song "The Way Old Friends Do" before they were joined on stage by Andersson and Ulvaeus. British manager Simon Fuller announced in a statement in October 2016 that the group would be reuniting to work on a new "digital entertainment experience". The project would feature the members in their "life-like" avatar form ("abbatars"), based on their late 1970s tour, and was set to launch by the spring of 2019. On 27 April 2018, the members announced that they had recorded two new songs, one entitled "I Still Have Faith in You", to feature in a TV special set to air later that year. 
The other new track is called "Don't Shut Me Down". In September 2018, Ulvaeus revealed that the two new songs, "I Still Have Faith in You" and "Don't Shut Me Down", as well as the aforementioned TV special, would be released no earlier than March 2019. In January 2019, Ulvaeus revealed that neither song was finished yet, hinting at a final mix date of spring 2019 and the possibility of a third song. In June 2019, Ulvaeus announced that the first new song and video containing the Abbatars would be released in November 2019. In September, he stated in an interview that there were now five new ABBA songs to be released in 2020. In early 2020, Andersson confirmed that he was aiming for the songs to be released in September 2020. In April 2020, Ulvaeus gave an interview saying that, in the wake of the COVID-19 pandemic, the avatar project had been delayed by six months. As of 2020, five of the eight original songs Andersson had written for the new album had been recorded by the two female members, and a new music video, made with previously unseen technology at a cost of £15 million, was awaiting a release date. In October 1984, Ulvaeus and Andersson, together with lyricist Tim Rice, released the musical concept double-album "Chess". The singles "One Night in Bangkok" (with vocals by Murray Head and Anders Glenmark) and "I Know Him So Well" (a duet by Barbara Dickson and Elaine Paige, and later also recorded by Barbra Streisand and Whitney Houston) were both hugely successful. The former reached number-one in Australia, Germany, Spain and Switzerland; number-two in Austria, France and New Zealand, and number-three in Canada, Norway, Sweden and the US. In May 1986, the musical premiered in London's West End, and ran for almost three years. "Chess" also opened on Broadway in April 1988, but closed within two months due to bad reviews. In Stockholm, the composers staged "Chess på svenska" ("Chess in Swedish") in 2003, with some new material, including the musical numbers "Han är en man, han är ett barn" ("He's a Man, He's a Child") and "Glöm mig om du kan" ("Forget Me If You Can"). In 2008, the musical was again revived for a successful staging at London's Royal Albert Hall, which was subsequently released on DVD, and then in two successful separate touring productions in the United States and United Kingdom in 2010. Andersson and Ulvaeus' next project, "Kristina från Duvemåla", an epic Swedish musical, premiered in Malmö, in southern Sweden, in October 1995. The musical ran for five years in Stockholm, and an English version has been in development for some considerable time. It has been reported that a Broadway production is in its earliest stages of pre-production. In the meantime, following some earlier workshops, a full presentation of the English translation of the musical in concert, now with the shortened name of "Kristina", took place to capacity crowds in September 2009 at New York's Carnegie Hall, and in April 2010 at London's Royal Albert Hall, followed by a CD release of the New York recordings. Since 1983, besides "Chess" and "Kristina från Duvemåla", Andersson has continued writing songs with Ulvaeus. The pair produced two English-language pop albums with Swedish duo Gemini in 1985 and 1987. In 1987, Andersson also released his first solo album on his own label, Mono Music, called "Klinga mina klockor" ("Ring My Bells"), containing material inspired by Swedish folk music – and followed it with a second album titled "November 1989". 
During the 1990s, Andersson wrote music for the popular Swedish cabaret quartet Ainbusk Singers, giving them two hits: "Lassie" and ""Älska mig"" ("Love me"), and later produced "Shapes", an English-language album by group's Josefin Nilsson with all-new material by Andersson and Ulvaeus. Andersson has also regularly written music for films (most notably to Roy Andersson's "Songs from the Second Floor"). In 2001, Andersson formed his own band, Benny Anderssons Orkester (BAO), which released three successful albums in 2001, 2004 and 2007. Andersson has the distinction of remaining the longest in the Swedish Radio Svensktoppen charts; the song ""Du är min man"" ("You Are My Man"), sung by Helen Sjöholm, spent 278 weeks there between 2004 and 2009. Andersson released his third album BAO 3 in October 2007, of new material with his band BAO and vocalists Helen Sjöholm and Tommy Körberg, as well as playing to full houses at two of Sweden's largest concert venues in October and November 2007, with an audience of 14,000. Andersson and Ulvaeus have been highly involved in the worldwide productions of the musical "Mamma Mia!", alongside Lyngstad who attends premieres. They were also involved in the production of the successful film version of the musical, which opened in July 2008. Andersson produced the soundtrack utilising many of the musicians ABBA used on their albums and tours. Andersson made a cameo appearance in the movie as a "fisherman" piano player in the "Dancing Queen" scene, while Ulvaeus is seen as a Greek god playing a lyre during the closing credits. Andersson and Ulvaeus have continuously been writing new material; most recently they wrote seven songs for Andersson's 2011 BAO album "O Klang Och Jubeltid", performed as usual by vocalists Sjöholm, Körberg and Moreus. In July 2009, BAO (now renamed the Benny Andersson Band) released their first international album, "The Story of a Heart". The album was a compilation of 14 tracks from Andersson's five Swedish-language releases between 1987 and 2007, including five songs now recorded with lyrics by Ulvaeus in English; the new title song premiered on BBC2's "Ken Bruce Show". A Swedish-language version of the title track, ""Sommaren Du Fick"" ("The Summer You Got"), was released as a single in Sweden prior to the English version, with vocals by Helen Sjöholm. In May 2009, Andersson released a single recorded by the staff at his privately owned Stockholm hotel "Hotel Rival", titled "2nd Best to None", accompanied by a video showing the staff at work. In 2008, Andersson and Ulvaeus wrote a song for Swedish singer Sissela Kyle, titled ""Jag vill bli gammal"" ("I Wanna Grow Old"), for her Stockholm stage show ""Your Days Are Numbered"", which was never recorded and released, but was performed on television. Ulvaeus also contributed lyrics to ABBA's 1976 instrumental track "Arrival" for Sarah Brightman's cover version recorded for her 2008 album "Winter Symphony". New English lyrics have also been written for Andersson's 1999 song ""Innan Gryningen"" (then also named "Millennium Hymn"), with the new title "The Silence of the Dawn" for Barbara Dickson (performed live, but not yet recorded and released). In 2007, they wrote the song ""Han som har vunnit allt"" ("He Who's Won It All") for actor/singer Anders Ekborg. 
Ulvaeus also wrote English lyrics for two older songs from Andersson's solo albums: "After the Rain" ("Efter regnet", 1987) for opera singer Anne Sofie von Otter, for her Andersson tribute album "I Let the Music Speak", and "I Walk with You Mama" ("Stockholm by Night", 1989). Barbara Dickson recorded (but did not release) a Björn & Benny song entitled "The Day The Wall Came Tumbling Down"; the track was eventually released by Australian "Mamma Mia!" musical star Anne Wood on her album of ABBA covers, "Divine Discontent". Ulvaeus has also mentioned writing new material with Andersson for a BAO Christmas release (also mentioned as a BAO 'box'). Andersson (together with Kristina Lugn and Lars Rudolfsson) composed music for an obscure Swedish-language musical, "Hjälp Sökes" ("Help Wanted"), which premiered in February 2013. Andersson has also written music for a documentary film about Olof Palme, re-recording the track "Sorgmarsch" ("Dirge"). In 1980, Fältskog and her then 7-year-old daughter Linda recorded "Nu tändas tusen juleljus", a Swedish Christmas album. Released in 1981, it was Fältskog's first Swedish-language recording for the Polar Music label after she left CBS-Cupol. It peaked at No. 6 on the Swedish album chart in January 1982, and has since been re-released on CD by Polar Music/PolyGram/Universal Music. The album title is derived from one of Scandinavia's best-known Christmas carols, "Nu tändas tusen juleljus" ("Now a thousand Christmas candles are lit"). In 1983, Fältskog released the solo album "Wrap Your Arms Around Me", which achieved platinum sales in Sweden. This included the single "The Heat Is On", which was a hit in Europe and Scandinavia. It reached number-one in Sweden and Norway and peaked at number-two in the Netherlands and Belgium. In the United States, Fältskog earned a "Billboard" Top 30 hit with "Can't Shake Loose". The title track of the album was another successful hit, topping the charts in Belgium and Denmark, reaching the Top 5 in the Netherlands, South Africa and Sweden, and the Top 20 in Germany and France. The album sold 1.2 million copies worldwide. The album was produced by Mike Chapman, also known for his work with The Sweet, Mud, Suzi Quatro, Blondie, Pat Benatar and The Knack. Fältskog's second English-language solo album, "Eyes of a Woman" (produced by Eric Stewart of 10cc), was released in March 1985. It peaked at number two in Sweden (becoming a platinum seller). The first single from the album was her self-penned "I Won't Let You Go". Her duet with Ola Håkansson, "The Way You Are", was a number-one hit in Sweden in 1986 and was awarded double platinum status. In early 1987, Fältskog recorded a Swedish-language album, "Kom följ med i vår karusell" ("Come Join Us on Our Carousel"), with her son Christian and a children's choir. The single "På Söndag" ("On Sunday") received significant airplay on Swedish radio and even made the Swedish Top 10, a rare feat for a children's song. Also in 1987, Fältskog released her third English-language solo album, the Peter Cetera-produced "I Stand Alone", which also included the "Billboard" Adult Contemporary-charting duet with Cetera, "I Wasn't the One (Who Said Goodbye)", as well as the European charting singles "The Last Time" and "Let It Shine". The album was extremely successful in Sweden, where it spent eight weeks at number-one and was awarded double-platinum status. Shortly after some minor European promotion for the album in early 1988, Fältskog withdrew from public life and halted her music career. 
In 1996, she released her autobiography, "As I Am", and a compilation album featuring her solo hits alongside some ABBA classics. In 2004, Fältskog made a successful comeback, with the release of the critically acclaimed album "My Colouring Book", containing covers of songs that had the most impact on her teenage years in the 1960s. It debuted at number-one in Sweden (achieving triple-platinum status), and was a Top 10 hit in Denmark, Finland and Germany. It also became Fältskog's second solo album to reach the UK Top 20, peaking at number 12. The single "If I Thought You'd Ever Change Your Mind" (a cover of the song recorded by Cilla Black) became Fältskog's biggest solo hit in the UK, reaching number 11, while peaking at number-two in her native Sweden. A second single, "When You Walk in the Room", was released but met with less success. In January 2007, Fältskog sang a live duet on stage with Swedish singer Tommy Körberg at the after party for the final performance of the musical, "Mamma Mia!", in Stockholm, at which Andersson and Ulvaeus were also present. In May 2013, Fältskog released a solo album entitled "A" through Universal International. In a promotional interview, Fältskog explained that the album was unplanned and it was after she heard the first three songs that she felt that she "had to do this [record the album]". She also revealed that she completed singing lessons prior to recording the album, as she felt her throat was "a bit rusty". Fältskog stated that she would not be undertaking any tours or live performances in support of the album, explaining: "I'm not that young anymore. I don't have the energy to do that, and also I don't want to travel too much." The title of the album was conceived of by the studio production team. "A" proved successful upon release, reaching the Top 10 in many European countries, including Germany, Sweden and the UK (where it peaked at number-six and is Fältskog's highest-charting solo album to date), as well as Australia. Both female members of ABBA pursued solo careers on the international scene after their work with the group. In 1982, Lyngstad chose Genesis drummer and vocalist Phil Collins to produce the album "Something's Going On" and unveiled the hit single and video "I Know There's Something Going On" in August of that year. The single became a number-one hit in Belgium and Switzerland and was a Top 10 hit in Australia, Austria, Finland, France, Germany, Italy, the Netherlands, Norway, Poland, South Africa and Sweden. The track also proved successful in the US, peaking at No. 13 (and spending almost four months on the "Billboard" Hot 100). Sveriges Television documented this historical event, by filming the whole recording process. The result became a one-hour TV documentary, including interviews with Lyngstad, Collins, Ulvaeus and Andersson as well as all the musicians. This documentary and the promotion videos from the album are included in "Frida – The DVD". Lyngstad's second international solo album, "Shine" (produced by Steve Lillywhite), was recorded in Paris and released in 1984. This would be Lyngstad's final studio album release for twelve years. It featured "Slowly", the last known Andersson-Ulvaeus composition to have been recorded by one of the former female ABBA vocalists to date. The promotional videos and clips for "Shine" are included in "Frida – The DVD". In 1992, Lyngstad was chosen to be the chairperson for the environmental organisation ""Artister för miljön"" ("Artists for the Environment") in Sweden. 
She was chairperson for this organisation until 1995. To mark her interests for the environment, she recorded the Julian Lennon song "Saltwater" and performed it live in Stockholm. She arranged and financed summer camps for poor children in Sweden, focusing on environmental and ecological issues. Her environmental work for the organisation led to the decision to record again. The album "Djupa andetag" ("Deep Breaths") was released in 1996 and became a number-one success in Sweden. The lyrics for the single, "Även en blomma" ("Even a Flower"), deal with environmental issues. In 2004, Lyngstad recorded a song entitled "The Sun Will Shine Again", written especially for her and released with former Deep Purple member Jon Lord. The couple made several TV performances with the song in Germany. On 5 December 2005, Universal released her box set, Frida – 4xCD 1xDVD, consisting of the solo albums she recorded for the Polar Label and the 3-hour documentary "Frida – The DVD". On this DVD, which covers her entire singing career, the viewer is guided by Lyngstad herself through the years from her TV debut in Sweden in 1967 to the TV performances she made in Germany in 2004. Many rare clips are included in the set and each performance is explained by Lyngstad herself. The interview with Lyngstad was filmed in the Swiss Alps in mid-2005. Lyngstad returned to the recording studio in 2010 to record vocals for the Cat Stevens song "Morning Has Broken", for Swedish guitarist Georg Wadenius's album "Reconnection". The album, which featured other guest vocalists, reached number 17 in Sweden. In 2018, Lyngstad and multi-Grammy winning Jazz trumpeter Arturo Sandoval released a reworking of the ABBA song "Andante, Andante" as a single, which is also featured on Sandoval's album "Ultimate Duets". ABBA were perfectionists in the studio, working on tracks until they got them right rather than leaving them to come back to later on. The band created a basic rhythm track with a drummer, guitarist and bass player, and overlaid other arrangements and instruments. Vocals were then added, and orchestra overdubs were usually left until last. Fältskog and Lyngstad contributed ideas at the studio stage. Andersson and Ulvaeus played them the backing tracks and they made comments and suggestions. According to Fältskog, she and Lyngstad had the final say in how the lyrics were shaped. Their single "S.O.S." was "heavily influenced by Phil Spector's Wall of Sound and the melodies of the Beach Boys", according to "Billboard" writer Fred Bronson, who also reported that Ulvaeus had said, "Because there was the Latin-American influence, the German, the Italian, the English, the American, all of that. I suppose we were a bit exotic in every territory in an acceptable way." ABBA was widely noted for the colourful and trend-setting costumes its members wore. The reason for the wild costumes was Swedish tax law. The cost of the clothes was deductible only if they could not be worn other than for performances. Choreography by Graham Tainton also contributed to their performance style. The videos that accompanied some of the band's biggest hits are often cited as being among the earliest examples of the genre. Most of ABBA's videos (and "ABBA: The Movie") were directed by Lasse Hallström, who would later direct the films "My Life as a Dog", "The Cider House Rules" and "Chocolat". ABBA made videos because their songs were hits in many different countries and personal appearances were not always possible. 
This was also done in an effort to minimise travelling, particularly to countries that would have required extremely long flights. Fältskog and Ulvaeus had two young children, and Fältskog, who was also afraid of flying, was very reluctant to leave her children for such a long time. ABBA's manager, Stig Anderson, realised the potential of showing a simple video clip on television to publicise a single or album, thereby allowing easier and quicker exposure than a concert tour. Some of these videos have become classics because of the 1970s-era costumes and early video effects, such as the grouping of the band members in different combinations of pairs, overlapping one singer's profile with the other's full face, and the contrasting of one member against another. In 1976, ABBA participated in an advertising campaign to promote the Matsushita Electric Industrial Co.'s brand, National, in Australia. The campaign was also broadcast in Japan. Five commercial spots, each of approximately one minute, were produced, each presenting the "National Song" performed by ABBA using the melody and instrumental arrangements of "Fernando" with revised lyrics. In September 2010, band members Andersson and Ulvaeus criticised the right-wing Danish People's Party (DF) for using the ABBA song "Mamma Mia" (with modified lyrics) at rallies. The band threatened to file a lawsuit against the DF, saying they had never allowed their music to be used politically and that they had absolutely no interest in supporting the party. Their record label Universal Music later said that no legal action would be taken because an agreement had been reached. During their active career, from 1972 to 1982, ABBA placed 20 singles on the "Billboard" Hot 100, 14 of which made the Top 40 (13 on the Cashbox Top 100), with 10 making the Top 20 on both charts. A total of four of those singles reached the Top 10, including "Dancing Queen", which reached number one in April 1977. While "Fernando" and "SOS" did not break the Top 10 on the "Billboard" Hot 100 (reaching number 13 and 15 respectively), they did reach the Top 10 on the Cashbox ("Fernando") and Record World ("SOS") charts. Both "Dancing Queen" and "Take a Chance on Me" were certified gold by the Recording Industry Association of America for sales of over one million copies each. The group also had 12 Top 20 singles on the "Billboard" Adult Contemporary chart, with two of them, "Fernando" and "The Winner Takes It All", reaching number-one. "Lay All Your Love on Me" was ABBA's fourth number-one single on a "Billboard" chart, topping the Hot Dance Club Play chart. Nine ABBA albums made their way into the top half of the "Billboard" 200 album chart, with seven reaching the Top 50 and four reaching the Top 20. "ABBA: The Album" was the highest-charting album of the group's career, peaking at No. 14. Five albums received RIAA gold certification (more than 500,000 copies sold), while three acquired platinum status (selling more than one million copies). The compilation album "ABBA Gold: Greatest Hits" topped the "Billboard" Top Pop Catalog Albums chart in August 2008 (15 years after it was first released in the US in 1993), becoming the group's first number-one album ever on any of the "Billboard" album charts. It has sold 6 million copies there. On 15 March 2010, ABBA were inducted into the Rock and Roll Hall of Fame by Bee Gees members Barry Gibb and Robin Gibb. The ceremony was held at the Waldorf Astoria Hotel in New York City. The group was represented by Anni-Frid Lyngstad and Benny Andersson. 
The members of ABBA were married as follows: Agnetha Fältskog and Björn Ulvaeus from 1971 to 1980; Benny Andersson and Anni-Frid Lyngstad from 1978 to 1981.
https://en.wikipedia.org/wiki?curid=880
MessagePad The MessagePad is the first series of personal digital assistant devices developed by Apple Computer for the Newton platform in 1993. Some of the electronic engineering and the manufacture of Apple's MessagePad devices was undertaken in Japan by the Sharp Corporation. The devices, developed and marketed by Apple, were based on the ARM 610 RISC processor and all featured handwriting recognition software. The devices ran the Newton OS. With the MessagePad 120 running Newton OS 2.0, the Newton Keyboard by Apple became available; it can also be used, via a dongle, on Newton devices with a Newton InterConnect port, most notably the Apple MessagePad 2000/2100 series, as well as the Apple eMate 300. Newton devices featuring Newton OS 2.1 or higher can be used with the screen turned horizontally ("landscape") as well as vertically ("portrait"). Changing a setting rotates the contents of the display by 90, 180 or 270 degrees. Handwriting recognition still works properly with the display rotated, although display calibration is needed when rotation in any direction is used for the first time or when the Newton device is reset. In the initial versions (Newton OS 1.x) the handwriting recognition gave extremely mixed results for users and was sometimes inaccurate. The original handwriting recognition engine was called Calligrapher, and was licensed from the Russian company ParaGraph International. Calligrapher's design was quite sophisticated; it attempted to learn the user's natural handwriting, using a database of known words to make guesses as to what the user was writing, and could interpret writing anywhere on the screen, whether hand-printed, in cursive, or a mix of the two. By contrast, Palm Pilot's Graffiti had a less sophisticated design than Calligrapher, but was sometimes found to be more accurate and precise due to its reliance on a fixed, predefined stroke alphabet. The stroke alphabet used letter shapes which resembled standard handwriting, but which were modified to be both simple and very easy to differentiate. Palm Computing also released two versions of Graffiti for Newton devices. The Newton version sometimes performed better and could also show strokes as they were being written, since input was done on the display itself rather than on a silkscreened area. For editing text, Newton had a very intuitive system for handwritten editing, such as scratching out words to be deleted, circling text to be selected, or using written carets to mark inserts. Later releases of the Newton operating system retained the original recognizer for compatibility, but added a hand-printed-text-only (not cursive) recognizer, called "Rosetta", which was developed by Apple, included in version 2.0 of the Newton operating system, and refined in Newton 2.1. Rosetta is generally considered a significant improvement, and many reviewers, testers, and most users consider the Newton 2.1 handwriting recognition software better than any of the alternatives even 10 years after it was introduced.
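The fixed-stroke-alphabet approach described above can be illustrated with a small sketch. The following Python snippet is a minimal, hypothetical template matcher: the stroke templates, function names, and the resample/normalise/nearest-template pipeline are illustrative assumptions in the general spirit of Graffiti-style recognisers, not Palm's or Apple's actual algorithms or code.

```python
import math

def resample(points, n=32):
    """Resample a stroke (list of (x, y) tuples) to n roughly evenly spaced points."""
    dists = [math.dist(points[i - 1], points[i]) for i in range(1, len(points))]
    total = sum(dists)
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)
    resampled = [points[0]]
    pts = list(points)
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step and d > 0:
            t = (step - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)   # continue measuring from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(resampled) < n:  # guard against floating-point shortfall
        resampled.append(points[-1])
    return resampled[:n]

def normalize(points):
    """Translate to the centroid and scale into a unit box, keeping orientation."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    s = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / s, (y - cy) / s) for x, y in points]

def distance(a, b):
    """Mean point-to-point distance between two equal-length strokes."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognize(stroke, alphabet):
    """Return the letter whose template stroke is closest to the drawn stroke."""
    probe = normalize(resample(stroke))
    return min(alphabet,
               key=lambda letter: distance(probe, normalize(resample(alphabet[letter]))))

# Hypothetical single-stroke templates (not Graffiti's real alphabet).
ALPHABET = {
    "L": [(0, 0), (0, 2), (1, 2)],          # down, then right
    "V": [(0, 0), (1, 2), (2, 0)],          # down-right, then up-right
    "C": [(2, 0), (0, 0), (0, 2), (2, 2)],  # open box shape
}

if __name__ == "__main__":
    drawn = [(0.1, 0.0), (0.0, 1.1), (0.05, 2.0), (1.1, 2.05)]  # a wobbly "L"
    print(recognize(drawn, ALPHABET))  # -> "L"
```

Because every symbol is matched against a small, fixed set of deliberately distinct templates, this style of recogniser needs no per-user training, which is one plausible reading of why Graffiti was sometimes found more predictable than a learning, cursive-capable engine such as Calligrapher.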
https://en.wikipedia.org/wiki?curid=887
A. E. van Vogt Alfred Elton van Vogt (April 26, 1912 – January 26, 2000) was a Canadian-born science fiction author. His fragmented, bizarre narrative style influenced later science fiction writers, notably Philip K. Dick. He was one of the most popular, influential, and complex practitioners of science fiction in the mid-twentieth century, the genre's so-called Golden Age. Alfred Vogt (both "Elton" and "van" were added much later) was born on April 26, 1912 on his grandparents' farm in Edenburg, Manitoba, a tiny (and now defunct) Russian Mennonite community east of Gretna, Manitoba, Canada in the Mennonite West Reserve. He was the third of six children born to Heinrich "Henry" Vogt and Aganetha "Agnes" Vogt (née Buhr), both of whom were themselves born in Manitoba, but who grew up in heavily immigrant communities. Until age four, van Vogt and his family spoke only Plautdietsch at home. For the first dozen or so years of his life, van Vogt's father, Henry Vogt, a lawyer, moved his family several times within western Canada, alighting successively in Neville, Saskatchewan; Morden, Manitoba; and finally Winnipeg, Manitoba. Alfred Vogt found these moves difficult. By the 1920s, living in Winnipeg, his father Henry worked as an agent for a steamship company, but the stock market crash of 1929 proved financially disastrous, and the family could not afford to send Alfred to college. During his teen years, Alfred worked as a farmhand and a truck driver, and by the age of 19, he was working in Ottawa for the Canadian census bureau. He began his writing career with stories in the true confession style of pulp magazines such as "True Story". Most of these stories were published anonymously, with the first-person narratives allegedly being written by people (often women) in extraordinary, emotional, and life-changing circumstances. After a year in Ottawa, he moved back to Winnipeg, where he sold newspaper advertising space and continued to write. While continuing to pen melodramatic "true confessions" stories through 1937, he also began writing short radio dramas for local radio station CKY, as well as conducting interviews published in trade magazines. He added the middle name "Elton" at some point in the mid-1930s, and at least one confessional story (1937's "To Be His Keeper") was sold to the "Toronto Star", which misspelled his name as "Alfred Alton Bogt" in the byline. Shortly thereafter, he added the "van" to his surname, and from that point forward he used the name "A. E. van Vogt" both personally and professionally. By 1938, van Vogt decided to switch to writing science fiction, a genre he enjoyed reading. He was inspired by the August 1938 issue of "Astounding Science Fiction," which he picked up at a newsstand. John W. Campbell's novelette "Who Goes There?" (later adapted into "The Thing from Another World" and "The Thing") inspired van Vogt to write "Vault of the Beast", which he submitted to that same magazine. Campbell, who edited "Astounding" (and had written the story under a pseudonym), sent van Vogt a rejection letter, but one which encouraged van Vogt to try again. Van Vogt sent another story, entitled "Black Destroyer", which was accepted. A revised version of "Vault of the Beast" would be published in 1940. Van Vogt's first SF publication was inspired by "The Voyage of the Beagle" by Charles Darwin. "The Black Destroyer" was published by John W. Campbell in "Astounding Science Fiction" in July 1939, the centennial year of Darwin's journal.
It featured a fierce, carnivorous alien, the coeurl, stalking the crew of an exploration spaceship, and served as the inspiration for multiple science fiction movies, including "Alien" (1979). Also in 1939, still living in Winnipeg, van Vogt married Edna Mayne Hull, a fellow Manitoban. Hull, who had previously worked as a private secretary, would act as van Vogt's typist, and be credited with writing several SF stories of her own throughout the early 1940s. The outbreak of World War II in September 1939 caused a change in van Vogt's circumstances. Ineligible for military service due to his poor eyesight, he accepted a clerking job with the Canadian Department of National Defence. This necessitated a move back to Ottawa, where he and his wife would stay for the next year and a half. Meanwhile, his writing career continued. "Discord in Scarlet" was van Vogt's second story to be published, also appearing as the cover story. It was accompanied by interior illustrations created by Frank Kramer and Paul Orban. (Van Vogt and Kramer thus debuted in the issue of "Astounding" that is sometimes identified as the start of the Golden Age of Science Fiction.) Van Vogt's first completed novel, and one of his most famous, is "Slan" (Arkham House, 1946), which Campbell serialized in "Astounding" September to December 1940. Using what became one of van Vogt's recurring themes, it told the story of a nine-year-old superman living in a world in which his kind are slain by "Homo sapiens". Others saw van Vogt's talent from his first story, and in May 1941, van Vogt decided to become a full-time writer, quitting his job at the Canadian Department of National Defence. Freed from the necessity of living in Ottawa, he and his wife lived for a time in the Gatineau region of Quebec before moving to Toronto in the fall of 1941. Prolific throughout this period, van Vogt wrote many of his more famous short stories and novels in the years from 1941 through 1944. The novels "The Book of Ptath" and "The Weapon Makers" both appeared in magazines in serial form during this era; they were later published in book form after World War II. As well, several (though not all) of the stories that were compiled to make up the novels "The Weapon Shops of Isher", "The Mixed Men" and "The War Against the Rull" were also published during this time. In November 1944, van Vogt and Hull moved to Hollywood; van Vogt would spend the rest of his life in California. He had been using the name "A. E. van Vogt" in his public life for several years, and as part of the process of obtaining American citizenship in 1945 he finally and formally changed his legal name from Alfred Vogt to Alfred Elton van Vogt. To his friends in the California science fiction community, he was known as "Van". Van Vogt systematized his writing method, using scenes of 800 words or so where a new complication was added or something resolved. Several of his stories hinge on temporal conundra, a favorite theme. He stated that he acquired many of his writing techniques from three books: "Narrative Technique" by Thomas Uzzell, "The Only Two Ways to Write a Story" by John Gallishaw, and "Twenty Problems of the Fiction Writer" by Gallishaw. He also claimed many of his ideas came from dreams; throughout his writing life he arranged to be awakened every 90 minutes during his sleep period so he could write down his dreams. 
Van Vogt was also always interested in the idea of all-encompassing systems of knowledge (akin to modern meta-systems)—the characters in his very first story used a system called "Nexialism" to analyze the alien's behavior. Around this time, he became particularly interested in the general semantics of Alfred Korzybski. He subsequently wrote a novel merging these overarching themes, "The World of Ā", originally serialized in "Astounding" in 1945. Ā (often rendered as "Null-A"), or non-Aristotelian logic, refers to the capacity for, and practice of, using intuitive, inductive reasoning (compare fuzzy logic), rather than reflexive, or conditioned, deductive reasoning. The novel recounts the adventures of an individual living in an apparent Utopia, where those with superior brainpower make up the ruling class... though all is not as it seems. A sequel, "The Players of Ā" (later re-titled "The Pawns of Null-A") was serialized in 1948–49. At the same time, in his fiction, van Vogt was consistently sympathetic to absolute monarchy as a form of government. This was the case, for instance, in the "Weapon Shop" series, the "Mixed Men" series, and in single stories such as "Heir Apparent" (1945), whose protagonist was described as a "benevolent dictator". These sympathies were the subject of much critical discussion during van Vogt's career, and afterwards. Van Vogt published "Enchanted Village" in the July 1950 issue of "Other Worlds Science Stories". It was reprinted in over 20 collections or anthologies, and appeared many times in translation. In 1950, van Vogt was briefly appointed as head of L. Ron Hubbard's Dianetics operation in California. Van Vogt had first met Hubbard in 1945, and became interested in his Dianetics theories, which were published shortly thereafter. Dianetics was the secular precursor to Hubbard's Church of Scientology; van Vogt would have no association with Scientology, as he did not approve of its mysticism. The California Dianetics operation went broke nine months later, but never went bankrupt, due to van Vogt's arrangements with creditors. Very shortly after that, van Vogt and his wife opened their own Dianetics center, partly financed by his writings, until he "signed off" around 1961. In practical terms, what this meant was that from 1951 through 1961, van Vogt's focus was on Dianetics, and no new story ideas flowed from his typewriter. However, during the 1950s, van Vogt retrospectively patched together many of his previously published stories into novels, sometimes creating new interstitial material to help bridge gaps in the narrative. Van Vogt referred to the resulting books as "fix-ups", a term that entered the vocabulary of science-fiction criticism. When the original stories were closely related this was often successful — although some van Vogt fix-ups featured disparate stories thrown together that bore little relation to each other, generally making for a less coherent plot. One of his best-known (and well-regarded) novels, "The Voyage of the Space Beagle" (1950) was a fix-up of four short stories including "Discord in Scarlet"; it was published in at least five European languages by 1955. Although Van Vogt averaged a new book title every ten months from 1951 to 1961, none of them were new stories. All of van Vogt's books from 1951 to 1961 were fix-ups, or collections of previously published stories, or expansions of previously published short stories to novel length, or republications of his books under new titles. 
All were based on story material written and originally published between 1939 and 1950. As well, one non-fiction work, "The Hypnotism Handbook", appeared in 1956, though it had apparently been written much earlier. Some of van Vogt's more well-known work was still produced using the fix-up method. In 1951, he published the fix-up "The Weapon Shops of Isher." In the same decade, van Vogt also produced collections and fixups such as "The Mixed Men" (1952), "The War Against the Rull" (1959), and the two "Clane" novels, "Empire of the Atom" (1957) and "The Wizard of Linn" (1962), which were inspired (like Asimov's Foundation series) by Roman imperial history, specifically the reign of Claudius. After more than a decade of running their Dianetics center, Hull and van Vogt closed it in 1961. Nevertheless, van Vogt maintained his association with the overall organization and was still president of the Californian Association of Dianetic Auditors into the 1980s. Though the constant re-packaging of his older work meant that he had never really been away from the book publishing world, van Vogt had not published any wholly new fiction for almost 12 years when he decided to return to writing in 1962. He did not return immediately to science fiction, however, but instead wrote the only mainstream, non-sf novel of his career. Van Vogt was profoundly affected by revelations of totalitarian police states that emerged after World War II. Accordingly, he wrote a mainstream novel that he set in Communist China, "The Violent Man" (1962); he said that to research this book he had read 100 books about China. Into this book he incorporated his view of "the violent male type", which he described as a "man who had to be right", a man who "instantly attracts women" and who he said were the men who "run the world". Contemporary reviews were lukewarm at best, and van Vogt thereafter returned to science fiction. From 1963 through the mid-1980s, van Vogt once again published new material on a regular basis, though fix-ups and reworked material also appeared relatively often. His later novels included fix-ups such as "The Beast" (also known as "Moonbeast") (1963), "Rogue Ship" (1965), "Quest for the Future" (1970) and "Supermind" (1977). He also wrote novels by expanding previously published short stories; works of this type include "The Darkness on Diamondia" (1972) and "Future Glitter" (also known as "Tyranopolis"; 1973). Novels that were written simply as novels, and not serialized magazine pieces or fix-ups, were very rare in van Vogt's oeuvre, but began to appear regularly beginning in the 1970s. Van Vogt's original novels included "Children of Tomorrow" (1970), "The Battle of Forever" (1971) and "The Anarchistic Colossus" (1977). Over the years, many sequels to his classic works were promised, but only one appeared: "Null-A Three" (1984; originally published in French). Several later books were originally published in Europe, and at least one novel only ever appeared in foreign language editions and was never published in its original English. When the 1979 film "Alien" appeared, it was noted that the plot closely matched the plots of both "Black Destroyer" and "Discord in Scarlet", both published in "Astounding magazine" in 1939, and then later published in the 1950 book "Voyage of the Space Beagle". Van Vogt sued the production company for plagiarism, and eventually collected an out-of-court settlement of $50,000 from 20th Century Fox. 
In increasingly frail health, van Vogt published his final short story in 1986. On January 26, 2000, A. E. van Vogt died in Los Angeles from Alzheimer's disease. He was survived by his second wife, the former Lydia Bereginsky. Van Vogt's first wife, Edna Mayne Hull, died in 1975. Van Vogt married Lydia Bereginsky in 1979; they remained together until his death. Critical opinion about the quality of van Vogt's work is sharply divided. An early and articulate critic was Damon Knight. In a 1945 chapter-long essay reprinted in "In Search of Wonder," entitled "Cosmic Jerrybuilder: A. E. van Vogt", Knight described van Vogt as "no giant; he is a pygmy who has learned to operate an overgrown typewriter". Knight described "The World of Null-A" as "one of the worst allegedly adult science fiction stories ever published". Concerning van Vogt's writing, Knight said: About "Empire of the Atom" Knight wrote: Knight also expressed misgivings about van Vogt's politics. He noted that van Vogt's stories almost invariably present absolute monarchy in a favorable light. In 1974, Knight retracted some of his criticism after learning about van Vogt's practice of writing down his dreams as part of his working methods: Knight's criticism greatly damaged van Vogt's reputation. On the other hand, when science fiction author Philip K. Dick was asked which science fiction writers had influenced his work the most, he replied: Dick also defended van Vogt against Damon Knight's criticisms: In a review of "Transfinite: The Essential A. E. van Vogt", science fiction writer Paul Di Filippo said: In "The John W. Campbell Letters", Campbell says, "The son-of-a-gun gets hold of you in the first paragraph, ties a knot around you, and keeps it tied in every paragraph thereafter—including the ultimate last one". Harlan Ellison (who had begun reading van Vogt as a teenager) wrote, "Van was the first writer to shine light on the restricted ways in which I had been taught to view the universe and the human condition". Writing in 1984, David Hartwell said: The literary critic Leslie A. Fiedler said something similar: American literary critic Fredric Jameson says of van Vogt: Van Vogt still has his critics. For example, Darrell Schweitzer, writing to "The New York Review of Science Fiction" in 1999, quoted a passage from the original van Vogt novelette "The Mixed Men", which he was then reading, and remarked: In 1996, van Vogt received a Special Award from the World Science Fiction Convention "for six decades of golden age science fiction". That same year, he was inducted as an inaugural member of the Science Fiction and Fantasy Hall of Fame. In 1946, van Vogt and his first wife, Edna Mayne Hull, were Guests of Honor at the fourth World Science Fiction Convention. In 1980, van Vogt received a "Casper Award" (precursor to the Canadian Prix Aurora Awards) for Lifetime Achievement. The Science Fiction Writers of America named him its 14th Grand Master in 1995 (presented 1996). Great controversy within SFWA accompanied its long wait in bestowing its highest honor (limited to living writers, no more than one annually). Writing an obituary of van Vogt, Robert J. Sawyer, a fellow Canadian writer of science fiction, remarked: It is generally held that the "damnable SFWA politics" concerns Damon Knight, the founder of the SFWA, who abhorred van Vogt's style and politics and thoroughly demolished his literary reputation in the 1950s.
Harlan Ellison was more explicit in his 1999 introduction to "Futures Past: The Best Short Fiction of A. E. van Vogt": In 1996, van Vogt received a Special Award from the World Science Fiction Convention "for six decades of golden age science fiction". That same year, the Science Fiction and Fantasy Hall of Fame inducted him into its inaugural class of two deceased and two living persons, along with writer Jack Williamson (also living) and editors Hugo Gernsback and John W. Campbell. The works of van Vogt were translated into French by the surrealist Boris Vian ("The World of Null-A" as "Le Monde des Å" in 1958), and van Vogt's works were "viewed as great literature of the surrealist school". In addition, "Slan" was published in French, translated by Jean Rosenthal, under the title "À la poursuite des Slans", as part of the paperback series 'Editions J'ai Lu: Romans-Texte Integral' in 1973. This edition also listed the following works by van Vogt as having been published in French in the same series: "Le Monde des Å", "La faune de l'espace", "Les joueurs du Å", "L'empire de l'atome", "Le sorcier de Linn", "Les armureries d'Isher", "Les fabricants d'armes", and "Le livre de Ptath".
https://en.wikipedia.org/wiki?curid=888
Anna Kournikova Anna Sergeyevna Kournikova (; born 7 June 1981) is a Russian former professional tennis player and American television personality. Her appearance and celebrity status made her one of the best known tennis stars worldwide. At the peak of her fame, fans looking for images of Kournikova made her name one of the most common search strings on Google Search. Despite never winning a singles title, she reached No. 8 in the world in 2000. She achieved greater success playing doubles, where she was at times the world No. 1 player. With Martina Hingis as her partner, she won Grand Slam titles in Australia in 1999 and 2002, and the WTA Championships in 1999 and 2000. They referred to themselves as the "Spice Girls of Tennis". Kournikova retired at the age of 21 due to serious back and spinal problems, including a herniated disk. She lives in Miami Beach, Florida, and played in occasional exhibitions and in doubles for the St. Louis Aces of World Team Tennis before the team folded in 2011. She was a new trainer for season 12 of the television show "The Biggest Loser", replacing Jillian Michaels, but did not return for season 13. In addition to her tennis and television work, Kournikova serves as a Global Ambassador for Population Services International's "Five & Alive" program, which addresses health crises facing children under the age of five and their families. Kournikova was born in Moscow, Russia on 7 June 1981. Her father, Sergei Kournikov (born 1961), a former Greco-Roman wrestling champion, eventually earned a PhD and was a professor at the University of Physical Culture and Sport in Moscow. As of 2001, he was still a part-time martial arts instructor there. Her mother Alla (born 1963) had been a 400-metre runner. Her younger half-brother, Allan, is a youth golf world champion who was featured in the 2013 documentary film "The Short Game". Sergei Kournikov has said, "We were young and we liked the clean, physical life, so Anna was in a good environment for sport from the beginning". Kournikova received her first tennis racquet as a New Year gift in 1986 at the age of five. Describing her early regimen, she said, "I played two times a week from age six. It was a children's program. And it was just for fun; my parents didn't know I was going to play professionally, they just wanted me to do something because I had lots of energy. It was only when I started playing well at seven that I went to a professional academy. I would go to school, and then my parents would take me to the club, and I'd spend the rest of the day there just having fun with the kids." In 1986, Kournikova became a member of the Spartak Tennis Club, coached by Larissa Preobrazhenskaya. In 1989, at the age of eight, Kournikova began appearing in junior tournaments, and by the following year, was attracting attention from tennis scouts across the world. She signed a management deal at age ten and went to Bradenton, Florida, to train at Nick Bollettieri's celebrated tennis academy. Following her arrival in the United States, she became prominent on the tennis scene. At the age of 14, she won the European Championships and the Italian Open Junior tournament. In December 1995, she became the youngest player to win the 18-and-under division of the Junior Orange Bowl tennis tournament. By the end of the year, Kournikova was crowned the ITF Junior World Champion U-18 and Junior European Champion U-18. 
Earlier, in September 1995, Kournikova, still at the age of 14, debuted in the WTA Tour, when she received a wildcard into the qualifications at the WTA tournament in Moscow, the Moscow Ladies Open, and played her way through the qualifying rounds before losing in the second round of the main draw to third-seeded Sabine Appelmans. There at the 1995 Moscow Ladies Open Kournikova already reached her first WTA Tour doubles final. Partnering with 1995 Wimbledon girls' champion in both singles and doubles Aleksandra Olsza, she lost the title match to Meredith McGrath and Larisa Savchenko-Neiland. In February–March 1996, Kournikova won two ITF titles, in Midland, Michigan and Rockford, Illinois. Still only 14 years of age, in April 1996 she debuted at the Fed Cup for Russia, the youngest player ever to participate and win a match. In 1996, she started playing under a new coach, Ed Nagel. Her six-year tenure with Ed would produce terrific results. At the age of 15, she made her Grand Slam debut, when she reached the fourth round of the 1996 US Open, only to be stopped by then-top ranked player Steffi Graf, the eventual champion. After this tournament, Kournikova's ranking jumped from No. 144 to debut in the Top 100 at No. 69. Kournikova was a member of the Russian delegation to the 1996 Olympic Games in Atlanta, Georgia. In 1996, she was named WTA Newcomer of the Year, and she was ranked No. 57 in the end of the season. Kournikova entered the 1997 Australian Open as world No. 67, where she lost in the first round to world No. 12, Amanda Coetzer. At the Italian Open, Kournikova lost to Amanda Coetzer in the second round. However, she reached the semi-finals in the doubles partnering with Elena Likhovtseva, before losing to the sixth seeds Mary Joe Fernández and Patricia Tarabini. At the French Open, Kournikova made it to the third round before losing to world No. 1, Martina Hingis. She also reached the third round in doubles with Likhovtseva. At the Wimbledon Championships, Kournikova became only the second woman in the open era to reach the semi-finals in her Wimbledon debut, the first being Chris Evert in 1972. There she lost to eventual champion Martina Hingis. At the US Open, she lost in the second round to the eleventh seed Irina Spîrlea. Partnering with Likhovtseva, she reached the third round of the women's doubles event. Kournikova played her last WTA Tour event of 1997 at Porsche Tennis Grand Prix in Filderstadt, losing to Amanda Coetzer in the second round of singles, and in the first round of doubles to Lindsay Davenport and Jana Novotná partnering with Likhovtseva. She broke into the top 50 on 19 May, and was ranked No. 32 in singles and No. 41 in doubles at the end of the season. In 1998, Kournikova broke into the WTA's top 20 rankings for the first time, when she was ranked No. 16. At the Australian Open, Kournikova lost in the third round to world No. 1 player, Martina Hingis. She also partnered with Larisa Savchenko-Neiland in women's doubles, and they lost to eventual champions Hingis and Mirjana Lučić in the second round. Although she lost in the second round of the Paris Open to Anke Huber in singles, Kournikova reached her second doubles WTA Tour final, partnering with Larisa Savchenko-Neiland. They lost to Sabine Appelmans and Miriam Oremans. Kournikova and Savchenko-Neiland reached their second consecutive final at the Linz Open, losing to Alexandra Fusai and Nathalie Tauziat. 
At the Miami Open, Kournikova reached her first WTA Tour singles final, losing to Venus Williams. Kournikova then reached two consecutive quarterfinals, at Amelia Island and the Italian Open, losing respectively to Lindsay Davenport and Martina Hingis. At the German Open, she reached the semi-finals in both singles and doubles, partnering with Larisa Savchenko-Neiland. At the French Open, Kournikova had her best result at that tournament, making it to the fourth round before losing to Jana Novotná. She also reached her first Grand Slam doubles semi-finals, losing with Savchenko-Neiland to Lindsay Davenport and Natasha Zvereva. During her quarterfinal match at the grass-court Eastbourne Open versus Steffi Graf, Kournikova injured her thumb, which would eventually force her to withdraw from the 1998 Wimbledon Championships. She won that match, but then withdrew from her semi-final against Arantxa Sánchez Vicario. Kournikova returned for the Du Maurier Open and made it to the third round, before losing to Conchita Martínez. At the US Open, Kournikova reached the fourth round before losing to Arantxa Sánchez Vicario. Her strong year qualified her for the year-end 1998 WTA Tour Championships, but she lost to Monica Seles in the first round. However, with Seles, she won her first WTA doubles title, in Tokyo, beating Mary Joe Fernández and Arantxa Sánchez Vicario in the final. At the end of the season, she was ranked No. 10 in doubles. At the start of the 1999 season, at the Australian Open, Kournikova advanced to the fourth round in singles before losing to Mary Pierce. However, she won her first Grand Slam doubles title there, partnering Martina Hingis. The two defeated Lindsay Davenport and Natasha Zvereva in the final. At the Tier I Family Circle Cup, Kournikova reached her second WTA Tour final, but lost to Martina Hingis. She then defeated Jennifer Capriati, Lindsay Davenport and Patty Schnyder on her route to the Bausch & Lomb Championships semi-finals, losing to Ruxandra Dragomir. At the French Open, Kournikova reached the fourth round before losing to eventual champion Steffi Graf. Once the grass-court season commenced in England, Kournikova lost to Nathalie Tauziat in the semi-finals in Eastbourne. At Wimbledon, Kournikova lost to Venus Williams in the fourth round. She also reached the final in mixed doubles, partnering with Jonas Björkman, but they lost to Leander Paes and Lisa Raymond. Kournikova again qualified for the year-end WTA Tour Championships, but lost to Mary Pierce in the first round, and ended the season as world No. 12. While Kournikova had a successful singles season, she was even more successful in doubles. After their victory at the Australian Open, she and Martina Hingis won tournaments in Indian Wells, Rome, Eastbourne and the WTA Tour Championships, and reached the final of the French Open, where they lost to Serena and Venus Williams. Partnering with Elena Likhovtseva, Kournikova also reached the final in Stanford. On 22 November 1999, she reached the world No. 1 ranking in doubles, and ended the season at this ranking. Anna Kournikova and Martina Hingis were presented with the WTA Award for Doubles Team of the Year. Kournikova opened her 2000 season by winning the Gold Coast Open doubles tournament partnering with Julie Halard. She then reached the singles semi-finals at the Medibank International Sydney, losing to Lindsay Davenport. At the Australian Open, she reached the fourth round in singles and the semi-finals in doubles.
That season, Kournikova reached eight semi-finals (Sydney, Scottsdale, Stanford, San Diego, Luxembourg, Leipzig and the Tour Championships), seven quarterfinals (Gold Coast, Tokyo, Amelia Island, Hamburg, Eastbourne, Zürich and Philadelphia) and one final. On 20 November 2000, she broke into the top 10 for the first time, reaching No. 8. She was also ranked No. 4 in doubles at the end of the season. Kournikova was once again more successful in doubles. She reached the final of the US Open in mixed doubles, partnering with Max Mirnyi, but they lost to Jared Palmer and Arantxa Sánchez Vicario. She also won six doubles titles – Gold Coast (with Julie Halard), Hamburg (with Natasha Zvereva), Filderstadt, Zürich, Philadelphia and the Tour Championships (with Martina Hingis). Her 2001 season was plagued by injuries, including a left foot stress fracture which forced her to withdraw from twelve tournaments, including the French Open and Wimbledon. She underwent surgery in April. She reached her second career Grand Slam quarterfinal, at the Australian Open. Kournikova then withdrew from several events due to continuing problems with her left foot and did not return until Leipzig. With Barbara Schett, she won the doubles title in Sydney. She then lost in the finals in Tokyo, partnering with Iroda Tulyaganova, and at San Diego, partnering with Martina Hingis. Hingis and Kournikova also won the Kremlin Cup. At the end of the 2001 season, she was ranked No. 74 in singles and No. 26 in doubles. Kournikova was quite successful in 2002. She reached the semi-finals of Auckland, Tokyo, Acapulco and San Diego, and the final of the China Open, losing to Anna Smashnova. This was Kournikova's last singles final. With Martina Hingis, she lost in the final at Sydney, but they won their second Grand Slam title together, the Australian Open. They also lost in the quarterfinals of the US Open. With Chanda Rubin, Kournikova reached the semi-finals of Wimbledon, but they lost to Serena and Venus Williams. Partnering Janet Lee, she won the Shanghai title. At the end of the 2002 season, she was ranked No. 35 in singles and No. 11 in doubles. In 2003, Anna Kournikova collected her first Grand Slam match victory in two years at the Australian Open. She defeated Henrieta Nagyová in the first round, and then lost to Justine Henin-Hardenne in the second round. She withdrew from Tokyo due to a sprained back suffered at the Australian Open and did not return to the tour until Miami. On 9 April, in what would be the final WTA match of her career, Kournikova dropped out in the first round of the Family Circle Cup in Charleston, due to a left adductor strain. Her singles world ranking was 67. She reached the semi-finals at the ITF tournament in Sea Island, before withdrawing from a match versus Maria Sharapova due to the adductor injury. She lost in the first round of the ITF tournament in Charlottesville. She did not compete for the rest of the season due to a continuing back injury. At the end of the 2003 season and her professional career, she was ranked No. 305 in singles and No. 176 in doubles. Kournikova's two Grand Slam doubles titles came in 1999 and 2002, both at the Australian Open in the women's doubles event with partner Martina Hingis. Kournikova proved to be a successful doubles player on the professional circuit, winning 16 tournament doubles titles, including two Australian Opens, reaching the finals in mixed doubles at the US Open and at Wimbledon, and attaining the No. 1 ranking in doubles in the WTA Tour rankings.
Her pro career doubles record was 200–71. However, her singles career plateaued after 1999. For the most part, she managed to retain her ranking between 10 and 15 (her career high singles ranking was No.8), but her expected finals breakthrough failed to occur; she only reached four finals out of 130 singles tournaments, never in a Grand Slam event, and never won one. Her singles record is 209–129. Her final playing years were marred by a string of injuries, especially back injuries, which caused her ranking to erode gradually. As a personality Kournikova was among the most common search strings for both articles and images in her prime. Kournikova has not played on the WTA Tour since 2003, but still plays exhibition matches for charitable causes. In late 2004, she participated in three events organized by Elton John and by fellow tennis players Serena Williams and Andy Roddick. In January 2005, she played in a doubles charity event for the Indian Ocean tsunami with John McEnroe, Andy Roddick, and Chris Evert. In November 2005, she teamed up with Martina Hingis, playing against Lisa Raymond and Samantha Stosur in the WTT finals for charity. Kournikova is also a member of the St. Louis Aces in the World Team Tennis (WTT), playing doubles only. In September 2008, Kournikova showed up for the 2008 Nautica Malibu Triathlon held at Zuma Beach in Malibu, California. The Race raised funds for children's Hospital Los Angeles. She won that race for women's K-Swiss team. On 27 September 2008, Kournikova played exhibition mixed doubles matches in Charlotte, North Carolina, partnering with Tim Wilkison and Karel Nováček. Kournikova and Wilkison defeated Jimmy Arias and Chanda Rubin, and then Kournikova and Novacek defeated Rubin and Wilkison. On 12 October 2008, Anna Kournikova played one exhibition match for the annual charity event, hosted by Billie Jean King and Elton John, and raised more than $400,000 for the Elton John AIDS Foundation and Atlanta AIDS Partnership Fund. She played doubles with Andy Roddick (they were coached by David Chang) versus Martina Navratilova and Jesse Levine (coached by Billie Jean King); Kournikova and Roddick won. Kournikova competed alongside John McEnroe, Tracy Austin and Jim Courier at the "Legendary Night", which was held on 2 May 2009, at the Turning Stone Event Center in Verona, New York. The exhibition included a mixed doubles match of McEnroe and Austin against Courier and Kournikova. In 2008, she was named a spokesperson for K-Swiss. In 2005, Kournikova stated that if she were 100% fit, she would like to come back and compete again. In June 2010, Kournikova reunited with her doubles partner Martina Hingis to participate in competitive tennis for the first time in seven years in the Invitational Ladies Doubles event at Wimbledon. On 29 June 2010 they defeated the British pair Samantha Smith and Anne Hobbs. Kournikova plays right-handed with a two-handed backhand. She is a great player at the net. She can hit forceful groundstrokes and also drop shots. Her playing style fits the profile for a doubles player, and is complemented by her height. She has been compared to such doubles specialists as Pam Shriver and Peter Fleming. Kournikova was in a relationship with fellow Russian, Pavel Bure, an NHL ice hockey player. The two met in 1999, when Kournikova was still linked to Bure's former Russian teammate Sergei Fedorov. 
Bure and Kournikova were reported to have been engaged in 2000 after a reporter took a photo of them together in a Florida restaurant where Bure supposedly asked Kournikova to marry him. As the story made headlines in Russia, where they were both heavily followed in the media as celebrities, Bure and Kournikova both denied any engagement. Kournikova, 10 years younger than Bure, was 18 years old at the time. Fedorov claimed that he and Kournikova were married in 2001, and divorced in 2003. Kournikova's representatives deny any marriage to Fedorov; however, Fedorov's agent Pat Brisson claims that although he does not know when they got married, he knew "Fedorov was married". Kournikova started dating singer Enrique Iglesias in late 2001 after she had appeared in his music video for "Escape". She has consistently refused to directly confirm or deny the status of her personal relationships. In June 2008, Iglesias was quoted by the "Daily Star" as having married Kournikova the previous year and subsequently separated. The couple have invested in a $20 million home built on a private island in Miami. They have three children: twins Nicholas and Lucy, born on 16 December 2017, and a daughter, born on 30 January 2020. It was reported in 2010 that Kournikova had become an American citizen. Most of Kournikova's fame has come from the publicity surrounding her looks and her personal life. During her debut at the 1996 US Open at the age of 15, the Western world noticed her beauty, and soon pictures of her appeared in numerous magazines worldwide. In 2000, Kournikova became the new face for Berlei's shock absorber sports bras, and appeared in the "only the ball should bounce" billboard campaign. Following that, she was cast by the Farrelly brothers for a minor role in the 2000 film "Me, Myself & Irene" starring Jim Carrey and Renée Zellweger. Photographs of her have appeared on the covers of various publications, including men's magazines such as "FHM" and "Maxim", and she posed in bikinis and swimsuits for the much-publicized 2004 "Sports Illustrated" Swimsuit Issue. Kournikova was named one of "People"'s 50 Most Beautiful People in 1998 and was voted "hottest female athlete" on ESPN.com. In 2002, she also placed first in "FHM's 100 Sexiest Women in the World" in the US and UK editions. By contrast, ESPN – citing the degree of hype as compared to actual accomplishments as a singles player – ranked Kournikova 18th in its "25 Biggest Sports Flops of the Past 25 Years". Kournikova was also ranked No. 1 in the ESPN Classic series "Who's number 1?" when the series featured sport's most overrated athletes. She continued to be the most searched athlete on the Internet through 2008, even though she had retired from the professional tennis circuit years earlier. After slipping from first to sixth in 2009, she moved back up to third place among athletes in terms of search popularity in 2010. In October 2010, Kournikova visited NBC's "The Biggest Loser", where she led the contestants in a tennis-workout challenge. In May 2011, it was announced that Kournikova would join "The Biggest Loser" as a regular celebrity trainer in season 12. She did not return for season 13.
https://en.wikipedia.org/wiki?curid=890
Agnosticism Agnosticism is the view that the existence of God, of the divine, or of the supernatural is unknown or unknowable. Another definition is the view that "human reason is incapable of providing sufficient rational grounds to justify either the belief that God exists or the belief that God does not exist." The English biologist Thomas Henry Huxley coined the word "agnostic" in 1869, and said "It simply means that a man shall not say he knows or believes that which he has no scientific grounds for professing to know or believe." Earlier thinkers, however, had written works that promoted agnostic points of view, such as Sanjaya Belatthaputta, a 5th-century BCE Indian philosopher who expressed agnosticism about any afterlife; and Protagoras, a 5th-century BCE Greek philosopher who expressed agnosticism about the existence of "the gods". Agnosticism is the doctrine or tenet of agnostics with regard to the existence of anything beyond and behind material phenomena or to knowledge of a First Cause or God, and is not a religion. A scientist above all else, Huxley presented agnosticism as a form of demarcation. A hypothesis with no supporting, objective, testable evidence is not an objective, scientific claim. As such, there would be no way to test said hypotheses, leaving the results inconclusive. His agnosticism was not compatible with forming a belief as to the truth, or falsehood, of the claim at hand. Karl Popper would also describe himself as an agnostic. According to philosopher William L. Rowe, in this strict sense, agnosticism is the view that human reason is incapable of providing sufficient rational grounds to justify either the belief that God exists or the belief that God does not exist. George H. Smith, while admitting that the narrow definition of atheist and the broad definition of agnostic were the common usages of those words, promoted broadening the definition of atheist and narrowing the definition of agnostic. Smith rejects agnosticism as a third alternative to theism and atheism and promotes terms such as agnostic atheism (the view of those who do not "believe" in the existence of any deity, but do not claim to "know" if a deity does or does not exist) and agnostic theism (the view of those who do not claim to "know" of the existence of any deity, but still "believe" in such an existence). "Agnostic" was used by Thomas Henry Huxley in a speech at a meeting of the Metaphysical Society in 1869 to describe his philosophy, which rejects all claims of spiritual or mystical knowledge. Early Christian church leaders used the Greek word "gnosis" (knowledge) to describe "spiritual knowledge". Agnosticism is not to be confused with religious views opposing the ancient religious movement of Gnosticism in particular; Huxley used the term in a broader, more abstract sense. Huxley identified agnosticism not as a creed but rather as a method of skeptical, evidence-based inquiry. In recent years, scientific literature dealing with neuroscience and psychology has used the word to mean "not knowable". In technical and marketing literature, "agnostic" can also mean independence from some parameters—for example, "platform agnostic" or "hardware agnostic". Scottish Enlightenment philosopher David Hume contended that meaningful statements about the universe are always qualified by some degree of doubt.
He asserted that the fallibility of human beings means that they cannot obtain absolute certainty except in trivial cases where a statement is true by definition (e.g. tautologies such as "all bachelors are unmarried" or "all triangles have three corners"). Throughout the history of Hinduism there has been a strong tradition of philosophic speculation and skepticism. The Rig Veda takes an agnostic view on the fundamental question of how the universe and the gods were created. Nasadiya Sukta ("Creation Hymn") in the tenth chapter of the Rig Veda says: Aristotle, Anselm, Aquinas, Descartes, and Gödel presented arguments attempting to rationally prove the existence of God. The skeptical empiricism of David Hume, the antinomies of Immanuel Kant, and the existential philosophy of Søren Kierkegaard convinced many later philosophers to abandon these attempts, regarding it impossible to construct any unassailable proof for the existence or non-existence of God. In his 1844 book, "Philosophical Fragments", Kierkegaard writes: Hume was Huxley's favourite philosopher, calling him "the Prince of Agnostics". Diderot wrote to his mistress, telling of a visit by Hume to the Baron D'Holbach, and describing how a word for the position that Huxley would later describe as agnosticism didn't seem to exist, or at least wasn't common knowledge, at the time. Raised in a religious environment, Charles Darwin (1809-1882) studied to be an Anglican clergyman. While eventually doubting parts of his faith, Darwin continued to help in church affairs, even while avoiding church attendance. Darwin stated that it would be "absurd to doubt that a man might be an ardent theist and an evolutionist". Although reticent about his religious views, in 1879 he wrote that "I have never been an atheist in the sense of denying the existence of a God. – I think that generally ... an agnostic would be the most correct description of my state of mind." Agnostic views are as old as philosophical skepticism, but the terms agnostic and agnosticism were created by Huxley (1825-1895) to sum up his thoughts on contemporary developments of metaphysics about the "unconditioned" (William Hamilton) and the "unknowable" (Herbert Spencer). Though Huxley began to use the term "agnostic" in 1869, his opinions had taken shape some time before that date. In a letter of September 23, 1860, to Charles Kingsley, Huxley discussed his views extensively: And again, to the same correspondent, May 6, 1863: Of the origin of the name agnostic to describe this attitude, Huxley gave the following account: In 1889, Huxley wrote:Therefore, although it be, as I believe, demonstrable that we have no real knowledge of the authorship, or of the date of composition of the Gospels, as they have come down to us, and that nothing better than more or less probable guesses can be arrived at on that subject. William Stewart Ross (1844-1906) wrote under the name of Saladin. He was associated with Victorian Freethinkers and the organization the British Secular Union. He edited the "Secular Review" from 1882; it was renamed "Agnostic Journal and Eclectic Review" and closed in 1907. Ross championed agnosticism in opposition to the atheism of Charles Bradlaugh as an open-ended spiritual exploration. In "Why I am an Agnostic" (c. 1889) he claims that agnosticism is "the very reverse of atheism". Bertrand Russell (1872-1970) declared "Why I Am Not a Christian" in 1927, a classic statement of agnosticism. 
He calls upon his readers to "stand on their own two feet and look fair and square at the world with a fearless attitude and a free intelligence". In 1939, Russell gave a lecture on "The existence and nature of God", in which he characterized himself as an atheist. He said: However, later in the same lecture, discussing modern non-anthropomorphic concepts of God, Russell states: In Russell's 1947 pamphlet, "Am I An Atheist or an Agnostic?" (subtitled "A Plea For Tolerance in the Face of New Dogmas"), he ruminates on the problem of what to call himself: In his 1953 essay, "What Is An Agnostic?" Russell states: Later in the essay, Russell adds: In 1965 Christian theologian Leslie Weatherhead (1893–1976) published "The Christian Agnostic", in which he argues: Although radical and unpalatable to conventional theologians, Weatherhead's "agnosticism" falls far short of Huxley's, and short even of "weak agnosticism": Robert G. Ingersoll (1833-1899), an Illinois lawyer and politician who evolved into a well-known and sought-after orator in 19th-century America, has been referred to as the "Great Agnostic". In an 1896 lecture titled "Why I Am An Agnostic", Ingersoll related why he was an agnostic: In the conclusion of the speech he simply sums up the agnostic position as: In 1885 Ingersoll explained his comparative view of agnosticism and atheism as follows: Canon Bernard Iddings Bell (1886-1958), a popular cultural commentator, Episcopal priest, and author, lauded the necessity of agnosticism in "Beyond Agnosticism: A Book for Tired Mechanists", calling it the foundation of "all intelligent Christianity." Agnosticism was a temporary mindset in which one rigorously questioned the truths of the age, including the way in which one believed God. His view of Robert Ingersoll and Thomas Paine was that they were not denouncing true Christianity but rather "a gross perversion of it." Part of the misunderstanding stemmed from ignorance of the concepts of God and religion. Historically, a god was any real, perceivable force that ruled the lives of humans and inspired admiration, love, fear, and homage; religion was the practice of it. Ancient peoples worshiped gods with real counterparts, such as Mammon (money and material things), Nabu (rationality), or Ba'al (violent weather); Bell argued that modern peoples were still paying homage—with their lives and their children's lives—to these old gods of wealth, physical appetites, and self-deification. Thus, if one attempted to be agnostic passively, he or she would incidentally join the worship of the world's gods. In "Unfashionable Convictions (1931)," he criticized the Enlightenment's complete faith in human sensory perception, augmented by scientific instruments, as a means of accurately grasping Reality. Firstly, it was fairly new, an innovation of the Western World, which Aristotle invented and Thomas Aquinas revived among the scientific community. Secondly, the divorce of "pure" science from human experience, as manifested in American Industrialization, had completely altered the environment, often disfiguring it, so as to suggest its insufficiency to human needs. Thirdly, because scientists were constantly producing more data—to the point where no single human could grasp it all at once—it followed that human intelligence was incapable of attaining a complete understanding of universe; therefore, to admit the mysteries of the unobserved universe was to be "actually" scientific. 
Bell believed that there were two other ways that humans could perceive and interact with the world. "Artistic experience" was how one expressed meaning through speaking, writing, painting, gesturing—any sort of communication which shared insight into a human's inner reality. "Mystical experience" was how one could "read" people and harmonize with them, being what we commonly call love. In summary, man was a scientist, artist, and lover. Without exercising all three, a person became "lopsided." Bell considered a humanist to be a person who cannot rightly ignore the other ways of knowing. However, humanism, like agnosticism, was also temporal, and would eventually lead to either scientific materialism or theism. He lays out the following thesis: Demographic research services normally do not differentiate between various types of non-religious respondents, so agnostics are often classified in the same category as atheists or other non-religious people. A 2010 survey published in "Encyclopædia Britannica" found that the non-religious people or the agnostics made up about 9.6% of the world's population. A November–December 2006 poll published in the "Financial Times" gives rates for the United States and five European countries. The rates of agnosticism in the United States were at 14%, while the rates of agnosticism in the European countries surveyed were considerably higher: Italy (20%), Spain (30%), Great Britain (35%), Germany (25%), and France (32%). A study conducted by the Pew Research Center found that about 16% of the world's people, the third largest group after Christianity and Islam, have no religious affiliation. According to a 2012 report by the Pew Research Center, agnostics made up 3.3% of the US adult population. In the "U.S. Religious Landscape Survey", conducted by the Pew Research Center, 55% of agnostic respondents expressed "a belief in God or a universal spirit", whereas 41% stated that they thought that they felt a tension "being non-religious in a society where most people are religious". According to the 2011 Australian Bureau of Statistics, 22% of Australians have "no religion", a category that includes agnostics. Between 64% and 65% of Japanese and up to 81% of Vietnamese are atheists, agnostics, or do not believe in a god. An official European Union survey reported that 3% of the EU population is unsure about their belief in a god or spirit. Agnosticism is criticized from a variety of standpoints. Some religious thinkers see agnosticism as limiting the mind's capacity to know reality to materialism. Some atheists criticize the use of the term agnosticism as functionally indistinguishable from atheism; this results in frequent criticisms of those who adopt the term as avoiding the atheist label. Theistic critics claim that agnosticism is impossible in practice, since a person can live only either as if God did not exist ("etsi deus non-daretur"), or as if God did exist ("etsi deus daretur"). Religious scholars such as Laurence B. Brown criticize the misuse of the word agnosticism, claiming that it has become one of the most misapplied terms in metaphysics. Brown raises the question, "You claim that nothing can be known with certainty ... how, then, can you be so sure?" According to Pope Benedict XVI, strong agnosticism in particular contradicts itself in affirming the power of reason to know scientific truth. He blames the exclusion of reasoning from religion and ethics for dangerous pathologies such as crimes against humanity and ecological disasters. 
"Agnosticism", said Ratzinger, "is always the fruit of a refusal of that knowledge which is in fact offered to man ... The knowledge of God has always existed". He asserted that agnosticism is a choice of comfort, pride, dominion, and utility over truth, and is opposed by the following attitudes: the keenest self-criticism, humble listening to the whole of existence, the persistent patience and self-correction of the scientific method, a readiness to be purified by the truth. The Catholic Church sees merit in examining what it calls "partial agnosticism", specifically those systems that "do not aim at constructing a complete philosophy of the unknowable, but at excluding special kinds of truth, notably religious, from the domain of knowledge". However, the Church is historically opposed to a full denial of the capacity of human reason to know God. The Council of the Vatican declares, "God, the beginning and end of all, can, by the natural light of human reason, be known with certainty from the works of creation". Blaise Pascal argued that even if there were truly no evidence for God, agnostics should consider what is now known as Pascal's Wager: the infinite expected value of acknowledging God is always greater than the finite expected value of not acknowledging his existence, and thus it is a safer "bet" to choose God. Peter Kreeft and Ronald Tacelli cited 20 arguments for God's existence, asserting that any demand for evidence testable in a laboratory is in effect asking God, the supreme being, to become man's servant. According to Richard Dawkins, a distinction between agnosticism and atheism is unwieldy and depends on how close to zero a person is willing to rate the probability of existence for any given god-like entity. About himself, Dawkins continues, "I am agnostic only to the extent that I am agnostic about fairies at the bottom of the garden." Dawkins also identifies two categories of agnostics; "Temporary Agnostics in Practice" (TAPs), and "Permanent Agnostics in Principle" (PAPs). He states that "agnosticism about the existence of God belongs firmly in the temporary or TAP category. Either he exists or he doesn't. It is a scientific question; one day we may know the answer, and meanwhile we can say something pretty strong about the probability" and considers PAP a "deeply inescapable kind of fence-sitting". A related concept is ignosticism, the view that a coherent definition of a deity must be put forward before the question of the existence of a deity can be meaningfully discussed. If the chosen definition is not coherent, the ignostic holds the noncognitivist view that the existence of a deity is meaningless or empirically untestable. A. J. Ayer, Theodore Drange, and other philosophers see both atheism and agnosticism as incompatible with ignosticism on the grounds that atheism and agnosticism accept "a deity exists" as a meaningful proposition that can be argued for or against.
https://en.wikipedia.org/wiki?curid=894
Argon Argon is a chemical element with the symbol Ar and atomic number 18. It is in group 18 of the periodic table and is a noble gas. Argon is the third-most abundant gas in the Earth's atmosphere, at 0.934% (9340 ppmv). It is more than twice as abundant as water vapor (which averages about 4000 ppmv, but varies greatly), 23 times as abundant as carbon dioxide (400 ppmv), and more than 500 times as abundant as neon (18 ppmv). Argon is the most abundant noble gas in Earth's crust, comprising 0.00015% of the crust. Nearly all of the argon in the Earth's atmosphere is radiogenic argon-40, derived from the decay of potassium-40 in the Earth's crust. In the universe, argon-36 is by far the most common argon isotope, as it is the most easily produced by stellar nucleosynthesis in supernovas. The name "argon" is derived from the Greek word ἀργόν, the neuter singular form of ἀργός, meaning "lazy" or "inactive", as a reference to the fact that the element undergoes almost no chemical reactions. The complete octet (eight electrons) in the outer atomic shell makes argon stable and resistant to bonding with other elements. Its triple point temperature of 83.8058 K is a defining fixed point in the International Temperature Scale of 1990. Argon is produced industrially by the fractional distillation of liquid air. Argon is mostly used as an inert shielding gas in welding and other high-temperature industrial processes where ordinarily unreactive substances become reactive; for example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. Argon is also used in incandescent and fluorescent lighting, and in other gas-discharge tubes. Argon makes a distinctive blue-green gas laser. Argon is also used in fluorescent glow starters. Argon has approximately the same solubility in water as oxygen and is 2.5 times more soluble in water than nitrogen. Argon is colorless, odorless, nonflammable and nontoxic as a solid, liquid or gas. Argon is chemically inert under most conditions and forms no confirmed stable compounds at room temperature. Although argon is a noble gas, it can form some compounds under various extreme conditions. Argon fluorohydride (HArF), a compound of argon with fluorine and hydrogen that is stable below 17 K (−256 °C), has been demonstrated. Although the neutral ground-state chemical compounds of argon are presently limited to HArF, argon can form clathrates with water when atoms of argon are trapped in a lattice of water molecules. Ions, such as ArH+, and excited-state complexes, such as ArF, have been demonstrated. Theoretical calculations predict several more argon compounds that should be stable but have not yet been synthesized. "Argon" (Greek ἀργόν, "lazy" or "inactive") is named in reference to its chemical inactivity. This chemical property of the first noble gas to be discovered impressed those who named it. An unreactive gas was suspected to be a component of air by Henry Cavendish in 1785. Argon was first isolated from air in 1894 by Lord Rayleigh and Sir William Ramsay at University College London by removing oxygen, carbon dioxide, water, and nitrogen from a sample of clean air. They first accomplished this by replicating an experiment of Henry Cavendish's.
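As a quick arithmetic check of the abundance comparisons above, the ratios can be recomputed from the ppmv figures quoted in the text; the small Python snippet below uses only those numbers and is purely illustrative.

```python
# Recompute the relative-abundance claims from the quoted ppmv values:
# argon (9340 ppmv) versus water vapor, carbon dioxide and neon.
ppmv = {"argon": 9340, "water vapor": 4000, "carbon dioxide": 400, "neon": 18}

for gas in ("water vapor", "carbon dioxide", "neon"):
    ratio = ppmv["argon"] / ppmv[gas]
    print(f"argon / {gas}: {ratio:.1f}x")
# -> about 2.3x water vapor, 23.4x CO2 and 519x neon, matching the
#    "more than twice", "23 times" and "more than 500 times" statements.
```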
They trapped a mixture of atmospheric air with additional oxygen in a test-tube (A) upside-down over a large quantity of dilute alkali solution (B), which in Cavendish's original experiment was potassium hydroxide, and conveyed a current through wires insulated by U-shaped glass tubes (CC) which sealed around the platinum wire electrodes, leaving the ends of the wires (DD) exposed to the gas and insulated from the alkali solution. The arc was powered by a battery of five Grove cells and a Ruhmkorff coil of medium size. The alkali absorbed the oxides of nitrogen produced by the arc and also carbon dioxide. They operated the arc until no more reduction of volume of the gas could be seen for at least an hour or two and the spectral lines of nitrogen disappeared when the gas was examined. The remaining oxygen was reacted with alkaline pyrogallate to leave behind an apparently non-reactive gas which they called argon. Before isolating the gas, they had determined that nitrogen produced from chemical compounds was 0.5% lighter than nitrogen from the atmosphere. The difference was slight, but it was important enough to attract their attention for many months. They concluded that there was another gas in the air mixed in with the nitrogen. Argon was also encountered in 1882 through independent research of H. F. Newall and W. N. Hartley. Each observed new lines in the emission spectrum of air that did not match known elements. Until 1957, the symbol for argon was "A", but now it is "Ar". Argon constitutes 0.934% by volume and 1.288% by mass of the Earth's atmosphere, and air is the primary industrial source of purified argon products. Argon is isolated from air by fractionation, most commonly by cryogenic fractional distillation, a process that also produces purified nitrogen, oxygen, neon, krypton and xenon. The Earth's crust and seawater contain 1.2 ppm and 0.45 ppm of argon, respectively. The main isotopes of argon found on Earth are argon-40 (99.6%), argon-36 (0.34%), and argon-38 (0.06%). Naturally occurring potassium-40, with a half-life of 1.25 billion years, decays to stable argon-40 (11.2%) by electron capture or positron emission, and also to stable calcium-40 (88.8%) by beta decay. These properties and ratios are used to determine the age of rocks by K–Ar dating. In the Earth's atmosphere, argon-39 is made by cosmic ray activity, primarily by neutron capture of argon-40 followed by two-neutron emission. In the subsurface environment, it is also produced through neutron capture by potassium-39, followed by proton emission. Argon-37 is created from neutron capture by calcium-40 followed by an alpha particle emission as a result of subsurface nuclear explosions. It has a half-life of 35 days. Between locations in the Solar System, the isotopic composition of argon varies greatly. Where the major source of argon is the decay of potassium-40 in rocks, argon-40 will be the dominant isotope, as it is on Earth. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide argon-36. Correspondingly, solar argon contains 84.6% argon-36 (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. This contrasts with the low abundance of primordial argon-36 in Earth's atmosphere, which is only 31.5 ppmv (= 9340 ppmv × 0.337%), comparable with that of neon (18.18 ppmv) on Earth and with interplanetary gases, measured by probes. The atmospheres of Mars, Mercury and Titan (the largest moon of Saturn) contain argon, predominantly as argon-40, and its content may be as high as 1.93% (Mars).
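The decay scheme above is what underlies K–Ar dating. As a minimal illustrative sketch (not taken from this article), the age of a closed-system rock that started with no argon can be computed in Python from the half-life and branching fraction quoted above; the function name and the example ratio are hypothetical.

import math

# Figures quoted above: 40K half-life of about 1.25 billion years, with ~11.2%
# of decays yielding 40Ar (electron capture / positron emission).
HALF_LIFE_40K_YEARS = 1.25e9
DECAY_CONSTANT = math.log(2) / HALF_LIFE_40K_YEARS  # per year
BRANCH_TO_AR40 = 0.112

def k_ar_age(ar40_radiogenic, k40_remaining):
    """Age in years from measured radiogenic 40Ar and remaining 40K (same
    units), assuming a closed system that initially contained no argon."""
    ratio = ar40_radiogenic / k40_remaining
    return math.log(1.0 + ratio / BRANCH_TO_AR40) / DECAY_CONSTANT

# Example: a radiogenic 40Ar/40K ratio of 0.05 corresponds to roughly 0.67 Gyr.
print(f"{k_ar_age(0.05, 1.0) / 1e9:.2f} Gyr")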
The predominance of radiogenic argon-40 is the reason the standard atomic weight of terrestrial argon is greater than that of the next element, potassium, a fact that was puzzling when argon was discovered. Mendeleev positioned the elements on his periodic table in order of atomic weight, but the inertness of argon suggested a placement "before" the reactive alkali metal. Henry Moseley later solved this problem by showing that the periodic table is actually arranged in order of atomic number (see History of the periodic table). Argon's complete octet of electrons indicates full s and p subshells. This full valence shell makes argon very stable and extremely resistant to bonding with other elements. Before 1962, argon and the other noble gases were considered to be chemically inert and unable to form compounds; however, compounds of the heavier noble gases have since been synthesized. The first argon compound with tungsten pentacarbonyl, W(CO)5Ar, was isolated in 1975. However, it was not widely recognised at that time. In August 2000, another argon compound, argon fluorohydride (HArF), was formed by researchers at the University of Helsinki, by shining ultraviolet light onto frozen argon containing a small amount of hydrogen fluoride with caesium iodide. This discovery caused the recognition that argon could form weakly bound compounds, even though it was not the first. It is stable up to 17 K (−256 °C). The metastable ArCF22+ dication, which is valence-isoelectronic with carbonyl fluoride and phosgene, was observed in 2010. Argon-36, in the form of argon hydride (argonium) ions, has been detected in interstellar medium associated with the Crab Nebula supernova; this was the first noble-gas molecule detected in outer space. Solid argon hydride (Ar(H2)2) has the same crystal structure as the MgZn2 Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the H2 molecules in Ar(H2)2 dissociate above 175 GPa. Argon is produced industrially by the fractional distillation of liquid air in a cryogenic air separation unit, a process that separates liquid nitrogen, which boils at 77.3 K, from argon, which boils at 87.3 K, and liquid oxygen, which boils at 90.2 K. About 700,000 tonnes of argon are produced worldwide every year. 40Ar, the most abundant isotope of argon, is produced by the decay of 40K with a half-life of 1.25 billion years by electron capture or positron emission. Because of this, it is used in potassium–argon dating to determine the age of rocks. Argon has several properties that make it desirable for industrial use: it is chemically inert, inexpensive, and a poor conductor of heat. Other noble gases would be equally suitable for most of these applications, but argon is by far the cheapest. Argon is inexpensive, since it occurs naturally in air and is readily obtained as a byproduct of cryogenic air separation in the production of liquid oxygen and liquid nitrogen: the primary constituents of air are used on a large industrial scale. The other noble gases (except helium) are produced this way as well, but argon is the most plentiful by far. The bulk of argon applications arise simply because it is inert and relatively cheap. Argon is used in some high-temperature industrial processes where ordinarily non-reactive substances become reactive. For example, an argon atmosphere is used in graphite electric furnaces to prevent the graphite from burning. For some of these processes, the presence of nitrogen or oxygen gases might cause defects within the material.
Argon is used in some types of arc welding such as gas metal arc welding and gas tungsten arc welding, as well as in the processing of titanium and other reactive elements. An argon atmosphere is also used for growing crystals of silicon and germanium. Argon is used in the poultry industry to asphyxiate birds, either for mass culling following disease outbreaks, or as a means of slaughter more humane than electric stunning. Argon is denser than air and displaces oxygen close to the ground during inert gas asphyxiation. Its non-reactive nature makes it suitable in a food product, and since it replaces oxygen within the dead bird, argon also enhances shelf life. Argon is sometimes used for extinguishing fires where valuable equipment may be damaged by water or foam. Liquid argon is used as the target for neutrino experiments and direct dark matter searches. The interaction between the hypothetical WIMPs and an argon nucleus produces scintillation light that is detected by photomultiplier tubes. Two-phase detectors containing argon gas are used to detect the ionized electrons produced during the WIMP–nucleus scattering. As with most other liquefied noble gases, argon has a high scintillation light yield (about 51 photons/keV), is transparent to its own scintillation light, and is relatively easy to purify. Compared to xenon, argon is cheaper and has a distinct scintillation time profile, which allows the separation of electronic recoils from nuclear recoils. On the other hand, its intrinsic beta-ray background is larger due to argon-39 contamination, unless one uses argon from underground sources, which has much less argon-39. Most of the argon in the Earth's atmosphere was produced by electron capture of long-lived potassium-40 (40K + e− → 40Ar + ν) present in natural potassium within the Earth. The argon-39 activity in the atmosphere is maintained by cosmogenic production through the knockout reaction 40Ar(n,2n)39Ar and similar reactions. The half-life of argon-39 is only 269 years. As a result, underground argon, shielded by rock and water, has much less argon-39 contamination. Dark-matter detectors currently operating with liquid argon include DarkSide, WArP, ArDM, microCLEAN and DEAP. Neutrino experiments include ICARUS and MicroBooNE, both of which use high-purity liquid argon in a time projection chamber for fine grained three-dimensional imaging of neutrino interactions. At Linköping University, Sweden, the inert gas is being utilized in a vacuum chamber in which plasma is introduced to ionize metallic films. This process results in a film usable for manufacturing computer processors. The new process would eliminate the need for chemical baths and use of expensive, dangerous and rare materials. Argon is used to displace oxygen- and moisture-containing air in packaging material to extend the shelf-lives of the contents (argon has the European food additive code E938). Aerial oxidation, hydrolysis, and other chemical reactions that degrade the products are retarded or prevented entirely. High-purity chemicals and pharmaceuticals are sometimes packed and sealed in argon. In winemaking, argon is used in a variety of activities to provide a barrier against oxygen at the liquid surface, which can spoil wine by fueling both microbial metabolism (as with acetic acid bacteria) and standard redox chemistry. Argon is sometimes used as the propellant in aerosol cans. Argon is also used as a preservative for such products as varnish, polyurethane, and paint, by displacing air to prepare a container for storage.
Since 2002, the American National Archives stores important national documents such as the Declaration of Independence and the Constitution within argon-filled cases to inhibit their degradation. Argon is preferable to the helium that had been used in the preceding five decades, because helium gas escapes through the intermolecular pores in most containers and must be regularly replaced. Argon may be used as the inert gas within Schlenk lines and gloveboxes. Argon is preferred to less expensive nitrogen in cases where nitrogen may react with the reagents or apparatus. Argon may be used as the carrier gas in gas chromatography and in electrospray ionization mass spectrometry; it is the gas of choice for the plasma used in ICP spectroscopy. Argon is preferred for the sputter coating of specimens for scanning electron microscopy. Argon gas is also commonly used for sputter deposition of thin films as in microelectronics and for wafer cleaning in microfabrication. Cryosurgery procedures such as cryoablation use liquid argon to destroy tissue such as cancer cells. It is used in a procedure called "argon-enhanced coagulation", a form of argon plasma beam electrosurgery. The procedure carries a risk of producing gas embolism and has resulted in the death of at least one patient. Blue argon lasers are used in surgery to weld arteries, destroy tumors, and correct eye defects. Argon has also been used experimentally to replace nitrogen in the breathing or decompression mix known as Argox, to speed the elimination of dissolved nitrogen from the blood. Incandescent lights are filled with argon, to preserve the filaments at high temperature from oxidation. It is used for the specific way it ionizes and emits light, such as in plasma globes and calorimetry in experimental particle physics. Gas-discharge lamps filled with pure argon provide lilac/violet light; with argon and some mercury, blue light. Argon is also used for blue and green argon-ion lasers. Argon is used for thermal insulation in energy-efficient windows. Argon is also used in technical scuba diving to inflate a dry suit because it is inert and has low thermal conductivity. Argon is used as a propellant in the development of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR). Compressed argon gas is allowed to expand, to cool the seeker heads of some versions of the AIM-9 Sidewinder missile and other missiles that use cooled thermal seeker heads. The gas is stored at high pressure. Argon-39, with a half-life of 269 years, has been used for a number of applications, primarily ice core and ground water dating. Also, potassium–argon dating and related argon-argon dating is used to date sedimentary, metamorphic, and igneous rocks. Argon has been used by athletes as a doping agent to simulate hypoxic conditions. In 2014, the World Anti-Doping Agency (WADA) added argon and xenon to the list of prohibited substances and methods, although at this time there is no reliable test for abuse. Although argon is non-toxic, it is 38% more dense than air and therefore considered a dangerous asphyxiant in closed areas. It is difficult to detect because it is colorless, odorless, and tasteless. A 1994 incident, in which a man was asphyxiated after entering an argon-filled section of oil pipe under construction in Alaska, highlights the dangers of argon tank leakage in confined spaces and emphasizes the need for proper use, storage and handling.
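The quoted 38% figure can be checked from molar masses: assuming dry air with a mean molar mass of roughly 28.96 g/mol, the ratio to argon (39.95 g/mol) is 39.95 / 28.96 ≈ 1.38, so argon is about 38% denser than air at the same temperature and pressure, which is why it pools in low-lying, poorly ventilated spaces.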
https://en.wikipedia.org/wiki?curid=896
Arsenic Arsenic is a chemical element with the symbol As and atomic number 33. Arsenic occurs in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. Arsenic is a metalloid. It has various allotropes, but only the gray form, which has a metallic appearance, is important to industry. The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is a common n-type dopant in semiconductor electronic devices, and the optoelectronic compound gallium arsenide is the second most commonly used semiconductor after doped silicon. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining due to the toxicity of arsenic and its compounds. A few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic are an essential dietary element in rats, hamsters, goats, chickens, and presumably other species. A role in human metabolism is not known. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world. The United States' Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States' Agency for Toxic Substances and Disease Registry ranked arsenic as number 1 in its 2001 Priority List of Hazardous Substances at Superfund sites. Arsenic is classified as a Group-A carcinogen. The three most common arsenic allotropes are gray, yellow, and black arsenic, with gray being the most common. Gray arsenic (α-As, space group R-3m, No. 166) adopts a double-layered structure consisting of many interlocked, ruffled, six-membered rings. Because of weak bonding between the layers, gray arsenic is brittle and has a relatively low Mohs hardness of 3.5. Nearest and next-nearest neighbors form a distorted octahedral complex, with the three atoms in the same double-layer being slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 5.73 g/cm3. Gray arsenic is a semimetal, but becomes a semiconductor with a bandgap of 1.2–1.4 eV if amorphized. Gray arsenic is also the most stable form. Yellow arsenic is soft and waxy, and somewhat similar to tetraphosphorus (P4). Both have four atoms arranged in a tetrahedral structure in which each atom is bound to each of the other three atoms by a single bond. This unstable allotrope, being molecular, is the most volatile, least dense, and most toxic. Solid yellow arsenic is produced by rapid cooling of arsenic vapor, As4. It is rapidly transformed into gray arsenic by light. The yellow form has a density of 1.97 g/cm3. Black arsenic is similar in structure to black phosphorus. Black arsenic can also be formed by cooling vapor at around 100–220 °C and by crystallization of amorphous arsenic in the presence of mercury vapors. It is glassy and brittle. It is also a poor electrical conductor. Arsenic occurs in nature as a monoisotopic element, composed of one stable isotope, 75As. As of 2003, at least 33 radioisotopes have also been synthesized, ranging in atomic mass from 60 to 92. The most stable of these is 73As with a half-life of 80.30 days.
All other isotopes have half-lives of under one day, with the exception of 71As (t1/2 = 65.30 hours), 72As (t1/2 = 26.0 hours), 74As (t1/2 = 17.77 days), 76As (t1/2 = 1.0942 days), and 77As (t1/2 = 38.83 hours). Isotopes that are lighter than the stable 75As tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. At least 10 nuclear isomers have been described, ranging in atomic mass from 66 to 84. The most stable of arsenic's isomers is 68mAs with a half-life of 111 seconds. Arsenic has an electronegativity and ionization energies similar to those of its lighter congener phosphorus and as such readily forms covalent molecules with most of the nonmetals. Though stable in dry air, arsenic forms a golden-bronze tarnish upon exposure to humidity which eventually becomes a black surface layer. When heated in air, arsenic oxidizes to arsenic trioxide; the fumes from this reaction have an odor resembling garlic. This odor can be detected on striking arsenide minerals such as arsenopyrite with a hammer. It burns in oxygen to form arsenic trioxide and arsenic pentoxide, which have the same structure as the more well-known phosphorus compounds, and in fluorine to give arsenic pentafluoride. Arsenic (and some arsenic compounds) sublimes upon heating at atmospheric pressure, converting directly to a gaseous form without an intervening liquid state at about 615 °C. The triple point is at 3.63 MPa and about 817 °C. Arsenic makes arsenic acid with concentrated nitric acid, arsenous acid with dilute nitric acid, and arsenic trioxide with concentrated sulfuric acid; however, it does not react with water, alkalis, or non-oxidising acids. Arsenic reacts with metals to form arsenides, though these are not ionic compounds containing the As3− ion as the formation of such an anion would be highly endothermic and even the group 1 arsenides have properties of intermetallic compounds. Like germanium, selenium, and bromine, which like arsenic succeed the 3d transition series, arsenic is much less stable in the group oxidation state of +5 than its vertical neighbors phosphorus and antimony, and hence arsenic pentoxide and arsenic acid are potent oxidizers. Compounds of arsenic resemble in some respects those of phosphorus, which occupies the same group (column) of the periodic table. The most common oxidation states for arsenic are: −3 in the arsenides, which are alloy-like intermetallic compounds, +3 in the arsenites, and +5 in the arsenates and most organoarsenic compounds. Arsenic also bonds readily to itself, as seen in the square As4 units in the mineral skutterudite. In the +3 oxidation state, arsenic is typically pyramidal owing to the influence of the lone pair of electrons. One of the simplest arsenic compounds is the trihydride, the highly toxic, flammable, pyrophoric arsine (AsH3). This compound is generally regarded as stable, since at room temperature it decomposes only slowly. At temperatures of 250–300 °C decomposition to arsenic and hydrogen is rapid. Several factors, such as humidity, presence of light and certain catalysts (namely aluminium), increase the rate of decomposition. It oxidises readily in air to form arsenic trioxide and water, and analogous reactions take place with sulfur and selenium instead of oxygen. Arsenic forms colorless, odorless, crystalline oxides As2O3 ("white arsenic") and As2O5 which are hygroscopic and readily soluble in water to form acidic solutions.
Arsenic(V) acid is a weak acid and its salts are called arsenates; arsenate is the most common form of arsenic contamination of groundwater, a problem that affects many people. Synthetic arsenates include Scheele's Green (cupric hydrogen arsenate, acidic copper arsenate), calcium arsenate, and lead hydrogen arsenate. These three have been used as agricultural insecticides and poisons. The protonation steps between the arsenate and arsenic acid are similar to those between phosphate and phosphoric acid. Unlike phosphorous acid, arsenous acid is genuinely tribasic, with the formula As(OH)3. A broad variety of sulfur compounds of arsenic are known. Orpiment (As2S3) and realgar (As4S4) are somewhat abundant and were formerly used as painting pigments. Arsenic has a formal oxidation state of +2 in As4S4, which features As–As bonds so that the total covalency of arsenic is still 3. Both orpiment and realgar, as well as As4S3, have selenium analogs; the analogous As2Te3 is known as the mineral kalgoorlieite, and the anion As2Te− is known as a ligand in cobalt complexes. All trihalides of arsenic(III) are well known except the astatide, which is unknown. Arsenic pentafluoride (AsF5) is the only important pentahalide, reflecting the lower stability of the +5 oxidation state; even so, it is a very strong fluorinating and oxidizing agent. (The pentachloride is stable only below −50 °C, at which temperature it decomposes to the trichloride, releasing chlorine gas.) Arsenic is used as the group 5 element in the III-V semiconductors gallium arsenide, indium arsenide, and aluminium arsenide. The valence electron count of GaAs is the same as a pair of Si atoms, but the band structure is completely different, which results in distinct bulk properties. Other arsenic alloys include the II-V semiconductor cadmium arsenide. A large variety of organoarsenic compounds are known. Several were developed as chemical warfare agents during World War I, including vesicants such as lewisite and vomiting agents such as adamsite. Cacodylic acid, which is of historic and practical interest, arises from the methylation of arsenic trioxide, a reaction that has no analogy in phosphorus chemistry. Indeed, cacodyl was the first organometallic compound known (even though arsenic is not a true metal) and was named from the Greek "κακωδία" ("stink") for its offensive odor; it is very poisonous. Arsenic comprises about 1.5 ppm (0.00015%) of the Earth's crust, and is the 53rd most abundant element. Typical background concentrations of arsenic do not exceed 3 ng/m3 in the atmosphere; 100 mg/kg in soil; and 10 μg/L in freshwater. Minerals with the formula MAsS and MAs2 (M = Fe, Ni, Co) are the dominant commercial sources of arsenic, together with realgar (an arsenic sulfide mineral) and native (elemental) arsenic. An illustrative mineral is arsenopyrite (FeAsS), which is structurally related to iron pyrite. Many minor As-containing minerals are known. Arsenic also occurs in various organic forms in the environment. In 2014, China was the top producer of white arsenic with almost 70% world share, followed by Morocco, Russia, and Belgium, according to the British Geological Survey and the United States Geological Survey. Most arsenic refinement operations in the US and Europe have closed over environmental concerns. Arsenic is found in the smelter dust from copper, gold, and lead smelters, and is recovered primarily from copper refinement dust.
On roasting arsenopyrite in air, arsenic sublimes as arsenic(III) oxide leaving iron oxides, while roasting without air results in the production of gray arsenic. Further purification from sulfur and other chalcogens is achieved by sublimation in vacuum, in a hydrogen atmosphere, or by distillation from molten lead-arsenic mixture. The word "arsenic" has its origin in the Syriac word "(al) zarniqa", from Arabic al-zarnīḵ ‘the orpiment’, based on Persian zar ‘gold’ from the word "zarnikh", meaning "yellow" (literally "gold-colored") and hence "(yellow) orpiment". It was adopted into Greek as "arsenikon" (), a form that is folk etymology, being the neuter form of the Greek word "arsenikos" (), meaning "male", "virile". The Greek word was adopted in Latin as "arsenicum", which in French became "arsenic", from which the English word arsenic is taken. Arsenic sulfides (orpiment, realgar) and oxides have been known and used since ancient times. Zosimos (circa 300 AD) describes roasting "sandarach" (realgar) to obtain "cloud of arsenic" (arsenic trioxide), which he then reduces to gray arsenic. As the symptoms of arsenic poisoning are not very specific, it was frequently used for murder until the advent of the Marsh test, a sensitive chemical test for its presence. (Another less sensitive but more general test is the Reinsch test.) Owing to its use by the ruling class to murder one another and its potency and discreetness, arsenic has been called the "poison of kings" and the "king of poisons". During the Bronze Age, arsenic was often included in bronze, which made the alloy harder (so-called "arsenical bronze"). The isolation of arsenic was described by Jabir ibn Hayyan before 815 AD. Albertus Magnus (Albert the Great, 1193–1280) later isolated the element from a compound in 1250, by heating soap together with arsenic trisulfide. In 1649, Johann Schröder published two ways of preparing arsenic. Crystals of elemental (native) arsenic are found in nature, although rare. Cadet's fuming liquid (impure cacodyl), often claimed as the first synthetic organometallic compound, was synthesized in 1760 by Louis Claude Cadet de Gassicourt by the reaction of potassium acetate with arsenic trioxide. In the Victorian era, "arsenic" ("white arsenic" or arsenic trioxide) was mixed with vinegar and chalk and eaten by women to improve the complexion of their faces, making their skin paler to show they did not work in the fields. Arsenic was also rubbed into the faces and arms of women to "improve their complexion". The accidental use of arsenic in the adulteration of foodstuffs led to the Bradford sweet poisoning in 1858, which resulted in around 20 deaths. Wallpaper production also began to use dyes made from arsenic, which was thought to increase the pigment's brightness. Two arsenic pigments have been widely used since their discovery – Paris Green and Scheele's Green. After the toxicity of arsenic became widely known, these chemicals were used less often as pigments and more often as insecticides. In the 1860s, an arsenic byproduct of dye production, London Purple was widely used. This was a solid mixture of arsenic trioxide, aniline, lime, and ferrous oxide, insoluble in water and very toxic by inhalation or ingestion But it was later replaced with Paris Green, another arsenic-based dye. With better understanding of the toxicology mechanism, two other compounds were used starting in the 1890s. Arsenite of lime and arsenate of lead were used widely as insecticides until the discovery of DDT in 1942. 
The toxicity of arsenic to insects, bacteria, and fungi led to its use as a wood preservative. In the 1930s, a process of treating wood with chromated copper arsenate (also known as CCA or Tanalith) was invented, and for decades, this treatment was the most extensive industrial use of arsenic. An increased appreciation of the toxicity of arsenic led to a ban of CCA in consumer products in 2004, initiated by the European Union and United States. However, CCA remains in heavy use in other countries (such as on Malaysian rubber plantations). Arsenic was also used in various agricultural insecticides and poisons. For example, lead hydrogen arsenate was a common insecticide on fruit trees, but contact with the compound sometimes resulted in brain damage among those working the sprayers. In the second half of the 20th century, monosodium methyl arsenate (MSMA) and disodium methyl arsenate (DSMA) – less toxic organic forms of arsenic – replaced lead arsenate in agriculture. These organic arsenicals were in turn phased out by 2013 in all agricultural activities except cotton farming. The biogeochemistry of arsenic is complex and includes various adsorption and desorption processes. The toxicity of arsenic is connected to its solubility and is affected by pH. Arsenite () is more soluble than arsenate () and is more toxic; however, at a lower pH, arsenate becomes more mobile and toxic. It was found that addition of sulfur, phosphorus, and iron oxides to high-arsenite soils greatly reduces arsenic phytotoxicity. Arsenic is used as a feed additive in poultry and swine production, in particular in the U.S. to increase weight gain, improve feed efficiency, and to prevent disease. An example is roxarsone, which had been used as a broiler starter by about 70% of U.S. broiler growers. The Poison-Free Poultry Act of 2009 proposed to ban the use of roxarsone in industrial swine and poultry production. Alpharma, a subsidiary of Pfizer Inc., which produces roxarsone, voluntarily suspended sales of the drug in response to studies showing elevated levels of inorganic arsenic, a carcinogen, in treated chickens. A successor to Alpharma, Zoetis, continues to sell nitarsone, primarily for use in turkeys. Arsenic is intentionally added to the feed of chickens raised for human consumption. Organic arsenic compounds are less toxic than pure arsenic, and promote the growth of chickens. Under some conditions, the arsenic in chicken feed is converted to the toxic inorganic form. A 2006 study of the remains of the Australian racehorse, Phar Lap, determined that the 1932 death of the famous champion was caused by a massive overdose of arsenic. Sydney veterinarian Percy Sykes stated, "In those days, arsenic was quite a common tonic, usually given in the form of a solution (Fowler's Solution) ... It was so common that I'd reckon 90 per cent of the horses had arsenic in their system." During the 18th, 19th, and 20th centuries, a number of arsenic compounds were used as medicines, including arsphenamine (by Paul Ehrlich) and arsenic trioxide (by Thomas Fowler). Arsphenamine, as well as neosalvarsan, was indicated for syphilis, but has been superseded by modern antibiotics. However, arsenicals such as melarsoprol are still used for the treatment of trypanosomiasis, since although these drugs suffer from severe toxicity, the disease is almost uniformly fatal if untreated. 
Arsenic trioxide has been used in a variety of ways over the past 500 years, most commonly in the treatment of cancer, but also in medications as diverse as Fowler's solution for psoriasis. The US Food and Drug Administration in the year 2000 approved this compound for the treatment of patients with acute promyelocytic leukemia that is resistant to all-trans retinoic acid. Recently, researchers have been locating tumors using arsenic-74 (a positron emitter). This isotope produces clearer PET scan images than the previous radioactive agent, iodine-124, because the body tends to transport iodine to the thyroid gland, producing signal noise. Nanoparticles of arsenic have shown the ability to kill cancer cells with less cytotoxicity than other arsenic formulations. In subtoxic doses, soluble arsenic compounds act as stimulants, and were once popular as medicines in the mid-18th to 19th centuries. The main use of arsenic is in alloying with lead. Lead components in car batteries are strengthened by the presence of a very small percentage of arsenic. Dezincification of brass (a copper-zinc alloy) is greatly reduced by the addition of arsenic. "Phosphorus Deoxidized Arsenical Copper" with an arsenic content of 0.3% has an increased corrosion stability in certain environments. Gallium arsenide is an important semiconductor material, used in integrated circuits. Circuits made from GaAs are much faster (but also much more expensive) than those made from silicon. Unlike silicon, GaAs has a direct bandgap, and can be used in laser diodes and LEDs to convert electrical energy directly into light. After World War I, the United States built a stockpile of 20,000 tons of weaponized lewisite (ClCH=CHAsCl2), an organoarsenic vesicant (blister agent) and lung irritant. The stockpile was neutralized with bleach and dumped into the Gulf of Mexico in the 1950s. During the Vietnam War, the United States used Agent Blue, a mixture of sodium cacodylate and its acid form, as one of the rainbow herbicides to deprive North Vietnamese soldiers of foliage cover and rice. Some species of bacteria obtain their energy in the absence of oxygen by oxidizing various fuels while reducing arsenate to arsenite. Under oxidative environmental conditions some bacteria use arsenite as fuel, which they oxidize to arsenate. The enzymes involved are known as arsenate reductases (Arr). In 2008, bacteria were discovered that employ a version of photosynthesis in the absence of oxygen with arsenites as electron donors, producing arsenates (just as ordinary photosynthesis uses water as electron donor, producing molecular oxygen). Researchers conjecture that, over the course of history, these photosynthesizing organisms produced the arsenates that allowed the arsenate-reducing bacteria to thrive. One strain, PHS-1, has been isolated and is related to the gammaproteobacterium Ectothiorhodospira shaposhnikovii. The mechanism is unknown, but an encoded Arr enzyme may function in reverse to its known homologues. In 2011, it was postulated that a strain of Halomonadaceae could be grown in the absence of phosphorus by substituting it with arsenic, exploiting the fact that the arsenate and phosphate anions are similar structurally. The study was widely criticised and subsequently refuted by independent research groups. Some evidence indicates that arsenic is an essential trace mineral in birds (chickens), and in mammals (rats, hamsters, and goats). However, the biological function is not known.
Arsenic has been linked to epigenetic changes, heritable changes in gene expression that occur without changes in DNA sequence. These include DNA methylation, histone modification, and RNA interference. Toxic levels of arsenic cause significant DNA hypermethylation of tumor suppressor genes p16 and p53, thus increasing risk of carcinogenesis. These epigenetic events have been studied in vitro using human kidney cells and in vivo using rat liver cells and peripheral blood leukocytes in humans. Inductively coupled plasma mass spectrometry (ICP-MS) is used to detect precise levels of intracellular arsenic and other arsenic bases involved in epigenetic modification of DNA. Studies investigating arsenic as an epigenetic factor can be used to develop precise biomarkers of exposure and susceptibility. The Chinese brake fern (Pteris vittata) hyperaccumulates arsenic from the soil into its leaves and has a proposed use in phytoremediation. Inorganic arsenic and its compounds, upon entering the food chain, are progressively metabolized through a process of methylation. For example, the mold Scopulariopsis brevicaulis produces significant amounts of trimethylarsine if inorganic arsenic is present. The organic compound arsenobetaine is found in some marine foods such as fish and algae, and also in mushrooms in larger concentrations. The average person's intake is about 10–50 µg/day. Intakes of about 1000 µg are not unusual following consumption of fish or mushrooms, but there is little danger in eating fish because this arsenic compound is nearly non-toxic. Naturally occurring sources of human exposure include volcanic ash, weathering of minerals and ores, and mineralized groundwater. Arsenic is also found in food, water, soil, and air. Arsenic is absorbed by all plants, but is more concentrated in leafy vegetables, rice, apple and grape juice, and seafood. An additional route of exposure is inhalation of atmospheric gases and dusts. During the Victorian era, arsenic was widely used in home decor, especially wallpapers. Extensive arsenic contamination of groundwater has led to widespread arsenic poisoning in Bangladesh and neighboring countries. It is estimated that approximately 57 million people in the Bengal basin are drinking groundwater with arsenic concentrations elevated above the World Health Organization's standard of 10 parts per billion (ppb). However, a study of cancer rates in Taiwan suggested that significant increases in cancer mortality appear only at levels above 150 ppb. The arsenic in the groundwater is of natural origin, and is released from the sediment into the groundwater, caused by the anoxic conditions of the subsurface. This groundwater came into use after local and western NGOs and the Bangladeshi government undertook a massive shallow tube well drinking-water program in the late twentieth century. This program was designed to prevent drinking of bacteria-contaminated surface waters, but failed to test for arsenic in the groundwater. Many other countries and districts in Southeast Asia, such as Vietnam and Cambodia, have geological environments that produce groundwater with a high arsenic content. Arsenic contamination of groundwater was reported in Nakhon Si Thammarat, Thailand in 1987, and the Chao Phraya River probably contains high levels of naturally occurring dissolved arsenic without being a public health problem because much of the public uses bottled water. In Pakistan, more than 60 million people are exposed to arsenic-polluted drinking water, according to a recent report in Science.
Podgorski's team investigated more than 1200 samples and more than 66% samples exceeded the WHO minimum contamination level. In the United States, arsenic is most commonly found in the ground waters of the southwest. Parts of New England, Michigan, Wisconsin, Minnesota and the Dakotas are also known to have significant concentrations of arsenic in ground water. Increased levels of skin cancer have been associated with arsenic exposure in Wisconsin, even at levels below the 10 part per billion drinking water standard. According to a recent film funded by the US Superfund, millions of private wells have unknown arsenic levels, and in some areas of the US, more than 20% of the wells may contain levels that exceed established limits. Low-level exposure to arsenic at concentrations of 100 parts per billion (i.e., above the 10 parts per billion drinking water standard) compromises the initial immune response to H1N1 or swine flu infection according to NIEHS-supported scientists. The study, conducted in laboratory mice, suggests that people exposed to arsenic in their drinking water may be at increased risk for more serious illness or death from the virus. Some Canadians are drinking water that contains inorganic arsenic. Private-dug–well waters are most at risk for containing inorganic arsenic. Preliminary well water analysis typically does not test for arsenic. Researchers at the Geological Survey of Canada have modeled relative variation in natural arsenic hazard potential for the province of New Brunswick. This study has important implications for potable water and health concerns relating to inorganic arsenic. Epidemiological evidence from Chile shows a dose-dependent connection between chronic arsenic exposure and various forms of cancer, in particular when other risk factors, such as cigarette smoking, are present. These effects have been demonstrated at contaminations less than 50 ppb. Arsenic is itself a constituent of tobacco smoke. Analyzing multiple epidemiological studies on inorganic arsenic exposure suggests a small but measurable increase in risk for bladder cancer at 10 ppb. According to Peter Ravenscroft of the Department of Geography at the University of Cambridge, roughly 80 million people worldwide consume between 10 and 50 ppb arsenic in their drinking water. If they all consumed exactly 10 ppb arsenic in their drinking water, the previously cited multiple epidemiological study analysis would predict an additional 2,000 cases of bladder cancer alone. This represents a clear underestimate of the overall impact, since it does not include lung or skin cancer, and explicitly underestimates the exposure. Those exposed to levels of arsenic above the current WHO standard should weigh the costs and benefits of arsenic remediation. Early (1973) evaluations of the processes for removing dissolved arsenic from drinking water demonstrated the efficacy of co-precipitation with either iron or aluminum oxides. In particular, iron as a coagulant was found to remove arsenic with an efficacy exceeding 90%. Several adsorptive media systems have been approved for use at point-of-service in a study funded by the United States Environmental Protection Agency (US EPA) and the National Science Foundation (NSF). A team of European and Indian scientists and engineers have set up six arsenic treatment plants in West Bengal based on in-situ remediation method (SAR Technology). 
This technology does not use any chemicals and arsenic is left in an insoluble form (+5 state) in the subterranean zone by recharging aerated water into the aquifer and developing an oxidation zone that supports arsenic oxidizing micro-organisms. This process does not produce any waste stream or sludge and is relatively cheap. Another effective and inexpensive method to avoid arsenic contamination is to sink wells 500 feet or deeper to reach purer waters. A recent 2011 study funded by the US National Institute of Environmental Health Sciences' Superfund Research Program shows that deep sediments can remove arsenic and take it out of circulation. In this process, called "adsorption", arsenic sticks to the surfaces of deep sediment particles and is naturally removed from the ground water. Magnetic separations of arsenic at very low magnetic field gradients with high-surface-area and monodisperse magnetite (Fe3O4) nanocrystals have been demonstrated in point-of-use water purification. Using the high specific surface area of Fe3O4 nanocrystals, the mass of waste associated with arsenic removal from water has been dramatically reduced. Epidemiological studies have suggested a correlation between chronic consumption of drinking water contaminated with arsenic and the incidence of all leading causes of mortality. The literature indicates that arsenic exposure is causative in the pathogenesis of diabetes. Chaff-based filters have recently been shown to reduce the arsenic content of water to 3 µg/L. This may find applications in areas where the potable water is extracted from underground aquifers. For several centuries, the people of San Pedro de Atacama in Chile have been drinking water that is contaminated with arsenic, and some evidence suggests they have developed some immunity. Around one-third of the world's population drinks water from groundwater resources. Of this, about 10 percent, approximately 300 million people, obtains water from groundwater resources that are contaminated with unhealthy levels of arsenic or fluoride. These trace elements derive mainly from minerals and ions in the ground. Arsenic is unique among the trace metalloids and oxyanion-forming trace metals (e.g. As, Se, Sb, Mo, V, Cr, U, Re). It is sensitive to mobilization at pH values typical of natural waters (pH 6.5–8.5) under both oxidizing and reducing conditions. Arsenic can occur in the environment in several oxidation states (−3, 0, +3 and +5), but in natural waters it is mostly found in inorganic forms as oxyanions of trivalent arsenite [As(III)] or pentavalent arsenate [As(V)]. Organic forms of arsenic are produced by biological activity, mostly in surface waters, but are rarely quantitatively important. Organic arsenic compounds may, however, occur where waters are significantly impacted by industrial pollution. Arsenic may be solubilized by various processes. When pH is high, arsenic may be released from surface binding sites that lose their positive charge. When water level drops and sulfide minerals are exposed to air, arsenic trapped in sulfide minerals can be released into water. When organic carbon is present in water, bacteria are fed by directly reducing As(V) to As(III) or by reducing the element at the binding site, releasing inorganic arsenic. The aquatic transformations of arsenic are affected by pH, reduction-oxidation potential, organic matter concentration and the concentrations and forms of other elements, especially iron and manganese. The main factors are pH and the redox potential. 
Generally, the main forms of arsenic under oxic conditions are H3AsO4, H2AsO4−, HAsO42−, and AsO43− at pH below 2, 2–7, 7–11 and above 11, respectively. Under reducing conditions, arsenous acid (H3AsO3) is predominant at pH 2–9. Oxidation and reduction affects the migration of arsenic in subsurface environments. Arsenite is the most stable soluble form of arsenic in reducing environments and arsenate, which is less mobile than arsenite, is dominant in oxidizing environments at neutral pH. Therefore, arsenic may be more mobile under reducing conditions. The reducing environment is also rich in organic matter which may enhance the solubility of arsenic compounds. As a result, the adsorption of arsenic is reduced and dissolved arsenic accumulates in groundwater. That is why the arsenic content is higher in reducing environments than in oxidizing environments. The presence of sulfur is another factor that affects the transformation of arsenic in natural water. Arsenic can precipitate when metal sulfides form. In this way, arsenic is removed from the water and its mobility decreases. When oxygen is present, bacteria oxidize reduced sulfur to generate energy, potentially releasing bound arsenic. Redox reactions involving Fe also appear to be essential factors in the fate of arsenic in aquatic systems. The reduction of iron oxyhydroxides plays a key role in the release of arsenic to water. So arsenic can be enriched in water with elevated Fe concentrations. Under oxidizing conditions, arsenic can be mobilized from pyrite or iron oxides especially at elevated pH. Under reducing conditions, arsenic can be mobilized by reductive desorption or dissolution when associated with iron oxides. The reductive desorption occurs under two circumstances. One is when arsenate is reduced to arsenite, which adsorbs to iron oxides less strongly. The other results from a change in the charge on the mineral surface, which leads to the desorption of bound arsenic. Some species of bacteria catalyze redox transformations of arsenic. Dissimilatory arsenate-respiring prokaryotes (DARP) speed up the reduction of As(V) to As(III). DARP use As(V) as the electron acceptor of anaerobic respiration and obtain energy to survive. Other organic and inorganic substances can be oxidized in this process. Chemoautotrophic arsenite oxidizers (CAO) and heterotrophic arsenite oxidizers (HAO) convert As(III) into As(V). CAO combine the oxidation of As(III) with the reduction of oxygen or nitrate. They use the energy obtained to fix CO2 and produce organic carbon. HAO cannot obtain energy from As(III) oxidation. This process may be an arsenic detoxification mechanism for the bacteria. Equilibrium thermodynamic calculations predict that As(V) concentrations should be greater than As(III) concentrations in all but strongly reducing conditions, i.e. where SO42− reduction is occurring. However, abiotic redox reactions of arsenic are slow. Oxidation of As(III) by dissolved O2 is a particularly slow reaction. For example, Johnson and Pilson (1975) gave half-lives for the oxygenation of As(III) in seawater ranging from several months to a year. In other studies, As(V)/As(III) ratios were stable over periods of days or weeks during water sampling when no particular care was taken to prevent oxidation, again suggesting relatively slow oxidation rates. Cherry found from experimental studies that the As(V)/As(III) ratios were stable in anoxic solutions for up to 3 weeks but that gradual changes occurred over longer timescales.
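As a minimal illustrative sketch (not part of the original text), the pH ranges quoted above for the oxidized arsenate species follow from the stepwise dissociation of arsenic acid; the pKa values used here (roughly 2.2, 7.0 and 11.5) are assumed approximations rather than figures from this article, and the helper function is hypothetical.

def arsenate_fractions(pH, pka=(2.2, 7.0, 11.5)):
    """Fractions of H3AsO4, H2AsO4-, HAsO4(2-) and AsO4(3-) at a given pH,
    computed from assumed stepwise acid dissociation constants."""
    h = 10.0 ** (-pH)
    k1, k2, k3 = (10.0 ** (-p) for p in pka)
    # Relative abundance of each protonation state
    terms = (h ** 3, h ** 2 * k1, h * k1 * k2, k1 * k2 * k3)
    total = sum(terms)
    return [t / total for t in terms]

# The fully protonated acid dominates below pH ~2, H2AsO4- from ~2 to ~7,
# HAsO4(2-) from ~7 to ~11, and AsO4(3-) above ~11.
for pH in (1, 4, 9, 12):
    print(pH, [round(f, 2) for f in arsenate_fractions(pH)])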
Sterile water samples have been observed to be less susceptible to speciation changes than non-sterile samples. Oremland found that the reduction of As(V) to As(III) in Mono Lake was rapidly catalyzed by bacteria with rate constants ranging from 0.02 to 0.3 day−1. As of 2002, US-based industries consumed 19,600 metric tons of arsenic. Ninety percent of this was used for treatment of wood with chromated copper arsenate (CCA). In 2007, 50% of the 5,280 metric tons of consumption was still used for this purpose. In the United States, the voluntary phasing-out of arsenic in production of consumer products and residential and general consumer construction products began on 31 December 2003, and alternative chemicals are now used, such as Alkaline Copper Quaternary, borates, copper azole, cyproconazole, and propiconazole. Although discontinued, this application is also one of the most concerning to the general public. The vast majority of older pressure-treated wood was treated with CCA. CCA lumber is still in widespread use in many countries, and was heavily used during the latter half of the 20th century as a structural and outdoor building material. Although the use of CCA lumber was banned in many areas after studies showed that arsenic could leach out of the wood into the surrounding soil (from playground equipment, for instance), a risk is also presented by the burning of older CCA timber. The direct or indirect ingestion of wood ash from burnt CCA lumber has caused fatalities in animals and serious poisonings in humans; the lethal human dose is approximately 20 grams of ash. Scrap CCA lumber from construction and demolition sites may be inadvertently used in commercial and domestic fires. Protocols for safe disposal of CCA lumber are not consistent throughout the world. Widespread landfill disposal of such timber raises some concern, but other studies have shown no arsenic contamination in the groundwater. One tool that maps the location (and other information) of arsenic releases in the United State is TOXMAP. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services of the United States National Library of Medicine (NLM) funded by the US Federal Government. With marked-up maps of the United States, TOXMAP enables users to visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET), PubMed, and from other authoritative sources. Physical, chemical, and biological methods have been used to remediate arsenic contaminated water. Bioremediation is said to be cost-effective and environmentally friendly. Bioremediation of ground water contaminated with arsenic aims to convert arsenite, the toxic form of arsenic to humans, to arsenate. Arsenate (+5 oxidation state) is the dominant form of arsenic in surface water, while arsenite (+3 oxidation state) is the dominant form in hypoxic to anoxic environments. Arsenite is more soluble and mobile than arsenate. Many species of bacteria can transform arsenite to arsenate in anoxic conditions by using arsenite as an electron donor. This is a useful method in ground water remediation. Another bioremediation strategy is to use plants that accumulate arsenic in their tissues via phytoremediation but the disposal of contaminated plant material needs to be considered. 
Bioremediation requires careful evaluation and design in accordance with existing conditions. Some sites may require the addition of an electron acceptor while others require microbe supplementation (bioaugmentation). Regardless of the method used, only constant monitoring can prevent future contamination. Arsenic and many of its compounds are especially potent poisons. Elemental arsenic and arsenic sulfate and trioxide compounds are classified as "toxic" and "dangerous for the environment" in the European Union under directive 67/548/EEC. The International Agency for Research on Cancer (IARC) recognizes arsenic and inorganic arsenic compounds as group 1 carcinogens, and the EU lists arsenic trioxide, arsenic pentoxide, and arsenate salts as category 1 carcinogens. Arsenic is known to cause arsenicosis when present in drinking water, "the most common species being arsenate [As(V)] and arsenite [H3AsO3; As(III)]". In the United States since 2006, the maximum concentration in drinking water allowed by the Environmental Protection Agency (EPA) is 10 ppb and the FDA set the same standard in 2005 for bottled water. The Department of Environmental Protection for New Jersey set a drinking water limit of 5 ppb in 2006. The IDLH (immediately dangerous to life and health) value for arsenic metal and inorganic arsenic compounds is 5 mg/m3. The Occupational Safety and Health Administration has set the permissible exposure limit (PEL) to a time-weighted average (TWA) of 0.01 mg/m3, and the National Institute for Occupational Safety and Health (NIOSH) has set the recommended exposure limit (REL) to a 15-minute constant exposure of 0.002 mg/m3. The PEL for organic arsenic compounds is a TWA of 0.5 mg/m3. In 2008, based on its ongoing testing of a wide variety of American foods for toxic chemicals, the U.S. Food and Drug Administration set the "level of concern" for inorganic arsenic in apple and pear juices at 23 ppb, based on non-carcinogenic effects, and began blocking importation of products in excess of this level; it also required recalls for non-conforming domestic products. In 2011, the national "Dr. Oz" television show broadcast a program highlighting tests performed by an independent lab hired by the producers. Though the methodology was disputed (it did not distinguish between organic and inorganic arsenic), the tests showed levels of arsenic up to 36 ppb. In response, the FDA tested the worst brand from the "Dr. Oz" show and found much lower levels. Ongoing testing found 95% of the apple juice samples were below the level of concern. Later testing by Consumer Reports showed inorganic arsenic at levels slightly above 10 ppb, and the organization urged parents to reduce consumption. In July 2013, on consideration of consumption by children, chronic exposure, and carcinogenic effect, the FDA established an "action level" of 10 ppb for apple juice, the same as the drinking water standard. Concern about arsenic in rice in Bangladesh was raised in 2002, but at the time only Australia had a legal limit for food (one milligram per kilogram). In 2005, concern was raised that people eating U.S. rice could exceed WHO standards for personal arsenic intake. In 2011, the People's Republic of China set a food standard of 150 ppb for arsenic.
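For orientation, the water-quality figures above can be interconverted: in dilute aqueous solution, a concentration of 1 ppb by mass corresponds to about 1 µg/L, so the 10 ppb EPA and FDA limits are equivalent to roughly 10 µg/L (0.010 mg/L), the same value as the WHO guideline of 10 parts per billion cited earlier.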
In the United States in 2012, testing by separate groups of researchers at the Children's Environmental Health and Disease Prevention Research Center at Dartmouth College (early in the year, focusing on urinary levels in children) and Consumer Reports (in November) found levels of arsenic in rice that resulted in calls for the FDA to set limits. The FDA released some testing results in September 2012, and as of July 2013, is still collecting data in support of a new potential regulation. It has not recommended any changes in consumer behavior. Consumer Reports, for its part, recommended limits on the consumption of rice and rice products. A 2014 World Health Organization advisory conference was scheduled to consider limits of 200–300 ppb for rice. Arsenic is bioaccumulative in many organisms, marine species in particular, but it does not appear to biomagnify significantly in food webs. In polluted areas, plant growth may be affected by root uptake of arsenate, which is a phosphate analog and therefore readily transported in plant tissues and cells. In polluted areas, uptake of the more toxic arsenite ion (found more particularly in reducing conditions) is likely in poorly-drained soils. Arsenic's toxicity comes from the affinity of arsenic(III) oxides for thiols. Thiols, in the form of cysteine residues and cofactors such as lipoic acid and coenzyme A, are situated at the active sites of many important enzymes. Arsenic disrupts ATP production through several mechanisms. At the level of the citric acid cycle, arsenic inhibits lipoic acid, which is a cofactor for pyruvate dehydrogenase. By competing with phosphate, arsenate uncouples oxidative phosphorylation, thus inhibiting energy-linked reduction of NAD+, mitochondrial respiration and ATP synthesis. Hydrogen peroxide production is also increased, which, it is speculated, has potential to form reactive oxygen species and oxidative stress. These metabolic interferences lead to death from multi-system organ failure. The organ failure is presumed to be from necrotic cell death, not apoptosis, since energy reserves have been too depleted for apoptosis to occur. Occupational exposure and arsenic poisoning may occur in persons working in industries involving the use of inorganic arsenic and its compounds, such as wood preservation, glass production, nonferrous metal alloys, and electronic semiconductor manufacturing. Inorganic arsenic is also found in coke oven emissions associated with the smelter industry. The conversion between As(III) and As(V) is a large factor in arsenic environmental contamination. According to Croal, Gralnick, Malasarn and Newman, "[the] understanding [of] what stimulates As(III) oxidation and/or limits As(V) reduction is relevant for bioremediation of contaminated sites" (Croal). The study of chemolithoautotrophic As(III) oxidizers and the heterotrophic As(V) reducers can help the understanding of the oxidation and/or reduction of arsenic. Treatment of chronic arsenic poisoning is possible. British anti-lewisite (dimercaprol) is prescribed in doses of 5 mg/kg up to 300 mg every 4 hours for the first day, then every 6 hours for the second day, and finally every 8 hours for 8 additional days. However, the USA's Agency for Toxic Substances and Disease Registry (ATSDR) states that the long-term effects of arsenic exposure cannot be predicted. Blood, urine, hair, and nails may be tested for arsenic; however, these tests cannot foresee possible health outcomes from the exposure.
Long-term exposure and consequent excretion through urine has been linked to bladder and kidney cancer in addition to cancer of the liver, prostate, skin, lungs, and nasal cavity.
https://en.wikipedia.org/wiki?curid=897
Antimony Antimony is a chemical element with the symbol Sb (from Latin "stibium") and atomic number 51. A lustrous gray metalloid, it is found in nature mainly as the sulfide mineral stibnite (Sb2S3). Antimony compounds have been known since ancient times and were powdered for use as medicine and cosmetics, often known by the Arabic name kohl. Metallic antimony was also known, but it was erroneously identified as lead upon its discovery. The earliest known description of the metal in the West was written in 1540 by Vannoccio Biringuccio. For some time, China has been the largest producer of antimony and its compounds, with most production coming from the Xikuangshan Mine in Hunan. The industrial methods for refining antimony are roasting and reduction with carbon or direct reduction of stibnite with iron. The largest applications for metallic antimony are an alloy with lead and tin and the lead antimony plates in lead–acid batteries. Alloys of lead and tin with antimony have improved properties for solders, bullets, and plain bearings. Antimony compounds are prominent additives for chlorine- and bromine-containing fire retardants found in many commercial and domestic products. An emerging application is the use of antimony in microelectronics. Antimony is a member of group 15 of the periodic table, one of the elements called pnictogens, and has an electronegativity of 2.05. In accordance with periodic trends, it is more electronegative than tin or bismuth, and less electronegative than tellurium or arsenic. Antimony is stable in air at room temperature, but reacts with oxygen if heated to produce antimony trioxide, Sb2O3. Antimony is a silvery, lustrous gray metalloid with a Mohs scale hardness of 3, which is too soft to make hard objects; coins of antimony were issued in China's Guizhou province in 1931 but the durability was poor and the minting was soon discontinued. Antimony is resistant to attack by acids. Four allotropes of antimony are known: a stable metallic form and three metastable forms (explosive, black and yellow). Elemental antimony is a brittle, silver-white shiny metalloid. When slowly cooled, molten antimony crystallizes in a trigonal cell, isomorphic with the gray allotrope of arsenic. A rare explosive form of antimony can be formed from the electrolysis of antimony trichloride. When scratched with a sharp implement, an exothermic reaction occurs and white fumes are given off as metallic antimony forms; when rubbed with a pestle in a mortar, a strong detonation occurs. Black antimony is formed upon rapid cooling of antimony vapor. It has the same crystal structure as red phosphorus and black arsenic; it oxidizes in air and may ignite spontaneously. At 100 °C, it gradually transforms into the stable form. The yellow allotrope of antimony is the most unstable. It has only been generated by oxidation of stibine (SbH3) at −90 °C. Above this temperature and in ambient light, this metastable allotrope transforms into the more stable black allotrope. Elemental antimony adopts a layered structure (space group R-3m, No. 166) in which layers consist of fused, ruffled, six-membered rings. The nearest and next-nearest neighbors form an irregular octahedral complex, with the three atoms in each double layer slightly closer than the three atoms in the next. This relatively close packing leads to a high density of 6.697 g/cm3, but the weak bonding between the layers leads to the low hardness and brittleness of antimony.
Antimony has two stable isotopes: 121Sb with a natural abundance of 57.36% and 123Sb with a natural abundance of 42.64%. It also has 35 radioisotopes, of which the longest-lived is 125Sb with a half-life of 2.75 years. In addition, 29 metastable states have been characterized. The most stable of these is 120m1Sb with a half-life of 5.76 days. Isotopes that are lighter than the stable 123Sb tend to decay by β+ decay, and those that are heavier tend to decay by β− decay, with some exceptions. The abundance of antimony in the Earth's crust is estimated to be 0.2 to 0.5 parts per million, comparable to thallium (0.5 parts per million) and silver (0.07 parts per million). Even though this element is not abundant, it is found in more than 100 mineral species. Antimony is sometimes found natively (e.g. on Antimony Peak), but more frequently it is found in the sulfide stibnite (Sb2S3), which is the predominant ore mineral. Antimony compounds are often classified according to their oxidation state: Sb(III) and Sb(V). The +5 oxidation state is more stable. Antimony trioxide is formed when antimony is burnt in air. In the gas phase, the molecule of the compound is Sb4O6, but it polymerizes upon condensing. Antimony pentoxide (Sb2O5) can be formed only by oxidation with concentrated nitric acid. Antimony also forms a mixed-valence oxide, antimony tetroxide (Sb2O4), which features both Sb(III) and Sb(V). Unlike oxides of phosphorus and arsenic, these oxides are amphoteric, do not form well-defined oxoacids, and react with acids to form antimony salts. Antimonous acid is unknown, but the conjugate base sodium antimonite (NaSbO2) forms upon fusing sodium oxide and Sb2O3. Transition metal antimonites are also known. Antimonic acid exists only as the hydrate HSb(OH)6, forming salts containing the antimonate anion Sb(OH)6−. When a solution containing this anion is dehydrated, the precipitate contains mixed oxides. Many antimony ores are sulfides, including stibnite (Sb2S3), pyrargyrite (Ag3SbS3), zinkenite, jamesonite, and boulangerite. Antimony pentasulfide is non-stoichiometric and features antimony in the +3 oxidation state and S-S bonds. Several thioantimonide anions are also known. Antimony forms two series of halides: SbX3 and SbX5. The trihalides SbF3, SbCl3, SbBr3, and SbI3 are all molecular compounds having trigonal pyramidal molecular geometry. The trifluoride is prepared by the reaction of Sb2O3 with HF. It is Lewis acidic and readily accepts fluoride ions to form complex fluoroantimonate anions. Molten SbF3 is a weak electrical conductor. The trichloride is prepared by dissolving Sb2S3 in hydrochloric acid. The pentahalides SbF5 and SbCl5 have trigonal bipyramidal molecular geometry in the gas phase, but in the liquid phase, SbF5 is polymeric, whereas SbCl5 is monomeric. SbF5 is a powerful Lewis acid used to make the superacid fluoroantimonic acid ("H2SbF7"). Oxyhalides are more common for antimony than for arsenic and phosphorus. Antimony trioxide dissolves in concentrated acid to form oxoantimonyl compounds such as SbOCl. Antimony forms antimonides with metals, such as indium antimonide (InSb) and silver antimonide (Ag3Sb); compounds in this class are generally described as derivatives of Sb3−. The alkali metal and zinc antimonides, such as Na3Sb and Zn3Sb2, are more reactive. Treating these antimonides with acid produces the highly unstable gas stibine, SbH3. Stibine can also be produced by treating Sb3+ salts with hydride reagents such as sodium borohydride. Stibine decomposes spontaneously at room temperature. Because stibine has a positive heat of formation, it is thermodynamically unstable and thus antimony does not react with hydrogen directly.
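The isotopic abundances quoted at the start of this section determine antimony's average atomic mass. A minimal sketch, assuming approximate literature values for the isotopic masses (these masses are not given in the text):

# Weighted-average atomic mass of antimony from the quoted abundances.
# The isotopic masses (in u) are assumed approximate values for illustration.
abundances = {"121Sb": 0.5736, "123Sb": 0.4264}
masses_u = {"121Sb": 120.904, "123Sb": 122.904}   # assumed, not from the text

atomic_weight = sum(abundances[iso] * masses_u[iso] for iso in abundances)
print(f"average atomic mass ≈ {atomic_weight:.2f} u")   # ≈ 121.76 u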
Organoantimony compounds are typically prepared by alkylation of antimony halides with Grignard reagents. A large variety of compounds are known with both Sb(III) and Sb(V) centers, including mixed chloro-organic derivatives, anions, and cations. Examples include Sb(C6H5)3 (triphenylstibine), Sb2(C6H5)4 (with an Sb-Sb bond), and cyclic [Sb(C6H5)]n. Pentacoordinated organoantimony compounds are common, examples being Sb(C6H5)5 and several related halides. Antimony(III) sulfide, Sb2S3, was recognized in predynastic Egypt as an eye cosmetic (kohl) as early as about 3100 BC, when the cosmetic palette was invented. An artifact, said to be part of a vase, made of antimony dating to about 3000 BC was found at Telloh, Chaldea (part of present-day Iraq), and a copper object plated with antimony dating between 2500 BC and 2200 BC has been found in Egypt. Austen, at a lecture by Herbert Gladstone in 1892, commented that "we only know of antimony at the present day as a highly brittle and crystalline metal, which could hardly be fashioned into a useful vase, and therefore this remarkable 'find' (artifact mentioned above) must represent the lost art of rendering antimony malleable." The British archaeologist Roger Moorey was unconvinced the artifact was indeed a vase, mentioning that Selimkhanov, after his analysis of the Tello object (published in 1975), "attempted to relate the metal to Transcaucasian natural antimony" (i.e. native metal) and that "the antimony objects from Transcaucasia are all small personal ornaments." This weakens the evidence for a lost art "of rendering antimony malleable." The Roman scholar Pliny the Elder described several ways of preparing antimony sulfide for medical purposes in his treatise "Natural History". Pliny the Elder also made a distinction between "male" and "female" forms of antimony; the male form is probably the sulfide, while the female form, which is superior, heavier, and less friable, has been suspected to be native metallic antimony. The Greek naturalist Pedanius Dioscorides mentioned that antimony sulfide could be roasted by heating by a current of air. It is thought that this produced metallic antimony. The intentional isolation of antimony is described by Jabir ibn Hayyan before 815 AD. A description of a procedure for isolating antimony is later given in the 1540 book "De la pirotechnia" by Vannoccio Biringuccio, predating the more famous 1556 book by Agricola, "De re metallica". In this context Agricola has been often incorrectly credited with the discovery of metallic antimony. The book "Currus Triumphalis Antimonii" (The Triumphal Chariot of Antimony), describing the preparation of metallic antimony, was published in Germany in 1604. It was purported to be written by a Benedictine monk, writing under the name Basilius Valentinus in the 15th century; if it were authentic, which it is not, it would predate Biringuccio. The metal antimony was known to German chemist Andreas Libavius in 1615 who obtained it by adding iron to a molten mixture of antimony sulfide, salt and potassium tartrate. This procedure produced antimony with a crystalline or starred surface. With the advent of challenges to phlogiston theory, it was recognized that antimony is an element forming sulfides, oxides, and other compounds, as do other metals. 
The first discovery of naturally occurring pure antimony in the Earth's crust was described by the Swedish scientist and local mine district engineer Anton von Swab in 1783; the type-sample was collected from the Sala Silver Mine in the Bergslagen mining district of Sala, Västmanland, Sweden. The medieval Latin form, from which the modern languages and late Byzantine Greek take their names for antimony, is "antimonium". The origin of this is uncertain; all suggestions have some difficulty either of form or interpretation. The popular etymology, from ἀντίμοναχός "anti-monachos" or French "antimoine", still has adherents; this would mean "monk-killer", and is explained by many early alchemists being monks, and antimony being poisonous. Another popular etymology is the hypothetical Greek word ἀντίμόνος "antimonos", "against aloneness", explained as "not found as metal", or "not found unalloyed". Lippmann conjectured a hypothetical Greek word ανθήμόνιον "anthemonion", which would mean "floret", and cites several examples of related Greek words (but not that one) which describe chemical or biological efflorescence. The early uses of "antimonium" include the translations, in 1050–1100, by Constantine the African of Arabic medical treatises. Several authorities believe "antimonium" is a scribal corruption of some Arabic form; Meyerhof derives it from "ithmid"; other possibilities include "athimar", the Arabic name of the metalloid, and a hypothetical "as-stimmi", derived from or parallel to the Greek. The standard chemical symbol for antimony (Sb) is credited to Jöns Jakob Berzelius, who derived the abbreviation from "stibium". The ancient words for antimony mostly have, as their chief meaning, kohl, the sulfide of antimony. The Egyptians called antimony "mśdmt"; in hieroglyphs, the vowels are uncertain, but the Coptic form of the word is ⲥⲧⲏⲙ (stēm). The Greek word, στίμμι "stimmi", is probably a loan word from Arabic or from Egyptian "stm" O34:D46-G17-F21:D4 and is used by Attic tragic poets of the 5th century BC. Later Greeks also used στἰβι "stibi", as did Celsus and Pliny, writing in Latin, in the first century AD. Pliny also gives the names "stimi" , "larbaris", alabaster, and the "very common" "platyophthalmos", "wide-eye" (from the effect of the cosmetic). Later Latin authors adapted the word to Latin as "stibium". The Arabic word for the substance, as opposed to the cosmetic, can appear as إثمد "ithmid, athmoud, othmod", or "uthmod". Littré suggests the first form, which is the earliest, derives from "stimmida", an accusative for "stimmi". The British Geological Survey (BGS) reported that in 2005 China was the top producer of antimony with approximately 84% of the world share, followed at a distance by South Africa, Bolivia and Tajikistan. Xikuangshan Mine in Hunan province has the largest deposits in China with an estimated deposit of 2.1 million metric tons. In 2016, according to the US Geological Survey, China accounted for 76.9% of total antimony production, followed in second place by Russia with 6.9% and Tajikistan with 6.2%. Chinese production of antimony is expected to decline in the future as mines and smelters are closed down by the government as part of pollution control. Especially due to a new environmental protection law having gone into effect on January 2015 and revised “Emission Standards of Pollutants for Stanum, Antimony, and Mercury” having gone into effect, hurdles for economic production are higher. 
According to the National Bureau of Statistics in China, by September 2015 50% of antimony production capacity in Hunan province (the province with the biggest antimony reserves in China) had not been used. Reported production of antimony in China has fallen and is unlikely to increase in the coming years, according to the Roskill report. No significant antimony deposits in China have been developed for about ten years, and the remaining economic reserves are being rapidly depleted. Roskill has also ranked the world's largest antimony producers. According to statistics from the USGS, current global reserves of antimony will be depleted in 13 years. However, the USGS expects more resources will be found. The extraction of antimony from ores depends on the quality and composition of the ore. Most antimony is mined as the sulfide; lower-grade ores are concentrated by froth flotation, while higher-grade ores are heated to 500–600 °C, the temperature at which stibnite melts and separates from the gangue minerals. Antimony can be isolated from the crude antimony sulfide by reduction with scrap iron: Sb2S3 + 3 Fe → 2 Sb + 3 FeS. Alternatively, the sulfide is converted to an oxide; the product is then roasted, sometimes for the purpose of vaporizing the volatile antimony(III) oxide, which is recovered. This material is often used directly for the main applications, impurities being arsenic and sulfide. Antimony is isolated from the oxide by a carbothermal reduction: 2 Sb2O3 + 3 C → 4 Sb + 3 CO2. The lower-grade ores are reduced in blast furnaces while the higher-grade ores are reduced in reverberatory furnaces. Antimony has consistently been ranked high in European and US risk lists concerning the criticality of chemical elements, which indicate the relative risk to the supply of elements or element groups required to maintain the current economy and lifestyle. With most of the antimony imported into Europe and the US coming from China, Chinese production is critical to supply. As China is revising and increasing environmental control standards, antimony production is becoming increasingly restricted. Additionally, Chinese export quotas for antimony have been decreasing in recent years. These two factors increase supply risk for both Europe and the US. According to the BGS Risk List 2015, antimony is ranked second highest (after rare earth elements) on the relative supply risk index. This indicates that it currently has the second-highest supply risk among chemical elements or element groups of economic value to the British economy and lifestyle. Furthermore, antimony was identified as one of 20 critical raw materials for the EU in a report published in 2014 (which revised the initial report published in 2011). Antimony maintains a high supply risk relative to its economic importance: 92% of the antimony is imported from China, which is a significantly high concentration of production. Much analysis has been conducted in the U.S. toward defining which metals should be called strategic or critical to the nation's security. Exact definitions do not exist, and views as to what constitutes a strategic or critical mineral to U.S. security diverge. In 2015, no antimony was mined in the U.S. The metal is imported from foreign countries. In the period 2011–2014, 68% of America's antimony came from China, 14% from India, 4% from Mexico, and 14% from other sources. There are no publicly known government stockpiles in place currently. The U.S.
"Subcommittee on Critical and Strategic Mineral Supply Chains" has screened 78 mineral resources from 1996–2008. It found that a small subset of minerals including antimony has fallen into the category of potentially critical minerals consistently. In the future, a second assessment will be made of the found subset of minerals to identify which should be defined of significant risk and critical to U.S. interests. About 60% of antimony is consumed in flame retardants, and 20% is used in alloys for batteries, plain bearings, and solders. Antimony is mainly used as the trioxide for flame-proofing compounds, always in combination with halogenated flame retardants except in halogen-containing polymers. The flame retarding effect of antimony trioxide is produced by the formation of halogenated antimony compounds, which react with hydrogen atoms, and probably also with oxygen atoms and OH radicals, thus inhibiting fire. Markets for these flame-retardants include children's clothing, toys, aircraft, and automobile seat covers. They are also added to polyester resins in fiberglass composites for such items as light aircraft engine covers. The resin will burn in the presence of an externally generated flame, but will extinguish when the external flame is removed. Antimony forms a highly useful alloy with lead, increasing its hardness and mechanical strength. For most applications involving lead, varying amounts of antimony are used as alloying metal. In lead–acid batteries, this addition improves plate strength and charging characteristics. For sailboats, lead keels are used as counterweights, ranging from 600 lbs to over 8000 lbs; to improve hardness and tensile strength of the lead keel, antimony is mixed with lead between 2% and 5% by volume. Antimony is used in antifriction alloys (such as Babbitt metal), in bullets and lead shot, electrical cable sheathing, type metal (for example, for linotype printing machines), solder (some "lead-free" solders contain 5% Sb), in pewter, and in hardening alloys with low tin content in the manufacturing of organ pipes. Three other applications consume nearly all the rest of the world's supply. One application is as a stabilizer and catalyst for the production of polyethylene terephthalate. Another is as a fining agent to remove microscopic bubbles in glass, mostly for TV screens; antimony ions interact with oxygen, suppressing the tendency of the latter to form bubbles. The third application is pigments. Antimony is increasingly being used in semiconductors as a dopant in n-type silicon wafers for diodes, infrared detectors, and Hall-effect devices. In the 1950s, the emitters and collectors of n-p-n alloy junction transistors were doped with tiny beads of a lead-antimony alloy. Indium antimonide is used as a material for mid-infrared detectors. Biology and medicine have few uses for antimony. Treatments containing antimony, known as antimonials, are used as emetics. Antimony compounds are used as antiprotozoan drugs. Potassium antimonyl tartrate, or tartar emetic, was once used as an anti-schistosomal drug from 1919 on. It was subsequently replaced by praziquantel. Antimony and its compounds are used in several veterinary preparations, such as anthiomaline and lithium antimony thiomalate, as a skin conditioner in ruminants. Antimony has a nourishing or conditioning effect on keratinized tissues in animals. Antimony-based drugs, such as meglumine antimoniate, are also considered the drugs of choice for treatment of leishmaniasis in domestic animals. 
Unfortunately, besides having low therapeutic indices, the drugs have minimal penetration of the bone marrow, where some of the "Leishmania" amastigotes reside, and curing the disease – especially the visceral form – is very difficult. Elemental antimony as an antimony pill was once used as a medicine. It could be reused by others after ingestion and elimination. Antimony(III) sulfide is used in the heads of some safety matches. Antimony sulfides help to stabilize the friction coefficient in automotive brake pad materials. Antimony is used in bullets, bullet tracers, paint, glass art, and as an opacifier in enamel. Antimony-124 is used together with beryllium in neutron sources; the gamma rays emitted by antimony-124 initiate the photodisintegration of beryllium. The emitted neutrons have an average energy of 24 keV. Natural antimony is used in startup neutron sources. Historically, the powder derived from crushed antimony ("kohl") has been applied to the eyes with a metal rod and with one's spittle, thought by the ancients to aid in curing eye infections. The practice is still seen in Yemen and in other Arabian countries. The effects of antimony and its compounds on human and environmental health differ widely. Elemental antimony metal does not affect human and environmental health. Inhalation of antimony trioxide (and similar poorly soluble Sb(III) dust particles such as antimony dust) is considered harmful and suspected of causing cancer. However, these effects are only observed with female rats and after long-term exposure to high dust concentrations. The effects are hypothesized to be attributed to inhalation of poorly soluble Sb particles leading to impaired lung clearance, lung overload, inflammation and ultimately tumour formation, not to exposure to antimony ions (OECD, 2008). Antimony chlorides are corrosive to skin. The effects of antimony are not comparable to those of arsenic; this might be caused by the significant differences of uptake, metabolism, and excretion between arsenic and antimony. For oral absorption, ICRP (1994) has recommended values of 10% for tartar emetic and 1% for all other antimony compounds. Dermal absorption for metals is estimated to be at most 1% (HERAG, 2007). Inhalation absorption of antimony trioxide and other poorly soluble Sb(III) substances (such as antimony dust) is estimated at 6.8% (OECD, 2008), whereas a value <1% is derived for Sb(V) substances. Antimony(V) is not quantitatively reduced to antimony(III) in the cell, and both species exist simultaneously. Antimony is mainly excreted from the human body via urine. Antimony and its compounds do not cause acute human health effects, with the exception of antimony potassium tartrate ("tartar emetic"), a prodrug that is intentionally used to treat leishmaniasis patients. Prolonged skin contact with antimony dust may cause dermatitis. However, it was agreed at the European Union level that the skin rashes observed are not substance-specific, but most probably due to a physical blocking of sweat ducts (ECHA/PR/09/09, Helsinki, 6 July 2009). Antimony dust may also be explosive when dispersed in the air; when in a bulk solid it is not combustible. Antimony is incompatible with strong acids, halogenated acids, and oxidizers; when exposed to newly formed hydrogen it may form stibine (SbH3). 
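The absorption estimates quoted above translate an intake into a rough systemic uptake by simple multiplication. A minimal sketch, in which the 100 µg intake is a hypothetical example:

# Rough illustration of the absorption fractions quoted above (ICRP/HERAG/OECD
# estimates): systemic uptake = intake × absorption fraction for each route.
# The intake amount below is a hypothetical example, not measured data.
ABSORPTION = {
    "oral, tartar emetic": 0.10,
    "oral, other Sb compounds": 0.01,
    "dermal (upper bound)": 0.01,
    "inhalation, poorly soluble Sb(III)": 0.068,
    "inhalation, Sb(V) (upper bound)": 0.01,   # "<1%" taken as an upper bound
}

intake_ug = 100.0   # hypothetical intake of 100 µg antimony by each route
for route, fraction in ABSORPTION.items():
    print(f"{route}: ≤ {intake_ug * fraction:.1f} µg absorbed")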
The 8-hour time-weighted average (TWA) is set at 0.5 mg/m3 by the American Conference of Governmental Industrial Hygienists and by the Occupational Safety and Health Administration (OSHA) as a legal permissible exposure limit (PEL) in the workplace. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 0.5 mg/m3 as an 8-hour TWA. Antimony compounds are used as catalysts for polyethylene terephthalate (PET) production. Some studies report minor antimony leaching from PET bottles into liquids, but levels are below drinking water guidelines. Antimony concentrations in fruit juice concentrates were somewhat higher (up to 44.7 µg/L of antimony), but juices do not fall under the drinking water regulations. Several bodies have issued drinking water guidelines for antimony; the tolerable daily intake (TDI) proposed by WHO is 6 µg of antimony per kilogram of body weight. The IDLH (immediately dangerous to life and health) value for antimony is 50 mg/m3. Certain compounds of antimony appear to be toxic, particularly antimony trioxide and antimony potassium tartrate. Effects may be similar to arsenic poisoning. Occupational exposure may cause respiratory irritation, pneumoconiosis, antimony spots on the skin, gastrointestinal symptoms, and cardiac arrhythmias. In addition, antimony trioxide is potentially carcinogenic to humans. Adverse health effects have been observed in humans and animals following inhalation, oral, or dermal exposure to antimony and antimony compounds. Antimony toxicity typically occurs either due to occupational exposure, during therapy or from accidental ingestion. It is unclear if antimony can enter the body through the skin.
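The WHO tolerable daily intake quoted above can be turned into a rough screening calculation. A minimal sketch, using the 44.7 µg/L juice-concentrate figure cited in the text and a hypothetical body weight:

# How much of a beverage at a given antimony level would be needed to reach the
# WHO TDI of 6 µg per kg of body weight per day. Body weight is a hypothetical input.
TDI_UG_PER_KG = 6

def litres_to_reach_tdi(body_weight_kg, conc_ug_per_l):
    """Litres per day of a beverage at the given antimony level needed to reach the TDI."""
    return TDI_UG_PER_KG * body_weight_kg / conc_ug_per_l

print(f"{litres_to_reach_tdi(60, 44.7):.1f} L/day")   # ≈ 8 L for a 60 kg adult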
https://en.wikipedia.org/wiki?curid=898
Actinium Actinium is a chemical element with the symbol Ac and atomic number 89. It was first isolated by French chemist André-Louis Debierne in 1899. Friedrich Oskar Giesel later independently isolated it in 1902 and, unaware that it was already known, gave it the name emanium. Actinium gave the name to the actinide series, a group of 15 similar elements between actinium and lawrencium in the periodic table. It is also sometimes considered the first of the 7th-period transition metals, although lawrencium is less commonly given that position. Together with polonium, radium, and radon, actinium was one of the first non-primordial radioactive elements to be isolated. A soft, silvery-white radioactive metal, actinium reacts rapidly with oxygen and moisture in air, forming a white coating of actinium oxide that prevents further oxidation. As with most lanthanides and many actinides, actinium assumes oxidation state +3 in nearly all its chemical compounds. Actinium is found only in traces in uranium and thorium ores as the isotope 227Ac, which decays with a half-life of 21.772 years, predominantly emitting beta and sometimes alpha particles, and 228Ac, which is beta active with a half-life of 6.15 hours. One tonne of natural uranium in ore contains about 0.2 milligrams of actinium-227, and one tonne of thorium contains about 5 nanograms of actinium-228. The close similarity of physical and chemical properties of actinium and lanthanum makes separation of actinium from the ore impractical. Instead, the element is prepared, in milligram amounts, by the neutron irradiation of 226Ra in a nuclear reactor. Owing to its scarcity, high price and radioactivity, actinium has no significant industrial use. Its current applications include a neutron source and an agent for radiation therapy. André-Louis Debierne, a French chemist, announced the discovery of a new element in 1899. He separated it from pitchblende residues left by Marie and Pierre Curie after they had extracted radium. In 1899, Debierne described the substance as similar to titanium and (in 1900) as similar to thorium. Friedrich Oskar Giesel independently discovered actinium in 1902 as a substance similar to lanthanum and called it "emanium" in 1904. After a comparison of the substances' half-lives determined by Debierne, Harriet Brooks in 1904, and Otto Hahn and Otto Sackur in 1905, Debierne's chosen name for the new element was retained because it had seniority, despite the contradicting chemical properties he claimed for the element at different times. Articles published in the 1970s and later suggest that Debierne's results published in 1904 conflict with those reported in 1899 and 1900. Furthermore, the now-known chemistry of actinium precludes its presence as anything other than a minor constituent of Debierne's 1899 and 1900 results; in fact, the chemical properties he reported make it likely that he had, instead, accidentally identified protactinium, which would not be discovered for another fourteen years, only to have it disappear due to its hydrolysis and adsorption onto his laboratory equipment. This has led some authors to advocate that Giesel alone should be credited with the discovery. A less confrontational vision of scientific discovery is proposed by Adloff.
He suggests that hindsight criticism of the early publications should be mitigated by the then nascent state of radiochemistry: highlighting the prudence of Debierne's claims in the original papers, he notes that nobody can contend that Debierne's substance did not contain actinium. Debierne, who is now considered by the vast majority of historians to be the discoverer, lost interest in the element and left the topic. Giesel, on the other hand, can rightfully be credited with the first preparation of radiochemically pure actinium and with the identification of its atomic number 89. The name actinium originates from the Ancient Greek "aktis, aktinos" (ακτίς, ακτίνος), meaning beam or ray. Its symbol Ac is also used in abbreviations of other compounds that have nothing to do with actinium, such as acetyl, acetate and sometimes acetaldehyde. Actinium is a soft, silvery-white, radioactive, metallic element. Its estimated shear modulus is similar to that of lead. Owing to its strong radioactivity, actinium glows in the dark with a pale blue light, which originates from the surrounding air ionized by the emitted energetic particles. Actinium has similar chemical properties to lanthanum and other lanthanides, and therefore these elements are difficult to separate when extracting from uranium ores. Solvent extraction and ion chromatography are commonly used for the separation. The first element of the actinides, actinium gave the group its name, much as lanthanum had done for the lanthanides. The group of elements is more diverse than the lanthanides and therefore it was not until 1945 that the most significant change to Dmitri Mendeleev's periodic table since the recognition of the lanthanides, the introduction of the actinides, was generally accepted after Glenn T. Seaborg's research on the transuranium elements (although it had been proposed as early as 1892 by British chemist Henry Bassett). Actinium reacts rapidly with oxygen and moisture in air forming a white coating of actinium oxide that impedes further oxidation. As with most lanthanides and actinides, actinium exists in the oxidation state +3, and the Ac3+ ions are colorless in solutions. The oxidation state +3 originates from the [Rn]6d17s2 electronic configuration of actinium, with three valence electrons that are easily donated to give the stable closed-shell structure of the noble gas radon. The rare oxidation state +2 is only known for actinium dihydride (AcH2); even this may in reality be an electride compound like its lighter congener LaH2 and thus have actinium(III). Ac3+ is the largest of all known tripositive ions and its first coordination sphere contains approximately 10.9 ± 0.5 water molecules. Due to actinium's intense radioactivity, only a limited number of actinium compounds are known. These include: AcF3, AcCl3, AcBr3, AcOF, AcOCl, AcOBr, Ac2S3, Ac2O3 and AcPO4. Except for AcPO4, they are all similar to the corresponding lanthanum compounds. They all contain actinium in the oxidation state +3. In particular, the lattice constants of the analogous lanthanum and actinium compounds differ by only a few percent; the densities of these compounds were generally not measured directly but calculated from the lattice parameters. Actinium oxide (Ac2O3) can be obtained by heating the hydroxide at 500 °C or the oxalate at 1100 °C, in vacuum. Its crystal lattice is isotypic with the oxides of most trivalent rare-earth metals.
Actinium trifluoride can be produced either in solution or in a solid-state reaction. The former reaction is carried out at room temperature, by adding hydrofluoric acid to a solution containing actinium ions. In the latter method, actinium metal is treated with hydrogen fluoride vapors at 700 °C in an all-platinum setup. Treating actinium trifluoride with ammonium hydroxide at 900–1000 °C yields the oxyfluoride AcOF. Whereas lanthanum oxyfluoride can be easily obtained by burning lanthanum trifluoride in air at 800 °C for an hour, similar treatment of actinium trifluoride yields no AcOF and only results in melting of the initial product. Actinium trichloride is obtained by reacting actinium hydroxide or oxalate with carbon tetrachloride vapors at temperatures above 960 °C. Similar to the oxyfluoride, actinium oxychloride can be prepared by hydrolyzing actinium trichloride with ammonium hydroxide at 1000 °C. However, in contrast to the oxyfluoride, the oxychloride could well be synthesized by igniting a solution of actinium trichloride in hydrochloric acid with ammonia. Reaction of aluminium bromide and actinium oxide yields actinium tribromide, and treating it with ammonium hydroxide at 500 °C results in the oxybromide AcOBr. Actinium hydride was obtained by reduction of actinium trichloride with potassium at 300 °C, and its structure was deduced by analogy with the corresponding LaH2 hydride. The source of hydrogen in the reaction was uncertain. Mixing monosodium phosphate (NaH2PO4) with a solution of actinium in hydrochloric acid yields white-colored actinium phosphate hemihydrate (AcPO4·0.5H2O), and heating actinium oxalate with hydrogen sulfide vapors at 1400 °C for a few minutes results in a black actinium sulfide Ac2S3. It may possibly also be produced by treating actinium oxide with a mixture of hydrogen sulfide and carbon disulfide at 1000 °C. Naturally occurring actinium is composed of two radioactive isotopes: 227Ac (from the radioactive family of 235U) and 228Ac (a granddaughter of 232Th). 227Ac decays mainly as a beta emitter with a very small energy, but in 1.38% of cases it emits an alpha particle, so it can readily be identified through alpha spectrometry. Thirty-six radioisotopes have been identified, the most stable being 227Ac with a half-life of 21.772 years, 225Ac with a half-life of 10.0 days and 226Ac with a half-life of 29.37 hours. All remaining radioactive isotopes have half-lives that are less than 10 hours and the majority of them have half-lives shorter than one minute. The shortest-lived known isotope of actinium (with a half-life of 69 nanoseconds) decays through alpha decay. Actinium also has two known metastable states. The most significant isotopes for chemistry are 225Ac, 227Ac, and 228Ac. Purified 227Ac comes into equilibrium with its decay products after about half a year. It decays according to its 21.772-year half-life emitting mostly beta (98.62%) and some alpha particles (1.38%); the successive decay products are part of the actinium series. Owing to the low available amounts, the low energy of its beta particles (maximum 44.8 keV) and the low intensity of its alpha radiation, 227Ac is difficult to detect directly by its emission and is therefore traced via its decay products. The isotopes of actinium range in atomic weight from 205 u to 236 u. Actinium is found only in traces in uranium ores – one tonne of uranium in ore contains about 0.2 milligrams of 227Ac – and in thorium ores, which contain about 5 nanograms of 228Ac per one tonne of thorium.
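The decay figures quoted above (the 21.772-year half-life, the 98.62%/1.38% beta/alpha branching, and the roughly 0.2 mg of 227Ac per tonne of uranium) allow a couple of simple estimates. A minimal sketch using the relation A = λN and partial half-lives T/f; Avogadro's number and the year length are standard constants:

import math

HALF_LIFE_Y = 21.772
SECONDS_PER_YEAR = 3.156e7
N_A = 6.022e23
lam = math.log(2) / (HALF_LIFE_Y * SECONDS_PER_YEAR)   # decay constant, 1/s

# Activity of the ~0.2 mg of 227Ac contained in one tonne of natural uranium (A = λN)
atoms = 0.2e-3 / 227 * N_A
print(f"activity ≈ {lam * atoms:.2e} Bq")              # ≈ 5e8 Bq (~14 mCi)

# Partial half-lives of the two decay branches (T_partial = T_total / branching fraction)
for mode, fraction in {"beta": 0.9862, "alpha": 0.0138}.items():
    print(f"{mode}: partial half-life ≈ {HALF_LIFE_Y / fraction:.0f} years")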
The actinium isotope 227Ac is a transient member of the uranium-actinium series decay chain, which begins with the parent isotope 235U (or 239Pu) and ends with the stable lead isotope 207Pb. The isotope 228Ac is a transient member of the thorium series decay chain, which begins with the parent isotope 232Th and ends with the stable lead isotope 208Pb. Another actinium isotope (225Ac) is transiently present in the neptunium series decay chain, beginning with 237Np (or 233U) and ending with thallium (205Tl) and near-stable bismuth (209Bi); even though all primordial 237Np has decayed away, it is continuously produced by neutron knock-out reactions on natural 238U. The low natural concentration, and the close similarity of physical and chemical properties to those of lanthanum and other lanthanides, which are always abundant in actinium-bearing ores, render separation of actinium from the ore impractical, and complete separation was never achieved. Instead, actinium is prepared, in milligram amounts, by the neutron irradiation of 226 in a nuclear reactor. The reaction yield is about 2% of the radium weight. 227Ac can further capture neutrons resulting in small amounts of 228Ac. After the synthesis, actinium is separated from radium and from the products of decay and nuclear fusion, such as thorium, polonium, lead and bismuth. The extraction can be performed with thenoyltrifluoroacetone-benzene solution from an aqueous solution of the radiation products, and the selectivity to a certain element is achieved by adjusting the pH (to about 6.0 for actinium). An alternative procedure is anion exchange with an appropriate resin in nitric acid, which can result in a separation factor of 1,000,000 for radium and actinium vs. thorium in a two-stage process. Actinium can then be separated from radium, with a ratio of about 100, using a low cross-linking cation exchange resin and nitric acid as eluant. 225Ac was first produced artificially at the Institute for Transuranium Elements (ITU) in Germany using a cyclotron and at St George Hospital in Sydney using a linac in 2000. This rare isotope has potential applications in radiation therapy and is most efficiently produced by bombarding a radium-226 target with 20–30 MeV deuterium ions. This reaction also yields 226Ac which however decays with a half-life of 29 hours and thus does not contaminate 225Ac. Actinium metal has been prepared by the reduction of actinium fluoride with lithium vapor in vacuum at a temperature between 1100 and 1300 °C. Higher temperatures resulted in evaporation of the product and lower ones lead to an incomplete transformation. Lithium was chosen among other alkali metals because its fluoride is most volatile. Owing to its scarcity, high price and radioactivity, 227Ac currently has no significant industrial use, but 225Ac is currently being studied for use in cancer treatments such as targeted alpha therapies. 227Ac is highly radioactive and was therefore studied for use as an active element of radioisotope thermoelectric generators, for example in spacecraft. The oxide of 227Ac pressed with beryllium is also an efficient neutron source with the activity exceeding that of the standard americium-beryllium and radium-beryllium pairs. In all those applications, 227Ac (a beta source) is merely a progenitor which generates alpha-emitting isotopes upon its decay. 
Beryllium captures alpha particles and emits neutrons owing to its large cross-section for the (α,n) nuclear reaction, 9Be + α → 12C + n. The 227AcBe neutron sources can be applied in a neutron probe – a standard device for measuring the quantity of water present in soil, as well as moisture/density for quality control in highway construction. Such probes are also used in well logging applications, in neutron radiography, tomography and other radiochemical investigations. 225Ac is applied in medicine to produce 213Bi in a reusable generator or can be used alone as an agent for radiation therapy, in particular targeted alpha therapy (TAT). This isotope has a half-life of 10 days, making it much more suitable for radiation therapy than 213Bi (half-life 46 minutes). Additionally, 225Ac decays to nontoxic 209Bi rather than stable but toxic lead, which is the final product in the decay chains of several other candidate isotopes, namely 227Th, 228Th, and 230U. Not only 225Ac itself, but also its daughters, emit alpha particles which kill cancer cells in the body. The major difficulty with application of 225Ac was that intravenous injection of simple actinium complexes resulted in their accumulation in the bones and liver for a period of tens of years. As a result, after the cancer cells were quickly killed by alpha particles from 225Ac, the radiation from the actinium and its daughters might induce new mutations. To solve this problem, 225Ac was bound to a chelating agent, such as citrate, ethylenediaminetetraacetic acid (EDTA) or diethylene triamine pentaacetic acid (DTPA). This reduced actinium accumulation in the bones, but the excretion from the body remained slow. Much better results were obtained with chelating agents such as HEHA or DOTA coupled to trastuzumab, a monoclonal antibody that interferes with the HER2/neu receptor. The latter delivery combination was tested on mice and proved to be effective against leukemia, lymphoma, breast, ovarian, neuroblastoma and prostate cancers. The intermediate half-life of 227Ac (21.77 years) makes it a very convenient radioactive isotope for modeling the slow vertical mixing of oceanic waters. The associated processes cannot be studied with the required accuracy by direct measurements of current velocities (of the order of 50 meters per year). However, evaluation of the concentration depth-profiles for different isotopes allows estimating the mixing rates. The physics behind this method is as follows: oceanic waters contain homogeneously dispersed 235U. Its decay product, 231Pa, gradually precipitates to the bottom, so that its concentration first increases with depth and then stays nearly constant. 231Pa decays to 227Ac; however, the concentration of the latter isotope does not follow the 231Pa depth profile, but instead increases toward the sea bottom. This occurs because of the mixing processes which raise some additional 227Ac from the sea bottom. Thus analysis of both 231Pa and 227Ac depth profiles allows researchers to model the mixing behavior (see the sketch below). There are theoretical predictions that AcHx hydrides (in this case under very high pressure) are candidates for a near-room-temperature superconductor, as they are predicted to have a Tc significantly higher than that of H3S, possibly near 250 K. 227Ac is highly radioactive and experiments with it are carried out in a specially designed laboratory equipped with a tight glove box. When actinium trichloride is administered intravenously to rats, about 33% of actinium is deposited into the bones and 50% into the liver.
Its toxicity is comparable to, but slightly lower than that of americium and plutonium. For trace quantities, fume hoods with good aeration suffice; for gram amounts, hot cells with shielding from the intense gamma radiation emitted by 227Ac are necessary.
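The ocean-mixing application described above rests on comparing a measured 227Ac depth profile with a mixing model. A minimal sketch, assuming the simplest steady-state balance between vertical turbulent diffusion and radioactive decay; real studies use more complete models, and the profile scale below is a hypothetical example:

import math

# Steady-state diffusion-decay balance for excess 227Ac above the sea floor:
# K * C''(z) = λ * C(z)  =>  C(z) = C_bottom * exp(-z * sqrt(λ / K)),
# so the e-folding length L of a measured profile implies K = λ * L².
HALF_LIFE_Y = 21.772
lam = math.log(2) / (HALF_LIFE_Y * 3.156e7)            # decay constant, 1/s

def implied_diffusivity(e_folding_length_m):
    """Vertical eddy diffusivity (m^2/s) implied by the profile's e-folding length."""
    return lam * e_folding_length_m ** 2

print(f"K ≈ {implied_diffusivity(1000):.1e} m^2/s")    # ~1e-3 m^2/s for a 1000 m scale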
https://en.wikipedia.org/wiki?curid=899
Americium Americium is a synthetic radioactive chemical element with the symbol Am and atomic number 95. It is a transuranic member of the actinide series, in the periodic table located under the lanthanide element europium, and thus by analogy was named after the Americas. Americium was first produced in 1944 by the group of Glenn T. Seaborg from Berkeley, California, at the Metallurgical Laboratory of the University of Chicago, a part of the Manhattan Project. Although it is the third element in the transuranic series, it was discovered fourth, after the heavier curium. The discovery was kept secret and only released to the public in November 1945. Most americium is produced by uranium or plutonium being bombarded with neutrons in nuclear reactors – one tonne of spent nuclear fuel contains about 100 grams of americium. It is widely used in commercial ionization chamber smoke detectors, as well as in neutron sources and industrial gauges. Several unusual applications, such as nuclear batteries or fuel for space ships with nuclear propulsion, have been proposed for the isotope 242mAm, but they are as yet hindered by the scarcity and high price of this nuclear isomer. Americium is a relatively soft radioactive metal with silvery appearance. Its common isotopes are 241Am and 243Am. In chemical compounds, americium usually assumes the oxidation state +3, especially in solutions. Several other oxidation states are known, ranging from +2 to +7, and can be identified by their characteristic optical absorption spectra. The crystal lattice of solid americium and its compounds contain small intrinsic radiogenic defects, due to metamictization induced by self-irradiation with alpha particles, which accumulates with time; this can cause a drift of some material properties over time, more noticeable in older samples. Although americium was likely produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in late autumn 1944, at the University of California, Berkeley, by Glenn T. Seaborg, Leon O. Morgan, Ralph A. James, and Albert Ghiorso. They used a 60-inch cyclotron at the University of California, Berkeley. The element was chemically identified at the Metallurgical Laboratory (now Argonne National Laboratory) of the University of Chicago. Following the lighter neptunium, plutonium, and heavier curium, americium was the fourth transuranium element to be discovered. At the time, the periodic table had been restructured by Seaborg to its present layout, containing the actinide row below the lanthanide one. This led to americium being located right below its twin lanthanide element europium; it was thus by analogy named after the Americas: "The name americium (after the Americas) and the symbol Am are suggested for the element on the basis of its position as the sixth member of the actinide rare-earth series, analogous to europium, Eu, of the lanthanide series." The new element was isolated from its oxides in a complex, multi-step process. First plutonium-239 nitrate (239PuNO3) solution was coated on a platinum foil of about 0.5 cm2 area, the solution was evaporated and the residue was converted into plutonium dioxide (PuO2) by calcining. After cyclotron irradiation, the coating was dissolved with nitric acid, and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The residue was dissolved in perchloric acid. Further separation was carried out by ion exchange, yielding a certain isotope of curium. 
The separation of curium and americium was so painstaking that those elements were initially referred to by the Berkeley group as "pandemonium" (from Greek for "all demons" or "hell") and "delirium" (from Latin for "madness"). Initial experiments yielded four americium isotopes: 241Am, 242Am, 239Am and 238Am. Americium-241 was directly obtained from plutonium upon absorption of two neutrons. It decays by emission of an α-particle to 237Np; the half-life of this decay was at first determined inaccurately, and was later corrected to 432.2 years. The second isotope 242Am was produced upon neutron bombardment of the already-created 241Am. Upon rapid β-decay, 242Am converts into the isotope of curium 242Cm (which had been discovered previously). The half-life of this decay was initially determined at 17 hours, which was close to the presently accepted value of 16.02 h. The discovery of americium and curium in 1944 was closely related to the Manhattan Project; the results were confidential and declassified only in 1945. Seaborg leaked the synthesis of the elements 95 and 96 on the U.S. radio show for children "Quiz Kids" five days before the official presentation at an American Chemical Society meeting on 11 November 1945, when one of the listeners asked whether any new transuranium element beside plutonium and neptunium had been discovered during the war. After the discovery of americium isotopes 241Am and 242Am, their production and compounds were patented, listing only Seaborg as the inventor. The initial americium samples weighed a few micrograms; they were barely visible and were identified by their radioactivity. The first substantial amounts of metallic americium weighing 40–200 micrograms were not prepared until 1951 by reduction of americium(III) fluoride with barium metal in high vacuum at 1100 °C. The longest-lived and most common isotopes of americium, 241Am and 243Am, have half-lives of 432.2 and 7,370 years, respectively. Therefore, any primordial americium (americium that was present on Earth during its formation) should have decayed by now. Trace amounts of americium probably occur naturally in uranium minerals as a result of nuclear reactions, though this has not been confirmed. Existing americium is concentrated in the areas used for the atmospheric nuclear weapons tests conducted between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster. For example, the analysis of the debris at the testing site of the first U.S. hydrogen bomb, Ivy Mike (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides including americium; but due to military secrecy, this result was not published until later, in 1956. Trinitite, the glassy residue left on the desert floor near Alamogordo, New Mexico, after the plutonium-based Trinity nuclear bomb test on 16 July 1945, contains traces of americium-241. Elevated levels of americium were also detected at the crash site of a US Boeing B-52 bomber aircraft, which carried four hydrogen bombs, in 1968 in Greenland. In other regions, the average radioactivity of surface soil due to residual americium is only about 0.01 picocuries/g (0.37 mBq/g). Atmospheric americium compounds are poorly soluble in common solvents and mostly adhere to soil particles. Soil analysis revealed about 1,900 times higher concentration of americium inside sandy soil particles than in the water present in the soil pores; an even higher ratio was measured in loam soils.
Americium is produced mostly artificially in small quantities, for research purposes. A tonne of spent nuclear fuel contains about 100 grams of various americium isotopes, mostly 241Am and 243Am. Their prolonged radioactivity is undesirable for disposal, and therefore americium, together with other long-lived actinides, must be neutralized. The associated procedure may involve several steps, where americium is first separated and then converted by neutron bombardment in special reactors to short-lived nuclides. This procedure is well known as nuclear transmutation, but it is still being developed for americium. The transuranic elements from americium to fermium occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Americium has been produced in small quantities in nuclear reactors for decades, and kilograms of its 241Am and 243Am isotopes have been accumulated by now. Nevertheless, since it was first offered for sale in 1962, its price, about 1,500 USD per gram of 241Am, remains almost unchanged owing to the very complex separation procedure. The heavier isotope 243Am is produced in much smaller amounts; it is thus more difficult to separate, resulting in a higher cost on the order of 100,000–160,000 USD/g. Americium is not synthesized directly from uranium – the most common reactor material – but from the plutonium isotope 239Pu. The latter needs to be produced first, by neutron capture on 238U followed by two successive β-decays (238U → 239U → 239Np → 239Pu). The capture of two further neutrons by 239Pu (so-called (n,γ) reactions), followed by a β-decay, results in 241Am. The plutonium present in spent nuclear fuel contains about 12% of 241Pu. Because it spontaneously converts to 241Am, 241Pu can be extracted and may be used to generate further 241Am. However, this process is rather slow: half of the original amount of 241Pu decays to 241Am after about 15 years, and the 241Am amount reaches a maximum after 70 years. The obtained 241Am can be used for generating heavier americium isotopes by further neutron capture inside a nuclear reactor. In a light water reactor (LWR), 79% of 241Am converts to 242Am and 10% to its nuclear isomer 242mAm. Americium-242 has a half-life of only 16 hours, which makes its further conversion to 243Am extremely inefficient. The latter isotope is produced instead in a process where 239Pu captures four neutrons under high neutron flux. Most synthesis routes yield a mixture of different actinide isotopes in oxide forms, from which isotopes of americium can be separated. In a typical procedure, the spent reactor fuel (e.g. MOX fuel) is dissolved in nitric acid, and the bulk of uranium and plutonium is removed using a PUREX-type extraction (Plutonium–URanium EXtraction) with tributyl phosphate in a hydrocarbon. The lanthanides and remaining actinides are then separated from the aqueous residue (raffinate) by a diamide-based extraction, to give, after stripping, a mixture of trivalent actinides and lanthanides. Americium compounds are then selectively extracted using multi-step chromatographic and centrifugation techniques with an appropriate reagent. A large amount of work has been done on the solvent extraction of americium. For example, a 2003 EU-funded project codenamed "EUROPART" studied triazines and other compounds as potential extraction agents. A "bis"-triazinyl bipyridine complex was proposed in 2009 as such a reagent, being highly selective to americium (and curium).
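The ingrowth of 241Am from 241Pu described above follows the standard two-member decay-chain (Bateman) solution. A minimal sketch reproducing the roughly 70-year maximum quoted in the text; the 241Pu half-life of about 14.3 years is an assumed literature value, consistent with the statement that half of it decays in about 15 years:

import math

T_PU241 = 14.3    # years, assumed literature value (not given in the text)
T_AM241 = 432.2   # years, quoted in the text
l1 = math.log(2) / T_PU241
l2 = math.log(2) / T_AM241

def am241_fraction(t_years):
    """Fraction of the initial 241Pu atoms present as 241Am after t years."""
    return l1 / (l2 - l1) * (math.exp(-l1 * t_years) - math.exp(-l2 * t_years))

t_max = math.log(l1 / l2) / (l1 - l2)
print(f"241Am content peaks after ≈ {t_max:.0f} years")        # ≈ 73 years
print(f"fraction of initial 241Pu at the peak ≈ {am241_fraction(t_max):.2f}")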
Separation of americium from the highly similar curium can be achieved by treating a slurry of their hydroxides in aqueous sodium bicarbonate with ozone, at elevated temperatures. Both Am and Cm are mostly present in solutions in the +3 valence state; whereas curium remains unchanged, americium oxidizes to soluble Am(IV) complexes which can be washed away. Metallic americium is obtained by reduction from its compounds. Americium(III) fluoride was first used for this purpose. The reaction was conducted using elemental barium as reducing agent in a water- and oxygen-free environment inside an apparatus made of tantalum and tungsten. An alternative is the reduction of americium dioxide by metallic lanthanum or thorium: In the periodic table, americium is located to the right of plutonium, to the left of curium, and below the lanthanide europium, with which it shares many similarities in physical and chemical properties. Americium is a highly radioactive element. When freshly prepared, it has a silvery-white metallic lustre, but then slowly tarnishes in air. With a density of 12 g/cm3, americium is less dense than both curium (13.52 g/cm3) and plutonium (19.8 g/cm3); but has a higher density than europium (5.264 g/cm3)—mostly because of its higher atomic mass. Americium is relatively soft and easily deformable and has a significantly lower bulk modulus than the actinides before it: Th, Pa, U, Np and Pu. Its melting point of 1173 °C is significantly higher than that of plutonium (639 °C) and europium (826 °C), but lower than for curium (1340 °C). At ambient conditions, americium is present in its most stable α form which has a hexagonal crystal symmetry, and a space group P63/mmc with cell parameters "a" = 346.8 pm and "c" = 1124 pm, and four atoms per unit cell. The crystal consists of a double-hexagonal close packing with the layer sequence ABAC and so is isotypic with α-lanthanum and several actinides such as α-curium. The crystal structure of americium changes with pressure and temperature. When compressed at room temperature to 5 GPa, α-Am transforms to the β modification, which has a face-centered cubic ("fcc") symmetry, space group Fmm and lattice constant "a" = 489 pm. This "fcc" structure is equivalent to the closest packing with the sequence ABC. Upon further compression to 23 GPa, americium transforms to an orthorhombic γ-Am structure similar to that of α-uranium. There are no further transitions observed up to 52 GPa, except for an appearance of a monoclinic phase at pressures between 10 and 15 GPa. There is no consistency on the status of this phase in the literature, which also sometimes lists the α, β and γ phases as I, II and III. The β-γ transition is accompanied by a 6% decrease in the crystal volume; although theory also predicts a significant volume change for the α-β transition, it is not observed experimentally. The pressure of the α-β transition decreases with increasing temperature, and when α-americium is heated at ambient pressure, at 770 °C it changes into an "fcc" phase which is different from β-Am, and at 1075 °C it converts to a body-centered cubic structure. The pressure-temperature phase diagram of americium is thus rather similar to those of lanthanum, praseodymium and neodymium. As with many other actinides, self-damage of the crystal structure due to alpha-particle irradiation is intrinsic to americium. 
It is especially noticeable at low temperatures, where the mobility of the produced structure defects is relatively low, and shows up as a broadening of X-ray diffraction peaks. This effect makes some properties of americium, such as its electrical resistivity, somewhat uncertain. For example, for americium-241 the resistivity at 4.2 K increases with time from about 2 µOhm·cm to 10 µOhm·cm after 40 hours, and saturates at about 16 µOhm·cm after 140 hours. This effect is less pronounced at room temperature, due to annihilation of radiation defects; heating a sample that was kept for hours at low temperatures back to room temperature also restores its resistivity. In fresh samples, the resistivity gradually increases with temperature from about 2 µOhm·cm at liquid-helium temperature to 69 µOhm·cm at room temperature; this behavior is similar to that of neptunium, uranium, thorium and protactinium, but is different from plutonium and curium, which show a rapid rise up to 60 K followed by saturation. The room temperature value for americium is lower than that of neptunium, plutonium and curium, but higher than for uranium, thorium and protactinium. Americium is paramagnetic in a wide temperature range, from that of liquid helium to room temperature and above. This behavior is markedly different from that of its neighbor curium, which exhibits an antiferromagnetic transition at 52 K. The thermal expansion of americium is slightly anisotropic, with somewhat different coefficients along the shorter "a" axis and the longer "c" hexagonal axis. The enthalpy of dissolution of americium metal in hydrochloric acid at standard conditions is , from which the standard enthalpy change of formation (Δf"H"°) of aqueous Am3+ ion is . The standard potential Am3+/Am0 is . Americium metal readily reacts with oxygen and dissolves in aqueous acids. The most stable oxidation state for americium is +3. The chemistry of americium(III) has many similarities to the chemistry of lanthanide(III) compounds. For example, trivalent americium forms insoluble fluoride, oxalate, iodate, hydroxide, phosphate and other salts. Compounds of americium in oxidation states 2, 4, 5 and 6 have also been studied. This is the widest range that has been observed with actinide elements. The color of americium compounds in aqueous solution is as follows: Am3+ (yellow-reddish), Am4+ (yellow-reddish), AmV (yellow), AmVI (brown) and AmVII (dark green). The absorption spectra have sharp peaks, due to "f"-"f" transitions, in the visible and near-infrared regions. Typically, Am(III) has absorption maxima at ca. 504 and 811 nm, Am(V) at ca. 514 and 715 nm, and Am(VI) at ca. 666 and 992 nm. Americium compounds with oxidation state +4 and higher are strong oxidizing agents, comparable in strength to the permanganate ion (MnO4−) in acidic solutions. Whereas the Am4+ ions are unstable in solutions and readily convert to Am3+, compounds such as americium dioxide (AmO2) and americium(IV) fluoride (AmF4) are stable in the solid state. The pentavalent oxidation state of americium was first observed in 1951. In acidic aqueous solution the AmO2+ ion is unstable with respect to disproportionation; a typical reaction is 3 AmO2+ + 4 H+ → 2 AmO22+ + Am3+ + 2 H2O. The chemistry of Am(V) and Am(VI) is comparable to the chemistry of uranium in those oxidation states. In particular, compounds like Li3AmO4 and Li6AmO6 are comparable to uranates and the ion AmO22+ is comparable to the uranyl ion, UO22+. Such compounds can be prepared by oxidation of Am(III) in dilute nitric acid with ammonium persulfate. 
Other oxidising agents that have been used include silver(I) oxide, ozone and sodium persulfate. Three americium oxides are known, with the oxidation states +2 (AmO), +3 (Am2O3) and +4 (AmO2). Americium(II) oxide was prepared in minute amounts and has not been characterized in detail. Americium(III) oxide is a red-brown solid with a melting point of 2205 °C. Americium(IV) oxide is the main form of solid americium which is used in nearly all its applications. Like most other actinide dioxides, it is a black solid with a cubic (fluorite) crystal structure. The oxalate of americium(III), vacuum dried at room temperature, has the chemical formula Am2(C2O4)3·7H2O. Upon heating in vacuum, it loses water at 240 °C and starts decomposing into AmO2 at 300 °C; the decomposition is complete at about 470 °C. The initial oxalate dissolves in nitric acid with a maximum solubility of 0.25 g/L. Halides of americium are known for the oxidation states +2, +3 and +4, of which +3 is the most stable, especially in solutions. Reduction of Am(III) compounds with sodium amalgam yields Am(II) salts – the black halides AmCl2, AmBr2 and AmI2. They are very sensitive to oxygen and oxidize in water, releasing hydrogen and converting back to the Am(III) state. Specific lattice constants have been reported for these halides. Americium(III) fluoride (AmF3) is poorly soluble and precipitates upon reaction of Am3+ and fluoride ions in weak acidic solutions: Am3+ + 3F− → AmF3. The tetravalent americium(IV) fluoride (AmF4) is obtained by reacting solid americium(III) fluoride with molecular fluorine: 2AmF3 + F2 → 2AmF4. Another known solid fluoride of tetravalent americium is KAmF5. Tetravalent americium has also been observed in the aqueous phase. For this purpose, black Am(OH)4 was dissolved in 15-M NH4F at an americium concentration of 0.01 M. The resulting reddish solution had a characteristic optical absorption spectrum which is similar to that of AmF4 but differed from other oxidation states of americium. Heating the Am(IV) solution to 90 °C did not result in its disproportionation or reduction; however, a slow reduction to Am(III) was observed and assigned to self-irradiation of americium by alpha particles. Most americium(III) halides form hexagonal crystals with slight variations in color and exact structure between the halogens. Thus, the chloride (AmCl3) is reddish and has a structure isotypic to uranium(III) chloride (space group P63/m) and a melting point of 715 °C. The fluoride is isotypic to LaF3 (space group P63/mmc) and the iodide to BiI3 (space group R-3). The bromide is an exception with the orthorhombic PuBr3-type structure and space group Cmcm. Crystals of americium(III) chloride hexahydrate (AmCl3·6H2O) can be prepared by dissolving americium dioxide in hydrochloric acid and evaporating the liquid. Those crystals are hygroscopic and have a yellow-reddish color and a monoclinic crystal structure. Oxyhalides of americium in the form AmVIO2X2, AmVO2X, AmIVOX2 and AmIIIOX can be obtained by reacting the corresponding americium halide with oxygen or Sb2O3, and AmOCl can also be produced by vapor phase hydrolysis: AmCl3 + H2O → AmOCl + 2HCl. The known chalcogenides of americium include the sulfide AmS2, selenides AmSe2 and Am3Se4, and tellurides Am2Te3 and AmTe2. The pnictides of americium (243Am) of the AmX type are known for the elements phosphorus, arsenic, antimony and bismuth. They crystallize in the rock-salt lattice. 
Americium monosilicide (AmSi) and "disilicide" (nominally AmSix with 1.87 < x < 2.0) were obtained by reduction of americium(III) fluoride with elemental silicon in vacuum at 1050 °C (AmSi) and 1150−1200 °C (AmSix). AmSi is a black solid isomorphic with LaSi; it has orthorhombic crystal symmetry. AmSix has a bright silvery lustre and a tetragonal crystal lattice (space group "I"41/amd); it is isomorphic with PuSi2 and ThSi2. Borides of americium include AmB4 and AmB6. The tetraboride can be obtained by heating an oxide or halide of americium with magnesium diboride in vacuum or inert atmosphere. Analogous to uranocene, americium forms the organometallic compound amerocene with two cyclooctatetraene ligands, with the chemical formula (η8-C8H8)2Am. A cyclopentadienyl complex is also known that is likely to be stoichiometrically AmCp3. Formation of the complexes of the type Am(n-C3H7-BTP)3, where BTP stands for 2,6-di(1,2,4-triazin-3-yl)pyridine, in solutions containing n-C3H7-BTP and Am3+ ions has been confirmed by EXAFS. Some of these BTP-type complexes selectively interact with americium and are therefore useful in its selective separation from lanthanides and other actinides. Americium is an artificial element of recent origin, and thus does not have a biological requirement. It is harmful to life. It has been proposed to use bacteria for removal of americium and other heavy metals from rivers and streams. For example, Enterobacteriaceae of the genus "Citrobacter" precipitate americium ions from aqueous solutions, binding them into a metal-phosphate complex at their cell walls. Several studies have been reported on the biosorption and bioaccumulation of americium by bacteria and fungi. The isotope 242mAm (half-life 141 years) has the largest cross section for absorption of thermal neutrons (5,700 barns), which results in a small critical mass for a sustained nuclear chain reaction. The critical mass for a bare 242mAm sphere is about 9–14 kg (the uncertainty results from insufficient knowledge of its material properties). It can be lowered to 3–5 kg with a metal reflector and should become even smaller with a water reflector. Such a small critical mass is favorable for portable nuclear weapons, but those based on 242mAm are not known yet, probably because of its scarcity and high price. The critical masses of two other readily available isotopes, 241Am and 243Am, are relatively high – 57.6 to 75.6 kg for 241Am and 209 kg for 243Am. Scarcity and high price still hinder the application of americium as a nuclear fuel in nuclear reactors. There are proposals for very compact 10-kW high-flux reactors using as little as 20 grams of 242mAm. Such low-power reactors would be relatively safe to use as neutron sources for radiation therapy in hospitals. About 19 isotopes and 8 nuclear isomers are known for americium. There are two long-lived alpha-emitters; 243Am has a half-life of 7,370 years and is the most stable isotope, and 241Am has a half-life of 432.2 years. The most stable nuclear isomer is 242m1Am; it has a long half-life of 141 years. The half-lives of other isotopes and isomers range from 0.64 microseconds for 245m1Am to 50.8 hours for 240Am. As with most other actinides, the isotopes of americium with an odd number of neutrons have a relatively high rate of nuclear fission and a low critical mass. Americium-241 decays to 237Np emitting alpha particles of 5 different energies, mostly at 5.486 MeV (85.2%) and 5.443 MeV (12.8%). 
Because many of the resulting states are metastable, they also emit gamma rays with discrete energies between 26.3 and 158.5 keV. Americium-242 is a short-lived isotope with a half-life of 16.02 h. It mostly (82.7%) converts by β-decay to 242Cm, but also by electron capture to 242Pu (17.3%). Both 242Cm and 242Pu transform via nearly the same decay chain through 238Pu down to 234U. Nearly all (99.541%) of 242m1Am decays by internal conversion to 242Am and the remaining 0.459% by α-decay to 238Np. The latter subsequently decays to 238Pu and then to 234U. Americium-243 transforms by α-emission into 239Np, which converts by β-decay to 239Pu, and the 239Pu changes into 235U by emitting an α-particle. Americium is used in the most common type of household smoke detector, which uses 241Am in the form of americium dioxide as its source of ionizing radiation. This isotope is preferred over 226Ra because it emits 5 times more alpha particles and relatively little harmful gamma radiation. The amount of americium in a typical new smoke detector is 1 microcurie (37 kBq) or 0.29 microgram. This amount declines slowly as the americium decays into neptunium-237, a different transuranic element with a much longer half-life (about 2.14 million years). With its half-life of 432.2 years, the americium in a smoke detector includes about 3% neptunium after 19 years, and about 5% after 32 years. The radiation passes through an ionization chamber, an air-filled space between two electrodes, and permits a small, constant current between the electrodes. Any smoke that enters the chamber absorbs the alpha particles, which reduces the ionization and affects this current, triggering the alarm. Compared to the alternative optical smoke detector, the ionization smoke detector is cheaper and can detect particles which are too small to produce significant light scattering; however, it is more prone to false alarms. As 241Am has a roughly similar half-life to 238Pu (432.2 years vs. 87 years), it has been proposed as an active element of radioisotope thermoelectric generators, for example in spacecraft. Although americium produces less heat and electricity – the power yield is 114.7 mW/g for 241Am and 6.31 mW/g for 243Am (cf. 390 mW/g for 238Pu) – and its radiation poses more threat to humans owing to neutron emission, the European Space Agency is considering using americium for its space probes. Another proposed space-related application of americium is as a fuel for spaceships with nuclear propulsion. It relies on the very high rate of nuclear fission of 242mAm, which can be maintained even in a micrometer-thick foil. The small thickness avoids the problem of self-absorption of emitted radiation. This problem is pertinent to uranium or plutonium rods, in which only surface layers provide alpha particles. The fission products of 242mAm can either directly propel the spaceship or they can heat a thrusting gas. They can also transfer their energy to a fluid and generate electricity through a magnetohydrodynamic generator. One more proposal which utilizes the high nuclear fission rate of 242mAm is a nuclear battery. Its design relies not on the energy of the alpha particles emitted by americium, but on their charge; that is, the americium acts as a self-sustaining "cathode". A single 3.2 kg 242mAm charge of such a battery could provide about 140 kW of power over a period of 80 days. 
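The smoke-detector figures quoted above can be reproduced with elementary decay arithmetic. The Python sketch below is an illustrative calculation added here, not part of the original article; it uses the standard Avogadro constant and a molar mass of 241 g/mol to convert the 1-microcurie activity into a mass of 241Am, and simple exponential decay to estimate the neptunium fraction after 19 and 32 years.

import math

HALF_LIFE_S = 432.2 * 365.25 * 24 * 3600   # half-life of 241Am in seconds
LAMBDA = math.log(2) / HALF_LIFE_S         # decay constant in 1/s
ACTIVITY_BQ = 37_000                       # 1 microcurie = 37 kBq
MOLAR_MASS = 241                           # g/mol for 241Am
AVOGADRO = 6.022e23

# Number of 241Am atoms needed for this activity, and their mass.
atoms = ACTIVITY_BQ / LAMBDA
mass_ug = atoms * MOLAR_MASS / AVOGADRO * 1e6
print(round(mass_ug, 2))                   # about 0.29 microgram

# Fraction of the original americium that has decayed to 237Np after t years.
def np_fraction(years):
    return 1 - math.exp(-math.log(2) * years / 432.2)

print(round(np_fraction(19) * 100, 1))     # about 3.0 %
print(round(np_fraction(32) * 100, 1))     # about 5.0 %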
Even with all of its potential benefits, the current applications of 242mAm are as yet hindered by the scarcity and high price of this particular nuclear isomer. In 2019, researchers at the UK National Nuclear Laboratory and the University of Leicester demonstrated the use of heat generated by americium to illuminate a small light bulb. This technology could lead to systems to power missions with durations up to 400 years into interstellar space, where solar panels do not function. The oxide of 241Am pressed with beryllium is an efficient neutron source. Here americium acts as the alpha source, and beryllium produces neutrons owing to its large cross-section for the (α,n) nuclear reaction: 9Be + α → 12C + n. The most widespread use of 241AmBe neutron sources is a neutron probe – a device used to measure the quantity of water present in soil, as well as moisture/density for quality control in highway construction. 241Am neutron sources are also used in well logging applications, as well as in neutron radiography, tomography and other radiochemical investigations. Americium is a starting material for the production of other transuranic elements and transactinides – for example, 82.7% of 242Am decays to 242Cm and 17.3% to 242Pu. In the nuclear reactor, 242Am is also up-converted by neutron capture to 243Am and 244Am, which transforms by β-decay to 244Cm. Irradiation of 241Am by 12C or 22Ne ions yields the isotopes 247Es (einsteinium) or 260Db (dubnium), respectively. Furthermore, the element berkelium (as the 243Bk isotope) was first intentionally produced and identified by bombarding 241Am with alpha particles, in 1949, by the same Berkeley group, using the same 60-inch cyclotron. Similarly, nobelium was produced at the Joint Institute for Nuclear Research, Dubna, Russia, in 1965 in several reactions, one of which included irradiation of 243Am with 15N ions. In addition, one of the synthesis reactions for lawrencium, discovered by scientists at Berkeley and Dubna, included bombardment of 243Am with 18O. Americium-241 has been used as a portable source of both gamma rays and alpha particles for a number of medical and industrial uses. The 59.5409 keV gamma ray emissions from 241Am in such sources can be used for indirect analysis of materials in radiography and X-ray fluorescence spectroscopy, as well as for quality control in fixed nuclear density gauges and nuclear densometers. For example, the element has been employed to gauge glass thickness to help create flat glass. Americium-241 is also suitable for calibration of gamma-ray spectrometers in the low-energy range, since its spectrum consists of nearly a single peak and a negligible Compton continuum (at least three orders of magnitude lower in intensity). Americium-241 gamma rays were also used to provide passive diagnosis of thyroid function. This medical application is, however, obsolete. As a highly radioactive element, americium and its compounds must be handled only in an appropriate laboratory under special arrangements. Although most americium isotopes predominantly emit alpha particles, which can be blocked by thin layers of common materials, many of the daughter products emit gamma rays and neutrons which have a long penetration depth. If consumed, most of the americium is excreted within a few days, with only 0.05% absorbed in the blood, of which roughly 45% goes to the liver and 45% to the bones, and the remaining 10% is excreted. The uptake to the liver depends on the individual and increases with age. 
In the bones, americium is first deposited over cortical and trabecular surfaces and slowly redistributes over the bone with time. The biological half-life of 241Am is 50 years in the bones and 20 years in the liver, whereas in the gonads (testicles and ovaries) it remains permanently; in all these organs, americium promotes the formation of cancer cells as a result of its radioactivity. Americium often enters landfills from discarded smoke detectors. The rules associated with the disposal of smoke detectors are relaxed in most jurisdictions. In 1994, 17-year-old David Hahn extracted the americium from about 100 smoke detectors in an attempt to build a breeder nuclear reactor. There have been a few cases of exposure to americium, the worst being that of chemical operations technician Harold McCluskey, who at the age of 64 was exposed to 500 times the occupational standard for americium-241 as a result of an explosion in his lab. McCluskey died at the age of 75 of an unrelated pre-existing disease.
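The biological half-lives quoted above can be combined with the physical half-life of 241Am into an effective half-life using the standard health-physics relation 1/T_eff = 1/T_bio + 1/T_phys; this is a textbook relation rather than a figure from the article, and the Python lines below are only an illustrative calculation.

T_PHYS = 432.2   # physical half-life of 241Am in years

def effective_half_life(t_bio, t_phys=T_PHYS):
    # Effective half-life when biological elimination and radioactive decay act together.
    return 1 / (1 / t_bio + 1 / t_phys)

print(round(effective_half_life(50), 1))   # bones: about 44.8 years
print(round(effective_half_life(20), 1))   # liver: about 19.1 years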
https://en.wikipedia.org/wiki?curid=900
Astatine Astatine is a chemical element with the symbol At and atomic number 85. It is the rarest naturally occurring element in the Earth's crust, occurring only as the decay product of various heavier elements. All of astatine's isotopes are short-lived; the most stable is astatine-210, with a half-life of 8.1 hours. A sample of the pure element has never been assembled, because any macroscopic specimen would be immediately vaporized by the heat of its own radioactivity. The bulk properties of astatine are not known with any certainty. Many of them have been estimated based on the element's position on the periodic table as a heavier analog of iodine, and a member of the halogens (the group of elements including fluorine, chlorine, bromine, and iodine). Astatine is likely to have a dark or lustrous appearance and may be a semiconductor or possibly a metal; it probably has a higher melting point than that of iodine. Chemically, several anionic species of astatine are known and most of its compounds resemble those of iodine. It also shows some metallic behavior, including being able to form a stable monatomic cation in aqueous solution (unlike the lighter halogens). The first synthesis of the element was in 1940 by Dale R. Corson, Kenneth Ross MacKenzie, and Emilio G. Segrè at the University of California, Berkeley, who named it from the Greek "astatos" (ἄστατος), meaning "unstable". Four isotopes of astatine were subsequently found to be naturally occurring, although much less than one gram is present at any given time in the Earth's crust. Neither the most stable isotope astatine-210, nor the medically useful astatine-211, occur naturally; they can only be produced synthetically, usually by bombarding bismuth-209 with alpha particles. Astatine is an extremely radioactive element; all its isotopes have half-lives of 8.1 hours or less, decaying into other astatine isotopes, bismuth, polonium, or radon. Most of its isotopes are very unstable, with half-lives of one second or less. Of the first 101 elements in the periodic table, only francium is less stable, and all the astatine isotopes more stable than francium are in any case synthetic and do not occur in nature. The bulk properties of astatine are not known with any certainty. Research is limited by its short half-life, which prevents the creation of weighable quantities. A visible piece of astatine would immediately vaporize itself because of the heat generated by its intense radioactivity. It remains to be seen if, with sufficient cooling, a macroscopic quantity of astatine could be deposited as a thin film. Astatine is usually classified as either a nonmetal or a metalloid; metal formation has also been predicted. Most of the physical properties of astatine have been estimated (by interpolation or extrapolation), using theoretically or empirically derived methods. For example, halogens get darker with increasing atomic weight – fluorine is nearly colorless, chlorine is yellow-green, bromine is red-brown, and iodine is dark gray/violet. Astatine is sometimes described as probably being a black solid (assuming it follows this trend), or as having a metallic appearance (if it is a metalloid or a metal). The melting and boiling points of astatine are also expected to follow the trend seen in the halogen series, increasing with atomic number. On this basis they are estimated to be , respectively. 
Some experimental evidence suggests astatine may have lower melting and boiling points than those implied by the halogen trend; a 1982 chromatographic estimation suggested a boiling point for elemental astatine of 503±3 K (about 230±3 °C or 445±5 °F). Astatine sublimes less readily than does iodine, having a lower vapor pressure. Even so, half of a given quantity of astatine will vaporize in approximately an hour if put on a clean glass surface at room temperature. The absorption spectrum of astatine in the middle ultraviolet region has lines at 224.401 and 216.225 nm, suggestive of 6p to 7s transitions. The structure of solid astatine is unknown. As an analogue of iodine it may have an orthorhombic crystalline structure composed of diatomic astatine molecules, and be a semiconductor (with a band gap of 0.7 eV). Alternatively, if condensed astatine forms a metallic phase, as has been predicted, it may have a monatomic face-centered cubic structure; in this structure it may well be a superconductor, like the similar high-pressure phase of iodine. Evidence for (or against) the existence of diatomic astatine (At2) is sparse and inconclusive. Some sources state that it does not exist, or at least has never been observed, while other sources assert or imply its existence. Despite this controversy, many properties of diatomic astatine have been predicted; for example, its bond length would be , dissociation energy , and heat of vaporization (∆Hvap) 54.39 kJ/mol. The latter figure means that astatine may (at least) be metallic in the liquid state on the basis that elements with a heat of vaporization greater than ~42 kJ/mol are metallic when liquid; diatomic iodine, with a value of 41.71 kJ/mol, falls just short of the threshold figure. The chemistry of astatine is "clouded by the extremely low concentrations at which astatine experiments have been conducted, and the possibility of reactions with impurities, walls and filters, or radioactivity by-products, and other unwanted nano-scale interactions". Many of its apparent chemical properties have been observed using tracer studies on extremely dilute astatine solutions, typically less than 10^−10 mol/L. Some properties, such as anion formation, align with other halogens. Astatine has some metallic characteristics as well, such as plating onto a cathode, coprecipitating with metal sulfides in hydrochloric acid, and forming a stable monatomic cation in aqueous solution. It forms complexes with EDTA, a metal chelating agent, and is capable of acting as a metal in antibody radiolabeling; in some respects astatine in the +1 state is akin to silver in the same state. Most of the organic chemistry of astatine is, however, analogous to that of iodine. Astatine has an electronegativity of 2.2 on the revised Pauling scale – lower than that of iodine (2.66) and the same as hydrogen. In hydrogen astatide (HAt), the negative charge is predicted to be on the hydrogen atom, implying that this compound could be referred to as astatine hydride according to certain nomenclatures. That would be consistent with the electronegativity of astatine on the Allred–Rochow scale (1.9) being less than that of hydrogen (2.2). However, official IUPAC stoichiometric nomenclature is based on an idealized convention of determining the relative electronegativities of the elements by the mere virtue of their position within the periodic table. 
According to this convention, astatine is handled as though it is more electronegative than hydrogen, irrespective of its true electronegativity. The electron affinity of astatine, at 233 kJ mol-1, is 21% less than that of iodine. In comparison, the value of Cl (349) is 6.4% higher than F (328); Br (325) is 6.9% less than Cl; and I (295) is 9.2% less than Br. The marked reduction for At was predicted as being due to spin–orbit interactions. Less reactive than iodine, astatine is the least reactive of the halogens, although its compounds have been synthesized in microscopic amounts and studied as intensively as possible before their radioactive disintegration. The reactions involved have been typically tested with dilute solutions of astatine mixed with larger amounts of iodine. Acting as a carrier, the iodine ensures there is sufficient material for laboratory techniques (such as filtration and precipitation) to work. Like iodine, astatine has been shown to adopt odd-numbered oxidation states ranging from −1 to +7. Only a few compounds with metals have been reported, in the form of astatides of sodium, palladium, silver, thallium, and lead. Some characteristic properties of silver and sodium astatide, and the other hypothetical alkali and alkaline earth astatides, have been estimated by extrapolation from other metal halides. The formation of an astatine compound with hydrogen – usually referred to as hydrogen astatide – was noted by the pioneers of astatine chemistry. As mentioned, there are grounds for instead referring to this compound as astatine hydride. It is easily oxidized; acidification by dilute nitric acid gives the At0 or At+ forms, and the subsequent addition of silver(I) may only partially, at best, precipitate astatine as silver(I) astatide (AgAt). Iodine, in contrast, is not oxidized, and precipitates readily as silver(I) iodide. Astatine is known to bind to boron, carbon, and nitrogen. Various boron cage compounds have been prepared with At–B bonds, these being more stable than At–C bonds. Astatine can replace a hydrogen atom in benzene to form astatobenzene C6H5At; this may be oxidized to C6H5AtCl2 by chlorine. By treating this compound with an alkaline solution of hypochlorite, C6H5AtO2 can be produced. The dipyridine-astatine(I) cation, [At(C5H5N)2]+, forms ionic compounds with perchlorate (a non-coordinating anion) and with nitrate, [At(C5H5N)2]NO3. This cation exists as a coordination complex in which two dative covalent bonds separately link the astatine(I) centre with each of the pyridine rings via their nitrogen atoms. With oxygen, there is evidence of the species AtO− and AtO+ in aqueous solution, formed by the reaction of astatine with an oxidant such as elemental bromine or (in the last case) by sodium persulfate in a solution of perchloric acid. The species previously thought to be has since been determined to be , a hydrolysis product of AtO+ (another such hydrolysis product being AtOOH). The well characterized anion can be obtained by, for example, the oxidation of astatine with potassium hypochlorite in a solution of potassium hydroxide. Preparation of lanthanum triastatate La(AtO3)3, following the oxidation of astatine by a hot Na2S2O8 solution, has been reported. Further oxidation of , such as by xenon difluoride (in a hot alkaline solution) or periodate (in a neutral or alkaline solution), yields the perastatate ion ; this is only stable in neutral or alkaline solutions. 
Astatine is also thought to be capable of forming cations in salts with oxyanions such as iodate or dichromate; this is based on the observation that, in acidic solutions, monovalent or intermediate positive states of astatine coprecipitate with the insoluble salts of metal cations such as silver(I) iodate or thallium(I) dichromate. Astatine may form bonds to the other chalcogens; these include S7At+ and with sulfur, a coordination selenourea compound with selenium, and an astatine–tellurium colloid with tellurium. Astatine is known to react with its lighter homologs iodine, bromine, and chlorine in the vapor state; these reactions produce diatomic interhalogen compounds with formulas AtI, AtBr, and AtCl. The first two compounds may also be produced in water – astatine reacts with iodine/iodide solution to form AtI, whereas AtBr requires (aside from astatine) an iodine/iodine monobromide/bromide solution. The excess of iodides or bromides may lead to and ions, or in a chloride solution, they may produce species like or via equilibrium reactions with the chlorides. Oxidation of the element with dichromate (in nitric acid solution) showed that adding chloride turned the astatine into a molecule likely to be either AtCl or AtOCl. Similarly, or may be produced. The polyhalides PdAtI2, CsAtI2, TlAtI2, and PbAtI are known or presumed to have been precipitated. In a plasma ion source mass spectrometer, the ions [AtI]+, [AtBr]+, and [AtCl]+ have been formed by introducing lighter halogen vapors into a helium-filled cell containing astatine, supporting the existence of stable neutral molecules in the plasma ion state. No astatine fluorides have been discovered yet. Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluoride. In 1869, when Dmitri Mendeleev published his periodic table, the space under iodine was empty; after Niels Bohr established the physical basis of the classification of chemical elements, it was suggested that the fifth halogen belonged there. Before its officially recognized discovery, it was called "eka-iodine" (from Sanskrit "eka" – "one") to imply it was one space under iodine (in the same manner as eka-silicon, eka-boron, and others). Scientists tried to find it in nature; given its extreme rarity, these attempts resulted in several false discoveries. The first claimed discovery of eka-iodine was made by Fred Allison and his associates at the Alabama Polytechnic Institute (now Auburn University) in 1931. The discoverers named element 85 "alabamine", and assigned it the symbol Ab, designations that were used for a few years. In 1934, H. G. MacPherson of University of California, Berkeley disproved Allison's method and the validity of his discovery. There was another claim in 1937, by the chemist Rajendralal De. Working in Dacca in British India (now Dhaka in Bangladesh), he chose the name "dakin" for element 85, which he claimed to have isolated as the thorium series equivalent of radium F (polonium-210) in the radium series. The properties he reported for dakin do not correspond to those of astatine; moreover, astatine is not found in the thorium series, and the true identity of dakin is not known. 
In 1936, the team of Romanian physicist Horia Hulubei and French physicist Yvette Cauchois claimed to have discovered element 85 via X-ray analysis. In 1939, they published another paper which supported and extended previous data. In 1944, Hulubei published a summary of data he had obtained up to that time, claiming it was supported by the work of other researchers. He chose the name "dor", presumably from the Romanian for "longing" [for peace], as World War II had started five years earlier. As Hulubei was writing in French, a language which does not accommodate the "ine" suffix, dor would likely have been rendered in English as "dorine", had it been adopted. In 1947, Hulubei's claim was effectively rejected by the Austrian chemist Friedrich Paneth, who would later chair the IUPAC committee responsible for recognition of new elements. Even though Hulubei's samples did contain astatine, his means to detect it were too weak, by current standards, to enable correct identification. He had also been involved in an earlier false claim as to the discovery of element 87 (francium) and this is thought to have caused other researchers to downplay his work. In 1940, the Swiss chemist Walter Minder announced the discovery of element 85 as the beta decay product of radium A (polonium-218), choosing the name "helvetium" (from "Helvetia", the Latin name of Switzerland). Berta Karlik and Traude Bernert were unsuccessful in reproducing his experiments, and subsequently attributed Minder's results to contamination of his radon stream (radon-222 is the parent isotope of polonium-218). In 1942, Minder, in collaboration with the English scientist Alice Leigh-Smith, announced the discovery of another isotope of element 85, presumed to be the product of thorium A (polonium-216) beta decay. They named this substance "anglo-helvetium", but Karlik and Bernert were again unable to reproduce these results. Later in 1940, Dale R. Corson, Kenneth Ross MacKenzie, and Emilio Segrè isolated the element at the University of California, Berkeley. Instead of searching for the element in nature, the scientists created it by bombarding bismuth-209 with alpha particles in a cyclotron (particle accelerator) to produce, after emission of two neutrons, astatine-211. The discoverers, however, did not immediately suggest a name for the element. The reason for this was that at the time, an element created synthetically in "invisible quantities" that had not yet been discovered in nature was not seen as a completely valid one; in addition, chemists were reluctant to recognize radioactive isotopes as being as legitimate as stable ones. In 1943, astatine was found as a product of two naturally occurring decay chains by Karlik and Bernert, first in the so-called uranium series, and then in the actinium series. (Since then, astatine was also found in a third decay chain, the neptunium series.) In 1946, Friedrich Paneth called for synthetic elements to finally be recognized, citing, among other reasons, recent confirmation of their natural occurrence, and proposed that the discoverers of the newly discovered unnamed elements name them. In early 1947, "Nature" published the discoverers' suggestions; a letter from Corson, MacKenzie, and Segrè suggested the name "astatine", coming from the Greek "astatos" (αστατος) meaning "unstable", because of its propensity for radioactive decay, with the ending "-ine", found in the names of the four previously discovered halogens. 
The name was also chosen to continue the tradition of the four stable halogens, where the name referred to a property of the element. Corson and his colleagues classified astatine as a metal on the basis of its analytical chemistry. Subsequent investigators reported iodine-like, cationic, or amphoteric behavior. In a 2003 retrospective, Corson wrote that "some of the properties [of astatine] are similar to iodine … it also exhibits metallic properties, more like its metallic neighbors Po and Bi." There are 39 known isotopes of astatine, with atomic masses (mass numbers) of 191–229. Theoretical modeling suggests that 37 more isotopes could exist. No stable or long-lived astatine isotope has been observed, nor is one expected to exist. Astatine's alpha decay energies follow the same trend as for other heavy elements. Lighter astatine isotopes have quite high energies of alpha decay, which become lower as the nuclei become heavier. Astatine-211 has a significantly higher energy than the previous isotope, because it has a nucleus with 126 neutrons, and 126 is a magic number corresponding to a filled neutron shell. Despite having a similar half-life to the previous isotope (8.1 hours for astatine-210 and 7.2 hours for astatine-211), the alpha decay probability is much higher for the latter: 41.81% against only 0.18%. The two following isotopes release even more energy, with astatine-213 releasing the most energy. For this reason, it is the shortest-lived astatine isotope. Even though heavier astatine isotopes release less energy, no long-lived astatine isotope exists, because of the increasing role of beta decay (electron emission). This decay mode is especially important for astatine; as early as 1950 it was postulated that all isotopes of the element undergo beta decay, though nuclear mass measurements indicate that 215At is in fact beta-stable, as it has the lowest mass of all isobars with "A" = 215. A beta decay mode has been found for all other astatine isotopes except for astatine-213, astatine-214, and astatine-216m. Astatine-210 and lighter isotopes exhibit beta plus decay (positron emission), astatine-216 and heavier isotopes exhibit beta minus decay, and astatine-212 decays via both modes, while astatine-211 undergoes electron capture. The most stable isotope is astatine-210, which has a half-life of 8.1 hours. The primary decay mode is beta plus, to the relatively long-lived (in comparison to astatine isotopes) alpha emitter polonium-210. In total, only five isotopes have half-lives exceeding one hour (astatine-207 to -211). The least stable ground state isotope is astatine-213, with a half-life of 125 nanoseconds. It undergoes alpha decay to the extremely long-lived bismuth-209. Astatine has 24 known nuclear isomers, which are nuclei with one or more nucleons (protons or neutrons) in an excited state. A nuclear isomer may also be called a "meta-state", meaning the system has more internal energy than the "ground state" (the state with the lowest possible internal energy), making the former likely to decay into the latter. There may be more than one isomer for each isotope. The most stable of these nuclear isomers is astatine-202m1, which has a half-life of about 3 minutes, longer than those of all the ground states bar those of isotopes 203–211 and 220. The least stable is astatine-214m1; its half-life of 265 nanoseconds is shorter than those of all ground states except that of astatine-213. Astatine is the rarest naturally occurring element. 
The total amount of astatine in the Earth's crust (quoted mass 2.36 × 10^25 grams) is estimated by some to be less than one gram at any given time. Other sources estimate the amount of ephemeral astatine, present on Earth at any given moment, to be up to one ounce. Any astatine present at the formation of the Earth has long since disappeared; the four naturally occurring isotopes (astatine-215, -217, -218 and -219) are instead continuously produced as a result of the decay of radioactive thorium and uranium ores, and trace quantities of neptunium-237. The landmass of North and South America combined, to a depth of 16 kilometers (10 miles), contains only about one trillion astatine-215 atoms at any given time (around 3.5 × 10^−10 grams). Astatine-217 is produced via the radioactive decay of neptunium-237. Primordial remnants of the latter isotope—due to its relatively short half-life of 2.14 million years—are no longer present on Earth. However, trace amounts occur naturally as a product of transmutation reactions in uranium ores. Astatine-218 was the first astatine isotope discovered in nature. Astatine-219, with a half-life of 56 seconds, is the longest lived of the naturally occurring isotopes. Isotopes of astatine are sometimes not listed as naturally occurring because of misconceptions that there are no such isotopes, or discrepancies in the literature. Astatine-216 has been counted as a naturally occurring isotope but reports of its observation (which were described as doubtful) have not been confirmed. Astatine was first produced by bombarding bismuth-209 with energetic alpha particles, and this is still the major route used to create the relatively long-lived isotopes astatine-209 through astatine-211. Astatine is only produced in minuscule quantities, with modern techniques allowing production runs of up to 6.6 gigabecquerels (about 86 nanograms or 2.47 × 10^14 atoms). Synthesis of greater quantities of astatine using this method is constrained by the limited availability of suitable cyclotrons and the prospect of melting the target. Solvent radiolysis due to the cumulative effect of astatine decay is a related problem. With cryogenic technology, microgram quantities of astatine might be generated via proton irradiation of thorium or uranium to yield radon-211, in turn decaying to astatine-211. Contamination with astatine-210 is expected to be a drawback of this method. The most important isotope is astatine-211, the only one in commercial use. To produce the bismuth target, the metal is sputtered onto a gold, copper, or aluminium surface at 50 to 100 milligrams per square centimeter. Bismuth oxide can be used instead; this is forcibly fused with a copper plate. The target is kept under a chemically neutral nitrogen atmosphere, and is cooled with water to prevent premature astatine vaporization. In a particle accelerator, such as a cyclotron, alpha particles are collided with the bismuth. Even though only one bismuth isotope is used (bismuth-209), the reaction may occur in three possible ways, producing astatine-209, astatine-210, or astatine-211. In order to eliminate undesired nuclides, the maximum energy of the particle accelerator is set to a value (optimally 29.17 MeV) above that for the reaction producing astatine-211 (to produce the desired isotope) and below the one producing astatine-210 (to avoid producing other astatine isotopes). 
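The production figures quoted above (6.6 gigabecquerels corresponding to roughly 86 nanograms or 2.47 × 10^14 atoms) follow directly from the 7.2-hour half-life of astatine-211. The Python check below is an illustrative calculation added here, not part of the original article; it assumes the standard Avogadro constant and a molar mass of 211 g/mol.

import math

ACTIVITY_BQ = 6.6e9                 # a 6.6 GBq production run
HALF_LIFE_S = 7.2 * 3600            # half-life of 211At in seconds
LAMBDA = math.log(2) / HALF_LIFE_S  # decay constant in 1/s

atoms = ACTIVITY_BQ / LAMBDA        # N = A / lambda
mass_ng = atoms * 211 / 6.022e23 * 1e9

print(f"{atoms:.2e}")               # about 2.47e+14 atoms
print(round(mass_ng, 1))            # about 86.5 nanograms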
Since astatine is the main product of the synthesis, after its formation it must only be separated from the target and any significant contaminants. Several methods are available, "but they generally follow one of two approaches—dry distillation or [wet] acid treatment of the target followed by solvent extraction." The methods summarized below are modern adaptations of older procedures, as reviewed by Kugler and Keller. Pre-1985 techniques more often addressed the elimination of co-produced toxic polonium; this requirement is now mitigated by capping the energy of the cyclotron irradiation beam. The astatine-containing cyclotron target is heated to a temperature of around 650 °C. The astatine volatilizes and is condensed in (typically) a cold trap. Higher temperatures of up to around 850 °C may increase the yield, at the risk of bismuth contamination from concurrent volatilization. Redistilling the condensate may be required to minimize the presence of bismuth (as bismuth can interfere with astatine labeling reactions). The astatine is recovered from the trap using one or more low concentration solvents such as sodium hydroxide, methanol or chloroform. Astatine yields of up to around 80% may be achieved. Dry separation is the method most commonly used to produce a chemically useful form of astatine. The irradiated bismuth (or sometimes bismuth trioxide) target is first dissolved in, for example, concentrated nitric or perchloric acid. Following this first step, the acid can be distilled away to leave behind a white residue that contains both bismuth and the desired astatine product. This residue is then dissolved in a concentrated acid, such as hydrochloric acid. Astatine is extracted from this acid using an organic solvent such as butyl or isopropyl ether, diisopropylether (DIPE), or thiosemicarbazide. Using liquid-liquid extraction, the astatine product can be repeatedly washed with an acid, such as HCl, and extracted into the organic solvent layer. A separation yield of 93% using nitric acid has been reported, falling to 72% by the time purification procedures were completed (distillation of nitric acid, purging residual nitrogen oxides, and redissolving bismuth nitrate to enable liquid–liquid extraction). Wet methods involve "multiple radioactivity handling steps" and have not been considered well suited for isolating larger quantities of astatine. However, wet extraction methods are being examined for use in production of larger quantities of astatine-211, as it is thought that wet extraction methods can provide more consistency. They can enable the production of astatine in a specific oxidation state and may have greater applicability in experimental radiochemistry. Newly formed astatine-211 is the subject of ongoing research in nuclear medicine. It must be used quickly as it decays with a half-life of 7.2 hours; this is long enough to permit multistep labeling strategies. Astatine-211 has potential for targeted alpha-particle therapy, since it decays either via emission of an alpha particle (to bismuth-207), or via electron capture (to an extremely short-lived nuclide, polonium-211, which undergoes further alpha decay), very quickly reaching its stable granddaughter lead-207. Polonium X-rays emitted as a result of the electron capture branch, in the range of 77–92 keV, enable the tracking of astatine in animals and patients. Although astatine-210 has a slightly longer half-life, it is wholly unsuitable because it usually undergoes beta plus decay to the extremely toxic polonium-210. 
The principal medicinal difference between astatine-211 and iodine-131 (a radioactive iodine isotope also used in medicine) is that iodine-131 emits high-energy beta particles, and astatine does not. Beta particles have much greater penetrating power through tissues than do the much heavier alpha particles. An average alpha particle released by astatine-211 can travel up to 70 µm through surrounding tissues; an average-energy beta particle emitted by iodine-131 can travel nearly 30 times as far, to about 2 mm. The short half-life and limited penetrating power of alpha radiation through tissues offers advantages in situations where the "tumor burden is low and/or malignant cell populations are located in close proximity to essential normal tissues." Significant morbidity in cell culture models of human cancers has been achieved with from one to ten astatine-211 atoms bound per cell. Several obstacles have been encountered in the development of astatine-based radiopharmaceuticals for cancer treatment. World War II delayed research for close to a decade. Results of early experiments indicated that a cancer-selective carrier would need to be developed and it was not until the 1970s that monoclonal antibodies became available for this purpose. Unlike iodine, astatine shows a tendency to dehalogenate from molecular carriers such as these, particularly at sp3 carbon sites (less so from sp2 sites). Given the toxicity of astatine accumulated and retained in the body, this emphasized the need to ensure it remained attached to its host molecule. While astatine carriers that are slowly metabolized can be assessed for their efficacy, more rapidly metabolized carriers remain a significant obstacle to the evaluation of astatine in nuclear medicine. Mitigating the effects of astatine-induced radiolysis of labeling chemistry and carrier molecules is another area requiring further development. A practical application for astatine as a cancer treatment would potentially be suitable for a "staggering" number of patients; production of astatine in the quantities that would be required remains an issue. Animal studies show that astatine, similarly to iodine – although to a lesser extent, perhaps because of its slightly more metallic nature  – is preferentially (and dangerously) concentrated in the thyroid gland. Unlike iodine, astatine also shows a tendency to be taken up by the lungs and spleen, possibly because of in-body oxidation of At– to At+. If administered in the form of a radiocolloid it tends to concentrate in the liver. Experiments in rats and monkeys suggest that astatine-211 causes much greater damage to the thyroid gland than does iodine-131, with repetitive injection of the nuclide resulting in necrosis and cell dysplasia within the gland. Early research suggested that injection of astatine into female rodents caused morphological changes in breast tissue; this conclusion remained controversial for many years. General agreement was later reached that this was likely caused by the effect of breast tissue irradiation combined with hormonal changes due to irradiation of the ovaries. Trace amounts of astatine can be handled safely in fume hoods if they are well-aerated; biological uptake of the element must be avoided.
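The practical consequence of astatine-211's 7.2-hour half-life, noted earlier in this section, is easy to quantify. The Python lines below are an illustrative decay calculation, not a figure from the article, showing how much of a freshly produced batch survives a multistep labeling and treatment timeline.

HALF_LIFE_H = 7.2   # half-life of astatine-211 in hours

def fraction_remaining(hours):
    # Fraction of the original 211At atoms still undecayed after the given number of hours.
    return 0.5 ** (hours / HALF_LIFE_H)

for t in (1, 4, 8, 24):
    print(t, "h:", round(fraction_remaining(t) * 100, 1), "%")
# After 4 hours about 68 % remains, after 8 hours about 46 %,
# and after a full day only about 10 % of the activity is left.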
https://en.wikipedia.org/wiki?curid=901
Atom An atom is the smallest constituent unit of ordinary matter that constitutes a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are extremely small, typically around 100 picometers across. They are so small that accurately predicting their behavior using classical physics – as if they were billiard balls, for example – is not possible due to quantum effects. Current atomic models use quantum principles to better explain and predict this behavior. Every atom is composed of a nucleus and one or more electrons bound to the nucleus. The nucleus is made of one or more protons and a number of neutrons. Only the most common variety of hydrogen has no neutrons. Protons and neutrons are called nucleons. More than 99.94% of an atom's mass is in the nucleus. The protons have a positive electric charge whereas the electrons have a negative electric charge. The neutrons have no electric charge. If the number of protons and electrons are equal, then the atom is electrically neutral. If an atom has more or fewer electrons than protons, then it has an overall negative or positive charge, respectively. These atoms are called ions. The electrons of an atom are attracted to the protons in an atomic nucleus by the electromagnetic force. The protons and neutrons in the nucleus are attracted to each other by the nuclear force. This force is usually stronger than the electromagnetic force that repels the positively charged protons from one another. Under certain circumstances, the repelling electromagnetic force becomes stronger than the nuclear force. In this case, the nucleus splits and leaves behind different elements. This is a form of nuclear decay. The number of protons in the nucleus, called the "atomic number", defines to which chemical element the atom belongs. For example, every copper atom contains 29 protons. The number of neutrons defines the isotope of the element. Atoms can attach to one or more other atoms by chemical bonds to form chemical compounds such as molecules or crystals. The ability of atoms to associate and dissociate is responsible for most of the physical changes observed in nature. Chemistry is the discipline that studies these changes. The idea that matter is made up of tiny indivisible particles is a very old idea, appearing in many ancient cultures such as Greece and India. The word "atomos", meaning "uncuttable", was coined by the ancient Greek philosophers Leucippus and his pupil Democritus (5th century BC). These ancient ideas were not based on quantitative evidence. In the early 1800s, John Dalton compiled experimental data gathered by himself and other scientists and observed that chemical elements seemed to combine by mass in ratios of small whole numbers; he called this pattern the "law of multiple proportions". For instance, there are two types of tin oxide: one is 88.1% tin and 11.9% oxygen, and the other is 78.7% tin and 21.3% oxygen. This means that 100 g of tin will combine either with 13.5 g or 27 g of oxygen. 13.5 and 27 form a ratio of 1:2, a ratio of small whole numbers. Similarly, there are two common types of iron oxide: 112 g of iron can combine with either 32 g or 48 g of oxygen, which gives a ratio of 2:3. This recurring pattern in the data suggested that elements always combine in multiples of discrete units, which Dalton concluded were atoms. In the case of tin oxides, for every one tin atom, there are either one or two oxygen atoms (SnO and SnO2). 
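Dalton's tin-oxide figures just quoted reduce to simple arithmetic. The short Python sketch below is an illustrative addition, not part of the original article; it recovers the 13.5 g : 27 g ratio from the stated mass percentages, and the iron-oxide figures at the start of the next paragraph can be checked in exactly the same way.

# Mass percentages of the two tin oxides quoted above: (% tin, % oxygen).
oxides = [(88.1, 11.9), (78.7, 21.3)]

# Grams of oxygen combining with 100 g of tin in each oxide.
oxygen_per_100g_tin = [o / sn * 100 for sn, o in oxides]
print([round(x, 1) for x in oxygen_per_100g_tin])   # [13.5, 27.1]

# The two amounts stand in a ratio of small whole numbers (about 1:2),
# which is what led Dalton to discrete atoms (SnO and SnO2).
print(round(oxygen_per_100g_tin[1] / oxygen_per_100g_tin[0], 2))   # about 2.0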
In the case of iron oxides, for every two iron atoms, there are either two or three oxygen atoms (FeO and Fe2O3). Dalton also believed that the concept of atoms could explain why some gases dissolve in water better than other gases. For example, he observed that water absorbs nitrous oxide far better than it absorbs nitrogen. Dalton hypothesized that this may be due to the differences in the mass and configuration of the particles. Indeed, nitrous oxide molecules (N2O) are larger and heavier than nitrogen molecules (N2). In 1827, botanist Robert Brown used a microscope to look at dust grains floating in water and discovered that they moved about erratically, a phenomenon that became known as "Brownian motion". This was thought to be caused by water molecules knocking the grains about. In 1905, Albert Einstein proved the reality of these molecules and their motions by producing the first statistical physics analysis of Brownian motion. French physicist Jean Perrin used Einstein's work to experimentally determine the mass and dimensions of atoms, thereby conclusively verifying Dalton's atomic theory. In 1897, J.J. Thomson discovered that cathode rays are not electromagnetic waves but made of particles that are 1,800 times lighter than hydrogen (the lightest atom). Therefore, they were not atoms, but a new particle, the first "subatomic" particle to be discovered. He originally called these new particles "corpuscles" but they were later renamed "electrons", after particles postulated by George Johnstone Stoney in 1874. Thomson also showed that electrons were identical to particles given off by photoelectric and radioactive materials. It was quickly recognized that electrons are the particles that carry electric currents in metal wires, and carry the negative electric charge within atoms. Thus Thomson overturned the belief that atoms are the indivisible, fundamental particles of matter. J.J. Thomson postulated that the negatively-charged electrons were distributed throughout the atom in a uniform sea of positive charge. This was known as the plum pudding model. In 1909, Hans Geiger and Ernest Marsden, working under the direction of Ernest Rutherford, bombarded metal foil with alpha particles to observe how they scattered. They expected all the alpha particles to pass straight through with little deflection, because Thomson's model said that the charges in the atom are so diffuse that their electric fields could not affect the alpha particles much. Geiger and Marsden spotted alpha particles being deflected by angles greater than 90°, which was supposed to be impossible according to Thomson's model. To explain this, Rutherford proposed that the positive charge of the atom is concentrated in a tiny nucleus at the center. Only such an intense concentration of charge could produce an electric field strong enough to deflect alpha particles that much. While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one type of atom at each position on the periodic table. The term isotope was coined by Margaret Todd as a suitable name for different atoms that belong to the same element. J.J. Thomson created a technique for isotope separation through his work on ionized gases, which subsequently led to the discovery of stable isotopes. 
In 1913 the physicist Niels Bohr proposed a model in which the electrons of an atom were assumed to orbit the nucleus but could only do so in a finite set of orbits, and could jump between these orbits only in discrete changes of energy corresponding to absorption or radiation of a photon. This quantization was used to explain why the electrons' orbits are stable (given that normally, charges in acceleration, including circular motion, lose kinetic energy which is emitted as electromagnetic radiation, see "synchrotron radiation") and why elements absorb and emit electromagnetic radiation in discrete spectra. Later in the same year Henry Moseley provided additional experimental evidence in favor of Niels Bohr's theory. These results refined Ernest Rutherford's and Antonius Van den Broek's model, which proposed that the atom contains in its nucleus a number of positive nuclear charges that is equal to its (atomic) number in the periodic table. Until these experiments, atomic number was not known to be a physical and experimental quantity. That it is equal to the atomic nuclear charge remains the accepted atomic model today. Chemical bonds between atoms were now explained, by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons. As the chemical properties of the elements were known to largely repeat themselves according to the periodic law, in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus. The Stern–Gerlach experiment of 1922 provided further evidence of the quantum nature of atomic properties. When a beam of silver atoms was passed through a specially shaped magnetic field, the beam was split in a way correlated with the direction of an atom's angular momentum, or spin. As this spin direction is initially random, the beam would be expected to deflect in a random direction. Instead, the beam was split into two directional components, corresponding to the atomic spin being oriented up or down with respect to the magnetic field. In 1925 Werner Heisenberg published the first consistent mathematical formulation of quantum mechanics (matrix mechanics). One year earlier, Louis de Broglie had proposed the de Broglie hypothesis: that all particles behave like waves to some extent, and in 1926 Erwin Schrödinger used this idea to develop the Schrödinger equation, a mathematical model of the atom (wave mechanics) that described the electrons as three-dimensional waveforms rather than point particles. A consequence of using waveforms to describe particles is that it is mathematically impossible to obtain precise values for both the position and momentum of a particle at a given point in time; this became known as the uncertainty principle, formulated by Werner Heisenberg in 1927. In this concept, for a given accuracy in measuring a position one could only obtain a range of probable values for momentum, and vice versa. This model was able to explain observations of atomic behavior that previous models could not, such as certain structural and spectral patterns of atoms larger than hydrogen. Thus, the planetary model of the atom was discarded in favor of one that described atomic orbital zones around the nucleus where a given electron is most likely to be observed. The development of the mass spectrometer allowed the mass of atoms to be measured with increased accuracy. 
The device uses a magnet to bend the trajectory of a beam of ions, and the amount of deflection is determined by the ratio of an atom's mass to its charge. The chemist Francis William Aston used this instrument to show that isotopes had different masses. The atomic mass of these isotopes varied by integer amounts, called the whole number rule. The explanation for these different isotopes awaited the discovery of the neutron, an uncharged particle with a mass similar to that of the proton, by the physicist James Chadwick in 1932. Isotopes were then explained as elements with the same number of protons, but different numbers of neutrons within the nucleus. In 1938, the German chemist Otto Hahn, a student of Rutherford, directed neutrons onto uranium atoms expecting to get transuranium elements. Instead, his chemical experiments showed barium as a product. A year later, Lise Meitner and her nephew Otto Frisch verified that Hahn's result was the first experimental "nuclear fission". In 1944, Hahn received the Nobel Prize in Chemistry. Despite Hahn's efforts, the contributions of Meitner and Frisch were not recognized. In the 1950s, the development of improved particle accelerators and particle detectors allowed scientists to study the impacts of atoms moving at high energies. Neutrons and protons were found to be hadrons, or composites of smaller particles called quarks. The standard model of particle physics was developed that so far has successfully explained the properties of the nucleus in terms of these sub-atomic particles and the forces that govern their interactions. Though the word "atom" originally denoted a particle that cannot be cut into smaller particles, in modern scientific usage the atom is composed of various subatomic particles. The constituent particles of an atom are the electron, the proton and the neutron. The electron is by far the least massive of these particles at 9.11×10⁻³¹ kg, with a negative electrical charge and a size that is too small to be measured using available techniques. It was the lightest particle with a measured positive rest mass until the discovery of neutrino mass. Under ordinary conditions, electrons are bound to the positively charged nucleus by the attraction created from opposite electric charges. If an atom has more or fewer electrons than its atomic number, then it becomes respectively negatively or positively charged as a whole; a charged atom is called an ion. Electrons have been known since the late 19th century, mostly thanks to J.J. Thomson; see history of subatomic physics for details. Protons have a positive charge and a mass 1,836 times that of the electron, at 1.6726×10⁻²⁷ kg. The number of protons in an atom is called its atomic number. Ernest Rutherford (1919) observed that nitrogen under alpha-particle bombardment ejects what appeared to be hydrogen nuclei. By 1920 he had accepted that the hydrogen nucleus is a distinct particle within the atom and named it proton. Neutrons have no electrical charge and have a free mass of 1,839 times the mass of the electron, or 1.6749×10⁻²⁷ kg. Neutrons are the heaviest of the three constituent particles, but their mass can be reduced by the nuclear binding energy. Neutrons and protons (collectively known as nucleons) have comparable dimensions—on the order of 2.5×10⁻¹⁵ m—although the 'surface' of these particles is not sharply defined. The neutron was discovered in 1932 by the English physicist James Chadwick. 
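The bookkeeping implied by these definitions—the proton count fixes the element, the neutron count fixes the isotope, and the proton–electron difference fixes the ion charge—can be written out in a few lines. The following Python sketch uses carbon-14 and a chloride ion purely as illustrative cases.

```python
# A nuclide is fixed by its proton count (element) and neutron count (isotope);
# the net charge of an ion is the proton count minus the electron count.
def describe(protons: int, neutrons: int, electrons: int) -> str:
    mass_number = protons + neutrons   # total number of nucleons, A
    charge = protons - electrons       # net charge in units of the elementary charge
    return f"Z={protons}, A={mass_number}, net charge={charge:+d}e"

print(describe(6, 8, 6))     # neutral carbon-14:  Z=6, A=14, net charge=+0e
print(describe(17, 18, 18))  # chloride ion Cl-:   Z=17, A=35, net charge=-1e
```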
In the Standard Model of physics, electrons are truly elementary particles with no internal structure, whereas protons and neutrons are composite particles composed of elementary particles called quarks. There are two types of quarks in atoms, each having a fractional electric charge. Protons are composed of two up quarks (each with charge +2/3) and one down quark (with a charge of −1/3). Neutrons consist of one up quark and two down quarks. This distinction accounts for the difference in mass and charge between the two particles. The quarks are held together by the strong interaction (or strong force), which is mediated by gluons. The protons and neutrons, in turn, are held to each other in the nucleus by the nuclear force, which is a residuum of the strong force that has somewhat different range-properties (see the article on the nuclear force for more). The gluon is a member of the family of gauge bosons, which are elementary particles that mediate physical forces. All the bound protons and neutrons in an atom make up a tiny atomic nucleus, and are collectively called nucleons. The radius of a nucleus is approximately equal to 1.07 × A^(1/3) fm, where "A" is the total number of nucleons. This is much smaller than the radius of the atom, which is on the order of 10⁵ fm. The nucleons are bound together by a short-ranged attractive potential called the residual strong force. At distances smaller than 2.5 fm this force is much more powerful than the electrostatic force that causes positively charged protons to repel each other. Atoms of the same element have the same number of protons, called the atomic number. Within a single element, the number of neutrons may vary, determining the isotope of that element. The total number of protons and neutrons determines the nuclide. The number of neutrons relative to the protons determines the stability of the nucleus, with certain isotopes undergoing radioactive decay. The proton, the electron, and the neutron are classified as fermions. Fermions obey the Pauli exclusion principle which prohibits "identical" fermions, such as multiple protons, from occupying the same quantum state at the same time. Thus, every proton in the nucleus must occupy a quantum state different from all other protons, and the same applies to all neutrons of the nucleus and to all electrons of the electron cloud. A nucleus that has a different number of protons than neutrons can potentially drop to a lower energy state through a radioactive decay that causes the number of protons and neutrons to more closely match. As a result, atoms with matching numbers of protons and neutrons are more stable against decay, but with increasing atomic number, the mutual repulsion of the protons requires an increasing proportion of neutrons to maintain the stability of the nucleus. The number of protons and neutrons in the atomic nucleus can be modified, although this can require very high energies because of the strong force. Nuclear fusion occurs when multiple atomic particles join to form a heavier nucleus, such as through the energetic collision of two nuclei. For example, at the core of the Sun protons require energies of 3–10 keV to overcome their mutual repulsion—the Coulomb barrier—and fuse together into a single nucleus. Nuclear fission is the opposite process, causing a nucleus to split into two smaller nuclei—usually through radioactive decay. The nucleus can also be modified through bombardment by high energy subatomic particles or photons. 
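The r ≈ 1.07 × A^(1/3) fm rule quoted above makes the scale separation between nucleus and atom easy to quantify. The Python sketch below picks gold-197 as an arbitrary example; the 10⁵ fm atomic radius is the order-of-magnitude figure from the text, not a precise value.

```python
# Empirical nuclear radius: r ≈ 1.07 * A^(1/3) fm, where A is the nucleon count.
def nuclear_radius_fm(mass_number: int) -> float:
    return 1.07 * mass_number ** (1.0 / 3.0)

# Gold-197 is used here only as an illustrative nuclide.
r_nucleus = nuclear_radius_fm(197)   # ~6.2 fm
r_atom_fm = 1.0e5                    # atomic radius is on the order of 10^5 fm
print(f"nucleus: {r_nucleus:.1f} fm, atom/nucleus radius ratio ≈ {r_atom_fm / r_nucleus:.0f}")
```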
If this modifies the number of protons in a nucleus, the atom changes to a different chemical element. If the mass of the nucleus following a fusion reaction is less than the sum of the masses of the separate particles, then the difference between these two values can be emitted as a type of usable energy (such as a gamma ray, or the kinetic energy of a beta particle), as described by Albert Einstein's mass–energy equivalence formula, E = Δm c², where Δm is the mass loss and c is the speed of light. This deficit is part of the binding energy of the new nucleus, and it is the non-recoverable loss of the energy that causes the fused particles to remain together in a state that requires this energy to separate. The fusion of two nuclei that creates larger nuclei with lower atomic numbers than iron and nickel—a total nucleon number of about 60—is usually an exothermic process that releases more energy than is required to bring them together. It is this energy-releasing process that makes nuclear fusion in stars a self-sustaining reaction. For heavier nuclei, the binding energy per nucleon in the nucleus begins to decrease. That means that fusion processes producing nuclei with atomic numbers higher than about 26, and atomic masses higher than about 60, are endothermic. These more massive nuclei cannot undergo an energy-producing fusion reaction that can sustain the hydrostatic equilibrium of a star. The electrons in an atom are attracted to the protons in the nucleus by the electromagnetic force. This force binds the electrons inside an electrostatic potential well surrounding the smaller nucleus, which means that an external source of energy is needed for the electron to escape. The closer an electron is to the nucleus, the greater the attractive force. Hence electrons bound near the center of the potential well require more energy to escape than those at greater separations. Electrons, like other particles, have properties of both a particle and a wave. The electron cloud is a region inside the potential well where each electron forms a type of three-dimensional standing wave—a wave form that does not move relative to the nucleus. This behavior is defined by an atomic orbital, a mathematical function that characterises the probability that an electron appears to be at a particular location when its position is measured. Only a discrete (or quantized) set of these orbitals exist around the nucleus, as other possible wave patterns rapidly decay into a more stable form. Orbitals can have one or more ring or node structures, and differ from each other in size, shape and orientation. Each atomic orbital corresponds to a particular energy level of the electron. The electron can change its state to a higher energy level by absorbing a photon with sufficient energy to boost it into the new quantum state. Likewise, through spontaneous emission, an electron in a higher energy state can drop to a lower energy state while radiating the excess energy as a photon. These characteristic energy values, defined by the differences in the energies of the quantum states, are responsible for atomic spectral lines. The amount of energy needed to remove or add an electron—the electron binding energy—is far less than the binding energy of nucleons. For example, it requires only 13.6 eV to strip a ground-state electron from a hydrogen atom, compared to 2.23 "million" eV for splitting a deuterium nucleus. Atoms are electrically neutral if they have an equal number of protons and electrons. 
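The mass–energy relation E = Δm c² can be checked against the deuterium figure quoted above using the convenient conversion 1 u ≈ 931.5 MeV/c². The particle masses in the sketch below are standard reference values; the point is simply that the mass deficit of the deuteron reproduces the roughly 2.2 MeV binding energy.

```python
# Mass-energy equivalence E = (delta m) * c^2, expressed via the standard
# conversion factor 1 u = 931.494 MeV/c^2.
U_TO_MEV = 931.494          # MeV per unified atomic mass unit

m_proton   = 1.007276       # u
m_neutron  = 1.008665       # u
m_deuteron = 2.013553       # u

mass_deficit = m_proton + m_neutron - m_deuteron      # u
binding_energy = mass_deficit * U_TO_MEV              # MeV
print(f"deuteron binding energy ≈ {binding_energy:.2f} MeV")  # ≈ 2.22 MeV
```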
Atoms that have either a deficit or a surplus of electrons are called ions. Electrons that are farthest from the nucleus may be transferred to other nearby atoms or shared between atoms. By this mechanism, atoms are able to bond into molecules and other types of chemical compounds like ionic and covalent network crystals. By definition, any two atoms with an identical number of "protons" in their nuclei belong to the same chemical element. Atoms with equal numbers of protons but a different number of "neutrons" are different isotopes of the same element. For example, all hydrogen atoms contain exactly one proton, but isotopes exist with no neutrons (hydrogen-1, by far the most common form, also called protium), one neutron (deuterium), two neutrons (tritium) and more than two neutrons. The known elements form a set of atomic numbers, from the single-proton element hydrogen up to the 118-proton element oganesson. All known isotopes of elements with atomic numbers greater than 82 are radioactive, although the radioactivity of element 83 (bismuth) is so slight as to be practically negligible. About 339 nuclides occur naturally on Earth, of which 252 (about 74%) have not been observed to decay, and are referred to as "stable isotopes". Only 90 nuclides are stable theoretically, while another 162 (bringing the total to 252) have not been observed to decay, even though in theory it is energetically possible. These are also formally classified as "stable". An additional 34 radioactive nuclides have half-lives longer than 100 million years, and are long-lived enough to have been present since the birth of the solar system. This collection of 286 nuclides is known as primordial nuclides. Finally, an additional 53 short-lived nuclides are known to occur naturally, as daughter products of primordial nuclide decay (such as radium from uranium), or as products of natural energetic processes on Earth, such as cosmic ray bombardment (for example, carbon-14). For 80 of the chemical elements, at least one stable isotope exists. As a rule, there is only a handful of stable isotopes for each of these elements, the average being 3.2 stable isotopes per element. Twenty-six elements have only a single stable isotope, while the largest number of stable isotopes observed for any element is ten, for the element tin. Elements 43, 61, and all elements numbered 83 or higher have no stable isotopes. Stability of isotopes is affected by the ratio of protons to neutrons, and also by the presence of certain "magic numbers" of neutrons or protons that represent closed and filled quantum shells. These quantum shells correspond to a set of energy levels within the shell model of the nucleus; filled shells, such as the filled shell of 50 protons for tin, confer unusual stability on the nuclide. Of the 252 known stable nuclides, only four have both an odd number of protons "and" an odd number of neutrons: hydrogen-2 (deuterium), lithium-6, boron-10 and nitrogen-14. Also, only four naturally occurring, radioactive odd–odd nuclides have a half-life over a billion years: potassium-40, vanadium-50, lanthanum-138 and tantalum-180m. Most odd–odd nuclei are highly unstable with respect to beta decay, because the decay products are even–even, and are therefore more strongly bound, due to nuclear pairing effects. The large majority of an atom's mass comes from the protons and neutrons that make it up. The total number of these particles (called "nucleons") in a given atom is called the mass number. 
It is a positive integer and dimensionless (instead of having dimension of mass), because it expresses a count. An example of use of a mass number is "carbon-12," which has 12 nucleons (six protons and six neutrons). The actual mass of an atom at rest is often expressed in daltons (Da), also called the unified atomic mass unit (u). This unit is defined as a twelfth of the mass of a free neutral atom of carbon-12, which is approximately 1.66×10⁻²⁷ kg. Hydrogen-1 (the lightest isotope of hydrogen, which is also the nuclide with the lowest mass) has an atomic weight of 1.007825 Da. The value of this number is called the atomic mass. A given atom has an atomic mass approximately equal (within 1%) to its mass number times the atomic mass unit (for example the mass of a nitrogen-14 atom is roughly 14 Da), but this number will not be exactly an integer except (by definition) in the case of carbon-12. The heaviest stable atom is lead-208, with a mass of about 207.98 Da. As even the most massive atoms are far too light to work with directly, chemists instead use the unit of moles. One mole of atoms of any element always has the same number of atoms (about 6.022×10²³). This number was chosen so that if an element has an atomic mass of 1 u, a mole of atoms of that element has a mass close to one gram. Because of the definition of the unified atomic mass unit, each carbon-12 atom has an atomic mass of exactly 12 Da, and so a mole of carbon-12 atoms weighs exactly 0.012 kg. Atoms lack a well-defined outer boundary, so their dimensions are usually described in terms of an atomic radius. This is a measure of the distance out to which the electron cloud extends from the nucleus. This assumes the atom to exhibit a spherical shape, which is only obeyed for atoms in vacuum or free space. Atomic radii may be derived from the distances between two nuclei when the two atoms are joined in a chemical bond. The radius varies with the location of an atom on the atomic chart, the type of chemical bond, the number of neighboring atoms (coordination number) and a quantum mechanical property known as spin. On the periodic table of the elements, atom size tends to increase when moving down columns, but decrease when moving across rows (left to right). Consequently, the smallest atom is helium with a radius of 32 pm, while one of the largest is caesium at 225 pm. When subjected to external forces, like electrical fields, the shape of an atom may deviate from spherical symmetry. The deformation depends on the field magnitude and the orbital type of outer shell electrons, as shown by group-theoretical considerations. Aspherical deviations might be elicited for instance in crystals, where large crystal-electrical fields may occur at low-symmetry lattice sites. Significant ellipsoidal deformations have been shown to occur for sulfur ions and chalcogen ions in pyrite-type compounds. Atomic dimensions are thousands of times smaller than the wavelengths of light (400–700 nm) so they cannot be viewed using an optical microscope, although individual atoms can be observed using a scanning tunneling microscope. To visualize the minuteness of the atom, consider that a typical human hair is about 1 million carbon atoms in width. A single drop of water contains about 2 sextillion (2×10²¹) atoms of oxygen, and twice the number of hydrogen atoms. A single-carat diamond with a mass of 0.2 g (2×10⁻⁴ kg) contains about 10 sextillion (10²²) atoms of carbon. If an apple were magnified to the size of the Earth, then the atoms in the apple would be approximately the size of the original apple. 
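The relationship between daltons, grams and moles described above can be illustrated with Avogadro's number. The Python sketch below uses nitrogen-14 as an example nuclide; because one dalton expressed in grams is essentially the reciprocal of Avogadro's number, the molar mass in grams per mole comes out numerically equal to the atomic mass in daltons.

```python
# Converting between atomic mass in daltons and bulk mass in grams via Avogadro's number.
AVOGADRO = 6.02214076e23       # atoms per mole
DA_IN_GRAMS = 1.66053907e-24   # one dalton expressed in grams

atomic_mass_da = 14.003074     # nitrogen-14, roughly 14 Da as the text notes
molar_mass_g = atomic_mass_da * DA_IN_GRAMS * AVOGADRO   # grams per mole
atoms_in_one_gram = AVOGADRO / molar_mass_g

print(f"molar mass ≈ {molar_mass_g:.3f} g/mol, atoms per gram ≈ {atoms_in_one_gram:.2e}")
```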
Every element has one or more isotopes that have unstable nuclei that are subject to radioactive decay, causing the nucleus to emit particles or electromagnetic radiation. Radioactivity can occur when the radius of a nucleus is large compared with the range of the strong force, which only acts over distances on the order of 1 fm. The most common forms of radioactive decay are alpha decay, beta decay, and gamma emission. Other, rarer types of radioactive decay include ejection of neutrons or protons or clusters of nucleons from a nucleus, or more than one beta particle. An analog of gamma emission, which allows excited nuclei to lose energy in a different way, is internal conversion—a process that produces high-speed electrons that are not beta rays, followed by production of high-energy photons that are not gamma rays. A few large nuclei explode into two or more charged fragments of varying masses plus several neutrons, in a decay called spontaneous nuclear fission. Each radioactive isotope has a characteristic decay time period—the half-life—that is determined by the amount of time needed for half of a sample to decay. This is an exponential decay process that steadily decreases the proportion of the remaining isotope by 50% every half-life. Hence after two half-lives have passed only 25% of the isotope is present, and so forth. Elementary particles possess an intrinsic quantum mechanical property known as spin. This is analogous to the angular momentum of an object that is spinning around its center of mass, although strictly speaking these particles are believed to be point-like and cannot be said to be rotating. Spin is measured in units of the reduced Planck constant (ħ), with electrons, protons and neutrons all having spin ½ ħ, or "spin-½". In an atom, electrons in motion around the nucleus possess orbital angular momentum in addition to their spin, while the nucleus itself possesses angular momentum due to its nuclear spin. The magnetic field produced by an atom—its magnetic moment—is determined by these various forms of angular momentum, just as a rotating charged object classically produces a magnetic field, but the most dominant contribution comes from electron spin. Because electrons obey the Pauli exclusion principle, in which no two electrons may be found in the same quantum state, bound electrons pair up with each other, with one member of each pair in a spin up state and the other in the opposite, spin down state. Thus these spins cancel each other out, reducing the total magnetic dipole moment to zero in some atoms with an even number of electrons. In ferromagnetic elements such as iron, cobalt and nickel, an odd number of electrons leads to an unpaired electron and a net overall magnetic moment. The orbitals of neighboring atoms overlap and a lower energy state is achieved when the spins of unpaired electrons are aligned with each other, a spontaneous process known as an exchange interaction. When the magnetic moments of ferromagnetic atoms are lined up, the material can produce a measurable macroscopic field. Paramagnetic materials have atoms with magnetic moments that line up in random directions when no magnetic field is present, but the magnetic moments of the individual atoms line up in the presence of a field. The nucleus of an atom will have no spin when it has even numbers of both neutrons and protons, but for other cases of odd numbers, the nucleus may have a spin. 
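The half-life rule stated above is just repeated halving, so the remaining fraction after a time t is 0.5 raised to the power of t divided by the half-life. A minimal sketch:

```python
# Exponential radioactive decay: the surviving fraction after time t is 0.5 ** (t / half_life).
def remaining_fraction(elapsed: float, half_life: float) -> float:
    return 0.5 ** (elapsed / half_life)

# After one half-life 50% remains, after two 25%, after three 12.5%.
for n in range(1, 4):
    print(n, remaining_fraction(n, 1.0))
```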
Normally nuclei with spin are aligned in random directions because of thermal equilibrium, but for certain elements (such as xenon-129) it is possible to polarize a significant proportion of the nuclear spin states so that they are aligned in the same direction—a condition called hyperpolarization. This has important applications in magnetic resonance imaging. The potential energy of an electron in an atom is negative relative to that of a free electron; its magnitude is greatest close to the nucleus and it vanishes as the distance from the nucleus goes to infinity, roughly in inverse proportion to the distance. In the quantum-mechanical model, a bound electron can only occupy a set of states centered on the nucleus, and each state corresponds to a specific energy level; see time-independent Schrödinger equation for theoretical explanation. An energy level can be measured by the amount of energy needed to unbind the electron from the atom, and is usually given in units of electronvolts (eV). The lowest energy state of a bound electron is called the ground state, i.e. stationary state, while an electron transition to a higher level results in an excited state. The electron's energy rises when "n" increases because the (average) distance to the nucleus increases. The dependence of the energy on the orbital quantum number ℓ is caused not by the electrostatic potential of the nucleus, but by the interaction between electrons. For an electron to transition between two different states, e.g. ground state to first excited state, it must absorb or emit a photon at an energy matching the difference in the potential energy of those levels, according to the Niels Bohr model, and this energy can be precisely calculated by the Schrödinger equation. Electrons jump between orbitals in a particle-like fashion. For example, if a single photon strikes the electrons, only a single electron changes states in response to the photon; see Electron properties. The energy of an emitted photon is proportional to its frequency, so these specific energy levels appear as distinct bands in the electromagnetic spectrum. Each element has a characteristic spectrum that can depend on the nuclear charge, subshells filled by electrons, the electromagnetic interactions between the electrons and other factors. When a continuous spectrum of energy is passed through a gas or plasma, some of the photons are absorbed by atoms, causing electrons to change their energy level. Those excited electrons that remain bound to their atom spontaneously emit this energy as a photon, traveling in a random direction, and so drop back to lower energy levels. Thus the atoms behave like a filter that forms a series of dark absorption bands in the energy output. (An observer viewing the atoms from a view that does not include the continuous spectrum in the background, instead sees a series of emission lines from the photons emitted by the atoms.) Spectroscopic measurements of the strength and width of atomic spectral lines allow the composition and physical properties of a substance to be determined. Close examination of the spectral lines reveals that some display a fine structure splitting. This occurs because of spin–orbit coupling, which is an interaction between the spin and motion of the outermost electron. When an atom is in an external magnetic field, spectral lines become split into three or more components; a phenomenon called the Zeeman effect. This is caused by the interaction of the magnetic field with the magnetic moment of the atom and its electrons. 
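Because each spectral line corresponds to a fixed energy gap, converting that energy to a wavelength with λ = hc/E locates the line in the spectrum. The sketch below reuses the roughly 1.89 eV hydrogen transition from the earlier example; the hc ≈ 1239.84 eV·nm constant is standard, and the specific transition is chosen only for illustration.

```python
# A photon's wavelength follows from the energy gap it bridges: lambda = h*c / E.
HC_EV_NM = 1239.84   # h*c expressed in eV * nm

def wavelength_nm(photon_energy_ev: float) -> float:
    return HC_EV_NM / photon_energy_ev

# The ~1.89 eV hydrogen transition corresponds to red light.
print(f"{wavelength_nm(1.89):.0f} nm")   # ~656 nm, a visible Balmer line
```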
Some atoms can have multiple electron configurations with the same energy level, which thus appear as a single spectral line. The interaction of the magnetic field with the atom shifts these electron configurations to slightly different energy levels, resulting in multiple spectral lines. The presence of an external electric field can cause a comparable splitting and shifting of spectral lines by modifying the electron energy levels, a phenomenon called the Stark effect. If a bound electron is in an excited state, an interacting photon with the proper energy can cause stimulated emission of a photon with a matching energy level. For this to occur, the electron must drop to a lower energy state that has an energy difference matching the energy of the interacting photon. The emitted photon and the interacting photon then move off in parallel and with matching phases. That is, the wave patterns of the two photons are synchronized. This physical property is used to make lasers, which can emit a coherent beam of light energy in a narrow frequency band. Valency is the combining power of an element. It is equal to the number of hydrogen atoms with which the atom can combine, or which it can displace, in forming compounds. The outermost electron shell of an atom in its uncombined state is known as the valence shell, and the electrons in that shell are called valence electrons. The number of valence electrons determines the bonding behavior with other atoms. Atoms tend to chemically react with each other in a manner that fills (or empties) their outer valence shells. For example, a transfer of a single electron between atoms is a useful approximation for bonds that form between atoms with one electron more than a filled shell, and others that are one electron short of a full shell, such as occurs in the compound sodium chloride and other chemical ionic salts. Many elements display multiple valences, or tendencies to share differing numbers of electrons in different compounds. Thus, chemical bonding between these elements takes many forms of electron-sharing that are more than simple electron transfers. Examples include the element carbon and the organic compounds. The chemical elements are often displayed in a periodic table that is laid out to display recurring chemical properties, and elements with the same number of valence electrons form a group that is aligned in the same column of the table. (The horizontal rows correspond to the filling of a quantum shell of electrons.) The elements at the far right of the table have their outer shell completely filled with electrons, which results in chemically inert elements known as the noble gases. Quantities of atoms are found in different states of matter that depend on the physical conditions, such as temperature and pressure. By varying the conditions, materials can transition between solids, liquids, gases and plasmas. Within a state, a material can also exist in different allotropes. An example of this is solid carbon, which can exist as graphite or diamond. Gaseous allotropes exist as well, such as dioxygen and ozone. At temperatures close to absolute zero, atoms can form a Bose–Einstein condensate, at which point quantum mechanical effects, which are normally only observed at the atomic scale, become apparent on a macroscopic scale. This super-cooled collection of atoms then behaves as a single super atom, which may allow fundamental checks of quantum mechanical behavior. The scanning tunneling microscope is a device for viewing surfaces at the atomic level. 
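For light main-group elements the valence-electron count can be read off from a naive shell-filling rule. The sketch below assumes the simplified 2–8–8 filling, which is only adequate up to argon (Z ≤ 18); heavier elements require the full quantum-mechanical subshell ordering, so this is an illustration rather than a general method.

```python
# Simplified shell-filling sketch (assumption: the 2-8-8 rule, valid only for Z <= 18)
# to read off how many electrons sit in the outermost, valence shell.
def valence_electrons(atomic_number: int) -> int:
    remaining = atomic_number
    for capacity in (2, 8, 8):
        if remaining <= capacity:
            return remaining          # electrons left over fill the outermost shell
        remaining -= capacity
    raise ValueError("the simple 2-8-8 filling only covers Z <= 18")

print(valence_electrons(11))  # sodium: 1 valence electron (readily donated)
print(valence_electrons(17))  # chlorine: 7 valence electrons (readily accepts one)
```

The sodium/chlorine pair illustrates the single-electron transfer mentioned above for ionic salts such as sodium chloride.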
It uses the quantum tunneling phenomenon, which allows particles to pass through a barrier that would be insurmountable in the classical perspective. Electrons tunnel through the vacuum between two planar metal electrodes, on each of which is an adsorbed atom, providing a tunneling-current density that can be measured. Scanning one atom (taken as the tip) as it moves past the other (the sample) permits plotting of tip displacement versus lateral separation for a constant current. The calculation shows the extent to which scanning-tunneling-microscope images of an individual atom are visible. It confirms that for low bias, the microscope images the space-averaged dimensions of the electron orbitals across closely packed energy levels—the Fermi level local density of states. An atom can be ionized by removing one of its electrons. The electric charge causes the trajectory of an atom to bend when it passes through a magnetic field. The radius by which the trajectory of a moving ion is turned by the magnetic field is determined by the mass of the atom. The mass spectrometer uses this principle to measure the mass-to-charge ratio of ions. If a sample contains multiple isotopes, the mass spectrometer can determine the proportion of each isotope in the sample by measuring the intensity of the different beams of ions. Techniques to vaporize atoms include inductively coupled plasma atomic emission spectroscopy and inductively coupled plasma mass spectrometry, both of which use a plasma to vaporize samples for analysis. A more area-selective method is electron energy loss spectroscopy, which measures the energy loss of an electron beam within a transmission electron microscope when it interacts with a portion of a sample. The atom-probe tomograph has sub-nanometer resolution in 3-D and can chemically identify individual atoms using time-of-flight mass spectrometry. Spectra of excited states can be used to analyze the atomic composition of distant stars. Specific light wavelengths contained in the observed light from stars can be separated out and related to the quantized transitions in free gas atoms. These colors can be replicated using a gas-discharge lamp containing the same element. Helium was discovered in this way in the spectrum of the Sun 23 years before it was found on Earth. Baryonic matter forms about 4% of the total energy density of the observable Universe, with an average density of about 0.25 particles/m³ (mostly protons and electrons). Within a galaxy such as the Milky Way, particles have a much higher concentration, with the density of matter in the interstellar medium (ISM) ranging from 10⁵ to 10⁹ atoms/m³. The Sun is believed to be inside the Local Bubble, so the density in the solar neighborhood is only about 10³ atoms/m³. Stars form from dense clouds in the ISM, and the evolutionary processes of stars result in the steady enrichment of the ISM with elements more massive than hydrogen and helium. Up to 95% of the Milky Way's baryonic matter is concentrated inside stars, where conditions are unfavorable for atomic matter. The total baryonic mass is about 10% of the mass of the galaxy; the remainder of the mass is an unknown dark matter. High temperature inside stars makes most "atoms" fully ionized, that is, separates "all" electrons from the nuclei. In stellar remnants—with the exception of their surface layers—immense pressure makes electron shells impossible. Electrons are thought to have existed in the Universe since the early stages of the Big Bang. 
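The bending described here follows r = mv/(qB) for an ion of mass m, charge q and speed v in a magnetic field B, so two isotopes entering with the same speed and charge trace circles whose radii differ in proportion to their masses. The beam speed and field strength in the Python sketch below are assumed values chosen only to give readable numbers.

```python
# In a magnetic field an ion of mass m, charge q and speed v follows a circle
# of radius r = m*v / (q*B); heavier isotopes bend less, which is how a
# mass spectrometer separates them. Beam parameters are illustrative assumptions.
E_CHARGE = 1.602176634e-19     # elementary charge, coulombs
DA_IN_KG = 1.66053907e-27      # kilograms per dalton

def bend_radius(mass_da: float, speed: float, field_tesla: float, charge_e: int = 1) -> float:
    return (mass_da * DA_IN_KG * speed) / (charge_e * E_CHARGE * field_tesla)

v, B = 1.0e5, 0.5              # 100 km/s singly charged ion beam in a 0.5 T field
print(f"C-12: {bend_radius(12.000, v, B) * 100:.2f} cm")
print(f"C-13: {bend_radius(13.003, v, B) * 100:.2f} cm")
```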
Atomic nuclei form in nucleosynthesis reactions. In about three minutes Big Bang nucleosynthesis produced most of the helium, lithium, and deuterium in the Universe, and perhaps some of the beryllium and boron. The ubiquity and stability of atoms rely on their binding energy, which means that an atom has a lower energy than an unbound system of the nucleus and electrons. Where the temperature is much higher than the ionization potential, the matter exists in the form of plasma—a gas of positively charged ions (possibly, bare nuclei) and electrons. When the temperature drops below the ionization potential, atoms become statistically favorable. Atoms (complete with bound electrons) came to dominate over charged particles 380,000 years after the Big Bang—an epoch called recombination, when the expanding Universe cooled enough to allow electrons to become attached to nuclei. Since the Big Bang, which produced no carbon or heavier elements, atomic nuclei have been combined in stars through the process of nuclear fusion to produce more of the element helium, and (via the triple alpha process) the sequence of elements from carbon up to iron; see stellar nucleosynthesis for details. Isotopes such as lithium-6, as well as some beryllium and boron are generated in space through cosmic ray spallation. This occurs when a high-energy proton strikes an atomic nucleus, causing large numbers of nucleons to be ejected. Elements heavier than iron were produced in supernovae and colliding neutron stars through the r-process, and in AGB stars through the s-process, both of which involve the capture of neutrons by atomic nuclei. Elements such as lead formed largely through the radioactive decay of heavier elements. Most of the atoms that make up the Earth and its inhabitants were present in their current form in the nebula that collapsed out of a molecular cloud to form the Solar System. The rest are the result of radioactive decay, and their relative proportion can be used to determine the age of the Earth through radiometric dating. Most of the helium in the crust of the Earth (about 99% of the helium from gas wells, as shown by its lower abundance of helium-3) is a product of alpha decay. There are a few trace atoms on Earth that were not present at the beginning (i.e., not "primordial"), nor are results of radioactive decay. Carbon-14 is continuously generated by cosmic rays in the atmosphere. Some atoms on Earth have been artificially generated either deliberately or as by-products of nuclear reactors or explosions. Of the transuranic elements—those with atomic numbers greater than 92—only plutonium and neptunium occur naturally on Earth. Transuranic elements have radioactive lifetimes shorter than the current age of the Earth and thus identifiable quantities of these elements have long since decayed, with the exception of traces of plutonium-244 possibly deposited by cosmic dust. Natural deposits of plutonium and neptunium are produced by neutron capture in uranium ore. The Earth contains approximately 1.33×10⁵⁰ atoms. Although small numbers of independent atoms of noble gases exist, such as argon, neon, and helium, 99% of the atmosphere is bound in the form of molecules, including carbon dioxide and diatomic oxygen and nitrogen. At the surface of the Earth, an overwhelming majority of atoms combine to form various compounds, including water, salt, silicates and oxides. Atoms can also combine to create materials that do not consist of discrete molecules, including crystals and liquid or solid metals. 
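Radiometric dating, mentioned above, simply inverts the exponential decay law: if a fraction f of a primordial parent isotope survives, the elapsed time is the half-life multiplied by log₂(1/f). The 25% remaining fraction and the uranium-238 half-life in the sketch are illustrative values, not measurements from the text.

```python
import math

# Radiometric dating inverts exponential decay: if a fraction f of the original
# parent isotope remains, the elapsed time is t = half_life * log2(1 / f).
def age_from_fraction(remaining_fraction: float, half_life_years: float) -> float:
    return half_life_years * math.log2(1.0 / remaining_fraction)

# Illustrative case: 25% of a uranium-238 sample remains (half-life ~4.47 billion years).
print(f"{age_from_fraction(0.25, 4.47e9):.2e} years")   # two half-lives ≈ 8.9e9 years
```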
This atomic matter forms networked arrangements that lack the particular type of small-scale interrupted order associated with molecular matter. All nuclides with atomic numbers higher than 82 (lead) are known to be radioactive. No nuclide with an atomic number exceeding 92 (uranium) exists on Earth as a primordial nuclide, and heavier elements generally have shorter half-lives. Nevertheless, an "island of stability" encompassing relatively long-lived isotopes of superheavy elements with atomic numbers 110–114 might exist. Predictions for the half-life of the most stable nuclide on the island range from a few minutes to millions of years. In any case, superheavy elements (with "Z" > 104) would not exist due to increasing Coulomb repulsion (which results in spontaneous fission with increasingly short half-lives) in the absence of any stabilizing effects. Each particle of matter has a corresponding antimatter particle with the opposite electrical charge. Thus, the positron is a positively charged antielectron and the antiproton is a negatively charged equivalent of a proton. When a matter and corresponding antimatter particle meet, they annihilate each other. Because of this, along with an imbalance between the number of matter and antimatter particles, the latter are rare in the universe. The first causes of this imbalance are not yet fully understood, although theories of baryogenesis may offer an explanation. As a result, no antimatter atoms have been discovered in nature. In 1996 the antimatter counterpart of the hydrogen atom (antihydrogen) was synthesized at the CERN laboratory in Geneva. Other exotic atoms have been created by replacing one of the protons, neutrons or electrons with other particles that have the same charge. For example, an electron can be replaced by a more massive muon, forming a muonic atom. These types of atoms can be used to test the fundamental predictions of physics.
https://en.wikipedia.org/wiki?curid=902
Arable land Arable land (from the Latin "arabilis", "able to be ploughed") is any land capable of being ploughed and used to grow crops. Alternatively, for the purposes of agricultural statistics, the term often has a more precise definition: "Arable land is the land under temporary agricultural crops (multiple-cropped areas are counted only once), temporary meadows for mowing or pasture, land under market and kitchen gardens and land temporarily fallow (less than five years). The abandoned land resulting from shifting cultivation is not included in this category. Data for 'Arable land' are not meant to indicate the amount of land that is potentially cultivable." A more concise definition appearing in the Eurostat glossary similarly refers to actual rather than potential uses: "land worked (ploughed or tilled) regularly, generally under a system of crop rotation". Non-arable land can sometimes be converted to arable land through methods such as loosening and tilling (breaking up) of the soil, though in more extreme cases the degree of modification required to make certain types of land arable can be prohibitively expensive. In Britain, arable land has traditionally been contrasted with pasturable land such as heaths, which could be used for sheep-rearing but not as farmland. According to the Food and Agriculture Organization of the United Nations, in the year 2013, the world's arable land amounted to 1,407 million hectares, out of a total of 4,924 million hectares of land used for agriculture. Agricultural land that is not arable according to the FAO definition above includes land under permanent crops as well as permanent meadows and pastures. Other non-arable land includes land that is not suitable for any agricultural use. Land that is not arable, in the sense of lacking capability or suitability for cultivation for crop production, has one or more limitations – a lack of sufficient fresh water for irrigation, stoniness, steepness, adverse climate, excessive wetness with impracticality of drainage, and/or excessive salts, among others. Although such limitations may preclude cultivation, and some will in some cases preclude any agricultural use, large areas unsuitable for cultivation may still be agriculturally productive. For example, US NRCS statistics indicate that about 59 percent of US non-federal pasture and unforested rangeland is unsuitable for cultivation, yet such land has value for grazing of livestock. In British Columbia, Canada, 41 percent of the provincial Agricultural Land Reserve area is unsuitable for production of cultivated crops, but is suitable for uncultivated production of forage usable by grazing livestock. Similar examples can be found in many rangeland areas elsewhere. Land incapable of being cultivated for production of crops can sometimes be converted to arable land. New arable land makes more food, and can reduce starvation. This outcome also makes a country more self-sufficient and politically independent, because food importation is reduced. Making non-arable land arable often involves digging new irrigation canals and new wells, aqueducts, desalination plants, planting trees for shade in the desert, hydroponics, fertilizer, nitrogen fertilizer, pesticides, reverse osmosis water processors, PET film insulation or other insulation against heat and cold, digging ditches and hills for protection against the wind, and installing greenhouses with internal light and heat for protection against the cold outside and to provide light in cloudy areas. Such modifications are often prohibitively expensive. 
An alternative is the seawater greenhouse, which desalinates water through evaporation and condensation using solar energy as the only energy input. This technology is optimized to grow crops on desert land close to the sea. There are examples both of infertile non-arable land being turned into fertile arable land, and of fertile arable land being turned into infertile land.
https://en.wikipedia.org/wiki?curid=903
Cable television Cable television is a system of delivering television programming to consumers via radio frequency (RF) signals transmitted through coaxial cables, or in more recent systems, light pulses through fibre-optic cables. This contrasts with broadcast television (also known as terrestrial television), in which the television signal is transmitted over the air by radio waves and received by a television antenna attached to the television; or satellite television, in which the television signal is transmitted by a communications satellite orbiting the Earth and received by a satellite dish on the roof. FM radio programming, high-speed Internet, telephone services, and similar non-television services may also be provided through these cables. Analog television was standard in the 20th century, but since the 2000s, cable systems have been upgraded to digital cable operation. A "cable channel" (sometimes known as a "cable network") is a television network available via cable television. When available through satellite television, including direct broadcast satellite providers such as DirecTV, Dish Network and Sky, as well as via IPTV providers such as Verizon FIOS and AT&T U-verse, this is referred to as a "satellite channel". Alternative terms include "non-broadcast channel" or "programming service", the latter being mainly used in legal contexts. Examples of cable/satellite channels/cable networks available in many countries are HBO, Cinemax, MTV, Cartoon Network, AXN, E!, FX, Discovery Channel, Canal+, Eurosport, Fox Sports, Disney Channel, Nickelodeon, CNN International, and ESPN. The abbreviation CATV is often used for cable television. It originally stood for "Community Access Television" or "Community Antenna Television", from cable television's origins in 1948. In areas where over-the-air TV reception was limited by distance from transmitters or mountainous terrain, large "community antennas" were constructed, and cable was run from them to individual homes. To receive cable television at a given location, cable distribution lines must be available on the local utility poles or underground utility lines. Coaxial cable brings the signal to the customer's building through a "service drop", an overhead or underground cable. If the subscriber's building does not have a cable service drop, the cable company will install one. The standard cable used in the U.S. is RG-6, which has a 75 ohm impedance, and connects with a type F connector. The cable company's portion of the wiring usually ends at a distribution box on the building exterior, and built-in cable wiring in the walls usually distributes the signal to jacks in different rooms to which televisions are connected. Multiple cables to different rooms are split off the incoming cable with a small device called a splitter. There are two standards for cable television; older analog cable, and newer digital cable which can carry data signals used by digital television receivers such as HDTV equipment. All cable companies in the United States have switched to or are in the course of switching to digital cable television since it was first introduced in the late 1990s. Most cable companies require a set-top box or a slot on one's TV set for conditional access module cards to view their cable channels, even on newer televisions with digital cable QAM tuners, because most digital cable channels are now encrypted, or "scrambled", to reduce cable service theft. 
A cable from the jack in the wall is attached to the input of the box, and an output cable from the box is attached to the television, usually the RF-IN or composite input on older TVs. Since the set-top box only decodes the single channel that is being watched, each television in the house requires a separate box. Some unencrypted channels, usually traditional over-the-air broadcast networks, can be displayed without a receiver box. The cable company will provide set-top boxes based on the level of service a customer purchases, from basic set-top boxes with a standard definition picture connected through the standard coaxial connection on the TV, to high-definition wireless DVR receivers connected via HDMI or component. Older analog television sets are "cable ready" and can receive the old analog cable without a set-top box. To receive digital cable channels on an analog television set, even unencrypted ones, requires a different type of box, a digital television adapter supplied by the cable company or purchased by the subscriber. Another newer distribution method takes advantage of low-cost, high-quality DVB distribution to residential areas: it uses TV gateways to convert DVB-C or DVB-C2 streams to IP for distribution of TV over the IP network in the home. In the most common system, multiple television channels (as many as 500, although this varies depending on the provider's available channel capacity) are distributed to subscriber residences through a coaxial cable, which comes from a trunkline supported on utility poles originating at the cable company's local distribution facility, called the "headend". Many channels can be transmitted through one coaxial cable by a technique called frequency division multiplexing. At the headend, each television channel is translated to a different frequency. By giving each channel a different frequency "slot" on the cable, the separate television signals do not interfere with each other. At an outdoor cable box on the subscriber's residence the company's service drop cable is connected to cables distributing the signal to different rooms in the building. At each television, the subscriber's television or a set-top box provided by the cable company translates the desired channel back to its original frequency (baseband), and it is displayed onscreen. Due to widespread cable theft in earlier analog systems, the signals are typically encrypted on modern digital cable systems, and the set-top box must be activated by an activation code sent by the cable company before it will function, which is only sent after the subscriber signs up. If the subscriber fails to pay their bill, the cable company can send a signal to deactivate the subscriber's box, preventing reception. There are also usually "upstream" channels on the cable to send data from the customer box to the cable headend, for advanced features such as requesting pay-per-view shows or movies, cable internet access, and cable telephone service. The "downstream" channels occupy a band of frequencies from approximately 50 MHz to 1 GHz, while the "upstream" channels occupy frequencies of 5 to 42 MHz. Subscribers pay with a monthly fee. Subscribers can choose from several levels of service, with "premium" packages including more channels but costing a higher rate. At the local headend, the feed signals from the individual television channels are received by dish antennas from communication satellites. 
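Frequency-division multiplexing, as described above, amounts to assigning each programme a fixed-width slot in the downstream band so the signals never overlap. The Python sketch below assumes 6 MHz slots (typical of North American analog and digital cable channels) within the 50 MHz–1 GHz downstream band mentioned in the text; real channel plans skip some frequencies, and digital systems carry several programme streams per slot, which is how channel counts in the hundreds are reached.

```python
# Frequency-division multiplexing sketch: each TV channel occupies its own
# fixed-width frequency slot on the shared coaxial cable. The 6 MHz slot width
# and the band edges are assumptions chosen for illustration.
SLOT_WIDTH_MHZ = 6.0
DOWNSTREAM_START_MHZ = 50.0
DOWNSTREAM_END_MHZ = 1000.0

def slot_range(slot_index: int) -> tuple[float, float]:
    """Return the (low, high) edge frequencies in MHz of a downstream slot."""
    low = DOWNSTREAM_START_MHZ + slot_index * SLOT_WIDTH_MHZ
    high = low + SLOT_WIDTH_MHZ
    if high > DOWNSTREAM_END_MHZ:
        raise ValueError("slot falls outside the downstream band")
    return low, high

print(slot_range(0))     # (50.0, 56.0)
print(slot_range(42))    # (302.0, 308.0)
max_slots = int((DOWNSTREAM_END_MHZ - DOWNSTREAM_START_MHZ) // SLOT_WIDTH_MHZ)
print(f"about {max_slots} slots fit in the 50 MHz - 1 GHz downstream band")
```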
Additional local channels, such as local broadcast television stations, educational channels from local colleges, and community access channels devoted to local governments (PEG channels) are usually included on the cable service. Commercial advertisements for local business are also inserted in the programming at the headend (the individual channels, which are distributed nationally, also have their own nationally oriented commercials). Modern cable systems are large, with a single network and headend often serving an entire metropolitan area. Most systems use hybrid fiber-coaxial (HFC) distribution; this means the trunklines that carry the signal from the headend to local neighborhoods are optical fiber to provide greater bandwidth and also extra capacity for future expansion. At the headend, the electrical signal is translated into an optical signal and sent through the fiber. The fiber trunkline goes to several "distribution hubs", from which multiple fibers fan out to carry the signal to boxes called "optical nodes" in local communities. At the optical node, the optical signal is translated back into an electrical signal and carried by coaxial cable distribution lines on utility poles, from which cables branch out to a series of signal amplifiers and line extenders. These devices carry the signal to customers via passive RF devices called taps. Cable television began in the United States as a commercial business in 1950, although there were small-scale systems by hobbyists in the 1940s. The early systems simply received weak (broadcast) channels, amplified them, and sent them over unshielded wires to the subscribers, limited to a community or to adjacent communities. The receiving antenna would be taller than any individual subscriber could afford, thus bringing in stronger signals; in hilly or mountainous terrain it would be placed at a high elevation. At the outset, cable systems only served smaller communities without television stations of their own, and which could not easily receive signals from stations in cities because of distance or hilly terrain. In Canada, however, communities with their own signals were fertile cable markets, as viewers wanted to receive American signals. Rarely, as in the college town of Alfred, New York, U.S. cable systems retransmitted Canadian channels. Although early (VHF) television receivers could receive 12 channels (2–13), the maximum number of channels that could be broadcast in one city was 7: channels 2, 4, either 5 or 6, 7, 9, 11 and 13, as receivers at the time were unable to receive strong (local) signals on adjacent channels without distortion. (There were frequency gaps between 4 and 5, and between 6 and 7, which allowed both to be used in the same city). As equipment improved, all twelve channels could be utilized, except where a local VHF television station broadcast. Local broadcast channels were not usable for signals deemed to be priority, but technology allowed low-priority signals to be placed on such channels by synchronizing their blanking intervals. Similarly, a local VHF station could not be carried on its broadcast channel as the signals would arrive at the TV set slightly separated in time, causing "ghosting". The bandwidth of the amplifiers also was limited, meaning frequencies over 250 MHz were difficult to transmit to distant portions of the coaxial network, and UHF channels could not be used at all. 
To expand beyond 12 channels, non-standard "midband" channels had to be used, located between the FM band and Channel 7, or "superband" beyond Channel 13 up to about 300 MHz; these channels initially were only accessible using separate tuner boxes that sent the chosen channel into the TV set on Channel 2, 3 or 4. Initially, UHF broadcast stations were at a disadvantage because the standard TV sets in use at the time were unable to receive their channels. Around 1966 the FCC mandated that all TV sets sold after a certain date have the capability of receiving UHF channels. Before being added to the cable box itself, these midband channels were used for early incarnations of pay TV, e.g. The Z Channel (Los Angeles) and HBO, but transmitted in the clear, i.e. not scrambled, as standard TV sets of the period could not pick up the signal, nor could the average consumer "de-tune" the normal stations to be able to receive it. Once tuners that could receive select mid-band and super-band channels began to be incorporated into standard television sets, broadcasters were forced to either install scrambling circuitry or move these signals further out of the range of reception for early cable-ready TVs and VCRs. However, once consumer sets had the ability to receive all 181 FCC allocated channels, premium broadcasters were left with no choice but to scramble. Unfortunately for pay-TV operators, the descrambling circuitry was often published in electronics hobby magazines such as "Popular Science" and "Popular Electronics", allowing anybody with more than a rudimentary knowledge of broadcast electronics to build their own and receive the programming without cost. Later, the cable operators began to carry FM radio stations, and encouraged subscribers to connect their FM stereo sets to cable. Before stereo and bilingual TV sound became common, Pay-TV channel sound was added to the FM stereo cable line-ups. About this time, operators expanded beyond the 12-channel dial to use the "midband" and "superband" VHF channels adjacent to the "high band" 7–13 of North American television frequencies. Some operators as in Cornwall, Ontario, used a dual distribution network with Channels 2–13 on each of the two cables. During the 1980s, United States regulations not unlike public, educational, and government access (PEG) created the beginning of cable-originated live television programming. As cable penetration increased, numerous cable-only TV stations were launched, many with their own news bureaus that could provide more immediate and more localized content than that provided by the nearest network newscast. Such stations may use similar on-air branding as that used by the nearby broadcast network affiliate, but because these stations do not broadcast over the air and are not regulated by the FCC, their call signs are meaningless. These stations evolved partially into today's over-the-air digital subchannels, where a main broadcast TV station e.g. NBS 37* would – in the case of no local CNB or ABS station being available – rebroadcast the programming from a nearby affiliate but fill in with its own news and other community programming to suit its own locale. Many live local programs with local interests were subsequently created all over the United States in most major television markets in the early 1980s. This evolved into today's many cable-only broadcasts of diverse programming, including cable-only produced television movies and miniseries. 
Cable specialty channels, starting with channels oriented to show movies and large sporting or performance events, diversified further, and "narrowcasting" became common. By the late 1980s, cable-only signals outnumbered broadcast signals on cable systems, some of which by this time had expanded beyond 35 channels. By the mid-1980s in Canada, cable operators were allowed by the regulators to enter into distribution contracts with cable networks on their own. By the 1990s, tiers became common, with customers able to subscribe to different tiers to obtain different selections of additional channels above the basic selection. By subscribing to additional tiers, customers could get specialty channels, movie channels, and foreign channels. Large cable companies used addressable descramblers to limit access to premium channels for customers not subscribing to higher tiers, however the above magazines often published workarounds for that technology as well. During the 1990s, the pressure to accommodate the growing array of offerings resulted in digital transmission that made more efficient use of the VHF signal capacity; fibre optics was common to carry signals into areas near the home, where coax could carry higher frequencies over the short remaining distance. For a time in the 1980s and 1990s, television receivers and VCRs were equipped to receive the mid-band and super-band channels. Because the descrambling circuitry was for a time present in these tuners, depriving cable operators of much of their revenue, such cable-ready tuners are rarely used now – requiring a return to the set-top boxes used from the 1970s onward. The conversion to digital broadcasting has put all signals – broadcast and cable – into digital form, rendering analog cable television service mostly obsolete, functional in an ever-dwindling supply of select markets. Analog television sets are still accommodated, but their tuners are mostly obsolete, oftentimes dependent entirely on the set-top box. Cable television is mostly available in North America, Europe, Australia, South Asia and East Asia, and less so in South America and the Middle East. Cable television has had little success in Africa, as it is not cost-effective to lay cables in sparsely populated areas. So-called "Wireless Cable" microwave-based systems are used instead. Coaxial cables are capable of bi-directional carriage of signals as well as the transmission of large amounts of data. Cable television signals use only a portion of the bandwidth available over coaxial lines. This leaves plenty of space available for other digital services such as cable internet, cable telephony and wireless services, using both unlicensed and licensed spectrum. Broadband internet access is achieved over coaxial cable by using cable modems to convert the network data into a type of digital signal that can be transferred over coaxial cable. One problem with some cable systems is that the older amplifiers placed along the cable routes are unidirectional; to allow for uploading of data, the customer would need to use an analog telephone modem to provide the upstream connection. This limited the upstream speed to 31.2 kbit/s and prevented the always-on convenience broadband internet typically provides. Many large cable systems have upgraded or are upgrading their equipment to allow for bi-directional signals, thus allowing for greater upload speed and always-on convenience, though these upgrades are expensive. 
In North America, Australia and Europe, many cable operators have already introduced cable telephone service, which operates much like the service of existing fixed-line operators. This service involves installing a special telephone interface at the customer's premises that converts the analog signals from the customer's in-home wiring into a digital signal, which is then sent on the local loop (replacing the analog last mile, or plain old telephone service (POTS)) to the company's switching center, where it is connected to the public switched telephone network (PSTN). The biggest obstacle to cable telephone service is the need for nearly 100% reliable service for emergency calls. One of the standards available for digital cable telephony, PacketCable, seems to be the most promising and able to work with the quality of service (QoS) demands of traditional analog plain old telephone service (POTS). The biggest advantage of digital cable telephone service is similar to the advantage of digital cable, namely that data can be compressed, resulting in much less bandwidth used than a dedicated analog circuit-switched service. Other advantages include better voice quality and integration with a Voice over Internet Protocol (VoIP) network, providing cheap or unlimited nationwide and international calling. In many cases, digital cable telephone service is separate from the cable modem service offered by many cable companies and does not rely on Internet Protocol (IP) traffic or the Internet. Traditional cable television providers and traditional telecommunication companies increasingly compete in providing voice, video and data services to residences. The combination of television, telephone and Internet access is commonly called "triple play", regardless of whether CATV or telcos offer it.
https://en.wikipedia.org/wiki?curid=7587
Cholera Cholera is an infection of the small intestine by some strains of the bacterium "Vibrio cholerae". Symptoms may range from none, to mild, to severe. The classic symptom is large amounts of watery diarrhea that lasts a few days. Vomiting and muscle cramps may also occur. Diarrhea can be so severe that it leads within hours to severe dehydration and electrolyte imbalance. This may result in sunken eyes, cold skin, decreased skin elasticity, and wrinkling of the hands and feet. Dehydration can cause the skin to turn bluish. Symptoms start two hours to five days after exposure. Cholera is caused by a number of types of "Vibrio cholerae", with some types producing more severe disease than others. It is spread mostly by unsafe water and unsafe food that has been contaminated with human feces containing the bacteria. Undercooked seafood is a common source. Humans are the only animal affected. Risk factors for the disease include poor sanitation, not enough clean drinking water, and poverty. There are concerns that rising sea levels will increase rates of disease. Cholera can be diagnosed by a stool test. A rapid dipstick test is available but is not as accurate. Prevention methods against cholera include improved sanitation and access to clean water. Cholera vaccines that are given by mouth provide reasonable protection for about six months. They have the added benefit of protecting against another type of diarrhea caused by "E. coli". The primary treatment is oral rehydration therapy—the replacement of fluids with slightly sweet and salty solutions. Rice-based solutions are preferred. Zinc supplementation is useful in children. In severe cases, intravenous fluids, such as Ringer's lactate, may be required, and antibiotics may be beneficial. Testing to see which antibiotic the cholera is susceptible to can help guide the choice. Cholera affects an estimated 3–5 million people worldwide and causes 28,800–130,000 deaths a year. Although it is classified as a pandemic, it is rare in the developed world. Children are most commonly affected. Cholera occurs as both outbreaks and chronically in certain areas. Areas with an ongoing risk of disease include Africa and Southeast Asia. The risk of death among those affected is usually less than 5% but may be as high as 50%. No access to treatment results in a higher death rate. Descriptions of cholera are found as early as the 5th century BC in Sanskrit. The study of cholera in England by John Snow between 1849 and 1854 led to significant advances in the field of epidemiology. Seven large outbreaks have occurred over the last 200 years with millions of deaths. The primary symptoms of cholera are profuse diarrhea and vomiting of clear fluid. These symptoms usually start suddenly, half a day to five days after ingestion of the bacteria. The diarrhea is frequently described as "rice water" in nature and may have a fishy odor. An untreated person with cholera may produce large volumes of diarrhea a day. Severe cholera, without treatment, kills about half of affected individuals. If the severe diarrhea is not treated, it can result in life-threatening dehydration and electrolyte imbalances. Estimates of the ratio of asymptomatic to symptomatic infections have ranged from 3 to 100. Cholera has been nicknamed the "blue death" because a person's skin may turn bluish-gray from extreme loss of fluids. Fever is rare and should raise suspicion for secondary infection. Patients can be lethargic and might have sunken eyes, dry mouth, cold clammy skin, or wrinkled hands and feet. 
Kussmaul breathing, a deep and labored breathing pattern, can occur because of acidosis from stool bicarbonate losses and lactic acidosis associated with poor perfusion. Blood pressure drops due to dehydration, peripheral pulse is rapid and thready, and urine output decreases with time. Muscle cramping and weakness, altered consciousness, seizures, or even coma due to electrolyte imbalances are common, especially in children. Cholera bacteria have been found in shellfish and plankton. Transmission is usually through the fecal-oral route of contaminated food or water caused by poor sanitation. Most cholera cases in developed countries are a result of transmission by food, while in the developing world it is more often water. Food transmission can occur when people harvest seafood such as oysters in waters infected with sewage, as "Vibrio cholerae" accumulates in planktonic crustaceans and the oysters eat the zooplankton. People infected with cholera often have diarrhea, and disease transmission may occur if this highly liquid stool, colloquially referred to as "rice-water", contaminates water used by others. A single diarrheal event can cause a one-million-fold increase in numbers of "V. cholerae" in the environment. The source of the contamination is typically other cholera sufferers when their untreated diarrheal discharge is allowed to get into waterways, groundwater or drinking water supplies. Drinking any contaminated water and eating any foods washed in the water, as well as shellfish living in the affected waterway, can cause a person to contract an infection. Cholera is rarely spread directly from person to person. "V. cholerae" also exists outside the human body in natural water sources, either by itself or through interacting with phytoplankton, zooplankton, or biotic and abiotic detritus. Drinking such water can also result in the disease, even without prior contamination through fecal matter. However, selective pressures exist in the aquatic environment that may reduce the virulence of "V. cholerae". Specifically, animal models indicate that the transcriptional profile of the pathogen changes as it prepares to enter an aquatic environment. This transcriptional change results in a loss of ability of "V. cholerae" to be cultured on standard media, a phenotype referred to as 'viable but non-culturable' (VBNC) or, more conservatively, 'active but non-culturable' (ABNC). One study indicates that the culturability of "V. cholerae" drops 90% within 24 hours of entering the water, and furthermore that this loss in culturability is associated with a loss in virulence. Both toxic and non-toxic strains exist. Non-toxic strains can acquire toxicity through a temperate bacteriophage. About 100 million bacteria must typically be ingested to cause cholera in a normal healthy adult. This dose, however, is lower in those with lowered gastric acidity (for instance those using proton pump inhibitors). Children are also more susceptible, with two- to four-year-olds having the highest rates of infection. Individuals' susceptibility to cholera is also affected by their blood type, with those with type O blood being the most susceptible. Persons with lowered immunity, such as persons with AIDS or malnourished children, are more likely to experience a severe case if they become infected. Any individual, even a healthy adult in middle age, can experience a severe case, and each person's case should be measured by the loss of fluids, preferably in consultation with a professional health care provider. 
The cystic fibrosis genetic mutation known as delta-F508 in humans has been said to maintain a selective heterozygous advantage: heterozygous carriers of the mutation (who are thus not affected by cystic fibrosis) are more resistant to "V. cholerae" infections. In this model, the genetic deficiency in the cystic fibrosis transmembrane conductance regulator channel proteins interferes with bacteria binding to the intestinal epithelium, thus reducing the effects of an infection. When consumed, most bacteria do not survive the acidic conditions of the human stomach. The few surviving bacteria conserve their energy and stored nutrients during the passage through the stomach by shutting down protein production. When the surviving bacteria exit the stomach and reach the small intestine, they must propel themselves through the thick mucus that lines the small intestine to reach the intestinal walls where they can attach and thrive. Once the cholera bacteria reach the intestinal wall, they no longer need the flagella to move. The bacteria stop producing the protein flagellin to conserve energy and nutrients by changing the mix of proteins that they express in response to the changed chemical surroundings. On reaching the intestinal wall, "V. cholerae" start producing the toxic proteins that give the infected person a watery diarrhea. This carries the multiplying new generations of "V. cholerae" bacteria out into the drinking water of the next host if proper sanitation measures are not in place. The cholera toxin (CTX or CT) is an oligomeric complex made up of six protein subunits: a single copy of the A subunit (part A), and five copies of the B subunit (part B), connected by a disulfide bond. The five B subunits form a five-membered ring that binds to GM1 gangliosides on the surface of the intestinal epithelium cells. The A1 portion of the A subunit is an enzyme that ADP-ribosylates G proteins, while the A2 chain fits into the central pore of the B subunit ring. Upon binding, the complex is taken into the cell via receptor-mediated endocytosis. Once inside the cell, the disulfide bond is reduced, and the A1 subunit is freed to bind with a human partner protein called ADP-ribosylation factor 6 (Arf6). Binding exposes its active site, allowing it to permanently ribosylate the Gs alpha subunit of the heterotrimeric G protein. This results in constitutive cAMP production, which in turn leads to the secretion of water, sodium, potassium, and bicarbonate into the lumen of the small intestine and rapid dehydration. The gene encoding the cholera toxin was introduced into "V. cholerae" by horizontal gene transfer. Virulent strains of "V. cholerae" carry a variant of a temperate bacteriophage called CTXφ. Microbiologists have studied the genetic mechanisms by which the "V. cholerae" bacteria turn off the production of some proteins and turn on the production of other proteins as they respond to the series of chemical environments they encounter, passing through the stomach, through the mucous layer of the small intestine, and on to the intestinal wall. Of particular interest have been the genetic mechanisms by which cholera bacteria turn on the protein production of the toxins that interact with host cell mechanisms to pump chloride ions into the small intestine, creating an ionic pressure which prevents sodium ions from entering the cell. 
The chloride and sodium ions create a salt-water environment in the small intestines, which through osmosis can pull up to six liters of water per day through the intestinal cells, creating the massive amounts of diarrhea. The host can become rapidly dehydrated unless an appropriate mixture of dilute salt water and sugar is taken to replace the blood's water and salts lost in the diarrhea. By inserting separate, successive sections of "V. cholerae" DNA into the DNA of other bacteria, such as "E. coli" that would not naturally produce the protein toxins, researchers have investigated the mechanisms by which "V. cholerae" responds to the changing chemical environments of the stomach, mucous layers, and intestinal wall. Researchers have discovered that a complex cascade of regulatory proteins controls expression of "V. cholerae" virulence determinants. In responding to the chemical environment at the intestinal wall, the "V. cholerae" bacteria produce the TcpP/TcpH proteins, which, together with the ToxR/ToxS proteins, activate the expression of the ToxT regulatory protein. ToxT then directly activates expression of virulence genes that produce the toxins, causing diarrhea in the infected person and allowing the bacteria to colonize the intestine. Current research aims at discovering "the signal that makes the cholera bacteria stop swimming and start to colonize (that is, adhere to the cells of) the small intestine." Amplified fragment length polymorphism fingerprinting of the pandemic isolates of "V. cholerae" has revealed variation in the genetic structure. Two clusters have been identified: Cluster I and Cluster II. For the most part, Cluster I consists of strains from the 1960s and 1970s, while Cluster II largely contains strains from the 1980s and 1990s, based on the change in the clone structure. This grouping of strains is best seen in the strains from the African continent. In many areas of the world, antibiotic resistance is increasing within cholera bacteria. In Bangladesh, for example, most cases are resistant to tetracycline, trimethoprim-sulfamethoxazole, and erythromycin. Rapid diagnostic assay methods are available for the identification of multi-drug resistant cases. New-generation antimicrobials have been discovered which are effective against cholera bacteria in "in vitro" studies. A rapid dipstick test is available to determine the presence of "V. cholerae". In those samples that test positive, further testing should be done to determine antibiotic resistance. In epidemic situations, a clinical diagnosis may be made by taking a patient history and doing a brief examination. Treatment is usually started without or before confirmation by laboratory analysis. Stool and swab samples collected in the acute stage of the disease, before antibiotics have been administered, are the most useful specimens for laboratory diagnosis. If an epidemic of cholera is suspected, the most common causative agent is "V. cholerae" O1. If "V. cholerae" serogroup O1 is not isolated, the laboratory should test for "V. cholerae" O139. However, if neither of these organisms is isolated, it is necessary to send stool specimens to a reference laboratory. Infection with "V. cholerae" O139 should be reported and handled in the same manner as that caused by "V. cholerae" O1. The associated diarrheal illness should be referred to as cholera and must be reported in the United States. The World Health Organization (WHO) recommends focusing on prevention, preparedness, and response to combat the spread of cholera. 
They also stress the importance of an effective surveillance system. Governments can play a role in all of these areas. Although cholera may be life-threatening, prevention of the disease is normally straightforward if proper sanitation practices are followed. In developed countries, due to the nearly universal advanced water treatment and sanitation practices present there, cholera is rare. For example, the last major outbreak of cholera in the United States occurred in 1910–1911. Cholera is mainly a risk in developing countries. Effective sanitation practices, if instituted and adhered to in time, are usually sufficient to stop an epidemic. There are several points along the cholera transmission path at which its spread may be halted. Handwashing with soap or ash after using a toilet and before handling food or eating is also recommended for cholera prevention by WHO Africa. Surveillance and prompt reporting allow for containing cholera epidemics rapidly. Cholera exists as a seasonal disease in many endemic countries, occurring annually mostly during rainy seasons. Surveillance systems can provide early alerts to outbreaks, leading to a coordinated response, and assist in the preparation of preparedness plans. Efficient surveillance systems can also improve the risk assessment for potential cholera outbreaks. Understanding the seasonality and location of outbreaks provides guidance for improving cholera control activities for the most vulnerable. For prevention to be effective, it is important that cases be reported to national health authorities. A number of safe and effective oral vaccines for cholera are available. The World Health Organization (WHO) has three prequalified oral cholera vaccines (OCVs): Dukoral, Shanchol, and Euvichol. Dukoral, an orally administered, inactivated whole-cell vaccine, has an overall efficacy of about 52% during the first year after being given and 62% in the second year, with minimal side effects. It is available in over 60 countries. However, it is not currently recommended by the Centers for Disease Control and Prevention (CDC) for most people traveling from the United States to endemic countries. The vaccine that the US Food and Drug Administration (FDA) recommends, Vaxchora, is an oral attenuated live vaccine that is effective as a single dose. One injectable vaccine was found to be effective for two to three years. The protective efficacy was 28% lower in children less than five years old. However, it has limited availability. Work is under way to investigate the role of mass vaccination. The WHO recommends immunization of high-risk groups, such as children and people with HIV, in countries where this disease is endemic. If people are immunized broadly, herd immunity results, with a decrease in the amount of contamination in the environment. Developed for use in Bangladesh, the "sari filter" is a simple and cost-effective appropriate technology method for reducing the contamination of drinking water. Used sari cloth is preferable, but other types of used cloth can be used with some effect, though the effectiveness will vary significantly. Used cloth is more effective than new cloth, as the repeated washing reduces the space between the fibers. Water collected in this way has a greatly reduced pathogen count - though it will not necessarily be perfectly safe, it is an improvement for poor people with limited options. In Bangladesh this practice was found to decrease rates of cholera by nearly half. It involves folding a "sari" four to eight times. 
Between uses the cloth should be rinsed in clean water and dried in the sun to kill any bacteria on it. A nylon cloth appears to work as well but is not as affordable. Continued eating speeds the recovery of normal intestinal function. The WHO recommends this generally for cases of diarrhea no matter what the underlying cause. A CDC training manual specifically for cholera states: "Continue to breastfeed your baby if the baby has watery diarrhea, even when traveling to get treatment. Adults and older children should continue to eat frequently." The most common error in caring for patients with cholera is to underestimate the speed and volume of fluids required. In most cases, cholera can be successfully treated with oral rehydration therapy (ORT), which is highly effective, safe, and simple to administer. Rice-based solutions are preferred to glucose-based ones due to greater efficiency. In severe cases with significant dehydration, intravenous rehydration may be necessary. Ringer's lactate is the preferred solution, often with added potassium. Large volumes and continued replacement until diarrhea has subsided may be needed. Ten percent of a person's body weight in fluid may need to be given in the first two to four hours. This method was first tried on a mass scale during the Bangladesh Liberation War, and was found to have much success. Despite widespread beliefs, fruit juices and commercial fizzy drinks like cola are not ideal for rehydration of people with serious infections of the intestines, and their excessive sugar content may even harm water uptake. If commercially produced oral rehydration solutions are too expensive or difficult to obtain, solutions can be made at home. One such recipe calls for 1 liter of boiled water, 1/2 teaspoon of salt, 6 teaspoons of sugar, and added mashed banana for potassium and to improve taste. As there frequently is acidosis initially, the potassium level may be normal, even though large losses have occurred. As the dehydration is corrected, potassium levels may decrease rapidly, and thus need to be replaced. This may be done by consuming foods high in potassium, like bananas or coconut water. Antibiotic treatments for one to three days shorten the course of the disease and reduce the severity of the symptoms. Use of antibiotics also reduces fluid requirements. People will recover without them, however, if sufficient hydration is maintained. The WHO only recommends antibiotics in those with severe dehydration. Doxycycline is typically used first line, although some strains of "V. cholerae" have shown resistance. Testing for resistance during an outbreak can help determine appropriate future choices. Other antibiotics proven to be effective include cotrimoxazole, erythromycin, tetracycline, chloramphenicol, and furazolidone. Fluoroquinolones, such as ciprofloxacin, also may be used, but resistance has been reported. Antibiotics improve outcomes in those who are both severely and not severely dehydrated. Azithromycin and tetracycline may work better than doxycycline or ciprofloxacin. In Bangladesh, zinc supplementation reduced the duration and severity of diarrhea in children with cholera when given with antibiotics and rehydration therapy as needed. It reduced the length of disease by eight hours and the amount of diarrhea stool by 10%. Supplementation also appears to be effective in both treating and preventing infectious diarrhea due to other causes among children in the developing world. 
If people with cholera are treated quickly and properly, the mortality rate is less than 1%; however, with untreated cholera, the mortality rate rises to 50–60%. For certain genetic strains of cholera, such as the one present during the 2010 epidemic in Haiti and the 2004 outbreak in India, death can occur within two hours of becoming ill. Cholera affects an estimated 3–5 million people worldwide, and causes 58,000–130,000 deaths a year. This occurs mainly in the developing world. In the early 1980s, death rates are believed to have been greater than three million a year. It is difficult to calculate exact numbers of cases, as many go unreported due to concerns that an outbreak may have a negative impact on the tourism of a country. Cholera remains both epidemic and endemic in many areas of the world. In October 2016, an outbreak of cholera began in war-ravaged Yemen. WHO called it "the worst cholera outbreak in the world". Although much is known about the mechanisms behind the spread of cholera, this has not led to a full understanding of what makes cholera outbreaks happen in some places and not others. Lack of treatment of human feces and lack of treatment of drinking water greatly facilitate its spread, but bodies of water can serve as a reservoir, and seafood shipped long distances can spread the disease. Cholera was not known in the Americas for most of the 20th century, but it reappeared towards the end of that century. The word cholera is from Greek "kholera" (χολέρα), derived from χολή "kholē", "bile". Cholera likely has its origins in the Indian subcontinent, as evidenced by its prevalence in the region for centuries. The disease appears in the European literature as early as 1642, from the Dutch physician Jakob de Bondt's description of it in his "De Medicina Indorum". (The "Indorum" of the title refers to the East Indies. He also gave first European descriptions of other diseases.) Early outbreaks in the Indian subcontinent are believed to have been the result of poor living conditions as well as the presence of pools of still water, both of which provide ideal conditions for cholera to thrive. The disease first spread by trade routes (land and sea) to Russia in 1817, later to the rest of Europe, and from Europe to North America and the rest of the world (hence the name "Asiatic cholera"). Seven cholera pandemics have occurred in the past 200 years, with the seventh pandemic originating in Indonesia in 1961. The first cholera pandemic occurred in the Bengal region of India, near Calcutta, lasting from 1817 to 1824. The disease dispersed from India to Southeast Asia, the Middle East, Europe, and Eastern Africa. The movement of British Army and Navy ships and personnel is believed to have contributed to the range of the pandemic, since the ships carried people with the disease to the shores of the Indian Ocean, from Africa to Indonesia, and north to China and Japan. The second pandemic lasted from 1826 to 1837 and particularly affected North America and Europe as a result of advancements in transportation and global trade, and increased human migration, including soldiers. The third pandemic erupted in 1846, persisted until 1860, extended to North Africa, and reached South America, for the first time specifically affecting Brazil. The fourth pandemic lasted from 1863 to 1875 and spread from India to Naples and Spain. The fifth pandemic, from 1881 to 1896, started in India and spread to Europe, Asia, and South America. The sixth pandemic lasted from 1899 to 1923. 
These epidemics were less fatal due to a greater understanding of the cholera bacteria. Egypt, the Arabian peninsula, Persia, India, and the Philippines were hit hardest during these epidemics, while other areas, like Germany in 1892 (primarily the city of Hamburg, where more than 8,600 people died) and Naples from 1910–1911, also experienced severe outbreaks. The seventh pandemic originated in 1961 in Indonesia and is marked by the emergence of a new strain, nicknamed "El Tor", which still persists in developing countries. Since it became widespread in the 19th century, cholera has killed tens of millions of people. In Russia alone, between 1847 and 1851, more than one million people perished of the disease. It killed 150,000 Americans during the second pandemic. Between 1900 and 1920, perhaps eight million people died of cholera in India. Cholera became the first reportable disease in the United States due to the significant effects it had on health. John Snow, in England, was the first to identify the importance of contaminated water as its cause in 1854. Cholera is now no longer considered a pressing health threat in Europe and North America due to filtering and chlorination of water supplies, but still heavily affects populations in developing countries. In the past, vessels flew a yellow quarantine flag if any crew members or passengers were suffering from cholera. No one aboard a vessel flying a yellow flag would be allowed ashore for an extended period, typically 30 to 40 days. In modern sets of international maritime signal flags, the quarantine flag is yellow and black. Historically, many different claimed remedies have existed in folklore. Many of the older remedies were based on the miasma theory. Some believed that abdominal chilling made one more susceptible, and flannel and cholera belts were routine in army kits. In the 1854–1855 outbreak in Naples, homeopathic camphor was used according to Hahnemann. T. J. Ritter's "Mother's Remedies" book lists tomato syrup as a home remedy from northern America. Elecampane was recommended in the United Kingdom according to William Thomas Fernie. Cholera cases are much less frequent in developed countries where governments have helped to establish water sanitation practices and effective medical treatments. The United States, for example, used to have a severe cholera problem similar to those in some developing countries. There were three large cholera outbreaks in the 1800s, which can be attributed to "Vibrio cholerae"'s spread through interior waterways like the Erie Canal and routes along the Eastern Seaboard. The island of Manhattan in New York City touched the Atlantic Ocean, where cholera collected just off the coast. At this time, New York City did not have as effective a sanitation system as it does today, so cholera was able to spread. Cholera morbus is a historical term that was used to refer to gastroenteritis rather than specifically cholera. The bacterium was isolated in 1854 by Italian anatomist Filippo Pacini, but its exact nature and his results were not widely known. 
https://en.wikipedia.org/wiki?curid=7591
Caldera A caldera is a large cauldron-like hollow that forms shortly after the emptying of a magma chamber/reservoir in a volcanic eruption. When large volumes of magma are erupted over a short time, structural support for the rock above the magma chamber is lost. The ground surface then collapses downward into the emptied or partially emptied magma chamber, leaving a massive depression at the surface (from one to dozens of kilometers in diameter). Although sometimes described as a crater, the feature is actually a type of sinkhole, as it is formed through subsidence and collapse rather than an explosion or impact. Only seven caldera-forming collapses are known to have occurred since 1900, most recently at Bárðarbunga volcano, Iceland in 2014. The term "caldera" comes from the Spanish "caldera", and Latin "caldaria", meaning "cooking pot". In some texts the English term "cauldron" is also used. The term "caldera" was introduced into the geological vocabulary by the German geologist Leopold von Buch when he published his memoirs of his 1815 visit to the Canary Islands, where he first saw the Las Cañadas caldera on Tenerife, with Montaña Teide dominating the landscape, and then the Caldera de Taburiente on La Palma. A collapse is triggered by the emptying of the magma chamber beneath the volcano, sometimes as the result of a large explosive volcanic eruption (see Tambora in 1815), but also during effusive eruptions on the flanks of a volcano (see Piton de la Fournaise in 2007) or in a connected fissure system (see Bárðarbunga in 2014–2015). If enough magma is ejected, the emptied chamber is unable to support the weight of the volcanic edifice above it. A roughly circular fracture, the "ring fault", develops around the edge of the chamber. Ring fractures serve as feeders for fault intrusions, which are also known as ring dikes. Secondary volcanic vents may form above the ring fracture. As the magma chamber empties, the center of the volcano within the ring fracture begins to collapse. The collapse may occur as the result of a single cataclysmic eruption, or it may occur in stages as the result of a series of eruptions. The total area that collapses may be hundreds or thousands of square kilometers. Some calderas are known to host rich ore deposits. Metal-rich fluids can circulate through the caldera, forming hydrothermal ore deposits of metals such as lead, silver, gold, mercury, lithium and uranium. One of the world's best-preserved mineralized calderas is the Sturgeon Lake Caldera in northwestern Ontario, Canada, which formed during the Neoarchean era about 2.7 billion years ago. If the magma is rich in silica, the caldera is often filled in with ignimbrite, tuff, rhyolite, and other igneous rocks. Silica-rich magma has a high viscosity, and therefore does not flow easily like basalt. As a result, gases tend to become trapped at high pressure within the magma. When the magma approaches the surface of the Earth, the rapid off-loading of overlying material causes the trapped gases to decompress rapidly, thus triggering explosive destruction of the magma and spreading volcanic ash over wide areas. Further lava flows may be erupted. If volcanic activity continues, the center of the caldera may be uplifted in the form of a "resurgent dome", such as is seen at Cerro Galán, Lake Toba, Yellowstone, etc., by subsequent intrusion of magma. A "silicic" or "rhyolitic caldera" may erupt hundreds or even thousands of cubic kilometers of material in a single event. 
Even small caldera-forming eruptions, such as Krakatoa in 1883 or Mount Pinatubo in 1991, may result in significant local destruction and a noticeable drop in temperature around the world. Large calderas may have even greater effects. When Yellowstone Caldera last erupted some 650,000 years ago, it released about 1,000 km³ of material (as measured in dense rock equivalent (DRE)), covering a substantial part of North America in up to two metres of debris. By comparison, when Mount St. Helens erupted in 1980, it released ~1.2 km³ (DRE) of ejecta. The ecological effects of the eruption of a large caldera can be seen in the record of the Lake Toba eruption in Indonesia. About 74,000 years ago, this Indonesian volcano released an enormous volume of ejecta in dense-rock equivalent terms. This was the largest known eruption during the ongoing Quaternary period (the last 2.6 million years) and the largest known explosive eruption during the last 25 million years. In the late 1990s, anthropologist Stanley Ambrose proposed that a volcanic winter induced by this eruption reduced the human population to about 2,000–20,000 individuals, resulting in a population bottleneck. More recently, Lynn Jorde and Henry Harpending proposed that the human species was reduced to approximately 5,000–10,000 people. There is no direct evidence, however, that either theory is correct, and there is no evidence for any other animal decline or extinction, even in environmentally sensitive species. There is evidence that human habitation continued in India after the eruption. Eruptions forming even larger calderas are known, especially La Garita Caldera in the San Juan Mountains of Colorado, where the Fish Canyon Tuff was blasted out in eruptions about 27.8 million years ago. At some points in geological time, rhyolitic calderas have appeared in distinct clusters. The remnants of such clusters may be found in places such as the San Juan Mountains of Colorado (formed during the Oligocene, Miocene, and Pliocene epochs) or the Saint Francois Mountain Range of Missouri (erupted during the Proterozoic eon). Some volcanoes, such as the large shield volcanoes Kīlauea and Mauna Loa on the island of Hawaii, form calderas in a different fashion. The magma feeding these volcanoes is basalt, which is silica poor. As a result, the magma is much less viscous than the magma of a rhyolitic volcano, and the magma chamber is drained by large lava flows rather than by explosive events. The resulting calderas are also known as subsidence calderas and can form more gradually than explosive calderas. For instance, the caldera atop Fernandina Island collapsed in 1968 when parts of the caldera floor dropped. Since the early 1960s, it has been known that volcanism has occurred on other planets and moons in the Solar System. Through the use of manned and unmanned spacecraft, volcanism has been discovered on Venus, Mars, the Moon, and Io, a satellite of Jupiter. None of these worlds have plate tectonics, which contributes approximately 60% of the Earth's volcanic activity (the other 40% is attributed to hotspot volcanism). Caldera structure is similar on all of these planetary bodies, though the size varies considerably. The average caldera diameter on Mars is smaller than the average on Venus, calderas on Earth are the smallest of all these planetary bodies, and on Io, Tvashtar Paterae is likely the largest caldera. 
The Moon has an outer shell of low-density crystalline rock that is a few hundred kilometers thick, which formed rapidly early in the Moon's history. The craters of the Moon have been well preserved through time and were once thought to have been the result of extreme volcanic activity, but were actually formed by meteorites, nearly all of which struck in the first few hundred million years after the Moon formed. Around 500 million years afterward, the Moon's mantle was able to be extensively melted due to the decay of radioactive elements. Massive basaltic eruptions took place generally at the base of large impact craters. Also, eruptions may have taken place due to a magma reservoir at the base of the crust. This forms a dome, possibly with the same morphology as a shield volcano, where calderas are universally known to form. Although caldera-like structures are rare on the Moon, they are not completely absent. The Compton-Belkovich Volcanic Complex on the far side of the Moon is thought to be a caldera, possibly an ash-flow caldera. The volcanic activity of Mars is concentrated in two major provinces: Tharsis and Elysium. Each province contains a series of giant shield volcanoes that are similar to what we see on Earth and likely are the result of mantle hot spots. The surfaces are dominated by lava flows, and all have one or more collapse calderas. Mars has the largest volcano in the Solar System, Olympus Mons, which is more than three times the height of Mount Everest, with a diameter of 520 km (323 miles). The summit of the mountain has six nested calderas. Because there is no plate tectonics on Venus, heat is mainly lost by conduction through the lithosphere. This causes enormous lava flows, accounting for 80% of Venus' surface area. Many of the mountains are large shield volcanoes. More than 80 of these large shield volcanoes have summit calderas. Io, unusually, is heated by flexing of its solid body due to the tidal influence of Jupiter and Io's orbital resonance with the neighboring large moons Europa and Ganymede, which keep its orbit slightly eccentric. Unlike any of the planets mentioned, Io is continuously volcanically active. For example, the NASA "Voyager 1" and "Voyager 2" spacecraft detected nine erupting volcanoes while passing Io in 1979. Io has many calderas with diameters tens of kilometers across.
https://en.wikipedia.org/wiki?curid=7592
Calculator An electronic calculator is typically a portable electronic device used to perform calculations, ranging from basic arithmetic to complex mathematics. The first solid-state electronic calculator was created in the early 1960s. Pocket-sized devices became available in the 1970s, especially after the Intel 4004, the first microprocessor, was developed by Intel for the Japanese calculator company Busicom. They later came into common use within the petroleum industry (oil and gas). Modern electronic calculators vary from cheap, give-away, credit-card-sized models to sturdy desktop models with built-in printers. They became popular in the mid-1970s as the incorporation of integrated circuits reduced their size and cost. By the end of that decade, prices had dropped to the point where a basic calculator was affordable to most, and they became common in schools. Computer operating systems as far back as early Unix have included interactive calculator programs such as dc and hoc, and calculator functions are included in almost all personal digital assistant (PDA) type devices, the exceptions being a few dedicated address book and dictionary devices. In addition to general-purpose calculators, there are those designed for specific markets. For example, there are scientific calculators which include trigonometric and statistical calculations. Some calculators even have the ability to do computer algebra. Graphing calculators can be used to graph functions defined on the real line, or higher-dimensional Euclidean space. Basic calculators cost little, but scientific and graphing models tend to cost more. In 1986, calculators still represented an estimated 41% of the world's general-purpose hardware capacity to compute information. By 2007, this had diminished to less than 0.05%. Electronic calculators contain a keyboard with buttons for digits and arithmetical operations; some even contain "00" and "000" buttons to make larger or smaller numbers easier to enter. Most basic calculators assign only one digit or operation to each button; however, in more specialized calculators, a button can perform multiple functions working with key combinations. Calculators usually have liquid-crystal displays (LCD) as output in place of historical light-emitting diode (LED) displays and vacuum fluorescent displays (VFD); details are provided in the section "Technical improvements". Large-sized figures are often used to improve readability, and a decimal separator (usually a point rather than a comma) is used instead of, or in addition to, vulgar fractions. Various symbols for function commands may also be shown on the display. Fractions are displayed as decimal approximations, rounded to the number of digits the display can show. Some fractions can be difficult to recognize in decimal form; as a result, many scientific calculators are able to work in vulgar fractions or mixed numbers. Calculators also have the ability to store numbers into computer memory. Basic calculators usually store only one number at a time; more specialized types are able to store many numbers represented in variables. The variables can also be used for constructing formulas. Some models have the ability to extend memory capacity to store more numbers; the extended memory address is termed an array index. Calculators are powered by batteries, solar cells or mains electricity (for old models), turning on with a switch or button. 
Some models even have no turn-off button but provide some other way to switch off (for example, leaving them idle for a moment, covering the solar cell, or closing their lid). Crank-powered calculators were also common in the early computer era. The following keys are common to most pocket calculators. While the arrangement of the digits is standard, the positions of other keys vary from model to model. In general, a basic electronic calculator consists of a power source, a keypad, a display, and a processor chip. The clock rate of a processor chip refers to the frequency at which the central processing unit (CPU) is running. It is used as an indicator of the processor's speed, and is measured in "clock cycles per second" or the SI unit hertz (Hz). For basic calculators, the speed can vary from a few hundred hertz to the kilohertz range. As a basic explanation of how calculations are performed in a simple four-function calculator: to perform an addition, one presses the digit keys of the first number, the addition key, the digit keys of the second number, and then the equals key. Other functions are usually performed using repeated additions or subtractions. Most pocket calculators do all their calculations in BCD rather than a floating-point representation. BCD is common in electronic systems where a numeric value is to be displayed, especially in systems consisting solely of digital logic, and not containing a microprocessor. By employing BCD, the manipulation of numerical data for display can be greatly simplified by treating each digit as a separate single sub-circuit. This matches much more closely the physical reality of display hardware—a designer might choose to use a series of separate identical seven-segment displays to build a metering circuit, for example. If the numeric quantity were stored and manipulated as pure binary, interfacing to such a display would require complex circuitry. Therefore, in cases where the calculations are relatively simple, working throughout with BCD can lead to a simpler overall system than converting to and from binary. The same argument applies when hardware of this type uses an embedded microcontroller or other small processor. Often, smaller code results when representing numbers internally in BCD format, since a conversion from or to binary representation can be expensive on such limited processors. For these applications, some small processors feature BCD arithmetic modes, which assist when writing routines that manipulate BCD quantities (a digit-at-a-time sketch of this idea is given below). Where calculators have added functions (such as square root, or trigonometric functions), software algorithms are required to produce high precision results. Sometimes significant design effort is needed to fit all the desired functions in the limited memory space available in the calculator chip, with acceptable calculation time. The fundamental difference between a calculator and computer is that a computer can be programmed in a way that allows the program to take different branches according to intermediate results, while calculators are pre-designed with specific functions (such as addition, multiplication, and logarithms) built in. The distinction is not clear-cut: some devices classed as programmable calculators have programming functions, sometimes with support for programming languages (such as RPL or TI-BASIC). 
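To make the digit-at-a-time BCD arithmetic described above concrete, the following sketch (in Python, purely illustrative and not the firmware of any particular calculator chip; the function names and the 25 + 9 example are hypothetical) adds two numbers held as separate decimal digits, much as a BCD adder handles one 4-bit nibble per digit with a decimal carry between stages.

```python
# Illustrative sketch of BCD-style, digit-at-a-time addition.
# Each decimal digit is kept separately (as a BCD nibble would be),
# so each digit maps directly onto one seven-segment display element.

def to_digits(n):
    """Encode a non-negative integer as decimal digits, least significant first."""
    return [int(d) for d in reversed(str(n))]

def bcd_add(a_digits, b_digits):
    """Add two digit lists with a decimal carry, as a simple BCD adder stage would."""
    result, carry = [], 0
    for i in range(max(len(a_digits), len(b_digits))):
        a = a_digits[i] if i < len(a_digits) else 0
        b = b_digits[i] if i < len(b_digits) else 0
        carry, digit = divmod(a + b + carry, 10)  # hardware corrects nibbles above 9 by adding 6
        result.append(digit)
    if carry:
        result.append(carry)
    return result

total = bcd_add(to_digits(25), to_digits(9))
print("".join(str(d) for d in reversed(total)))  # prints 34
```

Because every element of the result list is a single decimal digit, driving a row of seven-segment displays requires no binary-to-decimal conversion, which is the simplification the text describes.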
For instance, instead of a hardware multiplier, a calculator might implement floating point mathematics with code in read-only memory (ROM), and compute trigonometric functions with the CORDIC algorithm because CORDIC does not require much multiplication (a brief sketch of this approach appears below). Bit serial logic designs are more common in calculators, whereas bit parallel designs dominate general-purpose computers, because a bit serial design minimizes chip complexity but takes many more clock cycles. This distinction blurs with high-end calculators, which use processor chips associated with computer and embedded systems design, notably the Z80, MC68000, and ARM architectures, as well as some custom designs specialized for the calculator market. The first known tools used to aid arithmetic calculations were: bones (used to tally items), pebbles, counting boards, and the abacus, known to have been used by Sumerians and Egyptians before 2000 BC. Except for the Antikythera mechanism (an astronomical device seemingly out of its time), the development of computing tools arrived near the start of the 17th century: the geometric-military compass (by Galileo), logarithms and Napier bones (by Napier), and the slide rule (by Edmund Gunter). The Renaissance saw the invention of the mechanical calculator, by Wilhelm Schickard in 1623 and, about two decades later, by Blaise Pascal in 1642, a device that was at times somewhat over-promoted as being able to perform all four arithmetic operations with minimal human intervention. Pascal's calculator could add and subtract two numbers directly and thus, if the tedium could be borne, multiply and divide by repetition. Schickard's machine, constructed about two decades earlier, used a clever set of mechanised multiplication tables to ease the process of multiplication and division with the adding machine as a means of completing this operation. (Because they were different inventions with different aims, a debate about whether Pascal or Schickard should be credited as the "inventor" of the adding machine (or calculating machine) is probably pointless.) Schickard and Pascal were followed by Gottfried Leibniz, who spent forty years designing a four-operation mechanical calculator, the stepped reckoner, inventing in the process his Leibniz wheel, but who couldn't design a fully operational machine. There were also five unsuccessful attempts to design a calculating clock in the 17th century. The 18th century saw the arrival of some notable improvements, first by Poleni with the first fully functional calculating clock and four-operation machine, but these machines were almost always one of a kind. Luigi Torchi invented the first direct multiplication machine in 1834: this was also the second key-driven machine in the world, following that of James White (1822). It was not until the 19th century and the Industrial Revolution that real developments began to occur. Although machines capable of performing all four arithmetic functions existed prior to the 19th century, the refinement of manufacturing and fabrication processes on the eve of the Industrial Revolution made large-scale production of more compact and modern units possible. 
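As an illustration of the CORDIC approach mentioned at the start of this passage, the following sketch (in Python; the iteration count and table sizes are illustrative choices, not those of any real calculator chip) approximates sine and cosine by rotating a vector through a fixed sequence of arctangent angles, using only additions, subtractions and scaling by powers of two plus a small precomputed table, which is why the method suits hardware without a multiplier.

```python
import math

# Illustrative CORDIC sketch (rotation mode). On a real calculator chip the
# arctangent table and the gain constant K would be precomputed and stored in ROM.

ITERATIONS = 32
ATAN_TABLE = [math.atan(2.0 ** -i) for i in range(ITERATIONS)]

# CORDIC gain correction: product of cos(arctan(2^-i)) over all iterations.
K = 1.0
for i in range(ITERATIONS):
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(angle):
    """Return (sin, cos) for |angle| below about 1.74 rad using shift-and-add style steps."""
    x, y, z = K, 0.0, angle
    for i in range(ITERATIONS):
        d = 1.0 if z >= 0.0 else -1.0          # rotate toward the remaining angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN_TABLE[i]
    return y, x

s, c = cordic_sin_cos(0.5)
print(round(s, 6), round(c, 6))  # approximately 0.479426 0.877583
```

Each step multiplies only by a power of two, which in fixed-point hardware is a simple bit shift, so the whole routine reduces to additions, shifts and table lookups.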
The Arithmometer, invented in 1820 as a four-operation mechanical calculator, was released to production in 1851 as an adding machine and became the first commercially successful unit; forty years later, by 1890, about 2,500 arithmometers had been sold, plus a few hundred more from two arithmometer clone makers (Burkhardt, Germany, 1878, and Layton, UK, 1883), while Felt and Tarrant, the only other competitor in true commercial production, had sold 100 comptometers. It wasn't until 1902 that the familiar push-button user interface was developed, with the introduction of the Dalton Adding Machine, developed by James L. Dalton in the United States. In 1921, Edith Clarke invented the "Clarke calculator", a simple graph-based calculator for solving line equations involving hyperbolic functions. This allowed electrical engineers to simplify calculations for inductance and capacitance in power transmission lines. The Curta calculator was developed in 1948 and, although costly, became popular for its portability. This purely mechanical hand-held device could do addition, subtraction, multiplication and division. By the early 1970s, electronic pocket calculators had ended the manufacture of mechanical calculators, although the Curta remains a popular collectable item. The first mainframe computers, using first vacuum tubes and later transistors in the logic circuits, appeared in the 1940s and 1950s. This technology was to provide a stepping stone to the development of electronic calculators. The Casio Computer Company, in Japan, released the Model "14-A" calculator in 1957, which was the world's first all-electric, relatively compact calculator. It did not use electronic logic but was based on relay technology, and was built into a desk. In October 1961, the world's first "all-electronic desktop" calculator, the British Bell Punch/Sumlock Comptometer ANITA (A New Inspiration To Arithmetic/Accounting), was announced. This machine used vacuum tubes, cold-cathode tubes and Dekatrons in its circuits, with 12 cold-cathode "Nixie" tubes for its display. Two models were displayed, the Mk VII for continental Europe and the Mk VIII for Britain and the rest of the world, both for delivery from early 1962. The Mk VII was a slightly earlier design with a more complicated mode of multiplication, and was soon dropped in favour of the simpler Mark VIII. The ANITA had a full keyboard, similar to mechanical comptometers of the time, a feature that was unique to it and the later Sharp CS-10A among electronic calculators. The ANITA was heavy due to its large tube system. Bell Punch had been producing key-driven mechanical calculators of the comptometer type under the names "Plus" and "Sumlock", and had realised in the mid-1950s that the future of calculators lay in electronics. They employed the young graduate Norbert Kitz, who had worked on the early British Pilot ACE computer project, to lead the development. The ANITA sold well since it was the only electronic desktop calculator available, and was silent and quick. The tube technology of the ANITA was superseded in June 1963 by the U.S.-manufactured Friden EC-130, which had an all-transistor design, a stack of four 13-digit numbers displayed on a cathode ray tube (CRT), and introduced Reverse Polish Notation (RPN) to the calculator market for a price of $2200, about three times the cost of an electromechanical calculator of the time. Like Bell Punch, Friden was a manufacturer of mechanical calculators that had decided that the future lay in electronics. 
In 1964, more all-transistor electronic calculators were introduced: Sharp introduced the CS-10A, which cost 500,000 yen, and Industria Macchine Elettroniche of Italy introduced the IME 84, to which several extra keyboard and display units could be connected so that several people could make use of it (but apparently not at the same time). There followed a series of electronic calculator models from these and other manufacturers, including Canon, Mathatronics, Olivetti, SCM (Smith-Corona-Marchant), Sony, Toshiba, and Wang. The early calculators used hundreds of germanium transistors, which were cheaper than silicon transistors, on multiple circuit boards. Display types used were CRT, cold-cathode Nixie tubes, and filament lamps. Memory technology was usually based on the delay line memory or the magnetic core memory, though the Toshiba "Toscal" BC-1411 appears to have used an early form of dynamic RAM built from discrete components. Already there was a desire for smaller and less power-hungry machines. Bulgaria's ELKA 6521, introduced in 1965, was developed by the Central Institute for Calculation Technologies and built at the Elektronika factory in Sofia. The name derives from "ELektronen KAlkulator". It was the first calculator in the world to include a square root function. Later that same year, the ELKA 22 (with a luminescent display) and the ELKA 25 (with a built-in printer) were released. Several other models were developed until the first pocket model, the ELKA 101, was released in 1974. The writing on it was in Roman script, and it was exported to western countries. The first desktop "programmable calculators" were produced in the mid-1960s. They included the Mathatronics Mathatron (1964) and the Olivetti Programma 101 (late 1965), which were solid-state, desktop, printing, floating point, algebraic entry, programmable, stored-program electronic calculators. Both could be programmed by the end user and print out their results. The Programma 101 saw much wider distribution and had the added feature of offline storage of programs via magnetic cards. Another early programmable desktop calculator (and maybe the first Japanese one) was the Casio AL-1000, produced in 1967. It featured a nixie tube display and had transistor electronics and ferrite core memory. The "Monroe Epic" programmable calculator came on the market in 1967. A large, printing, desk-top unit, with an attached floor-standing logic tower, it could be programmed to perform many computer-like functions. However, the only "branch" instruction was an implied unconditional branch (GOTO) at the end of the operation stack, returning the program to its starting instruction. Thus, it was not possible to include any conditional branch (IF-THEN-ELSE) logic. During this era, the absence of the conditional branch was sometimes used to distinguish a programmable calculator from a computer. The first Soviet programmable desktop calculator, the ISKRA 123, powered from the mains, was released at the start of the 1970s. The electronic calculators of the mid-1960s were large and heavy desktop machines due to their use of hundreds of transistors on several circuit boards with a large power consumption that required an AC power supply. There were great efforts to put the logic required for a calculator into fewer and fewer integrated circuits (chips), and calculator electronics was one of the leading edges of semiconductor development. U.S. 
semiconductor manufacturers led the world in large scale integration (LSI) semiconductor development, squeezing more and more functions into individual integrated circuits. This led to alliances between Japanese calculator manufacturers and U.S. semiconductor companies: Canon Inc. with Texas Instruments, Hayakawa Electric (later renamed Sharp Corporation) with North-American Rockwell Microelectronics (later renamed Rockwell International), Busicom with Mostek and Intel, and General Instrument with Sanyo. By 1970, a calculator could be made using just a few chips of low power consumption, allowing portable models powered from rechargeable batteries. The first handheld calculator was a 1967 prototype called "Cal Tech", whose development was led by Jack Kilby at Texas Instruments in a research project to produce a portable calculator. It could add, multiply, subtract, and divide, and its output device was a paper tape. As a result of the "Cal-Tech" project, Texas Instruments was granted master patents on portable calculators. The first commercially produced portable calculators appeared in Japan in 1970, and were soon marketed around the world. These included the Sanyo ICC-0081 "Mini Calculator", the Canon Pocketronic, and the Sharp QT-8B "micro Compet". The Canon Pocketronic was a development from the "Cal-Tech" project. It had no traditional display; numerical output was on thermal paper tape. Sharp put in great efforts in size and power reduction and introduced in January 1971 the Sharp EL-8, also marketed as the Facit 1111, which was close to being a pocket calculator. It weighed 1.59 pounds (721 grams), had a vacuum fluorescent display and rechargeable NiCad batteries, and initially sold for US$395. However, integrated circuit development efforts culminated in early 1971 with the introduction of the first "calculator on a chip", the MK6010 by Mostek, followed by Texas Instruments later in the year. Although these early hand-held calculators were very costly, these advances in electronics, together with developments in display technology (such as the vacuum fluorescent display, LED, and LCD), led within a few years to the cheap pocket calculator available to all. In 1971, Pico Electronics and General Instrument also introduced their first collaboration in ICs, a full single-chip calculator IC for the Monroe Royal Digital III calculator. Pico was a spinout by five GI design engineers whose vision was to create single-chip calculator ICs. Pico and GI went on to have significant success in the burgeoning handheld calculator market. The first truly pocket-sized electronic calculator was the Busicom LE-120A "HANDY", which was marketed early in 1971. Made in Japan, this was also the first calculator to use an LED display, the first hand-held calculator to use a single integrated circuit (then proclaimed as a "calculator on a chip"), the Mostek MK6010, and the first electronic calculator to run off replaceable batteries; it used four AA-size cells. The first European-made pocket-sized calculator, the DB 800, was made in May 1971 by Digitron in Buje, Croatia (then part of Yugoslavia), with four functions, an eight-digit display, and special characters for a negative number and a warning that the calculation has too many digits to display. 
The first American-made pocket-sized calculator, the Bowmar 901B (popularly termed "The Bowmar Brain"), came out in the autumn of 1971, with four functions and an eight-digit red LED display, for $240, while in August 1972 the four-function Sinclair Executive became the first slimline pocket calculator. It retailed for around £79 ($194 at the time). By the end of the decade, similar calculators were priced at less than £5. The first Soviet Union made pocket-sized calculator, the "Elektronika B3-04", was developed by the end of 1973 and sold at the start of 1974. One of the first low-cost calculators was the Sinclair Cambridge, launched in August 1973. It retailed for £29.95, or £5 less in kit form. The Sinclair calculators were successful because they were far cheaper than the competition; however, their design led to slow and inaccurate computations of transcendental functions. Meanwhile, Hewlett-Packard (HP) had been developing a pocket calculator. Launched in early 1972, it was unlike the other basic four-function pocket calculators then available in that it was the first pocket calculator with "scientific" functions that could replace a slide rule. The $395 HP-35, along with nearly all later HP engineering calculators, used reverse Polish notation (RPN), also called postfix notation. A calculation like "8 plus 5" is, using RPN, performed by pressing "8", "ENTER", "5", and "+", instead of the algebraic infix sequence "8", "+", "5", "=" (a short sketch of this stack-based evaluation follows this paragraph). It had 35 buttons and was based on the Mostek Mk6020 chip. The first Soviet "scientific" pocket-sized calculator, the "B3-18", was completed by the end of 1975. In 1973, Texas Instruments (TI) introduced the SR-10 ("SR" signifying slide rule), an "algebraic entry" pocket calculator using scientific notation, for $150. Shortly after, the SR-11 featured an added key for entering pi (π). It was followed the next year by the SR-50, which added log and trig functions to compete with the HP-35, and in 1977 by the mass-marketed TI-30 line, which is still produced. In 1978 a new company, Calculated Industries, arose, focusing on specialized markets. Their first calculator, the Loan Arranger (1978), was a pocket calculator marketed to the real estate industry with preprogrammed functions to simplify the process of calculating payments and future values. In 1985, CI launched a calculator for the construction industry called the Construction Master, which came preprogrammed with common construction calculations (such as angles, stairs, roofing math, pitch, rise, run, and feet-inch fraction conversions). This would be the first in a line of construction related calculators. The first programmable pocket calculator was the HP-65, in 1974; it had a capacity of 100 instructions, and could store and retrieve programs with a built-in magnetic card reader. Two years later the HP-25C introduced "continuous memory", i.e., programs and data were retained in CMOS memory during power-off. In 1979, HP released the first "alphanumeric", programmable, "expandable" calculator, the HP-41C. It could be expanded with random access memory (RAM, for memory) and read-only memory (ROM, for software) modules, and peripherals like bar code readers, microcassette and floppy disk drives, paper-roll thermal printers, and miscellaneous communication interfaces (RS-232, HP-IL, HP-IB). The first Soviet pocket battery-powered programmable calculator, the Elektronika "B3-21", was developed by the end of 1976 and released at the start of 1977. 
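The RPN entry style described above amounts to stack-based (postfix) evaluation. The following is a minimal sketch in Python, with keystrokes modeled as a token list; it illustrates the principle only and is not a reconstruction of any calculator's actual firmware.

```python
# Minimal sketch of reverse Polish notation (RPN) evaluation, the entry style
# used by the HP-35 and later HP calculators. Keystrokes are modeled as tokens.

def eval_rpn(tokens):
    stack = []
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    for tok in tokens:
        if tok in ops:
            b = stack.pop()          # second operand entered
            a = stack.pop()          # first operand entered
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))  # a number key sequence pushes onto the stack
    return stack[-1]

# "8 plus 5" keyed as 8, ENTER, 5, + corresponds to the tokens "8", "5", "+".
print(eval_rpn(["8", "5", "+"]))   # 13.0
```

Because each operator acts on values already on the stack, no parentheses or equals key are needed, which is why RPN calculators omit them.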
The successor of the B3-21, the Elektronika B3-34, was not backward compatible with the B3-21, even though it kept the reverse Polish notation (RPN). Thus the B3-34 defined a new command set, which was used in a series of later programmable Soviet calculators. Despite very limited abilities (98 bytes of instruction memory and about 19 stack and addressable registers), people managed to write all kinds of programs for them, including adventure games and libraries of calculus-related functions for engineers. Hundreds, perhaps thousands, of programs were written for these machines, from practical scientific and business software, which were used in real-life offices and labs, to fun games for children. The Elektronika MK-52 calculator (using the extended B3-34 command set, and featuring internal EEPROM memory for storing programs and an external interface for EEPROM cards and other peripherals) was used in the Soviet spacecraft program (for the Soyuz TM-7 flight) as a backup for the onboard computer. This series of calculators was also noted for a large number of highly counter-intuitive, mysterious undocumented features, somewhat similar to the "synthetic programming" of the American HP-41, which were exploited by applying normal arithmetic operations to error messages, jumping to nonexistent addresses, and other methods. A number of respected monthly publications, including the popular science magazine "Nauka i Zhizn" ("Наука и жизнь", "Science and Life"), featured special columns dedicated to optimization methods for calculator programmers and updates on undocumented features for hackers, which grew into a whole esoteric science with many branches, named "yeggogology" ("еггогология"). The error messages on those calculators appear as the Russian word "YEGGOG" ("ЕГГОГ"), which, unsurprisingly, translates to "Error". A similar hacker culture in the USA revolved around the HP-41, which was also noted for a large number of undocumented features and was much more powerful than the B3-34. Through the 1970s the hand-held electronic calculator underwent rapid development. The red LED and blue/green vacuum fluorescent displays consumed a lot of power, so the calculators either had a short battery life (often measured in hours, so rechargeable nickel-cadmium batteries were common) or were large so that they could take larger, higher capacity batteries. In the early 1970s liquid-crystal displays (LCDs) were in their infancy and there was a great deal of concern that they only had a short operating lifetime. Busicom introduced the Busicom LE-120A "HANDY" calculator, the first pocket-sized calculator and the first with an LED display, and announced the Busicom "LC" with an LCD. However, there were problems with this display and the calculator never went on sale. The first successful calculators with LCDs were manufactured by Rockwell International and sold from 1972 by other companies under such names as: Dataking "LC-800", Harden "DT/12", Ibico "086", Lloyds "40", Lloyds "100", Prismatic "500" (a.k.a. "P500"), Rapid Data "Rapidman 1208LC". The LCDs were an early form using the dynamic scattering mode (DSM), with the numbers appearing as bright against a dark background. To present a high-contrast display these models illuminated the LCD using a filament lamp and solid plastic light guide, which negated the low power consumption of the display. These models appear to have been sold only for a year or two. 
A more successful series of calculators using a reflective DSM-LCD was launched in 1972 by Sharp with the Sharp "EL-805", which was a slim pocket calculator. This, and a few similar models, used Sharp's "Calculator On Substrate" (COS) technology. An extension of one of the glass plates needed for the liquid crystal display was used as a substrate to mount the needed chips, based on a new hybrid technology. The COS technology may have been too costly, since it was only used in a few models before Sharp reverted to conventional circuit boards. In the mid-1970s the first calculators appeared with field-effect, "twisted nematic" (TN) LCDs with dark numerals against a grey background, though the early ones often had a yellow filter over them to cut out damaging ultraviolet rays. The advantage of LCDs is that they are passive light modulators reflecting light, which require much less power than light-emitting displays such as LEDs or VFDs. This led the way to the first credit-card-sized calculators, such as the Casio "Mini Card LC-78" of 1978, which could run for months of normal use on button cells. There were also improvements to the electronics inside the calculators. All of the logic functions of a calculator had been squeezed into the first "calculator on a chip" integrated circuits (ICs) in 1971, but this was leading-edge technology of the time, and yields were low and costs were high. Many calculators continued to use two or more ICs, especially the scientific and the programmable ones, into the late 1970s. The power consumption of the integrated circuits was also reduced, especially with the introduction of CMOS technology. First appearing in the Sharp "EL-801" in 1972, CMOS had the advantage that the transistors in its logic cells used appreciable power only when they changed state. The LED and VFD displays often required added driver transistors or ICs, whereas the LCDs were more amenable to being driven directly by the calculator IC itself. With this low power consumption came the possibility of using solar cells as the power source, realised around 1978 by calculators such as the Royal "Solar 1", Sharp "EL-8026", and Teal "Photon". At the start of the 1970s, hand-held electronic calculators were very costly, at two or three weeks' wages, and so were a luxury item. The high price was due to their construction requiring many mechanical and electronic components which were costly to produce, and production runs that were too small to exploit economies of scale. Many firms saw that there were good profits to be made in the calculator business with the margin on such high prices. However, the cost of calculators fell as components and their production methods improved, and the effect of economies of scale was felt. By 1976, the cost of the cheapest four-function pocket calculator had dropped to a few dollars, about 1/20th of the cost five years before. The results of this were that the pocket calculator was affordable, and that it was now difficult for the manufacturers to make a profit from calculators, leading to many firms dropping out of the business or closing down. The firms that survived making calculators tended to be those with high outputs of higher quality calculators, or producing high-specification scientific and programmable calculators. The first calculator capable of symbolic computing was the HP-28C, released in 1987. It could, for example, solve quadratic equations symbolically. The first graphing calculator was the Casio fx-7000G, released in 1985. 
The two leading manufacturers, HP and TI, released increasingly feature-laden calculators during the 1980s and 1990s. At the turn of the millennium, the line between a graphing calculator and a handheld computer was not always clear, as some very advanced calculators such as the TI-89, the Voyage 200 and HP-49G could differentiate and integrate functions, solve differential equations, run word processing and PIM software, and connect by wire or IR to other calculators/computers. The HP 12c financial calculator is still produced. It was introduced in 1981 and is still being made with few changes. The HP 12c featured the reverse Polish notation mode of data entry. In 2003 several new models were released, including an improved version of the HP 12c, the "HP 12c platinum edition" which added more memory, more built-in functions, and the addition of the algebraic mode of data entry. Calculated Industries competed with the HP 12c in the mortgage and real estate markets by differentiating the key labeling; changing the “I”, “PV”, “FV” to easier labeling terms such as "Int", "Term", "Pmt", and not using the reverse Polish notation. However, CI's more successful calculators involved a line of construction calculators, which evolved and expanded in the 1990s to present. According to Mark Bollman, a mathematics and calculator historian and associate professor of mathematics at Albion College, the "Construction Master is the first in a long and profitable line of CI construction calculators" which carried them through the 1980s, 1990s, and to the present. Personal computers often come with a calculator utility program that emulates the appearance and functions of a calculator, using the graphical user interface to portray a calculator. One such example is Windows Calculator. Most personal data assistants (PDAs) and smartphones also have such a feature. In most countries, students use calculators for schoolwork. There was some initial resistance to the idea out of fear that basic or elementary arithmetic skills would suffer. There remains disagreement about the importance of the ability to perform calculations "in the head", with some curricula restricting calculator use until a certain level of proficiency has been obtained, while others concentrate more on teaching estimation methods and problem-solving. Research suggests that inadequate guidance in the use of calculating tools can restrict the kind of mathematical thinking that students engage in. Others have argued that calculator use can even cause core mathematical skills to atrophy, or that such use can prevent understanding of advanced algebraic concepts. In December 2011 the UK's Minister of State for Schools, Nick Gibb, voiced concern that children can become "too dependent" on the use of calculators. As a result, the use of calculators is to be included as part of a review of the Curriculum. In the United States, many math educators and boards of education have enthusiastically endorsed the National Council of Teachers of Mathematics (NCTM) standards and actively promoted the use of classroom calculators from kindergarten through high school.
https://en.wikipedia.org/wiki?curid=7593
Cash register A cash register or till is a mechanical or electronic device for registering and calculating transactions at a point of sale. It is usually attached to a drawer for storing cash and other valuables. A modern cash register is usually attached to a printer that can print out receipts for record-keeping purposes. An early mechanical cash register was invented by James Ritty and John Birch following the American Civil War. James was the owner of a saloon in Dayton, Ohio, USA, and wanted to stop employees from pilfering his profits. Ritty invented the Model I in 1879 after seeing a tool that counted the revolutions of a steamship's propeller. With the help of James' brother John Ritty, they patented it in 1883. It was called "Ritty's Incorruptible Cashier" and was intended to stop cashiers from pilfering and to eliminate employee theft and embezzlement. Early mechanical registers were entirely mechanical, without receipts. The employee was required to ring up every transaction on the register, and when the total key was pushed, the drawer opened and a bell would ring, alerting the manager to a sale taking place. Those original machines were nothing but simple adding machines. Since registration was tied to the process of returning change, according to Bill Bryson odd pricing came about: by charging odd amounts like 49 and 99 cents (or 45 and 95 cents where nickels were more used than pennies), the cashier very probably had to open the till for the penny change and thus announce the sale. Shortly after the patent, Ritty became overwhelmed with the responsibilities of running two businesses, so he sold all of his interests in the cash register business to Jacob H. Eckert of Cincinnati, a china and glassware salesman, who formed the National Manufacturing Company. In 1884 Eckert sold the company to John H. Patterson, who renamed the company the National Cash Register Company and improved the cash register by adding a paper roll to record sales transactions, thereby creating the journal for internal bookkeeping purposes, and the receipt for external bookkeeping purposes. The original purpose of the receipt was enhanced fraud protection. The business owner could read the receipts to ensure that cashiers charged customers the correct amount for each transaction and did not embezzle the cash drawer. It also prevented a customer from defrauding the business by falsely claiming receipt of a lesser amount of change or a transaction that never happened in the first place. The earliest evidence of a cash register in actual use comes from Coalton, Ohio, at the old mining company. In 1906, while working at the National Cash Register company, inventor Charles F. Kettering designed a cash register with an electric motor. A leading designer, builder, manufacturer, seller and exporter of cash registers from the 1950s until the 1970s was London-based (and later Brighton-based) Gross Cash Registers Ltd., founded by brothers Sam and Henry Gross. Their cash registers were particularly popular around the time of decimalisation in Britain in early 1971, Henry having designed one of the few known models of cash register which could switch currencies from £sd to £p, so that retailers could easily change from one to the other on or after Decimal Day. Sweda also had decimal-ready registers, for which the retailer used a special key on Decimal Day to make the conversion. 
In some jurisdictions the law also requires customers to collect the receipt and keep it at least for a short while after leaving the shop, again to check that the shop records sales, so that it cannot evade sales taxes. Often cash registers are attached to scales, barcode scanners, checkstands, and debit card or credit card terminals. Increasingly, dedicated cash registers are being replaced with general purpose computers with POS software. Cash registers typically use bitmap characters for printing. Today, point of sale systems scan the barcode (usually EAN or UPC) for each item, retrieve the price from a database, calculate deductions for items on sale (or, in British retail terminology, "special offer", "multibuy" or "buy one, get one free"), calculate the sales tax or VAT, calculate differential rates for preferred customers, update inventory, time and date stamp the transaction, record the transaction in detail including each item purchased, record the method of payment, keep totals for each product or type of product sold as well as total sales for specified periods, and do other tasks as well (a minimal sketch of the core pricing steps follows this paragraph). These POS terminals will often also identify the cashier on the receipt, and carry additional information or offers. Currently, many cash registers are individual computers. They may run traditional in-house software or general purpose software such as DOS. Many of the newer ones have touch screens. They may be connected to computerized point of sale networks using any type of protocol. Such systems may be accessed remotely for the purpose of obtaining records or troubleshooting. Many businesses also use tablet computers as cash registers, running point-of-sale software downloaded as an app. Cash registers include a key labeled "No Sale", abbreviated "NS" on many modern electronic cash registers. Its function is to open the drawer, printing a receipt stating "No Sale" and recording in the register log that the register was opened. Some cash registers require a numeric password or physical key to be used when attempting to open the till. A cash register's drawer can only be opened by an instruction from the cash register except when using special keys, generally held by the owner and some employees (e.g. the manager). This reduces the amount of contact most employees have with cash and other valuables. It also reduces risks of an employee taking money from the drawer without a record and the owner's consent, such as when a customer does not expressly ask for a receipt but still has to be given change (cash is more easily checked against recorded sales than inventory). A cash drawer is usually a compartment underneath a cash register in which the cash from transactions is kept. The drawer typically contains a removable till. The till is usually a plastic or wooden tray divided into compartments used to store each denomination of bank notes and coins separately in order to make counting easier. The removable till allows money to be removed from the sales floor to a more secure location for counting and creating bank deposits. Some modern cash drawers are individual units separate from the rest of the cash register. A cash drawer is usually of strong construction and may be integral with the register or a separate piece that the register sits atop. It slides in and out of its lockable box and is secured by a spring-loaded catch. When a transaction that involves cash is completed, the register sends an electrical impulse to a solenoid to release the catch and open the drawer. 
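The core pricing steps listed above (barcode lookup, a sale deduction, and tax) can be sketched as follows. The product table, the "multibuy" rule, and the VAT rate below are hypothetical placeholders for illustration, not the data or logic of any real point-of-sale system.

```python
# Minimal sketch of point-of-sale pricing: barcode lookup, a simple "multibuy"
# deduction, and VAT. All data and rules here are invented for illustration.

PRODUCTS = {                      # barcode -> (description, unit price)
    "5012345678900": ("Tea, 80 bags", 2.50),
    "5098765432109": ("Biscuits", 1.20),
}

def ring_up(barcodes, multibuy_threshold=2, multibuy_discount=0.50, vat_rate=0.20):
    subtotal = 0.0
    counts = {}
    for code in barcodes:
        desc, price = PRODUCTS[code]          # price retrieved from the database
        subtotal += price
        counts[code] = counts.get(code, 0) + 1
    # Example multibuy rule: a fixed deduction once an item reaches the threshold.
    discount = sum(multibuy_discount for n in counts.values() if n >= multibuy_threshold)
    taxable = subtotal - discount
    vat = taxable * vat_rate
    return {"subtotal": round(subtotal, 2), "discount": round(discount, 2),
            "vat": round(vat, 2), "total": round(taxable + vat, 2)}

print(ring_up(["5012345678900", "5098765432109", "5098765432109"]))
```

A real system would also record the transaction, payment method, and running totals, as the paragraph above describes.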
Cash drawers that are integral to a stand-alone register often have a manual release catch underneath to open the drawer in the event of a power failure. More advanced cash drawers have eliminated the manual release in favor of a cylinder lock, requiring a key to manually open the drawer. The cylinder lock usually has several positions: locked, unlocked, online (will open if an impulse is given), and release. The release position is an intermittent position with a spring to push the cylinder back to the unlocked position. In the "locked" position, the drawer will remain latched even when an electric impulse is sent to the solenoid. Due to the increasing number and variety of notes, many cash drawers are designed to store notes upright and facing forward, instead of the traditional flat, face-up position. This enables faster access to each note and allows more varieties of notes to be stored. Sometimes the cashier will even divide the notes without any physical divider at all. Some cash drawers are flip-top in design: they flip open instead of sliding out like an ordinary drawer, resembling a cashbox instead. An often-used non-sale function is the aforementioned "no sale". When change given to the customer needs to be corrected, or change needs to be made for a neighboring register, this function will open the cash drawer of the register. Where non-management staff are given access, management can scrutinize the count of "no sales" in the log to look for suspicious patterns. Besides programming prices into the register, the report functions generally require a management key. An "X" report will read the current sales figures from memory and produce a paper printout. A "Z" report will act like an "X" report, except that counters will be reset to zero (a small sketch of this behavior follows this paragraph). Registers will typically feature a numerical pad, QWERTY or custom keyboard, touch screen interface, or a combination of these input methods for the cashier to enter products and fees by hand and access information necessary to complete the sale. For older registers, as well as at restaurants and other establishments that do not sell barcoded items, manual input may be the only method of interacting with the register. While customization was previously limited to larger chains that could afford to have physical keyboards custom-built for their needs, the customization of register inputs is now more widespread with the use of touch screens that can display a variety of point of sale software. Modern cash registers may be connected to a handheld or stationary barcode reader so that a customer's purchases can be more rapidly scanned than would be possible by keying numbers into the register by hand. The use of scanners should also help prevent errors that result from manually entering the product's barcode or pricing. At grocers, the register's scanner may be combined with a scale for measuring product that is sold by weight. Cashiers are often required to provide a receipt to the customer after a purchase has been made. Registers typically use thermal printers to print receipts, although older dot matrix printers are still in use at some retailers. Alternatively, retailers can forgo issuing paper receipts in some jurisdictions by instead asking the customer for an email address to which their receipt can be sent. The receipts of larger retailers tend to include unique barcodes or other information identifying the transaction so that the receipt can be scanned to facilitate returns or other customer services. 
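The difference between the "X" and "Z" reports described above can be sketched as follows; the register internals shown here are a simplified, hypothetical model rather than any manufacturer's design.

```python
# Minimal sketch of "X" and "Z" report behavior: an X report reads the running
# totals without clearing them; a Z report returns the same figures and then
# resets the counters to zero. The internal structure is hypothetical.

class RegisterTotals:
    def __init__(self):
        self.sales_total = 0.0
        self.transaction_count = 0

    def record_sale(self, amount):
        self.sales_total += amount
        self.transaction_count += 1

    def x_report(self):
        # Read-only snapshot of the current figures.
        return {"sales": self.sales_total, "transactions": self.transaction_count}

    def z_report(self):
        # Same snapshot, but the counters are reset to zero afterwards.
        snapshot = self.x_report()
        self.sales_total = 0.0
        self.transaction_count = 0
        return snapshot

till = RegisterTotals()
till.record_sale(12.99)
till.record_sale(4.50)
print(till.x_report())   # totals reported, left unchanged
print(till.z_report())   # totals reported, then zeroed
```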
In stores that use electronic article surveillance, a pad or other surface will be attached to the register that deactivates security devices embedded in or attached to the items being purchased. This will prevent a customer's purchase from setting off security alarms at the store's exit. Some corporations and supermarkets have introduced self-checkout machines, where the customer is trusted to scan the barcodes (or manually identify uncoded items like fruit), and place the items into a bagging area. The bag is weighed, and the machine halts the checkout when the weight of something in the bag does not match the weight in the inventory database. Normally, an employee is watching over several such checkouts to prevent theft or exploitation of the machines' weaknesses (for example, intentional misidentification of expensive produce or dry goods). Payment on these machines is accepted by debit card/credit card, or cash via coin slot and bank note scanner. Store employees are also needed to authorize "age-restricted" purchases, such as alcohol, solvents or knives, which can either be done remotely by the employee observing the self-checkout, or by means of a "store login" which the operator has to enter.
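The bagging-area weight check described above can be sketched as follows; the item weights and the tolerance are invented for illustration and do not reflect any specific self-checkout system.

```python
# Minimal sketch of a self-checkout weight check: the expected weight of the
# scanned items is compared against the scale reading, and the checkout is
# halted when the difference exceeds a tolerance. All values are hypothetical.

ITEM_WEIGHTS_G = {"milk_1l": 1040, "bread": 800, "apples_bag": 1000}

def weight_check(scanned_items, measured_weight_g, tolerance_g=30):
    expected = sum(ITEM_WEIGHTS_G[item] for item in scanned_items)
    if abs(expected - measured_weight_g) > tolerance_g:
        return "unexpected item in bagging area"   # halt and call an attendant
    return "ok"

print(weight_check(["milk_1l", "bread"], 1845))   # within tolerance -> "ok"
print(weight_check(["milk_1l", "bread"], 2600))   # too heavy -> flagged
```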
https://en.wikipedia.org/wiki?curid=7594
Processor design Processor design is the design engineering task of creating a processor, a key component of computer hardware. It is a subfield of computer engineering (design, development and implementation) and electronics engineering (fabrication). The design process involves choosing an instruction set and a certain execution paradigm (e.g. VLIW or RISC) and results in a microarchitecture, which might be described in e.g. VHDL or Verilog. For microprocessor design, this description is then manufactured employing some of the various semiconductor device fabrication processes, resulting in a die which is bonded onto a chip carrier. This chip carrier is then soldered onto, or inserted into a socket on, a printed circuit board (PCB). The mode of operation of any processor is the execution of lists of instructions. Instructions typically include those to compute or manipulate data values using registers, change or retrieve values in read/write memory, perform relational tests between data values, and control program flow. CPU design is divided into the design of several components, including datapaths, register files, clock circuitry, and control logic. CPUs designed for high-performance markets might require custom (optimized or application specific (see below)) designs for each of these items to achieve frequency, power-dissipation, and chip-area goals, whereas CPUs designed for lower performance markets might lessen the implementation burden by acquiring some of these items by purchasing them as intellectual property. Control logic implementation techniques (logic synthesis using CAD tools) can be used to implement datapaths, register files, and clocks. Common logic styles used in CPU design include unstructured random logic, finite-state machines, microprogramming (common from 1965 to 1985), and programmable logic arrays (common in the 1980s, no longer common). Several device types can be used to implement the logic. A CPU design project generally comprises several major tasks. Re-designing a CPU core to a smaller die-area helps to shrink everything (a "photomask shrink"), resulting in the same number of transistors on a smaller die. It improves performance (smaller transistors switch faster), reduces power (smaller wires have less parasitic capacitance) and reduces cost (more CPUs fit on the same wafer of silicon). Releasing a CPU on the same size die, but with a smaller CPU core, keeps the cost about the same but allows higher levels of integration within one very-large-scale integration chip (additional cache, multiple CPUs or other components), improving performance and reducing overall system cost. As with most complex electronic designs, the logic verification effort (proving that the design does not have bugs) now dominates the project schedule of a CPU. Key CPU architectural innovations include the index register, the cache, virtual memory, instruction pipelining, superscalar execution, CISC, RISC, the virtual machine, emulators, microprogramming, and the stack. A variety of new CPU design ideas have been proposed, including reconfigurable logic, clockless CPUs, computational RAM, and optical computing. Benchmarking is a way of testing CPU speed. Examples include SPECint and SPECfp, developed by the Standard Performance Evaluation Corporation, and ConsumerMark, developed by the Embedded Microprocessor Benchmark Consortium (EEMBC). A number of metrics are commonly used to evaluate CPU designs, and there may be tradeoffs in optimizing some of them. In particular, many design techniques that make a CPU run faster make the "performance per watt", "performance per dollar", and "deterministic response" much worse, and vice versa (a small illustrative comparison follows this paragraph). 
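To illustrate the kind of tradeoff mentioned above, the sketch below computes performance per watt and performance per dollar for two hypothetical designs; the benchmark scores, power figures, and prices are made up for illustration and do not correspond to any real processor.

```python
# Minimal sketch of comparing designs on raw speed, performance per watt, and
# performance per dollar. All numbers are invented illustrative values.

designs = [
    {"name": "fast_core",      "score": 500.0, "watts": 95.0, "price": 400.0},
    {"name": "efficient_core", "score": 300.0, "watts": 35.0, "price": 180.0},
]

for d in designs:
    d["perf_per_watt"] = d["score"] / d["watts"]
    d["perf_per_dollar"] = d["score"] / d["price"]

# The design that wins on raw speed can lose on the efficiency metrics,
# which is the tradeoff the text describes.
for d in designs:
    print(d["name"], round(d["perf_per_watt"], 2), round(d["perf_per_dollar"], 2))
```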
There are several different markets in which CPUs are used. Since each of these markets differs in its requirements for CPUs, the devices designed for one market are in most cases inappropriate for the other markets. The vast majority of revenues generated from CPU sales is for general purpose computing, that is, desktop, laptop, and server computers commonly used in businesses and homes. In this market, the Intel IA-32 architecture and its 64-bit version x86-64 dominate, with the rival PowerPC and SPARC architectures maintaining much smaller customer bases. Yearly, hundreds of millions of IA-32 architecture CPUs are used by this market. A growing percentage of these processors are for mobile implementations such as netbooks and laptops. Since these devices are used to run countless different types of programs, these CPU designs are not specifically targeted at one type of application or one function. The demands of being able to run a wide range of programs efficiently have made these CPU designs among the more advanced technically, along with the disadvantages of being relatively costly and having high power consumption. In 1984, most high-performance CPUs required four to five years to develop. Scientific computing is a much smaller niche market (in revenue and units shipped). It is used in government research labs and universities. Before 1990, CPU design was often done for this market, but mass market CPUs organized into large clusters have proven to be more affordable. The main remaining area of active hardware design and research for scientific computing is for high-speed data transmission systems to connect mass market CPUs. As measured by units shipped, most CPUs are embedded in other machinery, such as telephones, clocks, appliances, vehicles, and infrastructure. Embedded processors sell in the volume of many billions of units per year, however, mostly at much lower price points than those of the general purpose processors. These single-function devices differ from the more familiar general-purpose CPUs in several ways. The embedded CPU family with the largest number of total units shipped is the 8051, averaging nearly a billion units per year. The 8051 is widely used because it is very inexpensive. The design time is now roughly zero, because it is widely available as commercial intellectual property. It is now often embedded as a small part of a larger system on a chip. The silicon cost of an 8051 is now as low as US$0.001, because some implementations use as few as 2,200 logic gates and take 0.0127 square millimeters of silicon. As of 2009, more CPUs are produced using the ARM architecture instruction set than any other 32-bit instruction set. The ARM architecture and the first ARM chip were designed in about one and a half years and 5 human years of work time. The 32-bit Parallax Propeller microcontroller architecture and the first chip were designed by two people in about 10 human years of work time. The 8-bit AVR architecture and the first AVR microcontroller were conceived and designed by two students at the Norwegian Institute of Technology. The 8-bit 6502 architecture and the first MOS Technology 6502 chip were designed in 13 months by a group of about 9 people. The 32-bit Berkeley RISC I and RISC II architectures and the first chips were mostly designed by a series of students as part of a four-quarter sequence of graduate courses. This design became the basis of the commercial SPARC processor design. 
For about a decade, every student taking the 6.004 class at MIT was part of a team—each team had one semester to design and build a simple 8-bit CPU out of 7400 series integrated circuits. One team of 4 students designed and built a simple 32-bit CPU during that semester. Some undergraduate courses require a team of 2 to 5 students to design, implement, and test a simple CPU in an FPGA in a single 15-week semester. The MultiTitan CPU was designed with 2.5 man-years of effort, which was considered "relatively little design effort" at the time. 24 people contributed to the 3.5-year MultiTitan research project, which included designing and building a prototype CPU. For embedded systems, the highest performance levels are often not needed or desired due to the power consumption requirements. This allows for the use of processors which can be totally implemented by logic synthesis techniques. These synthesized processors can be implemented in a much shorter amount of time, giving quicker time-to-market.
https://en.wikipedia.org/wiki?curid=7597
Cocktail A cocktail is an alcoholic mixed drink, which is either a combination of spirits, or one or more spirits mixed with other ingredients such as fruit juice, flavored syrup, or cream. There are various types of cocktails, based on the number and kind of ingredients added. The origins of the cocktail are debated. The Oxford Dictionaries define cocktail as "An alcoholic drink consisting of a spirit or spirits mixed with other ingredients, such as fruit juice or cream". A cocktail can contain alcohol, a sugar, and a bitter/citrus. When a mixed drink contains only a distilled spirit and a mixer, such as soda or fruit juice, it is a highball. Many of the International Bartenders Association Official Cocktails are highballs. When a mixed drink contains only a distilled spirit and a liqueur, it is a duo, and when it adds a mixer, it is a trio. Additional ingredients may be sugar, honey, milk, cream, and various herbs. Mixed drinks without alcohol that resemble cocktails are known as "mocktails" or "virgin cocktails". The origin of the word cocktail is disputed. The first recorded use of cocktail not referring to a horse is found in "The Morning Post and Gazetteer" in London, England, March 20, 1798: Mr. Pitt, two petit vers of "L'huile de Venus" Ditto, one of "perfeit amour" Ditto, "cock-tail" (vulgarly called ginger) "The Oxford English Dictionary" cites the word as originating in the U.S. The first recorded use of "cocktail" as a beverage (possibly non-alcoholic) in the United States appears in "The Farmer's Cabinet" of April 28, 1803. The first definition of cocktail known to be an alcoholic beverage appeared in "The Balance and Columbian Repository" (Hudson, New York) of May 13, 1806, in which editor Harry Croswell answered the question "What is a cocktail?". Etymologist Anatoly Liberman endorses as "highly probable" the theory advanced by Låftman (1946). In his book "Imbibe!" (2007), David Wondrich also speculates that "cocktail" is a reference to a practice for perking up an old horse by means of a ginger suppository so that the animal would "cock its tail up and be frisky." Several authors have theorized that cocktail may be a corruption of cock ale. There is a lack of clarity on the origins of cocktails. Traditionally cocktails were a mixture of spirits, sugar, water, and bitters. By the 1860s, however, a cocktail frequently included a liqueur. The first publication of a bartenders' guide which included cocktail recipes was in 1862 – "How to Mix Drinks; or, The Bon Vivant's Companion", by "Professor" Jerry Thomas. In addition to recipes for punches, sours, slings, cobblers, shrubs, toddies, flips, and a variety of other mixed drinks, it contained 10 recipes for "cocktails". A key ingredient differentiating cocktails from other drinks in this compendium was the use of bitters. Mixed drinks popular today that conform to this original meaning of "cocktail" include the Old Fashioned whiskey cocktail, the Sazerac cocktail, and the Manhattan cocktail. The ingredients listed (spirits, sugar, water, and bitters) match the ingredients of an Old Fashioned, which originated as a term used by late 19th century bar patrons to distinguish cocktails made the "old-fashioned" way from newer, more complex cocktails. The 1869 recipe book "Cooling Cups and Dainty Drinks", by William Terrington, also includes a description of cocktails. The term highball appears during the 1890s to distinguish a drink composed only of a distilled spirit and a mixer. 
The first "cocktail party" ever thrown was allegedly by Mrs. Julius S. Walsh Jr. of St. Louis, Missouri, in May 1917. Walsh invited 50 guests to her home at noon on a Sunday. The party lasted an hour, until lunch was served at 1 pm. The site of this first cocktail party still stands. In 1924, the Roman Catholic Archdiocese of St. Louis bought the Walsh mansion at 4510 Lindell Boulevard, and it has served as the local archbishop's residence ever since. During Prohibition in the United States (1920–1933), when alcoholic beverages were illegal, cocktails were still consumed illegally in establishments known as speakeasies. The quality of the liquor available during Prohibition was much worse than previously. There was a shift from whiskey to gin, which does not require aging and is therefore easier to produce illicitly. Honey, fruit juices, and other flavorings served to mask the foul taste of the inferior liquors. Sweet cocktails were easier to drink quickly, an important consideration when the establishment might be raided at any moment. With wine and beer less readily available, liquor-based cocktails took their place, even becoming the centerpiece of the new cocktail party. Cocktails became less popular in the late 1960s and through the 1970s, until resurging in the 1980s with vodka often substituting the original gin in drinks such as the martini. Traditional cocktails began to make a comeback in the 2000s, and by the mid-2000s there was a renaissance of cocktail culture in a style typically referred to as mixology that draws on traditional cocktails for inspiration but utilizes novel ingredients and often complex flavors. Lists Devices for producing and imbibing Media
https://en.wikipedia.org/wiki?curid=7599
Coptic Orthodox Church of Alexandria The Coptic Orthodox Church of Alexandria is an Oriental Orthodox Christian church based in Egypt, Africa and the Middle East. The head of the Church and the See of Alexandria is the Patriarch of Alexandria on the Holy See of Saint Mark, who also carries the title of Coptic Pope. The See of Alexandria is titular, and today the Coptic Pope presides from Saint Mark's Coptic Orthodox Cathedral in the Abbassia District in Cairo. The church follows the Alexandrian Rite for its liturgy, prayer and devotional patrimony. With approximately 10 million members worldwide, it is the country's largest Christian church. According to its tradition, the Coptic Church was established by Saint Mark, an apostle and evangelist, during the middle of the 1st century (c. AD 42). Due to disputes concerning the nature of Christ, it split from the rest of Christendom after the Council of Chalcedon in AD 451, resulting in a rivalry with the Byzantine Orthodox Church. In the 4th–7th centuries the Coptic Church gradually expanded due to the Christianization of the Aksumite empire and of two of the three Nubian kingdoms, Nobatia and Alodia, while the third Nubian kingdom, Makuria, recognized the Coptic patriarch after initially being aligned to the Byzantine Orthodox Church. After AD 639 Egypt was ruled by its Islamic conquerors from Arabia, and the treatment of the Coptic Christians ranged from tolerance to open persecution. In the 12th century, the church relocated its seat from Alexandria to Cairo. The same century also saw the Copts become a religious minority. During the 14th and 15th centuries, Nubian Christianity was supplanted by Islam. In 1959, the Ethiopian Orthodox Tewahedo Church was granted autocephaly or independence. This was extended to the Eritrean Orthodox Tewahedo Church in 1998 following the successful Eritrean War of Independence from Ethiopia. Since the Arab Spring in 2011, the Copts have been suffering increased religious discrimination and violence. The Egyptian Church is traditionally believed to have been founded by St Mark around AD 42, and regards itself as the subject of many prophecies in the Old Testament. Isaiah the prophet, in Chapter 19, Verse 19, says "In that day there will be an altar to the Lord in the midst of the land of Egypt, and a pillar to the Lord at its border". The first Christians in Egypt were common people who spoke Egyptian Coptic. There were also Alexandrian Jewish people such as Theophilus, whom Saint Luke the Evangelist addresses in the introductory chapter of his gospel. When the church was founded by Saint Mark during the reign of the Roman emperor Nero, a great multitude of native Egyptians (as opposed to Greeks or Jews) embraced the Christian faith. Christianity spread throughout Egypt within half a century of Saint Mark's arrival in Alexandria, as is clear from the New Testament writings found in Bahnasa, in Middle Egypt, which date to around AD 200, and a fragment of the Gospel of John, written in Coptic, which was found in Upper Egypt and can be dated to the first half of the 2nd century. In the 2nd century, Christianity began to spread to the rural areas, and scriptures were translated into the local language, namely Coptic. The Coptic language is a universal language used in Coptic churches in every country. It is derived from ancient Egyptian and uses Greek letters. Many of the hymns in the liturgy are in Coptic and have been passed down for several thousand years. 
The liturgical use of Coptic helps preserve Egypt's original language, which was banned by the Arab invaders, who ordered Arabic to be used instead. Many such hymns are still sung today. The Catechetical School of Alexandria is the oldest catechetical school in the world. St. Jerome records that the Christian School of Alexandria was founded by Saint Mark himself. Around AD 190, under the leadership of the scholar Pantaenus, the school of Alexandria became an important institution of religious learning, where students were taught by scholars such as Athenagoras, Clement, Didymus, and the native Egyptian Origen, who was considered the father of theology and who was also active in the field of commentary and comparative Biblical studies. Many scholars such as Jerome visited the school of Alexandria to exchange ideas and to communicate directly with its scholars. The scope of this school was not limited to theological subjects; science, mathematics and humanities were also taught there. The question-and-answer method of commentary began there, and 15 centuries before Braille, wood-carving techniques were in use there by blind scholars to read and write. The theological college of the catechetical school was re-established in 1893. The new school currently has campuses in Ireland, Cairo, New Jersey, and Los Angeles, where Coptic priests-to-be and other qualified men and women are taught, among other subjects, Christian theology, history, the Coptic language and art – including chanting, music, iconography, and tapestry. Many Egyptian Christians went to the desert during the 3rd century, and remained there to pray and work and dedicate their lives to seclusion and worship of God. This was the beginning of the monastic movement, which was organized by Anthony the Great, Saint Paul of Thebes, the world's first anchorite, Saint Macarius the Great and Saint Pachomius the Cenobite in the 4th century. Christian monasticism was born in Egypt and was instrumental in the formation of the Coptic Orthodox Church's character of submission, simplicity and humility, thanks to the teachings and writings of the Great Fathers of Egypt's Deserts. By the end of the 5th century, there were hundreds of monasteries, and thousands of cells and caves scattered throughout the Egyptian desert. A great number of these monasteries are still flourishing and have new vocations to this day. All Christian monasticism stems, either directly or indirectly, from the Egyptian example: Saint Basil the Great, Archbishop of Caesarea in Cappadocia, founder and organizer of the monastic movement in Asia Minor, visited Egypt around AD 357, and his rule is followed by the Eastern Orthodox Churches; Saint Jerome, who translated the Bible into Latin, came to Egypt while en route to Jerusalem around AD 400 and left details of his experiences in his letters; Benedict founded the Benedictine Order in the 6th century on the model of Saint Pachomius, but in a stricter form. Countless pilgrims have visited the "Desert Fathers" to emulate their spiritual, disciplined lives. In the 4th century, an Alexandrian presbyter named Arius began a theological dispute about the nature of Christ that spread throughout the Christian world and is now known as Arianism. The Ecumenical Council of Nicaea in AD 325 was convened by Constantine, after Pope Alexander I of Alexandria requested that a Council be held to respond to heresies, under the presidency of Saint Hosius of Cordova, to resolve the dispute. 
This eventually led to the formulation of the Symbol of Faith, also known as the Nicene Creed. The Creed, which is now recited throughout the Christian world, was based largely on the teaching put forth by a man who eventually would become Pope Saint Athanasius of Alexandria, the chief opponent of Arius. In the year AD 381, Pope Timothy I of Alexandria presided over the second ecumenical council known as the Ecumenical Council of Constantinople, to judge Macedonius, who denied the Divinity of the Holy Spirit. This council completed the Nicene Creed with this confirmation of the divinity of the Holy Spirit: We believe in the Holy Spirit, the Lord, the Giver of Life, who proceeds from the Father, who with the Father through the Son is worshiped and glorified who spoke by the Prophets and in One, Holy, Catholic, and Apostolic church. We confess one Baptism for the remission of sins and we look for the resurrection of the dead and the life of the coming age, Amen. Another theological dispute in the 5th century occurred over the teachings of Nestorius, the Patriarch of Constantinople who taught that God the Word was not hypostatically joined with human nature, but rather dwelt in the man Jesus. As a consequence of this, he denied the title "Mother of God" "(Theotokos)" to the Virgin Mary, declaring her instead to be "Mother of Christ" "Christotokos". When reports of this reached the Apostolic Throne of Saint Mark, Pope Saint Cyril I of Alexandria acted quickly to correct this breach with orthodoxy, requesting that Nestorius repent. When he would not, the Synod of Alexandria met in an emergency session and a unanimous agreement was reached. Pope Cyril I of Alexandria, supported by the entire See, sent a letter to Nestorius known as "The Third Epistle of Saint Cyril to Nestorius." This epistle drew heavily on the established Patristic Constitutions and contained the most famous article of Alexandrian Orthodoxy: "The Twelve Anathemas of Saint Cyril." In these anathemas, Cyril excommunicated anyone who followed the teachings of Nestorius. For example, "Anyone who dares to deny the Holy Virgin the title "Theotokos" is Anathema!" Nestorius however, still would not repent and so this led to the convening of the First Ecumenical Council of Ephesus (AD 431), over which Cyril presided. The Council confirmed the teachings of Saint Athanasius and confirmed the title of Mary as "Mother of God". It also clearly stated that anyone who separated Christ into two hypostases was anathema, as Cyril had said that there is "One Nature [and One Hypostasis] for God the Word Incarnate" ("Mia Physis tou Theou Logou Sesarkōmenē"). Also, the introduction to the creed was formulated as follows: We magnify you O Mother of the True Light and we glorify you O saint and Mother of God "(Theotokos)" for you have borne unto us the Saviour of the world. Glory to you O our Master and King: Christ, the pride of the Apostles, the crown of the martyrs, the rejoicing of the righteous, firmness of the churches and the forgiveness of sins. We proclaim the Holy Trinity in One Godhead: we worship Him, we glorify Him, Lord have mercy, Lord have mercy, Lord bless us, Amen. [not dissimilar to the "Axion Estin" Chant still used in Orthodoxy] When in AD 451 Emperor Marcian attempted to heal divisions in the Church, the response of Pope Dioscorus – the Pope of Alexandria who was later exiled – was that the emperor should not intervene in the affairs of the Church. 
It was at Chalcedon that the emperor, through the Imperial delegates, enforced harsh disciplinary measures against Pope Dioscorus in response to his boldness. In AD 449, Pope Dioscorus headed the 2nd Council of Ephesus, called the "Robber Council" by Chalcedonian historians. It held to the Miaphysite formula which upheld the Christology of "One Incarnate Nature of God the Word" (Greek: μία φύσις Θεοῦ Λόγου σεσαρκωμένη ("mia physis Theou Logou sesarkōmenē")), and upheld the heretic Eutyches claiming he was orthodox. The Council of Chalcedon summoned Dioscorus three times to appear at the council, after which he was deposed. The Council of Chalcedon further deposed him for his support of Eutyches, but not necessarily for Eutychian Monophysitism. Dioscorus appealed to the conciliar fathers to allow for a more Miaphysite interpretation of Christology at the council, but was denied. Following his being deposed, the Coptic Church and its faithful felt unfairly underrepresented at the council and oppressed politically by the Byzantine Empire. After the Byzantines appointed Proterius of Alexandria as Patriarch to represent the Chalcedonian Church, the Coptic Church appointed their own Patriarch Timothy Aelurus and broke from the Chalcedonian communion. The Council of Chalcedon, from the perspective of the Alexandrine Christology, has deviated from the approved Cyrillian terminology and declared that Christ was one hypostasis in two natures. However, in the Nicene-Constantinopolitan Creed, "Christ was conceived of the Holy Spirit and of the Virgin Mary," thus the foundation of the definition according to the Non-Chalcedonian adherents, according to the Christology of Cyril of Alexandria is valid. There is a change in the Non-Chalcedonian definition here, as the Nicene creed clearly uses the terms "of", rather than "in." In terms of Christology, the Oriental Orthodox (Non-Chalcedonians) understanding is that Christ is "One Nature—the Logos Incarnate," "of" the full humanity and full divinity. The Chalcedonians' understanding is that Christ is "recognized in" two natures, full humanity and full divinity. Oriental Orthodoxy contends that such a formulation is no different from what the Nestorians teach. This is the doctrinal perception that makes the apparent difference which separated the Oriental Orthodox from the Eastern Orthodox. The Council's findings were rejected by many of the Christians on the fringes of the Byzantine Empire, including Egyptians, Syriacs, Armenians, and others. From that point onward, Alexandria would have two patriarchs: the non-Chalcedonian native Egyptian one, now known as the Coptic Pope of Alexandria and Patriarch of All Africa on the Holy Apostolic See of St. Mark, and the Melkite or Imperial Patriarch, now known as the Greek Orthodox Patriarch of Alexandria. Almost the entire Egyptian population rejected the terms of the Council of Chalcedon and remained faithful to the native Egyptian Church (now known as the Coptic Orthodox Church of Alexandria). Those who supported the Chalcedonian definition remained in communion with the other leading imperial churches of Rome and Constantinople. The non-Chalcedonian party became what is today called the Oriental Orthodox Church. The Coptic Orthodox Church of Alexandria regards itself as having been misunderstood at the Council of Chalcedon. 
There was an opinion in the Church that the Council may have understood the Church of Alexandria correctly, but wanted to curtail the existing power of the Alexandrine hierarch, especially after the events that had happened several years before at Constantinople, involving Pope Theophilus of Alexandria and Patriarch John Chrysostom, and the unfortunate outcome of the Second Council of Ephesus in AD 449, where Eutyches misled Pope Dioscorus and the Council by confessing the Orthodox faith in writing and then renouncing it after the Council, which in turn had upset Rome, especially since the Tome that had been sent was not read during the Council sessions. To make things even worse, the Tome of Pope Leo of Rome was, according to the Alexandrian School of Theology, particularly in regard to the definition of Christology, considered influenced by Nestorian heretical teachings. So, in view of the above, and especially this sequence of events, the hierarchs of Alexandria were considered on the one hand to hold too much power, while on the other hand the conflict between the schools of theology produced an impasse and a scapegoat, namely Pope Dioscorus. The Tome of Leo has been widely criticized (surprisingly, by Roman Catholic and Eastern Orthodox scholars) in the past 50 years as a much less than perfect orthodox theological doctrine. In anathematizing Pope Leo because of the tone and content of his Tome, Pope Dioscorus was, in the Alexandrine theological perception, found guilty of doing so without due process; in other words, the Tome of Leo was not a subject of heresy in the first place, but rather the question was why it had been neither acknowledged nor read at the Second Council of Ephesus in AD 449. Pope Dioscorus of Alexandria was never labeled a heretic by the council's canons. Copts also believe that the Pope of Alexandria was forcibly prevented from attending the third congregation of the council from which he was ousted, apparently the result of a conspiracy tailored by the Roman delegates. Before the current positive era of Eastern and Oriental Orthodox dialogues, Chalcedonians sometimes used to call the non-Chalcedonians "Monophysites", though the Coptic Orthodox Church in reality regards Monophysitism as a heresy. The Chalcedonian doctrine in turn came to be known as "Dyophysite". A term that comes closer to Coptic Orthodoxy is Miaphysite, which refers to a conjoined nature for Christ, both human and divine, united indivisibly in the Incarnate Logos. The Coptic Orthodox Church of Alexandria believes that Christ is perfect in His divinity, and He is perfect in His humanity, but His divinity and His humanity were united in one nature called "the nature of the incarnate word", which was reiterated by Saint Cyril of Alexandria. Copts thus believe in two natures, "human" and "divine", that are united in one hypostasis "without mingling, without confusion, and without alteration". These two natures "did not separate for a moment or the twinkling of an eye" (Coptic Liturgy of Saint Basil of Caesarea). Prior to Chalcedon, the Imperial Church's main division stemmed from Nestorianism, eventually leading the Church of the East to declare its independence in AD 424. After the Council of Chalcedon in AD 451, the Coptic Church and its hierarchy felt suspicious of what they believed were Nestorian elements within the Chalcedonian Church. 
As a result, the anti-Chalcedonian partisan Timotheos Aelurus took it upon himself to depose the Chalcedonian Pope of Alexandria, Proterius of Alexandria, and to set himself up as the Pope of Alexandria in opposition to the Chalcedonian Church. Copts suffered under the rule of the Byzantine Empire. The Melkite Patriarchs, appointed by the emperors as both spiritual leaders and civil governors, massacred those Egyptians they considered heretics. Many were tortured and martyred in attempts to force their acceptance of the Chalcedonian terms, but the Egyptians remained loyal to the Cyrillian Miaphysitism. One of the most renowned Egyptian saints of the period is Saint Samuel the Confessor. The Muslim invasion of Egypt took place in AD 639. Relying on eyewitness testimony, Bishop John of Nikiu in his Chronicle provides a graphic account of the invasion from a Coptic perspective. Although the Chronicle has only been preserved in an Ethiopic (Ge'ez) text, some scholars believe that it was originally written in Coptic. John's account is critical of the invaders, who he says "despoiled the Egyptians of their possessions and dealt cruelly with them", and he vividly details the atrocities committed by the Muslims against the native population during the conquest: "And when with great toil and exertion they had cast down the walls of the city, they forthwith made themselves masters of it, and put to the sword thousands of its inhabitants and of the soldiers, and they gained an enormous booty, and took the women and children captive and divided them amongst themselves, and they made that city a desolation." Though critical of the Muslim commander (Amr ibn al-As), who, during the campaign, he says "had no mercy on the Egyptians, and did not observe the covenant they had made with him, for he was of a barbaric race", he does note that following the completion of the conquest, Amr "took none of the property of the Churches, and he committed no act of spoilation or plunder, and he preserved them throughout all his days." Despite the political upheaval, the Egyptian population remained mainly Christian. However, gradual conversions to Islam over the centuries had changed Egypt from a Christian to a largely Muslim country by the end of the 12th century. Another scholar writes that a combination of "repression of Coptic revolts", Arab-Muslim immigration, and Coptic conversion to Islam resulted in the demographic decline of the Copts. Egypt's Umayyad rulers taxed Christians at a higher rate than Muslims, driving merchants towards Islam and undermining the economic base of the Coptic Church. Although the Coptic Church did not disappear, the Umayyad tax policies made it difficult for the church to retain the Egyptian elites. The Church suffered greatly under the many regimes of Islamic rule. Sometime during the 2nd millennium AD, the leadership of the Church, including the Pope, moved from Alexandria to Cairo. In 1798, the French invaded Egypt unsuccessfully, and the British helped the Turks to regain power over Egypt under the Muhammad Ali dynasty. The position of Copts began to improve early in the 19th century under the stability and tolerance of the Muhammad Ali dynasty. The Coptic community ceased to be regarded by the state as an administrative unit. In 1855 the jizya tax was abolished by Sa'id Pasha. Shortly thereafter, the Copts started to serve in the Egyptian army. Towards the end of the 19th century, the Coptic Church underwent phases of new development. 
In 1853, Pope Cyril IV established the first modern Coptic schools, including the first Egyptian school for girls. He also founded a printing press, which was only the second national press in the country. The Pope established very friendly relations with other denominations, to the extent that when the Greek Patriarch in Egypt had to absent himself from the country for a long period of time, he left his Church under the guidance of the Coptic Patriarch. The Theological College of the School of Alexandria was reestablished in 1893. It began its new history with five students, one of whom was later to become its dean. Today it has campuses in Alexandria and Cairo, and in various dioceses throughout Egypt, as well as outside Egypt. It has campuses in New Jersey, Los Angeles, Sydney, Melbourne, and London, where potential clergymen and other qualified men and women study many subjects, including theology, church history, missionary studies, and the Coptic language. In 1959, the Ethiopian Orthodox Tewahedo Church was granted its first own Patriarch by Pope Cyril VI. Furthermore, the Eritrean Orthodox Tewahedo Church similarly became independent of the Ethiopian Orthodox Tewahedo Church in 1994, when four bishops were consecrated by Pope Shenouda III of Alexandria to form the basis of a local Holy Synod of the Eritrean Church. In 1998, the Eritrean Orthodox Tewahedo Church gained its autocephaly from the Coptic Orthodox Church when its first Patriarch was enthroned by Pope Shenouda III of Alexandria. These three churches remain in full communion with each other and with the other Oriental Orthodox churches. The Ethiopian Orthodox Tewahedo Church and the Eritrean Orthodox Tewahedo Church do acknowledge the Honorary Supremacy of the Coptic Orthodox Patriarch of Alexandria, since the Church of Alexandria is technically their Mother Church. Upon their selection, both Patriarchs (Ethiopian & Eritrean) must receive the approval and communion from the Holy Synod of the Apostolic See of Alexandria before their enthronement. Since the 1980s theologians from the Oriental (non-Chalcedonian) Orthodox and Eastern (Chalcedonian) Orthodox churches have been meeting in a bid to resolve theological differences, and have concluded that many of the differences are caused by the two groups using different terminology to describe the same thing (see Agreed Official Statements on Christology with the Eastern Orthodox Churches). In the summer of 2001, the Coptic Orthodox and Greek Orthodox Patriarchates of Alexandria agreed to mutually recognize baptisms performed in each other's churches, making re-baptisms unnecessary, and to recognize the sacrament of marriage as celebrated by the other. Previously, if a Coptic Orthodox and Greek Orthodox wanted to get married, the marriage had to be performed twice, once in each church, for it to be recognized by both. Now it can be done in only one church and be recognized by both. According to Christian Tradition and Canon Law, the Coptic Orthodox Church of Alexandria only ordains men to the priesthood and episcopate, and if they wish to be married, they must be married before they are ordained. In this respect they follow the same practices as all other Oriental Orthodox Churches, as well as all of Eastern Orthodox Churches. Traditionally, the Coptic language was used in church services, and the scriptures were written in the Coptic alphabet. 
However, due to the Arabisation of Egypt, service in churches started to witness increased use of Arabic, while preaching is done entirely in Arabic. Native languages are used, in conjunction with Coptic, during services outside Egypt. The liturgical calendar of the Coptic Orthodox Church is the Coptic calendar (also called the Alexandrian Calendar). This calendar is based on the Egyptian calendar of Ancient Egypt. Coptic Orthodox Christians celebrate Christmas on 29 Koiak, which corresponds to 7 January in the Gregorian Calendar and 25 December in the Julian Calendar. Coptic Christmas was adopted as an official national holiday in Egypt in 2002. In Tahrir Square, Cairo, on Wednesday 2 February 2011, Coptic Christians joined hands to provide a protective cordon around their Muslim neighbors during salat (prayers) in the midst of the 2011 Egyptian Revolution. On 17 March 2012, the Coptic Orthodox Pope, Pope Shenouda III died, leaving many Copts mourning and worrying as tensions rose with Muslims. Pope Shenouda III constantly met with Muslim leaders in order to create peace. Many were worried about Muslims controlling Egypt as the Muslim Brotherhood won 70% of the parliamentary elections. On 4 November 2012, Bishop Tawadros was chosen as the 118th Pope. In a ritual filled with prayer, chants and incense at Abbasiya cathedral in Cairo, the 60-year-old bishop's name was picked by a blindfolded child from a glass bowl in which the names of two other candidates had also been placed. The enthronement was scheduled on 18 November 2012. Available Egyptian census figures and other third party survey reports have not reported more than 4 million Coptic Orthodox Christians in Egypt. However media and other agencies, sometimes taking into account the claims of the Church itself, generally approximate the Coptic Orthodox population at 10% of the Egyptian population or 10 million people. The majority of them live in Egypt under the jurisdiction of the Coptic Orthodox Church of Alexandria. Since 2006, Egyptian censuses have not reported on religion and church leaders have alleged that Christians were under-counted in government surveys. In 2017, a government owned newspaper Al Ahram estimated the percentage of Copts at 10 to 15% and the membership claimed by the Coptic Orthodox Church is in the range of 20 to 25 million. There are also significant numbers in the diaspora outside Africa in countries such as the United States, Canada, Australia, France, and Germany. The exact number of Egyptian born Coptic Orthodox Christians in the diaspora is hard to determine and is roughly estimated to be close to 1 million. There are between 150,000 and 200,000 adherents in Sudan. Although under the jurisdiction of the Coptic Orthodox Church, these adherents are not considered Copts, since they are not ethnic Egyptians. While Copts have cited instances of persecution throughout their history, Human Rights Watch has noted "growing religious intolerance" and sectarian violence against Coptic Christians in recent years, and a failure by the Egyptian government to effectively investigate properly and prosecute those responsible. Over a hundred Egyptian copts have been killed in sectarian clashes from 2011 to 2017, and many homes and businesses destroyed. In just one province (Minya), 77 cases of sectarian attacks on Copts between 2011 and 2016 have been documented by the Egyptian Initiative for Personal Rights. The abduction and disappearance of Coptic Christian women and girls also remains a serious ongoing problem. 
Besides Egypt, the Church of Alexandria has jurisdiction over all of Africa. In addition, the Ethiopian Orthodox Tewahedo Church and Eritrean Orthodox Tewahedo Church are daughter churches of the Coptic Orthodox Church of Alexandria. Both the Patriarchate of Addis Ababa and all Ethiopia and the Patriarchate of Asmara and all Eritrea acknowledge the supremacy of honor and dignity of the Pope of Alexandria on the basis that both patriarchates were established by the Throne of Alexandria and that they have their roots in the Apostolic Church of Alexandria, and acknowledge that Saint Mark the Apostle is the founder of their Churches through the heritage and Apostolic evangelization of the Fathers of Alexandria. Ethiopia received Christianity next to Jerusalem, through Jesus's own apostle, only a year after Jesus was crucified (Acts 8: 26–39). Christianity became a national religion of Ethiopia, under the dominion of the Church of Alexandria, in the 4th century. The first bishop of Ethiopia, Saint Frumentius, was consecrated as Bishop of Axum by Pope Athanasius of Alexandria in AD 328. From then on, until 1959, the Pope of Alexandria, as Patriarch of All Africa, always named an Egyptian (a Copt) to be the Archbishop of the Ethiopian Church. On 13 July 1948, the Coptic Church of Alexandria and the Ethiopian Orthodox Tewahedo Church reached an agreement concerning the relationship between the two churches. In 1950, the Ethiopian Orthodox Tewahedo Church was granted autocephaly by Pope Joseph II of Alexandria, head of the Coptic Orthodox Church. Five Ethiopian bishops were immediately consecrated by the Pope of Alexandria and Patriarch of All Africa, and were empowered to elect a new Patriarch for their church. This promotion was completed when Joseph II consecrated the first Ethiopian-born Archbishop, Abuna Basilios, as head of the Ethiopian Church on 14 January 1951. In 1959, Pope Cyril VI of Alexandria crowned Abuna Basilios as the first Patriarch of Ethiopia. Patriarch Basilios died in 1971, and was succeeded on the same year by Abuna Theophilos. With the fall of Emperor Haile Selassie I of Ethiopia in 1974, the new Marxist government arrested Abuna Theophilos and secretly executed him in 1979. The Ethiopian government then ordered the Ethiopian Church to elect Abuna Takla Haymanot as Patriarch of Ethiopia. The Coptic Orthodox Church refused to recognize the election and enthronement of Abuna Takla Haymanot on the grounds that the Synod of the Ethiopian Church had not removed Abuna Theophilos, and that the Ethiopian government had not publicly acknowledged his death, and he was thus still legitimate Patriarch of Ethiopia. Formal relations between the two churches were halted, although they remained in communion with each other. After the death of Abuna Takla Haymanot in 1988, Abune Merkorios who had close ties to the Derg (Communist) government was elected Patriarch of Ethiopia. Following the fall of the Derg regime in 1991, Abune Merkorios abdicated under public and governmental pressure and went to exile in the United States. The newly elected Patriarch, Abune Paulos was officially recognized by the Coptic Orthodox Church of Alexandria in 1992 as the legitimate Patriarch of Ethiopia. Formal relations between the Coptic Church of Alexandria and the Ethiopian Orthodox Tewahedo Church were resumed on 13 July 2007. Abune Paulos died in August 2012. 
Following the independence of Eritrea from Ethiopia in 1993, the newly independent Eritrean government appealed to Pope Shenouda III of Alexandria for Eritrean Orthodox autocephaly. In 1994, Pope Shenouda ordained Abune Phillipos as first Archbishop of Eritrea. The Eritrean Orthodox Tewahedo Church obtained autocephaly on 7 May 1998, and Abune Phillipos was subsequently consecrated as first Patriarch of Eritrea. The two churches remain in full communion with each other and with the other Oriental Orthodox Churches, although the Coptic Orthodox Church of Alexandria, along with the Ethiopian Orthodox Tewahedo Church does not recognize the deposition of the third Patriarch of Eritrea, Abune Antonios. The Coptic Orthodox Church has a presence in many countries outside Egypt, including: The patriarch of Alexandria was originally known merely as bishop of Alexandria. However, this title continued to evolve as the Church grew under Theophilus and his nephew and successor Cyril (AD 376–444), and especially in the 5th century when the Church developed its hierarchy. The bishop of Alexandria, being the successor of the first bishop in Roman Egypt consecrated by Saint Mark, was honored by the other bishops as first among equals "primus inter pares". Under the sixth canon of the Council of Nicaea, Cyril was raised to prelate or chief bishop at the head of the episcopates of Egypt, Libya, and the Pentapolis without the existence of intermediate archbishops as existed in other ecclesiastic provinces. He had the privilege of choosing and consecrating bishops. The title of "pope" has been attributed to the Patriarch of Alexandria since the episcopate of Heraclas, the 13th Patriarch of Alexandria. All the clergy of Alexandria and Lower Egypt honored him with the title "papas", which means "father" as the archbishop and metropolitan having authority over all bishops, within the Egyptian province, who are under his jurisdiction. Alexandria, while the ecclesiastical and provincial capital, also had the distinction as being the place where Saint Mark was martyred. The title "Patriarch" originally referred to a clan leader or head of a familial lineage. Ecclesiastically it means a bishop of high rank and was originally used as a title for the bishops of Rome, Constantinople, Jerusalem, Antioch, and Alexandria. For the Coptic patriarch, this title was "Patriarch of Alexandria and all Africa on the Holy Apostolic Throne of Saint Mark the Evangelist," that is "of Egypt". The title of "Patriarch" was first used around the time of the Third Ecumenical Council of Ephesus, convened in AD 431, and ratified at Chalcedon in AD 451. Only the Patriarch of Alexandria has the double title of "Pope" and "Patriarch" among the Eastern Orthodox and Oriental Orthodox ecumenical church heads. The Coptic Orthodox patriarchate of Alexandria is governed by its Holy Synod, which is headed by the Patriarch of Alexandria. Under his authority are the metropolitan archbishops, metropolitan bishops, diocesan bishops, patriarchal exarchs, missionary bishops, auxiliary bishops, suffragan bishops, assistant bishops, chorbishops and the patriarchal vicars for the Church of Alexandria. They are organized as follows:
https://en.wikipedia.org/wiki?curid=7601
The Family International The Family International (TFI) is a cult which was founded in Huntington Beach, California, US in 1968. It was originally named Teens for Christ and it later gained notoriety as The Children of God (COG). It was later renamed and reorganized as The Family of Love, which was eventually shortened to The Family, and it is currently named The Family International. TFI initially spread a message of salvation, apocalypticism, spiritual "revolution and happiness" and distrust of the outside world, which the members called "The System". In 1976, it began a method of evangelism called Flirty Fishing that used sex to "show God's love and mercy" and win converts, resulting in controversy. TFI's founder and prophetic leader, David Berg (who was first called "Moses David" in the Texas press), gave himself the titles of "King", "The Last Endtime Prophet", "Moses", and "David". He communicated with his followers via "Mo Letters"—letters of instruction and counsel on myriad spiritual and practical subjects—until his death in late 1994. After his death, his widow Karen Zerby became the leader of TFI, taking the titles of "Queen" and "Prophetess". She married Steve Kelly (also known as Peter Amsterdam), an assistant of Berg's whom Berg had handpicked as her "consort". Kelly took the title of "King Peter" and became the face of TFI, speaking in public more often than either David Berg or Karen Zerby. There have been multiple allegations of child sexual abuse made by past members. Members of The Children of God (COG) founded communes, first called colonies (now referred to as homes), in various cities. They would proselytize in the streets and distribute pamphlets. Leaders within COG were referred to as "The Chain". The founder of the movement, David Brandt Berg (1919–1994), was a former Christian and Missionary Alliance pastor. Berg communicated with his followers by writing letters. He published nearly 3,000 letters over a period of 24 years, referred to as the "Mo Letters". In a letter written in January 1972, Berg stated that he was God's prophet for the contemporary world, attempting to further solidify his spiritual authority within the group. Berg's letters also contained public acknowledgement of his own failings and weaknesses. By 1972, COG had 130 communities around the world. The Children of God was disbanded in February 1978. Berg reorganized the movement amid reports of serious misconduct and financial mismanagement, The Chain's abuse of authority, and disagreements within it about the continued use of Flirty Fishing. The group was also accused of sexually abusing and raping minors within the organization, with considerable evidence to support this claim. One-eighth of the total membership left the movement. Those who remained became part of a reorganized movement called the Family of Love, and later, The Family. The majority of the group's beliefs remained the same. The Family of Love era was characterized by international expansion. In 1976, before the dissolution of The Children of God, David Berg had introduced a new proselytizing method called Flirty Fishing (or FFing), which encouraged female members to "show God's love" through sexual relationships with potential converts. Flirty Fishing was practiced by members of Berg's inner circle starting in 1973, was introduced to the general membership in 1976, and became common practice within the group. In some areas flirty fishers used escort agencies to meet potential converts. 
According to TFI "over 100,000 received God's gift of salvation through Jesus, and some chose to live the life of a disciple and missionary" as a result of Flirty Fishing. Researcher Bill Bainbridge obtained data from TFI suggesting that, from 1974 until 1987, members had sexual contact with 223,989 people while practicing Flirty Fishing. In March 1989, TF issued a statement that, in "early 1985", an urgent memorandum had been sent to all members "reminding them that any such activities [adult–child sexual contact] are "strictly forbidden" within our group" (emphasis in original), and such activities were grounds for immediate excommunication from the group. In January 2005, Claire Borowik, a spokesperson for TFI, stated that:[d]ue to the fact that our current zero-tolerance policy regarding sexual interaction between adults and underage minors was not in our literature published before 1986, we came to the realization that during a transitional stage of our movement, from 1978 until 1986, there were cases when some minors were subject to sexually inappropriate advances ... This was corrected officially in 1986, when any contact between an adult and minor (any person under 21 years of age) was declared an excommunicable offense. After Berg's death in October 1994, Karen Zerby (known in the group as Mama Maria, Queen Maria, Maria David, or Maria Fontaine), assumed leadership of the group. In February 1995, the group introduced the "Love Charter", which defined the rights and responsibilities of Charter Members and Homes. The Charter also included the "Fundamental Family Rules", a summary of rules and guidelines from past TF publications which were still in effect. In the 1994–95 British court case, the Rt. Hon. Lord Justice Alan Ward ruled that the group, including some of its top leaders, had in the past engaged in abusive sexual practices involving minors and had also used severe corporal punishment and sequestration of minors. He found that by 1995 TF had abandoned these practices and concluded that they were a safe environment for children. Nevertheless, he did require that the group cease all corporal punishment of children in the United Kingdom and denounce any of Berg's writings that were "responsible for children in TF having been subjected to sexually inappropriate behaviour". The Love Charter is The Family's set governing document that entails each member's rights, responsibilities and requirements, while the "Missionary Member Statutes" and "Fellow Member Statutes" were written for the governance of TFI's Missionary member and Fellow Member circles, respectively. FD Homes were reviewed every six months against a published set of criteria. The Love Charter increased the number of single family homes as well as homes that relied on jobs such as self-employment. TFI's recent teachings are based on beliefs they term the "new [spiritual] weapons". TFI members believe that they are soldiers in the spiritual war of good versus evil for the souls and hearts of men. These include angels, departed humans, other religious and mythical figures, and even celebrities; for example the goddess Aphrodite, the Snowman, Merlin, the Sphinx, Elvis, Marilyn Monroe, Audrey Hepburn, Richard Nixon, and Winston Churchill. 
TFI believes that the Biblical passage "I will give you the keys of the kingdom of heaven, and whatsoever you bind on earth will be bound in heaven, and whatsoever you loose on earth will be loosed in heaven" (), refers to an increasing amount of spiritual authority that was given to Peter and the early disciples. According to TFI beliefs, this passage refers to keys that were hidden and unused in the centuries that followed, but were again revealed through Karen Zerby as more power to pray and obtain miracles. TFI members call on the various Keys of the Kingdom for extra effect during prayer. The Keys, like most TFI beliefs, were published in magazines that looked like comic-books in order to make them teachable to children. These beliefs are still generally held and practiced, even after the "reboot" documents of 2010. This is a term TFI members use to describe their intimate, sexual relationship with Jesus. TFI describes its "Loving Jesus" teaching as a radical form of bridal theology. They believe the church of followers is Christ's bride, called to love and serve him with wifely fervor. But they take bridal theology further, encouraging members to imagine Jesus is joining them during sexual intercourse and masturbation. Male members are cautioned to visualize themselves as women, in order to avoid a homosexual relationship with Jesus. Many TFI publications, and spirit messages claimed to be from Jesus himself, elaborate this intimate, sexual relation they believe Jesus desires and needs. TFI imagines itself as his special "bride" in graphic poetry, guided visualizations, artwork, and songs. Some TFI literature is not brought into conservative countries for fear it may be classified at customs as pornography. The literature outlining this view of Jesus and his desire for a sexual relationship with believers was edited for younger teens, then further edited for children. Second-generation adults (known as "SGAs") are adults born or reared in TFI. Anti-TFI sentiment has been publicly expressed by some who have left the group; examples include sisters Celeste Jones, Kristina Jones, and Juliana Buhring, who wrote a book on their lives in TFI. TFI members are expected to respect legal and civil authorities where they live. Members have typically cooperated with appointed authorities, even during the police and social-service raids of their communities in the early 1990s. The group has been criticized by the press and the anti-cult movement. In 1971, an organization called FREECOG was founded by concerned parents and others, including deprogrammer Ted Patrick, to "free" members of the COG from their involvement in the group. Academics were divided, with some categorizing TFI as a "new religious movement", and others, such as Benjamin Beit-Hallahmi and John Huxley, labeling the group a "cult".
https://en.wikipedia.org/wiki?curid=7602
Code of Hammurabi The Code of Hammurabi is a well-preserved Babylonian code of law of ancient Mesopotamia, dated to about 1754 BC (Middle Chronology). It is one of the oldest deciphered writings of significant length in the world. The sixth Babylonian king, Hammurabi, enacted the code. A partial copy exists on a 2.25-metre-tall (7.5 ft) stone stele. It consists of 282 laws, with scaled punishments, adjusting "an eye for an eye, a tooth for a tooth" according to social status and gender: slave versus free, man versus woman. Nearly half of the code deals with matters of contract, establishing, for example, the wages to be paid to an ox driver or a surgeon. Other provisions set the terms of a transaction, the liability of a builder for a house that collapses, or property that is damaged while left in the care of another. A third of the code addresses issues concerning household and family relationships such as inheritance, divorce, paternity, and reproductive behavior. Only one provision appears to impose obligations on a government official; this provision establishes that a judge who alters his decision after it is written down is to be fined and removed from the bench permanently. A few provisions address issues related to military service. The code was discovered by modern archaeologists in 1901, and its translation was published in 1902 by Jean-Vincent Scheil. This nearly complete example of the code is carved into a diorite stele, 2.25 m (7.5 ft) tall, in the shape of a huge index finger. The code is inscribed in the Akkadian language, using cuneiform script carved into the stele. The material was imported into Sumer from Magan, today the area covered by the United Arab Emirates and Oman. It is currently on display in the Louvre, with replicas in numerous institutions, including the Oriental Institute at the University of Chicago, the Northwestern Pritzker School of Law in Chicago, the Clendening History of Medicine Library & Museum at the University of Kansas Medical Center, the library of the Theological University of the Reformed Churches in the Netherlands, the Pergamon Museum of Berlin, the Arts Faculty of the University of Leuven in Belgium, the National Museum of Iran in Tehran, the Department of Anthropology, National Museum of Natural History, Smithsonian Institution, the University Museum at the University of Pennsylvania, the Pushkin State Museum of Fine Arts in Russia, the Prewitt-Allen Archaeological Museum at Corban University, Garrett-Evangelical Theological Seminary, and the Museum of the Bible in Washington, DC. Hammurabi ruled from 1792 to 1750 BC (according to the middle chronology). At the head of the stone slab is Hammurabi receiving the law from Shamash, and in the preface he states, "Anu and Bel called by name me, Hammurabi, the exalted prince, who feared God, to bring about the rule of righteousness in the land, to destroy the wicked and the evil-doers; so that the strong should not harm the weak; so that I should rule over the black-headed people like Shamash, and enlighten the land, to further the well-being of mankind." The laws were arranged in 44 columns and 28 paragraphs; some follow the rule of "an eye for an eye". The stele was taken as plunder by the Elamite king Shutruk-Nahhunte in the 12th century BC and carried off to Susa in Elam (located in the present-day Khuzestan Province of Iran), where it was no longer available to the Babylonian people. 
However, when Cyrus the Great brought both Babylon and Susa under the rule of his Persian Empire and placed copies of the document in the Library of Sippar, the text became available for all the peoples of the vast Persian Empire to view. In 1901, Egyptologist Gustave Jéquier, a member of an expedition headed by Jacques de Morgan, found the stele containing the Code of Hammurabi during archaeological excavations at the ancient site of Susa in Khuzestan. The stele unearthed in 1901 had many laws scraped off by Shutruk-Nahhunte. Early estimates pegged the number of missing laws at 34; however, the exact number is still not determined, and only 30 have been discovered so far. The common belief is that the code contained 282 laws in total. The Code of Hammurabi was one of several sets of laws in the ancient Near East and also one of the first forms of law. The code of laws was arranged in orderly groups, so that all who read the laws would know what was required of them. Earlier collections of laws include the Code of Ur-Nammu, king of Ur (c. 2050 BC), the Laws of Eshnunna (c. 1930 BC) and the codex of Lipit-Ishtar of Isin (c. 1870 BC), while later ones include the Hittite laws, the Assyrian laws, and Mosaic Law. These codes come from similar cultures in a relatively small geographical area, and they have passages that resemble each other. The Code of Hammurabi is the longest surviving text from the Old Babylonian period. The code has been seen as an early example of a fundamental law regulating a government – i.e., a primitive constitution. The code is also one of the earliest examples of the idea of presumption of innocence, and it also suggests that both the accused and accuser have the opportunity to provide evidence. The occasional nature of many provisions suggests that the code may be better understood as a codification of Hammurabi's supplementary judicial decisions, and that, by memorializing his wisdom and justice, its purpose may have been the self-glorification of Hammurabi rather than a modern legal code or constitution. However, its copying in subsequent generations indicates that it was used as a model of legal and judicial reasoning. While the Code of Hammurabi was trying to achieve equality, biases still existed against those categorized in the lower end of the social spectrum, and some of the punishments and justice could be gruesome. The magnitude of criminal penalties often was based on the identity and gender of both the person committing the crime and the victim. The Code issues justice following the three classes of Babylonian society: property owners, freed men, and slaves. Punishments for someone assaulting someone from a lower class were far lighter than if they had assaulted someone of equal or higher status. For example, if a doctor killed a rich patient, he would have his hands cut off, but if he killed a slave, only financial restitution was required. Women could also receive punishments that their male counterparts would not, as men were permitted to have affairs with their servants and slaves, whereas married women would be harshly punished for committing adultery. Various copies of portions of the Code of Hammurabi have been found on baked clay tablets, some possibly older than the celebrated basalt stele now in the Louvre. The Prologue of the Code of Hammurabi (the first 305 inscribed squares on the stele) is on such a tablet, also at the Louvre (Inv #AO 10237). 
Some gaps in the list of benefits bestowed on cities recently annexed by Hammurabi may imply that it is older than the famous stele (currently dated to the early 18th century BC). Likewise, the Museum of the Ancient Orient, part of the Istanbul Archaeology Museums, also has a "Code of Hammurabi" clay tablet, dated to 1790 BC (in Room 5, Inv # Ni 2358). In July 2010, archaeologists reported that a fragmentary Akkadian cuneiform tablet was discovered at Tel Hazor, Israel, containing a c. 1700 BC text that was said to be partly parallel to portions of the Hammurabi code. The Hazor law code fragments are currently being prepared for publication by a team from the Hebrew University of Jerusalem. Today, approximately 275 laws from Hammurabi's Code are known. Each law is written in two parts: a specific situation or case is outlined, then a corresponding decision is given. One of the best-known laws from Hammurabi's code expresses the "an eye for an eye" principle noted above. Hammurabi prescribed many other punishments as well; for example, if a son strikes his father, his hands shall be hewn off (translations vary). The laws covered a wide range of subjects. 
https://en.wikipedia.org/wiki?curid=7604
Rum and Coke Rum and Coke, or the Cuba libre (Spanish for "Free Cuba"), is a highball cocktail consisting of cola, rum, and in many recipes lime juice on ice. Traditionally, the cola ingredient is Coca-Cola ("Coke"), and the alcohol is a light rum such as Bacardi. However, the drink may be made with various types of rums and cola brands, and lime juice may or may not be included. The cocktail originated in the early 20th century in Cuba, after the country won independence in the Spanish–American War. It subsequently became popular across Cuba, the United States, and other countries. Its simple recipe and inexpensive, ubiquitous ingredients have made it one of the world's most popular alcoholic drinks. Drink critics often consider the drink mediocre, but it has been noted for its historical significance. The drink was created in Cuba in the early 1900s, but its exact origins are not known with certainty. It became popular shortly after 1900, when bottled Coca-Cola was first imported into Cuba from the United States. Its origin is associated with the heavy U.S. presence in Cuba following the Spanish–American War of 1898; the drink's traditional name, "Cuba libre" (Free Cuba), was the slogan of the Cuban independence movement. The Cuba libre is sometimes said to have been created during the Spanish–American War. However, this predates the first distribution of Coca-Cola to Cuba in 1900. A drink called a "Cuba libre" was indeed known in 1898, but this was a mix of water and brown sugar. Fausto Rodriguez, a Bacardi advertising executive, claimed to have been present when the drink was first poured, and produced a notarized affidavit to that effect in 1965. According to Rodriguez, this took place in August 1900, when he was a 14-year-old messenger working for a member of the U.S. Army Signal Corps in Havana. One day at a local bar, Rodriguez's employer ordered Bacardi rum mixed with Coca-Cola. This intrigued a nearby group of American soldiers, who ordered a round for themselves, giving birth to a popular new drink. Bacardi published Rodriguez's affidavit in a "Life" magazine ad in 1966. However, Rodriguez's status as a Bacardi executive has led some commentators to doubt the veracity of his story. Another story states that the drink was first created in 1902 at Havana's Restaurant El Floridita to celebrate the anniversary of Cuban independence. The drink became a staple in Cuba, catching on due to the pervasiveness of its ingredients. Havana was already known for its iced drinks in the 19th century, as it was one of the few warm-weather cities that had abundant stores of ice shipped down from colder regions. Bacardi and other Cuban rums also boomed after independence brought in large numbers of foreign tourists and investors, as well as new opportunities for exporting alcohol. Light rums such as Bacardi became favored for cocktails, as they were considered to mix better than harsher dark rums. Coca-Cola had been a common mixer in the United States ever since it was first bottled in 1886, and it became a ubiquitous drink in many countries after it was first exported in 1900. Rum and Coke quickly spread from Cuba to the United States. In the early 20th century the cocktail, like Coca-Cola itself, was most popular in the Southern United States. During the Prohibition era from 1920 to 1933, Coca-Cola became a favored mixer for disguising the taste of low-quality rums, as well as other liquors. In 1921 H. L. 
Mencken jokingly wrote of a South Carolina variant called the "jump stiddy", which consisted of Coca-Cola mixed with denatured alcohol drained from automobile radiators. After Prohibition, rum and Coke became prevalent in the Northern and Western U.S. as well, and in both high-brow and low-brow circles. Rum and Coke achieved a new level of popularity during World War II. Starting in 1940, the United States established a series of outposts in the British West Indies to defend against the German Navy. The American presence created cross-cultural demand, with American servicemen and the locals developing tastes for each other's products. In particular, American military personnel took to Caribbean rum due to its inexpensiveness, while Coca-Cola became especially prevalent in the islands thanks to the company shipping it out with the military. Within the United States, imported rum became increasingly popular, as government quotas for industrial alcohol reduced the output of American distillers of domestic liquors. In 1943, Lord Invader's Calypso song "Rum and Coca-Cola" drew further attention to the drink in Trinidad. The song was an adaptation of Lionel Belasco's 1904 composition "L'Année Passée" with new lyrics about American soldiers in Trinidad cavorting with local girls and drinking rum and Coke. Comedian Morey Amsterdam plagiarized "Rum and Coca-Cola" and licensed it to the Andrews Sisters as his own work. The Andrews Sisters' version was a major hit in 1945 and further boosted the popularity of rum and Coke, especially in the military. Lord Invader and the owners of Belasco's composition successfully sued Amsterdam for the song's rights. During the Cuban Revolution in 1959, Bacardi fled to Puerto Rico. The following year, the U.S. placed an embargo against Cuba which prohibited the importation of Cuban products, while Cuba likewise banned the importation of American products. With Cuban-made rum unavailable in the U.S. and Coca-Cola largely unavailable in Cuba, it became difficult to make a rum and Coke with its traditional ingredients in either country. The rum and Coke is very popular; Bacardi estimates that it is the world's second most popular alcoholic drink. Its popularity derives from the ubiquity and low cost of the main ingredients, and the fact that it is very easy to make. As it can be made with any quantity or style of rum, it is simple to prepare and difficult to ruin. Drink critics often have a low opinion of the cocktail. Writer Wayne Curtis called it "a drink of inspired blandness", while Jason Wilson of "The Washington Post" called it "a lazy person's drink". Troy Patterson of "Slate" called it "the classic mediocre Caribbean-American highball", which "became a classic despite not being especially good". Charles A. Coulombe considers the Cuba libre a historically important drink, writing that it is "a potent symbol of a changing world order – the marriage of rum, lubricant of the old colonial empires, and Coca-Cola, icon of modern American global capitalism". Additionally, both rum and Coca-Cola are made from Caribbean ingredients, and became global commodities through European and American commerce. According to Coulombe, the drink "seems to reflect perfectly the historical elements of the modern world". Recipes vary somewhat in measures and additional ingredients, but the main ingredients are always rum and cola. The International Bartenders Association recipe calls for 5 centiliters of light rum, 12 cl of cola, and 1 cl of fresh lime juice on ice. 
However, any amount and proportion of rum and cola may be used. Additionally, while light rum is traditional, dark rums and other varieties are also common. Different colas are also often used; in Cuba, as Coca-Cola has not been imported since the U.S. embargo of 1960, the domestic TuKola is used in Cuba libres. Lime is traditionally included, though it is often left out, especially when the order is for just "rum and Coke". Some early recipes called for lime juice to be mixed in; others included lime only as a garnish. Other early recipes called for additional ingredients such as gin and bitters. Some sources consider lime essential for a drink to be a Cuba libre, which they distinguish from a mere rum and Coke. However, lime is frequently included even in orders for "rum and Coke". When aged añejo rum is used, the drink is sometimes called a Cubata, a name also used informally in Spain for any Cuba libre. Some modern recipes inspired by older ones include additional ingredients such as bitters. Some call for other colas such as Mexican Coke (which uses cane sugar instead of high-fructose corn syrup) or Moxie. More elaborate variants with additional ingredients include the Cinema Highball, which uses rum infused with buttered popcorn and mixed with cola. Another is the Mandeville cocktail, which includes light and dark rum, cola, and citrus juice along with Pernod absinthe and grenadine.
https://en.wikipedia.org/wiki?curid=7605
Cosmic censorship hypothesis The weak and the strong cosmic censorship hypotheses are two mathematical conjectures about the structure of gravitational singularities arising in general relativity. Singularities that arise in the solutions of Einstein's equations are typically hidden within event horizons, and therefore cannot be observed from the rest of spacetime. Singularities that are not so hidden are called "naked". The weak cosmic censorship hypothesis was conceived by Roger Penrose in 1969 and posits that no naked singularities exist in the universe. Since the physical behavior of singularities is unknown, if singularities can be observed from the rest of spacetime, causality may break down, and physics may lose its predictive power. The issue cannot be avoided, since according to the Penrose–Hawking singularity theorems, singularities are inevitable in physically reasonable situations. Still, in the absence of naked singularities, the universe, as described by the general theory of relativity, is deterministic: it is possible to predict the entire evolution of the universe (possibly excluding some finite regions of space hidden inside event horizons of singularities), knowing only its condition at a certain moment of time (more precisely, everywhere on a spacelike three-dimensional hypersurface, called the Cauchy surface). Failure of the cosmic censorship hypothesis leads to the failure of determinism, because it is yet impossible to predict the behavior of spacetime in the causal future of a singularity. Cosmic censorship is not merely a problem of formal interest; some form of it is assumed whenever black hole event horizons are mentioned. The hypothesis was first formulated by Roger Penrose in 1969, and it is not stated in a completely formal way. In a sense it is more of a research program proposal: part of the research is to find a proper formal statement that is physically reasonable and that can be proved to be true or false (and that is sufficiently general to be interesting). Because the statement is not a strictly formal one, there is sufficient latitude for (at least) two independent formulations, a weak form, and a strong form. The weak and the strong cosmic censorship hypotheses are two conjectures concerned with the global geometry of spacetimes. The weak cosmic censorship hypothesis asserts there can be no singularity visible from future null infinity. In other words, singularities need to be hidden from an observer at infinity by the event horizon of a black hole. Mathematically, the conjecture states that, for generic initial data, the maximal Cauchy development possesses a complete future null infinity. The strong cosmic censorship hypothesis asserts that, generically, general relativity is a deterministic theory, in the same sense that classical mechanics is a deterministic theory. In other words, the classical fate of all observers should be predictable from the initial data. Mathematically, the conjecture states that the maximal Cauchy development of generic compact or asymptotically flat initial data is locally inextendible as a regular Lorentzian manifold. This version was disproven in 2018 by Mihalis Dafermos and Jonathan Luk for the Cauchy horizon of a charged, rotating black hole. 
The two conjectures are mathematically independent, as there exist spacetimes for which weak cosmic censorship is valid but strong cosmic censorship is violated and, conversely, there exist spacetimes for which weak cosmic censorship is violated but strong cosmic censorship is valid. The Kerr metric, corresponding to a black hole of mass $M$ and angular momentum $J = aM$, can be used to derive the effective potential for particle orbits restricted to the equator (as defined by rotation). This potential looks like $$V_{\mathrm{eff}}(r, e, \ell) = -\frac{M}{r} + \frac{\ell^{2} - a^{2}\,(e^{2}-1)}{2r^{2}} - \frac{M\,(\ell - a e)^{2}}{r^{3}},$$ where $r$ is the coordinate radius, and $e$ and $\ell$ are the test-particle's conserved energy and angular momentum respectively (constructed from the Killing vectors). To preserve "cosmic censorship", the black hole is restricted to the case $a \le M$. For there to exist an event horizon around the singularity, the requirement $a \le M$ must be satisfied. This amounts to the angular momentum of the black hole being constrained to below a critical value, outside of which the horizon would disappear. A thought experiment illustrating this bound is given in Hartle's "Gravity". There are a number of difficulties in formalizing the hypothesis. In 1991, John Preskill and Kip Thorne bet against Stephen Hawking that the hypothesis was false. Hawking conceded the bet in 1997, due to the discovery of special counterexample situations, which he characterized as "technicalities". Hawking later reformulated the bet to exclude those technicalities. The revised bet is still open (although Hawking died in 2018), the prize being "clothing to cover the winner's nakedness". An exact solution to the scalar-Einstein equations $R_{ab} = 2\varphi_{,a}\varphi_{,b}$, which forms a counterexample to many formulations of the cosmic censorship hypothesis, was found by Mark D. Roberts in 1985: $$ds^{2} = -(1+2\sigma)\,dv^{2} + 2\,dv\,dr + r(r - 2\sigma v)\left(d\theta^{2} + \sin^{2}\theta\,d\phi^{2}\right), \qquad \varphi = \tfrac{1}{2}\ln\!\left(1 - \frac{2\sigma v}{r}\right),$$ where $\sigma$ is a constant. 
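The censorship bound above lends itself to a quick numerical check. The following is a minimal illustrative sketch (not from the source), assuming geometrized units G = c = 1; the function names are hypothetical. It evaluates the Kerr equatorial effective potential quoted above and tests whether a given spin parameter satisfies the horizon condition a ≤ M.

```python
# Minimal sketch (assumptions: geometrized units G = c = 1; hypothetical names).
# Evaluates the Kerr equatorial effective potential quoted above and checks
# the cosmic-censorship bound a <= M for the existence of an event horizon.
import numpy as np

def kerr_effective_potential(r, e, ell, M=1.0, a=0.5):
    """V_eff(r) = -M/r + (ell^2 - a^2 (e^2 - 1)) / (2 r^2) - M (ell - a e)^2 / r^3."""
    return (-M / r
            + (ell**2 - a**2 * (e**2 - 1.0)) / (2.0 * r**2)
            - M * (ell - a * e)**2 / r**3)

def has_event_horizon(M, a):
    """Kerr horizons sit at r = M +/- sqrt(M^2 - a^2), so they exist only if |a| <= M."""
    return abs(a) <= M

if __name__ == "__main__":
    M = 1.0
    print(has_event_horizon(M, a=0.9))   # True: sub-extremal, singularity hidden
    print(has_event_horizon(M, a=1.1))   # False: would expose a naked singularity
    r = np.linspace(1.5, 20.0, 5)
    print(kerr_effective_potential(r, e=0.97, ell=4.0, M=M, a=0.9))
```

Values of a above M correspond to the over-spun case in which the horizon radii M ± sqrt(M² − a²) become complex, so no horizon exists and the Kerr singularity would be visible.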
https://en.wikipedia.org/wiki?curid=7609
Catholic (term) The word Catholic (usually written with uppercase "C" in English when referring to religious matters; derived via Late Latin "catholicus", from the Greek adjective "katholikos", meaning "universal") comes from the Greek phrase "katholou", meaning "on the whole", "according to the whole" or "in general", and is a combination of the Greek words "kata", meaning "about", and "holos", meaning "whole". The first use of "Catholic" was by the church father Saint Ignatius of Antioch in his "Letter to the Smyrnaeans" (circa 110 AD). In the context of Christian ecclesiology, it has a rich history and several usages. The word in English can mean either "of the Catholic faith" or "relating to the historic doctrine and practice of the Western Church". Many Christians use it to refer more broadly to the whole Christian Church or to all believers in Jesus Christ regardless of denominational affiliation; it can also more narrowly refer to Catholicity, which encompasses several historic churches sharing major beliefs. "Catholicos", the title used for the head of some churches in Eastern Christian traditions, is derived from the same linguistic origin. In non-ecclesiastical use, it derives its English meaning directly from its root and is currently used in general senses such as "universal" or "all-embracing". The term has been incorporated into the name of the largest Christian communion, the Catholic Church (also called the Roman Catholic Church). All of the three main branches of Christianity in the East (the Eastern Orthodox Church, the Oriental Orthodox Church and the Church of the East) have always identified themselves as "Catholic" in accordance with Apostolic traditions and the Nicene Creed. Anglicans, Lutherans, and some Methodists also believe that their churches are "Catholic" in the sense that they too are in continuity with the original universal church founded by the Apostles. However, each church defines the scope of the "Catholic Church" differently. For instance, the Roman Catholic, Eastern Orthodox, and Oriental Orthodox churches, and the Church of the East, each maintain that their own denomination is identical with the original universal church, from which all other denominations broke away. Distinguishing beliefs of Catholicity, the beliefs of most Christians who call themselves "Catholic", include episcopal polity (the view that bishops constitute the highest order of ministers within the Christian religion) as well as the Nicene Creed of AD 381. In particular, along with unity, sanctity, and apostolicity, catholicity is considered one of the Four Marks of the Church, found in the line of the Nicene Creed: "I believe in one holy catholic and apostolic Church." During medieval and modern times, additional distinctions arose regarding the use of the terms "Western Catholic" and "Eastern Catholic". Before the East–West Schism of 1054, those terms had just their basic geographical meanings, since only one undivided Catholicity existed, uniting the Latin-speaking Christians of the West and the Greek-speaking Christians of the East. After the Schism, terminology became much more complicated, resulting in the creation of parallel and conflicting terminological systems. The Greek adjective "katholikos", the origin of the term "catholic", means "universal". Directly from the Greek, or via Late Latin "catholicus", the term "catholic" entered many other languages, becoming the base for the creation of various theological terms such as "catholicism" and "catholicity" (Late Latin "catholicismus", "catholicitas"). 
The term "catholicism" is the English form of Late Latin "catholicismus", an abstract noun based on the adjective "catholic". The Modern Greek equivalent ("") is back-formed and usually refers to the Catholic Church. The terms "catholic", "catholicism" and "catholicity" is closely related to the use of the term "Catholic Church". (See Catholic Church (disambiguation) for more uses.) The earliest evidence of the use of that term is the "Letter to the Smyrnaeans" that Ignatius of Antioch wrote in about 108 to Christians in Smyrna. Exhorting Christians to remain closely united with their bishop, he wrote: "Wherever the bishop shall appear, there let the multitude [of the people] also be; even as, wherever Jesus Christ is, there is the Catholic Church." From the second half of the second century, the word "catholic" began to be used to mean "orthodox" (non-heretical), "because Catholics claimed to teach the whole truth, and to represent the whole Church, while heresy arose out of the exaggeration of some one truth and was essentially partial and local". In 380, Emperor Theodosius I limited use of the term "Catholic Christian" exclusively to those who followed the same faith as Pope Damasus I of Rome and Pope Peter of Alexandria. Numerous other early writers including Cyril of Jerusalem (c. 315–386), Augustine of Hippo (354–430) further developed the use of the term "catholic" in relation to Christianity. The earliest recorded evidence of the use of the term "Catholic Church" is the "Letter to the Smyrnaeans" that Ignatius of Antioch wrote in about 107 to Christians in Smyrna. Exhorting Christians to remain closely united with their bishop, he wrote: "Wherever the bishop shall appear, there let the multitude [of the people] also be; even as, wherever Jesus Christ is, there is the Catholic Church." Of the meaning for Ignatius of this phrase J.H. Srawley wrote: This is the earliest occurrence in Christian literature of the phrase 'the Catholic Church' (ἡ καθολικὴ ἐκκλησία). The original sense of the word is 'universal'. Thus Justin Martyr ("Dial". 82) speaks of the 'universal or general resurrection', using the words ἡ καθολικὴ ἀνάστασις. Similarly here the Church universal is contrasted with the particular Church of Smyrna. Ignatius means by the Catholic Church 'the aggregate of all the Christian congregations' (Swete, "Apostles Creed", p. 76). So too the letter of the Church of Smyrna is addressed to all the congregations of the Holy Catholic Church in every place. And this primitive sense of 'universal' the word has never lost, although in the latter part of the second century it began to receive the secondary sense of 'orthodox' as opposed to 'heretical'. Thus it is used in an early Canon of Scripture, the Muratorian fragment ("circa" 170 A.D.), which refers to certain heretical writings as 'not received in the Catholic Church'. So too Cyril of Jerusalem, in the fourth century, says that the Church is called Catholic not only 'because it is spread throughout the world', but also 'because it teaches completely and without defect all the doctrines which ought to come to the knowledge of men'. This secondary sense arose out of the original meaning because Catholics claimed to teach the whole truth, and to represent the whole Church, while heresy arose out of the exaggeration of some one truth and was essentially partial and local. By "Catholic Church" Ignatius designated the universal church. 
Ignatius considered that certain heretics of his time, who disavowed that Jesus was a material being who actually suffered and died, saying instead that "he only seemed to suffer" (Smyrnaeans, 2), were not really Christians. The term is also used in the "Martyrdom of Polycarp" (155) and in the Muratorian fragment (about 177). As mentioned in the above quotation from J.H. Srawley, Cyril of Jerusalem (c. 315–386), who is venerated as a saint by the Roman Catholic Church, the Eastern Orthodox Church, and the Anglican Communion, distinguished what he called the "Catholic Church" from other groups who could also refer to themselves as an ἐκκλησία (assembly or church): Since the word Ecclesia is applied to different things (as also it is written of the multitude in the theatre of the Ephesians, "And when he had thus spoken, he dismissed the Assembly" (Acts 19:41), and since one might properly and truly say that there is a "Church of evil doers", I mean the meetings of the heretics, the Marcionists and Manichees, and the rest, for this cause the Faith has securely delivered to you now the Article, "And in one Holy Catholic Church"; that you may avoid their wretched meetings, and ever abide with the Holy Church Catholic in which you were regenerated. And if ever you are sojourning in cities, inquire not simply where the Lord's House is (for the other sects of the profane also attempt to call their own dens houses of the Lord), nor merely where the Church is, but where is the Catholic Church. For this is the peculiar name of this Holy Church, the mother of us all, which is the spouse of our Lord Jesus Christ, the Only-begotten Son of God(Catechetical Lectures, XVIII, 26). Theodosius I, Emperor from 379 to 395, declared "Catholic" Christianity the official religion of the Roman Empire, declaring in the Edict of Thessalonica of 27 February 380: It is our desire that all the various nations which are subject to our clemency and moderation, should continue the profession of that religion which was delivered to the Romans by the divine Apostle Peter, as it has been preserved by faithful tradition and which is now professed by the Pontiff Damasus and by Peter, Bishop of Alexandria, a man of apostolic holiness. According to the apostolic teaching and the doctrine of the Gospel, let us believe in the one Deity of the Father, Son and Holy Spirit, in equal majesty and in a holy Trinity. We authorize the followers of this law to assume the title "Catholic" Christians; but as for the others, since in our judgment they are foolish madmen, we decree that they shall be branded with the ignominious name of heretics, and shall not presume to give their conventicles the name of churches. They will suffer in the first place the chastisement of the divine condemnation, and in the second the punishment which our authority, in accordance with the will of heaven, will decide to inflict. Theodosian Code XVI.i.2 Jerome wrote to Augustine of Hippo in 418: "You are known throughout the world; Catholics honour and esteem you as the one who has established anew the ancient Faith" Only slightly later, Saint Augustine of Hippo (354–430) also used the term "Catholic" to distinguish the "true" church from heretical groups: In the Catholic Church, there are many other things which most justly keep me in her bosom. The consent of peoples and nations keeps me in the Church; so does her authority, inaugurated by miracles, nourished by hope, enlarged by love, established by age. 
The succession of priests keeps me, beginning from the very seat of the Apostle Peter, to whom the Lord, after His resurrection, gave it in charge to feed His sheep (Jn 21:15–19), down to the present episcopate. And so, lastly, does the very name of Catholic, which, not without reason, amid so many heresies, the Church has thus retained; so that, though all heretics wish to be called Catholics, yet when a stranger asks where the Catholic Church meets, no heretic will venture to point to his own chapel or house. Such then in number and importance are the precious ties belonging to the Christian name which keep a believer in the Catholic Church, as it is right they should ... With you, where there is none of these things to attract or keep me... No one shall move me from the faith which binds my mind with ties so many and so strong to the Christian religion... For my part, I should not believe the gospel except as moved by the authority of the Catholic Church. —St. Augustine (354–430): "Against the Epistle of Manichaeus called Fundamental", chapter 4: Proofs of the Catholic Faith. A contemporary of Augustine, St. Vincent of Lerins, wrote in 434 (under the pseudonym Peregrinus) a work known as the "Commonitoria" ("Memoranda"), in which he insisted that, like the human body, church doctrine develops while truly keeping its identity (sections 54–59, chapter XXIII). During the early centuries of Christian history, the majority of Christians who followed the doctrines represented in the Nicene Creed were bound by one common and undivided Catholicity uniting the Latin-speaking Christians of the West and the Greek-speaking Christians of the East. In those days, the terms "eastern Catholic" and "western Catholic" had their basic geographical meanings, generally corresponding to the existing linguistic distinctions between the Greek East and the Latin West. In spite of various and quite frequent theological and ecclesiastical disagreements between the major Christian sees, common Catholicity was preserved until the great disputes that arose between the 9th and 11th centuries. After the East–West Schism, the notion of a common Catholicity was broken and each side started to develop its own terminological practice. All major theological and ecclesiastical disputes in the Christian East or West have commonly been accompanied by attempts by the opposing sides to deny each other the right to use the word "Catholic" as a term of self-designation. After the acceptance of the Filioque clause into the Nicene Creed by Rome, Orthodox Christians in the East started to refer to adherents of Filioquism in the West simply as "Latins", considering them no longer to be "Catholics". The dominant view in the Eastern Orthodox Church, that all Western Christians who accepted the Filioque interpolation and its unorthodox Pneumatology ceased to be Catholics, was held and promoted by the famous Eastern Orthodox canonist Theodore Balsamon, Patriarch of Antioch, who wrote to that effect in 1190. On the other side of the widening rift, the Eastern Orthodox were considered by Western theologians to be "Schismatics". Relations between East and West were further estranged by the tragic events of the Massacre of the Latins in 1182 and the Sack of Constantinople in 1204. Those bloody events were followed by several failed attempts to reach reconciliation (see: Second Council of Lyon, Council of Florence, Union of Brest, Union of Uzhhorod). 
During the late medieval and early modern period, terminology became much more complicated, resulting in the creation of parallel and competing terminological systems that exist today in all of their complexity. During the Early Modern period, the special term "Acatholic" was widely used in the West to mark all those who were considered to hold heretical theological views or irregular ecclesiastical practices. In the time of the Counter-Reformation, the term "Acatholic" was used by zealous members of the Catholic Church to designate Protestants as well as Eastern Orthodox Christians. The term was considered so insulting that the Council of the Serbian Orthodox Church, held in Temeswar in 1790, decided to send an official plea to Emperor Leopold II, begging him to ban its use. The Augsburg Confession, found within the Book of Concord, a compendium of the beliefs of Lutheranism, teaches that "the faith as confessed by Luther and his followers is nothing new, but the true catholic faith, and that their churches represent the true catholic or universal church". When the Lutherans presented the Augsburg Confession to Charles V, Holy Roman Emperor, in 1530, they believed they had "showed that each article of faith and practice was true first of all to Holy Scripture, and then also to the teaching of the church fathers and the councils". The term "Catholic" is commonly associated with the whole of the church led by the Roman Pontiff, the Catholic Church. Other Christian churches that use the description "Catholic" include the Eastern Orthodox Church and other churches that believe in the historic episcopate (bishops), such as the Anglican Communion. Many of those who apply the term "Catholic Church" to all Christians object to the use of the term to designate what they view as only one church within what they understand as the "whole" Catholic Church. In the English language, the first known use of the term is in Andrew of Wyntoun's "Orygynale Cronykil of Scotland": "He was a constant Catholic/All Lollard he hated and heretic." The Catholic Church, led by the Pope in Rome, usually distinguishes itself from other churches by calling itself simply the "Catholic Church", though it has also used the description "Roman Catholic". Even apart from documents drawn up jointly with other churches, it has sometimes, in view of the central position it attributes to the See of Rome, adopted the adjective "Roman" for the whole church, Eastern as well as Western, as in the papal encyclicals "Divini illius Magistri" and "Humani generis". Another example is its self-description as "the holy Catholic Apostolic Roman Church" (or, by separating each adjective, as the "Holy, Catholic, Apostolic and Roman Church") in the Dogmatic Constitution on the Catholic Faith of the First Vatican Council (24 April 1870). In all of these documents it also refers to itself both simply as the Catholic Church and by other names. The Eastern Catholic Churches, while united with Rome in the faith, have their own traditions and laws, differing from those of the Latin Rite and those of other Eastern Catholic Churches. The contemporary Catholic Church has always considered itself to be the historic Catholic Church, and considers all others to be "non-Catholics". 
This practice is an application of the belief that not all who claim to be Christians are part of the Catholic Church, as Ignatius of Antioch, the earliest known writer to use the term "Catholic Church", considered that certain heretics who called themselves Christians only seemed to be such. Regarding relations with Eastern Christians, Pope Benedict XVI stated his wish to restore full unity with the Orthodox. The Roman Catholic Church considers that almost all of the ancient theological differences have been satisfactorily addressed (the Filioque clause, the nature of purgatory, etc.), and has declared that differences in traditional customs, observances and discipline are no obstacle to unity. Recent historic ecumenical efforts on the part of the Catholic Church have focused on healing the rupture between the Western ("Catholic") and the Eastern ("Orthodox") churches. Pope John Paul II often spoke of his great desire that the Catholic Church "once again breathe with both lungs", thus emphasizing that the Roman Catholic Church seeks to restore full communion with the separated Eastern churches. All three main branches of Eastern Christianity (the Eastern Orthodox Church, the Oriental Orthodox Churches, and the churches of the Nestorian tradition, namely the Assyrian Church of the East and the Ancient Church of the East) continue to identify themselves as "Catholic" in accordance with Apostolic traditions and the Nicene Creed. The Eastern Orthodox Church firmly upholds the ancient doctrines of Eastern Orthodox Catholicity and commonly uses the term "Catholic", as in the title of "The Longer Catechism of the Orthodox, Catholic, Eastern Church". So does the Coptic Orthodox Church, which belongs to Oriental Orthodoxy and considers its communion to be "the True Church of the Lord Jesus Christ". None of the Eastern Churches, Orthodox or Oriental, have indicated any intention to abandon the ancient traditions of their own Catholicity. Most Reformation and post-Reformation churches use the term "Catholic" (often with a lower-case "c") to refer to the belief that all Christians are part of one Church regardless of denominational divisions; e.g., Chapter XXV of the Westminster Confession of Faith refers to the "catholic or universal Church". It is in line with this interpretation, which applies the word "catholic" (universal) to no one denomination, that they understand the phrase "one holy catholic and apostolic Church" in the Nicene Creed, the phrase "the Catholic faith" in the Athanasian Creed and the phrase "holy catholic church" in the Apostles' Creed. The terms "Roman Catholics" or "Roman Catholic Church" imply that the Church which follows the Pope, who is based in Rome, is not the only Catholic Church and that others are also entitled to be called such – for example, the Anglican Church. This assumption is not accepted by the Roman Church itself, which usually calls itself "the Catholic Church" without qualification and recognizes no other contenders for the title. The term is also used to mean those Christian churches that maintain that their episcopate can be traced unbrokenly back to the apostles and consider themselves part of a "catholic" (universal) body of believers. Among those who regard themselves as "Catholic" but not "Roman Catholic" are Anglicans and Lutherans, who stress that they are both Reformed and Catholic. The Old Catholic Church and the various groups classified as Independent Catholic Churches also lay claim to the description "Catholic". 
Traditionalist Catholics, even if they may not be in communion with Rome, consider themselves to be not only Catholics but the "true" Roman Catholics. Some use the term "Catholic" to distinguish their own position from a Calvinist or Puritan form of Reformed Protestantism. These include the faction of Anglicans often called Anglo-Catholics, 19th-century Neo-Lutherans, 20th-century High Church Lutherans or Evangelical Catholics, and others. Methodists and Presbyterians believe their denominations owe their origins to the Apostles and the early church, but do not claim descent from ancient church structures such as the episcopate. However, both of these churches hold that they are a part of the catholic (universal) church. According to "Harper's New Monthly Magazine": As such, according to one viewpoint, for those who "belong to the Church," the term Methodist Catholic, or Presbyterian Catholic, or Baptist Catholic, is as proper as the term Roman Catholic. It simply means that body of Christian believers over the world who agree in their religious views, and accept the same ecclesiastical forms. Some Independent Catholics accept that, among bishops, the Bishop of Rome is "primus inter pares", and hold that conciliarism is a necessary check against ultramontanism. They are, however, by definition not recognised by the Catholic Church. Some Protestant churches avoid using the term completely; many Lutherans go so far as to recite the Creed with the word "Christian" in place of "catholic". The Orthodox churches share some of the concerns about Roman Catholic papal claims, but disagree with some Protestants about the nature of the church as one body.
https://en.wikipedia.org/wiki?curid=7610
Crystal Eastman Crystal Catherine Eastman (June 25, 1881 – July 8, 1928) was an American lawyer, antimilitarist, feminist, socialist, and journalist. She is best remembered as a leader in the fight for women's suffrage, as a co-founder and co-editor with her brother Max Eastman of the radical arts and politics magazine "The Liberator," co-founder of the Women's International League for Peace and Freedom, and co-founder in 1920 of the American Civil Liberties Union. In 2000 she was inducted into the National Women's Hall of Fame in Seneca Falls, New York. Crystal Eastman was born in Marlborough, Massachusetts, on June 25, 1881, the third of four children. Her oldest brother, Morgan, was born in 1878 and died in 1884. The second brother, Anstice Ford Eastman, who became a general surgeon, was born in 1878 and died in 1937. Max was the youngest, born in 1882. In 1883 their parents, Samuel Elijah Eastman and Annis Bertha Ford, moved the family to Canandaigua, New York. In 1889, their mother became one of the first women ordained as a Protestant minister in America when she became a minister of the Congregational church. Her father was also a Congregational minister, and the two served as pastors at the church of Thomas K. Beecher near Elmira. Her parents were friendly with writer Mark Twain. From this association young Crystal also became acquainted with him. This part of New York was in the so-called "Burnt Over District." During the Second Great Awakening earlier in the 19th century, its frontier had been a center of evangelizing and much religious excitement, which resulted in the founding of the Shakers and Mormonism. During the antebellum period, some were inspired by religious ideals to support such progressive social causes as abolitionism and the Underground Railroad. Crystal and her brother Max Eastman were influenced by this humanitarian tradition. He became a socialist activist in his early life, and Crystal had several common causes with him. They were close throughout her life, even after he had become more conservative. The siblings lived together for several years on 11th Street in Greenwich Village among other radical activists. The group, including Ida Rauh, Inez Milholland, Floyd Dell, and Doris Stevens, also spent summers and weekends in Croton-on-Hudson. Eastman graduated from Vassar College in 1903 and received an MA in sociology (then a relatively new field) from Columbia University in 1904. Gaining her law degree from New York University Law School, she graduated second in the class of 1907. Social work pioneer and journal editor Paul Kellogg offered Eastman her first job, investigating labor conditions for The Pittsburgh Survey sponsored by the Russell Sage Foundation. Her report, "Work Accidents and the Law" (1910), became a classic and resulted in the first workers' compensation law, which she drafted while serving on a New York state commission. She continued to campaign for occupational safety and health while working as an investigating attorney for the U.S. Commission on Industrial Relations during Woodrow Wilson's presidency. She was at one time called the "most dangerous woman in America," due to her free-love idealism and outspoken nature. During a brief marriage to Wallace J. Benedict, which ended in divorce, Eastman moved to Milwaukee with him. There she managed the unsuccessful 1912 Wisconsin suffrage campaign. 
When she returned east in 1913, she joined Alice Paul, Lucy Burns, and others in founding the militant Congressional Union, which became the National Woman's Party. After the passage of the 19th Amendment gave women the right to vote in 1920, Eastman and Paul wrote the Equal Rights Amendment, first introduced in 1923. One of the few socialists to endorse the ERA, Eastman warned that protective legislation for women would mean only discrimination against women. Eastman claimed that one could assess the importance of the ERA by the intensity of the opposition to it, but she felt that it was still a struggle worth fighting. She also delivered the speech, "Now We Can Begin", following the ratification of the Nineteenth Amendment, outlining the work that needed to be done in the political and economic spheres to achieve gender equality. During World War I, Eastman was one of the founders of the Woman's Peace Party, soon joined by Jane Addams, Lillian D. Wald, and others. She served as president of the New York City branch. Renamed the Women's International League for Peace and Freedom in 1921, it remains the oldest extant women's peace organization. Eastman also became executive director of the American Union Against Militarism, which lobbied against America's entrance into the European war and more successfully against war with Mexico in 1916, sought to remove profiteering from arms manufacturing, and campaigned against conscription, imperial adventures and military intervention. When the United States entered World War I, Eastman organized with Roger Baldwin and Norman Thomas the National Civil Liberties Bureau to protect conscientious objectors, or in her words: "To maintain something over here that will be worth coming back to when the weary war is over." The NCLB grew into the American Civil Liberties Union, with Baldwin at the head and Eastman functioning as attorney-in-charge. Eastman is credited as a founding member of the ACLU, but her role as founder of the NCLB may have been largely ignored by posterity due to her personal differences with Baldwin. In 1916 Eastman married the British editor and antiwar activist Walter Fuller, who had come to the United States to direct his sisters’ singing of folksongs. They had two children, Jeffrey and Annis. They worked together as activists until the end of the war; then he worked as the managing editor of "The Freeman" until 1922 when he returned to England. He died in 1927, nine months before Crystal, ending his career editing "Radio Times" for the BBC. After Max Eastman's periodical "The Masses" was forced to close by government censorship in 1917, he and Crystal co-founded a radical journal of politics, art, and literature, "The Liberator", early in 1918. She and Max co-edited it until they put it in the hands of faithful friends in 1922. After the war, Eastman organized the First Feminist Congress in 1919. At times she traveled by ship to London to be with her husband. In New York, her activities led to her being blacklisted during the Red Scare of 1919–1920. She struggled to find paying work. Her only paid work during the 1920s was as a columnist for feminist journals, notably "Equal Rights" and "Time and Tide". Eastman claimed that "life was a big battle for the complete feminist," but she was convinced that the complete feminist would someday achieve total victory. Crystal Eastman died on July 8, 1928, of nephritis. Her friends were entrusted with her two children, then orphans, to rear them until adulthood. 
Eastman has been called one of the United States' most neglected leaders, because, although she wrote pioneering legislation and created long-lasting political organizations, she disappeared from history for fifty years. Freda Kirchwey, then editor of "The Nation", wrote at the time of her death: "When she spoke to people—whether it was to a small committee or a swarming crowd—hearts beat faster. She was for thousands a symbol of what the free woman might be." Her speech "Now We Can Begin", given in 1920, is listed as #83 in American Rhetoric's Top 100 Speeches of the 20th Century (listed by rank). In 2000 Eastman was inducted into the (American) National Women's Hall of Fame in Seneca Falls, New York. In 2018 "The Socialist", the official publication of the Socialist Party USA, published the article "Remembering Socialist Feminist Crystal Eastman" by Lisa Petriello, which was written "on the 90th-year anniversary of her [Eastman's] death to bring her life and legacy once again to the public eye." Eastman's papers are housed at Harvard University. The Library of Congress has the following publications by Eastman in its collection, many of them published posthumously:
https://en.wikipedia.org/wiki?curid=7611
Christopher Alexander Christopher Wolfgang Alexander (born 4 October 1936 in Vienna, Austria) is a widely influential British-American architect and design theorist, and currently emeritus professor at the University of California, Berkeley. His theories about the nature of human-centered design have affected fields beyond architecture, including urban design, software, sociology and others. Alexander has designed and personally built over 100 buildings, both as an architect and a general contractor. In software, Alexander is regarded as the father of the pattern language movement. The first wiki (the technology behind Wikipedia) emerged directly from Alexander's work, according to its creator, Ward Cunningham. Alexander's work has also influenced agile software development. In architecture, Alexander's work is used by a number of different contemporary architectural communities of practice, including the New Urbanist movement, to help people to reclaim control over their own built environment. However, Alexander is controversial among some mainstream architects and critics, in part because his work is often harshly critical of much of contemporary architectural theory and practice. Alexander is known for many books on the design and building process, including "Notes on the Synthesis of Form", "A City is Not a Tree" (first published as a paper and re-published in book form in 2015), "The Timeless Way of Building", "A New Theory of Urban Design", and "The Oregon Experiment". More recently he published the four-volume "The Nature of Order: An Essay on the Art of Building and the Nature of the Universe," about his newer theories of "morphogenetic" processes, and "The Battle for the Life and Beauty of the Earth", about the implementation of his theories in a large building project in Japan. Each of his works builds on those before it, so they are best read as a whole rather than as fragmented pieces. His central work is "The Nature of Order", on which he spent about 30 years; its very first version was completed in 1981, one year before his famous debate with Peter Eisenman at Harvard. Alexander is perhaps best known for his 1977 book "A Pattern Language," a perennial seller some four decades after publication. Reasoning that users are more sensitive to their needs than any architect could be, he produced and validated (in collaboration with his students Sara Ishikawa, Murray Silverstein, Max Jacobson, Ingrid King, and Shlomo Angel) a "pattern language" to empower anyone to design and build at any scale. As a young child, Alexander emigrated with his parents from Austria to England in fall 1938, when they were forced to flee the Nazi regime. He spent much of his childhood in Chichester and Oxford, England, where he began his education in the sciences. He moved from England to the United States in 1958 to study at Harvard University and Massachusetts Institute of Technology. He moved to Berkeley, California in 1963 to accept an appointment as Professor of Architecture, a position he would hold for almost 40 years. In 2002, after his retirement, Alexander moved to Arundel, England, where he continues to write, teach and build. Alexander is married to Margaret Moore Alexander, and he has two daughters, Sophie and Lily, by his former wife Pamela. Alexander holds both British and American citizenship. Alexander attended Oundle School, England. 
In 1954, he was awarded the top open scholarship to Trinity College, Cambridge University in chemistry and physics, and went on to read mathematics. He earned a Bachelor's degree in Architecture and a Master's degree in Mathematics. He took his doctorate at Harvard (the first PhD in Architecture ever awarded at Harvard University), and was elected fellow at Harvard. During the same period he worked at MIT in transportation theory and computer science, and worked at Harvard in cognition and cognitive studies. Alexander was elected to the Society of Fellows, Harvard University 1961-64; awarded the First Medal for Research by the American Institute of Architects, 1972; elected member of the Swedish Royal Academy, 1980; winner of the Best Building in Japan award, 1985; winner of the ACSA (Association of Collegiate Schools of Architecture) Distinguished Professor Award, 1986 and 1987; invited to present the Louis Kahn Memorial Lecture, 1992; awarded the Seaside Prize, 1994; elected a Fellow of the American Academy of Arts and Sciences, 1996; one of the two inaugural recipients of the Athena Award, given by the Congress for the New Urbanism (CNU), 2006;. awarded ("in absentia") the Vincent Scully Prize by the National Building Museum, 2009; awarded the lifetime achievement award by the Urban Design Group, 2011; winner of the Global Award for Sustainable Architecture, 2014. "The Timeless Way of Building" (1979) described the perfection of use to which buildings could aspire: "A Pattern Language: Towns, Buildings, Construction" (1977) described a practical architectural system in a form that a theoretical mathematician or computer scientist might call a generative grammar. The work originated from an observation that many medieval cities are attractive and harmonious. The authors said that this occurs because they were built to local regulations that required specific features, but freed the architect to adapt them to particular situations. The book provides rules and pictures, and leaves decisions to be taken from the precise environment of the project. It describes exact methods for constructing practical, safe and attractive designs at every scale, from entire regions, through cities, neighborhoods, gardens, buildings, rooms, built-in furniture, and fixtures down to the level of doorknobs. A notable value is that the architectural system consists only of classic patterns tested in the real world and reviewed by multiple architects for beauty and practicality. The book includes all needed surveying and structural calculations, and a novel simplified building system that copes with regional shortages of wood and steel, uses easily stored inexpensive materials, and produces long-lasting classic buildings with small amounts of materials, design and labor. It first has users prototype a structure on-site in temporary materials. Once accepted, these are finished by filling them with very-low-density concrete. It uses vaulted construction to build as high as three stories, permitting very high densities. This book's method was adopted by the University of Oregon, as described in "The Oregon Experiment" (1975), and remains the official planning instrument. It has also been adopted in part by some cities as a building code. The idea of a pattern language appears to apply to any complex engineering task, and has been applied to some of them. It has been especially influential in software engineering where patterns have been used to document collective knowledge in the field. 
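The description of "A Pattern Language" as something like a generative grammar can be made concrete with a small sketch. The Python fragment below is purely illustrative: the pattern names echo the book, but the data model and the coarse-to-fine application order are assumptions made for this example, not Alexander's own formalism.

```python
# Purely illustrative sketch of the "generative grammar" reading of a pattern
# language. The pattern names echo "A Pattern Language", but the data model and
# the coarse-to-fine application order are assumptions made for this example.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Design:
    """A design under construction: a bag of decisions keyed by feature name."""
    decisions: Dict[str, str] = field(default_factory=dict)


@dataclass
class Pattern:
    name: str
    scale: int                      # lower number = larger scale (region, city, ...)
    rule: Callable[[Design], None]  # refines the design in place


def apply_language(patterns: List[Pattern], design: Design) -> Design:
    """Apply patterns from the largest scale down to the smallest,
    letting each one adapt to the decisions already made."""
    for pattern in sorted(patterns, key=lambda p: p.scale):
        pattern.rule(design)
    return design


# Two toy patterns, applied coarse-to-fine.
patterns = [
    Pattern("South Facing Outdoors", scale=1,
            rule=lambda d: d.decisions.setdefault("garden", "south side")),
    Pattern("Light On Two Sides", scale=2,
            rule=lambda d: d.decisions.setdefault(
                "living room windows",
                "two walls, one facing the " + d.decisions.get("garden", "street"))),
]

print(apply_language(patterns, Design()).decisions)
```

The point of the sketch is simply that each pattern leaves the concrete decisions to the situation at hand: the later, finer-grained pattern adapts to whatever the earlier one decided.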
"A New Theory of Urban Design" (1987) coincided with a renewal of interest in urbanism among architects, but stood apart from most other expressions of this by assuming a distinctly anti-masterplanning stance. An account of a design studio conducted with Berkeley students on a site in San Francisco, it shows how convincing urban networks can be generated by requiring individual actors to respect only "local" rules, in relation to neighbours. A vastly undervalued part of the Alexander canon, "A New Theory" is important in understanding the generative processes which give rise to the shanty towns latterly championed by Stewart Brand, Robert Neuwirth, and the Prince of Wales. There have been critical reconstructions of Alexander's design studio based on the theories put forward in "A New Theory of Urban Design". "The Nature of Order: An Essay on the Art of Building and the Nature of the Universe" (2003–04), which includes The "Phenomenon of Life", "The Process of Creating Life", "A Vision of a Living World" and "The Luminous Ground", is Alexander's most comprehensive and elaborate work. In it, he puts forth a new theory about the nature of space and describes how this theory influences thinking about architecture, building, planning, and the way in which we view the world in general. The mostly static patterns from "A Pattern Language" have been amended by more dynamic sequences, which describe how to work towards patterns (which can roughly be seen as the end result of sequences). Sequences, like patterns, promise to be tools of wider scope than building (just as his theory of space goes beyond architecture). The online publication "Katarxis 3" (September 2004) includes several essays by Christopher Alexander, as well as the legendary debate between Alexander and Peter Eisenman from 1982. Alexander's latest book, "The Battle for the Life and Beauty of the Earth: A Struggle Between Two World-Systems" (2012), is the story of the largest project he and his colleagues had ever tackled, the construction of a new High School/College campus in Japan. He also uses the project to connect with themes in his four-volume series. He contrasts his approach, (System A) with the construction processes endemic in the US and Japanese economies (System B). As Alexander describes it, System A is focused on enhancing the life/spirit of spaces within given constraints (land, budget, client needs, etc.) (drawings are sketches - decisions on placing buildings, materials used, finish and such are made in the field as construction proceeds, with adjustments as needed to meet overall budget); System B ignores, and tends to diminish or destroy that quality because there is an inherent flaw: System A is a generally a product of a different Economic System than we live in now. When the architect is only responsible for concept and casual field drawings (which the builder uses to build structures at the lowest possible [competitive] cost), the builder finds that System A can not produce acceptable results at the lowest market cost. Except for a culture where land and material costs are low or first world clients who are sensitive, patient and wealthy. In most cases, the economically motivated builder must use a hybrid system. In the best case, System AB, the builder uses the processes of System A to differentiate, improve and inform his work. Or there are no economic considerations and the builder is the architect and is building for himself. 
In the last few chapters he describes "centers" as a way of thinking about the connections among spaces, and about what brings more wholeness and life to a space. Among Alexander's most notable built works are the Eishin Campus near Tokyo (the building process of which is outlined in his 2012 book "The Battle for the Life and Beauty of the Earth"); the West Dean Visitors Centre in West Sussex, England; the Julian Street Inn (a homeless shelter) in San Jose, California (both described in "Nature of Order"); the Sala House and the Martinez House (experimental houses in Albany and Martinez, California made of lightweight concrete); the low-cost housing in Mexicali, Mexico (described in "The Production of Houses"); and several private houses (described and illustrated in "The Nature of Order"). Alexander's built work is characterized by a special quality (which he used to call "the quality without a name", but named "wholeness" in "Nature of Order") that relates to human beings and induces feelings of belonging to the place and structure. This quality is found in the most loved traditional and historic buildings and urban spaces, and is precisely what Alexander has tried to capture with his sophisticated mathematical design theories. Paradoxically, achieving this connective human quality has also moved his buildings away from the abstract imageability valued in contemporary architecture, and this is one reason why his buildings are under-appreciated at present. His former student and colleague Michael Mehaffy wrote an introductory essay on Alexander's built work in the online publication "Katarxis 3", which includes a gallery of Alexander's major built projects through September 2004. In addition to his lengthy teaching career as a Professor at UC Berkeley (during which a number of international students began to appreciate and apply his methods), Alexander was a key faculty member at both The Prince of Wales's Summer Schools in Civil Architecture (1990–1994) and The Prince's Foundation for the Built Environment. He also initiated the process which led to the international Building Beauty post-graduate school for architecture, which launched in Sorrento, Italy for the 2017–18 academic year. Alexander's work has widely influenced architects; among those who acknowledge his influence are Sarah Susanka, Andres Duany, and Witold Rybczynski. Robert Campbell, the Pulitzer Prize-winning architecture critic for the "Boston Globe", stated that Alexander "has had an enormous critical influence on my life and work, and I think that's true of a whole generation of people." Architecture critic Peter Buchanan, in an essay for "The Architectural Review"s 2012 campaign "The Big Rethink", argues that Alexander's work as reflected in "A Pattern Language" is "thoroughly subversive and forward looking rather than regressive, as so many misunderstand it to be." He continues: Many urban development projects continue to incorporate Alexander's ideas. For example, in the UK the developers Living Villages have been highly influenced by Alexander's work and used "A Pattern Language" as the basis for the design of The Wintles in Bishops Castle, Shropshire. Sarah Susanka's "Not So Big House" movement adapts and popularizes Alexander's patterns and outlook. Alexander's "Notes on the Synthesis of Form" was said to be required reading for researchers in computer science throughout the 1960s. 
It had an influence in the 1960s and 1970s on programming language design, modular programming, object-oriented programming, software engineering and other design methodologies. Alexander's mathematical concepts and orientation were similar to Edsger Dijkstra's influential "A Discipline of Programming". The greatest influence of "A Pattern Language" in computer science is the design patterns movement. Alexander's philosophy of incremental, organic, coherent design also influenced the extreme programming movement. The wiki was invented to allow the Hillside Group to work on programming design patterns. More recently the "deep geometrical structures" as discussed in "The Nature of Order" have been cited as having importance for object-oriented programming, particularly in C++. Will Wright wrote that Alexander's work was influential in the origin of the "SimCity" computer games, and in his later game "Spore". Alexander has often led his own software research, such as the 1996 Gatemaker project with Greg Bryant. Alexander conceived a recursive structure, so-called wholeness, which is defined mathematically, exists physically in space and matter, and is reflected psychologically in our minds and cognition. He had formed his idea of wholeness by the early 1980s, when he finished the very first version of "The Nature of Order". In fact, his idea of wholeness, or degree of wholeness, which relies on a recursive structure of centers, resembles Google's PageRank in spirit. The fourth volume of "The Nature of Order" approaches religious questions from a scientific and philosophical rather than mystical direction. In it, Alexander describes deep ties between the nature of matter, human perception of the universe, and the geometries people construct in buildings, cities, and artifacts. He suggests a crucial link between traditional practices and beliefs, and recent scientific advances. Despite his leanings toward Deism, and his naturalistic and anthropological approach to religion, Alexander maintains that he is a practicing member of the Catholic Church, believing it to embody a great deal of accumulated human truth within its rituals. Alexander's life's work has been dedicated to turning design from an unselfconscious activity into a selfconscious one, a so-called design science. In his very first book, "Notes on the Synthesis of Form", he set out what he wanted to do. He was inspired by traditional buildings, and tried to derive some 253 patterns for architectural design. Later, in "The Nature of Order", he distilled 15 geometric properties that characterize living structure. The design principles are differentiation and adaptation. In his classic "A City is Not a Tree", he already had some early ideas about complex networks, although he spoke of semilattices rather than complex networks. On page 65 of "Notes on the Synthesis of Form", he illustrated what is in fact community structure in complex networks, a topic that emerged around 2004. Alexander's published works include: Unpublished:
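The PageRank comparison can be illustrated with a short sketch. The recursion below, in which each "center" passes weight to the centers it supports, is offered only as an analogy under the assumption that support relations can be written down as a directed graph; it is not Alexander's own mathematics of wholeness, and the example graph is invented.

```python
# Analogy sketch only: a PageRank-style recursion over "centers", where each
# center draws strength from the centers that help it. This is not Alexander's
# own mathematics of wholeness; the support graph below is invented.

def recursive_strength(supports, damping=0.85, iterations=50):
    """supports[c] lists the centers that center c helps (its outgoing links)."""
    centers = list(supports)
    n = len(centers)
    score = {c: 1.0 / n for c in centers}
    for _ in range(iterations):
        new = {c: (1.0 - damping) / n for c in centers}
        for c, helped in supports.items():
            targets = helped if helped else centers  # dangling centers spread evenly
            share = damping * score[c] / len(targets)
            for h in targets:
                new[h] += share
        score = new
    return score

# Toy structure: a doorway and a window both reinforce the room,
# and the room reinforces the building as a whole.
supports = {
    "doorway": ["room"],
    "window": ["room"],
    "room": ["building"],
    "building": [],
}
print(recursive_strength(supports))  # "building" and "room" end up strongest
```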
https://en.wikipedia.org/wiki?curid=7612
Clabbers Clabbers is a game played by tournament Scrabble players for fun, or occasionally at Scrabble variant tournaments. The name derives from the fact that the words CLABBERS and SCRABBLE form an anagram pair. The rules are identical to those of Scrabble, except that valid plays are only required to form anagrams of acceptable words; in other words, the letters in a word do not need to be placed in the correct order. If a word is challenged, the player who played the word must then name an acceptable word that anagrams to the tiles played. Because the number of "words" that can be formed is vastly larger than in standard English, the board usually ends up tightly packed in places, and necessarily quite empty in others. Game scores will often be much higher than in standard Scrabble, due to the relative ease of making high-scoring overlap plays and easier access to premium squares. The Internet Scrabble Club offers the ability to play Clabbers online. An example game listed its horizontal words from top to bottom and its vertical words from left to right (# denoting words that exist in the Collins English Dictionary but not the TWL); some of those words have multiple anagrams. 
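The anagram rule lends itself to a simple dictionary lookup: index every acceptable word by its sorted letters, and a play is then valid if its sorted tiles hit the index. The sketch below assumes a tiny placeholder word set and ignores blank tiles; a real adjudicator would load the TWL or Collins lexicon.

```python
# Minimal sketch of a Clabbers adjudicator: a play is acceptable if its tiles,
# in any order, form an anagram of some word in the dictionary. WORDS is a
# small placeholder; blanks are not handled here.

from collections import defaultdict

WORDS = {"SCRABBLE", "CLABBERS", "TONE", "NOTE"}

# Index every acceptable word by its sorted letters, so each lookup is O(1).
by_signature = defaultdict(set)
for word in WORDS:
    by_signature["".join(sorted(word))].add(word)

def valid_play(tiles: str) -> set:
    """Return the acceptable words the played tiles anagram to (empty set if none)."""
    return by_signature.get("".join(sorted(tiles.upper())), set())

print(valid_play("BRECSLAB"))  # both CLABBERS and SCRABBLE share this signature
print(valid_play("ENOT"))      # {'TONE', 'NOTE'}
```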
https://en.wikipedia.org/wiki?curid=7614
Corum Jhaelen Irsei Corum Jhaelen Irsei ("the Prince in the Scarlet Robe") is the name of a fictional fantasy hero in a series of two trilogies written by author Michael Moorcock. Corum is the last survivor of the Vadhagh race and an incarnation aspect of the Eternal Champion, a being that exists in all worlds to ensure there is "Cosmic Balance". This trilogy consists of "The Knight of the Swords" (1971), "The Queen of the Swords" (1971), and "The King of the Swords" (1971). In the United Kingdom it has been collected as an omnibus edition titled "Corum", "Swords of Corum" and most recently "Corum: The Prince in the Scarlet Robe" (vol. 30 of Orion's Fantasy Masterworks series). In the United States the first trilogy has been published as "Corum: The Coming of Chaos". Corum is a Vadhagh, one of a race of long-lived beings with limited magical abilities dedicated to peaceful pursuits such as art and poetry. A group of "Mabden" (men) led by the savage Earl Glandyth-a-Krae raid the family castle and slaughter everyone with the exception of Corum, who escapes. Arming himself, Corum attacks and kills several of the Mabden before being captured and tortured. After having his left hand cut off and right eye put out, Corum escapes by moving into another plane of existence, becoming invisible to the Mabden. They depart and Corum is found by The Brown Man, a dweller of the forest of Laar able to see Corum while out of phase. The Brown Man takes Corum to a being called Arkyn, who treats his wounds and explains he has a higher purpose. Travelling to Moidel's Castle, Corum encounters his future lover, the Margravine Rhalina, a mabden woman of the civilized land of Lwym an Esh. Having found out Corum's location by torturing and killing the Brown Man of Laar, Glandyth-a-Krae marshalled his allies to Moidel's Castle. Glandyth had kept Corum's former hand and eye as souvenirs, and showed them to Corum to provoke a reaction. Rhalina uses sorcery (a ship summoned from the depths of the ocean and manned by her drowned dead husband and crew) to ward off an attack by Glandyth-a-Krae. Determined to restore himself, Corum and Rhalina travel to the island of Shool, a near immortal and mad sorcerer. During the journey Corum observes a mysterious giant who trawls the ocean with a net. On arrival at the island Shool takes Rhalina hostage, and then provides Corum with two artifacts to replace his lost hand and eye: the Hand of Kwll and the Eye of Rhynn. The Eye of Rhynn allows Corum to see into an undead netherworld where the last beings killed by Corum exist until summoned by the Hand of Kwll. Shool then explains that Corum's ill fortune has been caused by the Chaos God Arioch, the Knight of the Swords. When Arioch and his fellow Chaos Lords conquered the Fifteen Planes, the balance between the forces of Law and Chaos tipped in favor of Chaos, and their minions - such as Glandyth-a-Krae - embarked on a bloody rampage. Shool sends Corum to Arioch's fortress to steal the Heart of Arioch, which the sorcerer intends to use to attain greater power. Corum confronts Arioch, and learns Shool is nothing more than a pawn of the Chaos God. Arioch then ignores Corum, who discovers the location of the Heart. Corum is then attacked by Arioch, but the Hand of Kwll crushes the Heart and banishes the Chaos God forever. Before fading from existence, Arioch warns Corum that he has now earned the enmity of the Sword Rulers. 
Corum returns to the island to rescue Rhalina, and finds that Shool has been reduced to powerless idiocy; the sorcerer is devoured by his own creations soon afterwards. Corum learns that Arkyn is in fact a Lord of Law, and that this is the first step towards Law regaining control of the Fifteen Planes. On another five planes, the forces of Chaos - led by Xiombarg, Queen of the Swords - reign supreme and are on the verge of eradicating the last resistance from the forces of Law. The avatars of the Bear and Dog gods plot with Earl Glandyth-a-Krae to murder Corum and return Arioch to the Fifteen Planes. Guided by Arkyn, Corum, Rhalina and their companion Jhary-a-Conel cross the planes and encounter the King Without A Country, the last of his people, who in turn is seeking the City in the Pyramid. The group locates the City, which is in fact a floating arsenal powered by advanced technology and inhabited by a people originally from Corum's world, his distant kin. Besieged by the forces of Chaos, the City requires certain rare minerals to continue to power its weapons. Corum and Jhary attempt to locate the minerals and also encounter Xiombarg, who learns of Corum's identity. Corum slows Xiombarg's forces by defeating their leader, Prince Gaynor the Damned. Xiombarg is goaded into attacking the City directly in revenge for Arioch's banishment. Arkyn provides the minerals and confronts Xiombarg, who has manifested in a vulnerable state. As Arkyn banishes Xiombarg, Corum and his allies devastate the forces of Chaos. Glandyth-a-Krae, however, escapes, and seeks revenge. A spell - determined to have been cast by the forces of Chaos - forces the inhabitants of Corum's plane to war with each other (including the City in the Pyramid). Desperate to stop the slaughter, Corum, Rhalina and Jhary-a-Conel travel to the last five planes, ruled by Mabelode, the King of the Swords. Rhalina is taken hostage by the forces of Chaos, and Corum has several encounters with them, including with Earl Glandyth-a-Krae. Corum also meets two other aspects of the Eternal Champion: Elric and Erekosë, with all three seeking the mystical city of Tanelorn for their own purposes. After a brief adventure in the "Vanishing Tower", the other heroes depart and Corum and Jhary arrive at their version of Tanelorn. Corum discovers one of the "Lost Gods", the being Kwll, who is imprisoned and cannot be freed until whole. Corum offers Kwll his hand, on the condition that he aid them against Mabelode. Kwll accepts the terms, but reneges on the bargain until persuaded to assist. Corum is also stripped of his artificial eye, which belongs to Rhynn - actually the mysterious giant Corum had previously encountered. Kwll transports Corum and Jhary to the court of Mabelode, with the pair fleeing with Rhalina when Kwll directly challenges the Chaos God. In a final battle Corum avenges his family by killing Glandyth-a-Krae and decimating the last of Chaos' mortal forces. Kwll later locates Corum and reveals that all the gods - of both Chaos and Law - have been slain in order to free humanity and allow it to shape its own destiny. This second trilogy consists of "The Bull and the Spear" (1973), "The Oak and the Ram" (1973), and "The Sword and the Stallion" (1974). It was titled "The Prince with the Silver Hand" in the United Kingdom and "The Chronicles of Corum" in the United States. 
The previous trilogy hinted at a Celtic or proto-Celtic setting for the stories - the terms "mabden" (human beings) and "shefanhow" (demons) occurring in these books are both Cornish language words. The Silver Hand trilogy is more explicit in its Celtic connections, with overt borrowings from Celtic mythology. Set eighty years after the defeat of the Sword Rulers, it finds Corum despondent and alone since the death of his Mabden bride Rhalina. Plagued by voices at night, Corum believes he has gone insane until his old friend Jhary-a-Conel advises him that the voices are in fact a summons from another world. Listening to the voices allows Corum to pass to the other world, which is in fact the distant future. The descendants of Rhalina's folk, the Tuha-na-Cremm Croich (see: Crom Cruach), who call Corum "Corum Llew Ereint" (see: Lludd Llaw Eraint), face extinction by the Fhoi Myore (Fomorians). The Fhoi Myore, seven powerful but diseased and barely sentient giants, have with the aid of their allies conquered the land and plunged it into eternal winter. Allying himself with King Mannach, ruler of the Tuha-na-Cremm Croich, Corum falls in love with his daughter Medhbh (see: Medb). Corum also hears the prophecy of a seeress, who claims Corum should fear a brother (who will apparently slay him), a harp, and above all, beauty. Corum seeks the lost artifacts of the Tuha-na-Cremm Croich - a sacred Bull, a spear, an oak, a ram, a sword and a stallion - which will restore the land. Corum gains new allies, Goffanon (a blacksmith and diminutive giant, a member of the Sidhe race) and Goffanon's cousin Illbrec, a true giant. They battle the Fhoi Myore, who themselves have allies: a returned Prince Gaynor, the wizard Calatin and his clone of Corum, the Brothers of the Pine, the undead Ghoolegh and a host of giant demonic dogs. After being instrumental in the deaths of two of the Fhoi Myore and restoring to his senses the ensorcelled Amergin, the High King and Chief Druid of the Tuha-na-Cremm Croich, Corum and his allies fight a final battle in which all their foes are destroyed. Corum decides not to return to his own world, and is attacked by his clone, whom he defeats with the aid of a spell placed on his silver hand by Medhbh. Medhbh, however, attacks and wounds Corum, having been told by the Dagdah that their world must be free of all gods and demi-gods if they are to flourish as a people. Corum is then killed with his own sword by his animated silver hand, thereby fulfilling the prophecy. "The Swords" trilogy: "The Silver Hand" trilogy: Additional appearances: The August Derleth Award won by: First Comics published "The Chronicles of Corum", a twelve issue limited series (Jan. 1986 - Dec. 1988) that adapted the "Swords Trilogy", and was followed by the four issue limited series "Corum: The Bull and the Spear" (Jan. - July (bi-monthly) 1989), which adapted the first book in the second trilogy. Darcsyde Productions produced a supplement for use with Chaosium's "Stormbringer" (2001) role-playing game, adapting the characters and settings from the "Corum" series. Gollancz have announced plans to release the entire Corum stories in both print and ebook form, commencing in 2013. The ebooks will be available via Gollancz's SF Gateway site. Audiobooks: In 2016, dramatized audiobook versions of Corum were produced.
https://en.wikipedia.org/wiki?curid=7617
Complex instruction set computer A complex instruction set computer (CISC ) is a computer in which single instructions can execute several low-level operations (such as a load from memory, an arithmetic operation, and a memory store) or are capable of multi-step operations or addressing modes within single instructions. The term was retroactively coined in contrast to reduced instruction set computer (RISC) and has therefore become something of an umbrella term for everything that is not RISC, from large and complex mainframe computers to simplistic microcontrollers where memory load and store operations are not separated from arithmetic instructions. A modern RISC processor can therefore be much more complex than, say, a modern microcontroller using a CISC-labeled instruction set, especially in the complexity of its electronic circuits, but also in the number of instructions or the complexity of their encoding patterns. The only typical differentiating characteristic is that most RISC designs use uniform instruction length for almost all instructions, and employ strictly separate load/store-instructions. Examples of instruction set architectures that have been retroactively labeled CISC are System/360 through z/Architecture, the PDP-11 and VAX architectures, Data General Nova and many others. Well known microprocessors and microcontrollers that have also been labeled CISC in many academic publications include the Motorola 6800, 6809 and 68000-families; the Intel 8080, iAPX432 and x86-family; the Zilog Z80, Z8 and Z8000-families; the National Semiconductor 32016 and NS320xx-line; the MOS Technology 6502-family; the Intel 8051-family; and others. Some designs have been regarded as borderline cases by some writers. For instance, the Microchip Technology PIC has been labeled RISC in some circles and CISC in others. The 6502 and 6809 have both been described as "RISC-like", although they have complex addressing modes as well as arithmetic instructions that operate on memory, contrary to the RISC-principles. Before the RISC philosophy became prominent, many computer architects tried to bridge the so-called semantic gap, i.e., to design instruction sets that directly support high-level programming constructs such as procedure calls, loop control, and complex addressing modes, allowing data structure and array accesses to be combined into single instructions. Instructions are also typically highly encoded in order to further enhance the code density. The compact nature of such instruction sets results in smaller program sizes and fewer (slow) main memory accesses, which at the time (early 1960s and onwards) resulted in a tremendous saving on the cost of computer memory and disc storage, as well as faster execution. It also meant good programming productivity even in assembly language, as high level languages such as Fortran or Algol were not always available or appropriate. Indeed, microprocessors in this category are sometimes still programmed in assembly language for certain types of critical applications. In the 1970s, analysis of high-level languages indicated some complex machine language implementations and it was determined that new instructions could improve performance. Some instructions were added that were never intended to be used in assembly language but fit well with compiled high-level languages. Compilers were updated to take advantage of these instructions. 
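The distinction drawn above, one instruction combining a memory access with arithmetic versus strictly separate load/store and register operations, can be sketched with a toy machine. The mnemonics, registers and addresses below are invented for illustration and do not correspond to any real instruction set.

```python
# Illustrative toy machine only: the CISC-style instruction reads memory and
# does arithmetic in one step, while the RISC-style sequence keeps the load
# separate from the register ADD. Mnemonics, registers and addresses are
# invented for this sketch and do not correspond to any real instruction set.

memory = {0x10: 7}
regs = {"r1": 5, "r2": 0}

def cisc_add_mem(dst, addr):
    """ADD dst, [addr] -- a single "load-operate" instruction."""
    regs[dst] += memory[addr]

def risc_load(dst, addr):
    """LOAD dst, [addr] -- memory access only."""
    regs[dst] = memory[addr]

def risc_add(dst, a, b):
    """ADD dst, a, b -- register arithmetic only."""
    regs[dst] = regs[a] + regs[b]

cisc_add_mem("r1", 0x10)       # CISC: one instruction, r1 = 5 + 7 = 12

regs["r1"] = 5                 # reset for comparison
risc_load("r2", 0x10)          # RISC: load first ...
risc_add("r1", "r1", "r2")     # ... then add registers, r1 = 12 again
print(regs["r1"])
```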
The benefits of semantically rich instructions with compact encodings can be seen in modern processors as well, particularly in the high-performance segment where caches are a central component (as opposed to most embedded systems). This is because these fast, but complex and expensive, memories are inherently limited in size, making compact code beneficial. Of course, the fundamental reason they are needed is that main memories (i.e., dynamic RAM today) remain slow compared to a (high-performance) CPU core. While many designs achieved the aim of higher throughput at lower cost and also allowed high-level language constructs to be expressed by fewer instructions, it was observed that this was not "always" the case. For instance, low-end versions of complex architectures (i.e. using less hardware) could lead to situations where it was possible to improve performance by "not" using a complex instruction (such as a procedure call or enter instruction), but instead using a sequence of simpler instructions. One reason for this was that architects (microcode writers) sometimes "over-designed" assembly language instructions, including features which could not be implemented efficiently on the basic hardware available. There could, for instance, be "side effects" (above conventional flags), such as the setting of a register or memory location that was perhaps seldom used; if this was done via ordinary (non duplicated) internal buses, or even the external bus, it would demand extra cycles every time, and thus be quite inefficient. Even in balanced high-performance designs, highly encoded and (relatively) high-level instructions could be complicated to decode and execute efficiently within a limited transistor budget. Such architectures therefore required a great deal of work on the part of the processor designer in cases where a simpler, but (typically) slower, solution based on decode tables and/or microcode sequencing is not appropriate. At a time when transistors and other components were a limited resource, this also left fewer components and less opportunity for other types of performance optimizations. The circuitry that performs the actions defined by the microcode in many (but not all) CISC processors is, in itself, a processor which in many ways is reminiscent in structure to very early CPU designs. In the early 1970s, this gave rise to ideas to return to simpler processor designs in order to make it more feasible to cope without ("then" relatively large and expensive) ROM tables and/or PLA structures for sequencing and/or decoding. The first (retroactively) RISC-"labeled" processor (IBM 801 IBM's Watson Research Center, mid-1970s) was a tightly pipelined simple machine originally intended to be used as an internal microcode kernel, or engine, in CISC designs, but also became the processor that introduced the RISC idea to a somewhat larger public. Simplicity and regularity also in the visible instruction set would make it easier to implement overlapping processor stages (pipelining) at the machine code level (i.e. the level seen by compilers). However, pipelining at that level was already used in some high performance CISC "supercomputers" in order to reduce the instruction cycle time (despite the complications of implementing within the limited component count and wiring complexity feasible at the time). 
Internal microcode execution in CISC processors, on the other hand, could be more or less pipelined depending on the particular design, and therefore more or less akin to the basic structure of RISC processors. In a more modern context, the complex variable-length encoding used by some of the typical CISC architectures makes it complicated, but still feasible, to build a superscalar implementation of a CISC programming model "directly"; the in-order superscalar original Pentium and the out-of-order superscalar Cyrix 6x86 are well known examples of this. The frequent memory accesses for operands of a typical CISC machine may limit the instruction level parallelism that can be extracted from the code, although this is strongly mediated by the fast cache structures used in modern designs, as well as by other measures. Due to inherently compact and semantically rich instructions, the average amount of work performed per machine code unit (i.e. per byte or bit) is higher for a CISC than a RISC processor, which may give it a significant advantage in a modern cache based implementation. Transistors for logic, PLAs, and microcode are no longer scarce resources; only large high-speed cache memories are limited by the maximum number of transistors today. Although complex, the transistor count of CISC decoders do not grow exponentially like the total number of transistors per processor (the majority typically used for caches). Together with better tools and enhanced technologies, this has led to new implementations of highly encoded and variable length designs without load-store limitations (i.e. non-RISC). This governs re-implementations of older architectures such as the ubiquitous x86 (see below) as well as new designs for microcontrollers for embedded systems, and similar uses. The superscalar complexity in the case of modern x86 was solved by converting instructions into one or more micro-operations and dynamically issuing those micro-operations, i.e. indirect and dynamic superscalar execution; the Pentium Pro and AMD K5 are early examples of this. It allows a fairly simple superscalar design to be located after the (fairly complex) decoders (and buffers), giving, so to speak, the best of both worlds in many respects. This technique is also used in IBM z196 and later z/Architecture microprocessors. The terms CISC and RISC have become less meaningful with the continued evolution of both CISC and RISC designs and implementations. The first highly (or tightly) pipelined x86 implementations, the 486 designs from Intel, AMD, Cyrix, and IBM, supported every instruction that their predecessors did, but achieved "maximum efficiency" only on a fairly simple x86 subset that was only a little more than a typical RISC instruction set (i.e. without typical RISC "load-store" limitations). The Intel P5 Pentium generation was a superscalar version of these principles. However, modern x86 processors also (typically) decode and split instructions into dynamic sequences of internally buffered micro-operations, which not only helps execute a larger subset of instructions in a pipelined (overlapping) fashion, but also facilitates more advanced extraction of parallelism out of the code stream, for even higher performance. Contrary to popular simplifications (present also in some academic texts), not all CISCs are microcoded or have "complex" instructions. 
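A rough sketch of the micro-operation approach described above: a decoder "cracks" a memory-operand instruction into RISC-like internal steps before issue. The textual instruction format and micro-op names are hypothetical; real x86 decoders work on binary encodings and are far more elaborate.

```python
# Rough sketch of the decode step described above: a memory-operand instruction
# is split into simpler micro-operations before being issued. The textual
# instruction format and micro-op names are hypothetical.

def crack(instruction: str) -> list:
    """Split an 'ADD r1, [0x10]'-style instruction into simpler micro-ops."""
    op, operands = instruction.split(maxsplit=1)
    dst, src = [part.strip() for part in operands.split(",")]
    if op == "ADD" and src.startswith("["):
        addr = src.strip("[]")
        return [f"uLOAD tmp0, {addr}",        # the memory access as its own micro-op
                f"uADD  {dst}, {dst}, tmp0"]  # pure register arithmetic
    return [f"u{op} {dst}, {src}"]            # already simple: pass through unchanged

for uop in crack("ADD r1, [0x10]"):
    print(uop)
```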
As CISC became a catch-all term meaning anything that is not a load-store (RISC) architecture, it is not the number of instructions, nor the complexity of the implementation or of the instructions themselves, that defines CISC, but the fact that arithmetic instructions also perform memory accesses. Compared to a small 8-bit CISC processor, a RISC floating-point instruction is complex. CISC does not even need to have complex addressing modes; 32 or 64-bit RISC processors may well have more complex addressing modes than small 8-bit CISC processors. A PDP-10, a PDP-8, an Intel 80386, an Intel 4004, a Motorola 68000, a System z mainframe, a Burroughs B5000, a VAX, a Zilog Z80000, and a MOS Technology 6502 all vary wildly in the number, sizes, and formats of instructions, the number, types, and sizes of registers, and the available data types. Some have hardware support for operations like scanning for a substring, arbitrary-precision BCD arithmetic, or transcendental functions, while others have only 8-bit addition and subtraction. But they are all in the CISC category because they have "load-operate" instructions that load and/or store memory contents within the same instructions that perform the actual calculations. For instance, the PDP-8, having only 8 fixed-length instructions and no microcode at all, is a CISC because of "how" the instructions work; PowerPC, which has over 230 instructions (more than some VAXes) and complex internals like register renaming and a reorder buffer, is a RISC; while Minimal CISC has 8 instructions, but is clearly a CISC because it combines memory access and computation in the same instructions.
https://en.wikipedia.org/wiki?curid=7622
Cetacea Cetaceans are aquatic mammals constituting the infraorder Cetacea. There are around 89 living species, which are divided into two parvorders. The first is the Odontoceti, the toothed whales, which consist of around 70 species, including the dolphin (which includes killer whales), porpoise, beluga whale, narwhal, sperm whale, and beaked whale. The second is the Mysticeti, the baleen whales, which have a filter-feeder system, and consist of fifteen species divided into three families, and include the blue whale, right whale, bowhead whale, rorqual, and gray whale. The ancient and extinct ancestors of modern whales (Archaeoceti) lived 53 to 45 million years ago. They diverged from even-toed ungulates; their closest living relatives are hippopotamuses and others such as cows and pigs. They were semiaquatic and evolved in the shallow waters that separated India from Asia. Around 30 species adapted to a fully oceanic life. Baleen whales split from toothed whales around 34 million years ago. The smallest cetacean is Maui's dolphin; the largest is the blue whale. Baleen whales have a tactile system in the short hairs (vibrissae) around their mouth; toothed whales also develop vibrissae, but lose them during fetal development or shortly after birth, leaving behind electroreceptive vibrissal crypts in some species. Cetaceans have well-developed senses—their eyesight and hearing are adapted for both air and water. They have a layer of fat, or blubber, under the skin to maintain body heat in cold water. Several species exhibit sexual dimorphism. Two external forelimbs are modified into flippers; two internal hindlimbs are vestigial. Cetaceans have streamlined bodies. Dolphins are able to make very tight turns at high speeds, while others are capable of diving to great depths. Although cetaceans are widespread, most species prefer the colder waters of the Northern and Southern Hemispheres. They spend their lives in the water of seas and rivers, having to mate, give birth, molt and escape from predators, such as killer whales, underwater. This has been enabled by unique evolutionary adaptations in their physiology and anatomy. They feed largely on fish and marine invertebrates, but a few, like the killer whale, feed on large mammals and birds, such as penguins and seals. Some baleen whales (mainly gray whales and right whales) are specialised for feeding on benthic creatures. Male cetaceans typically mate with more than one female (polygyny), although the degree of polygyny varies with the species. Cetaceans are not known to have pair bonds. Male cetacean strategies for reproductive success vary between herding females, defending potential mates from other males, or whale song which attracts mates. Calves are typically born in the fall and winter months, and females bear almost all the responsibility for raising them. Mothers of some species fast and nurse their young for a relatively short period of time, which is more typical of baleen whales as their main food source (invertebrates) is not found in their breeding and calving grounds (tropics). Cetaceans produce a number of vocalizations, notably the clicks and whistles of dolphins and the moaning songs of the humpback whale. The meat, blubber and oil of cetaceans have traditionally been used by indigenous peoples of the Arctic. Cetaceans have been depicted in various cultures worldwide. 
Dolphins are commonly kept in captivity and are even sometimes trained to perform tricks and tasks; other cetaceans are not as often kept in captivity, and attempts to do so have usually been unsuccessful. Cetaceans have been extensively hunted by commercial industries for their products, although hunting the largest whales is now forbidden by international law. The baiji (Chinese river dolphin) has become "Possibly Extinct" in the past century, while the vaquita and Yangtze finless porpoise are ranked Critically Endangered by the International Union for Conservation of Nature. Besides hunting, cetaceans also face threats from accidental trapping and environmental hazards such as marine pollution, noise pollution and ongoing climate change. The two parvorders, baleen whales (Mysticeti) and toothed whales (Odontoceti), are thought to have diverged around thirty-four million years ago. Baleen whales have bristles made of keratin instead of teeth. The bristles filter krill and other small invertebrates from seawater. Grey whales feed on bottom-dwelling mollusks. The rorqual family (balaenopterids) use throat pleats to expand their mouths to take in food and sieve out the water. Balaenids (right whales and bowhead whales) have massive heads that can make up 40% of their body mass. Most mysticetes prefer the food-rich colder waters of the Northern and Southern Hemispheres, migrating to the Equator to give birth. During this process, they are capable of fasting for several months, relying on their fat reserves. The parvorder of Odontocetes – the toothed whales – includes sperm whales, beaked whales, killer whales, dolphins and porpoises. Generally the teeth are designed for catching fish, squid or other marine invertebrates, not for chewing them, so prey is swallowed whole. Teeth are shaped like cones (dolphins and sperm whales), spades (porpoises), pegs (belugas), tusks (narwhals) or variable (beaked whale males). Female beaked whales' teeth are hidden in the gums and are not visible, and most male beaked whales have only two short tusks. Narwhals have vestigial teeth other than their tusk, which is present on males and 15% of females and has millions of nerves to sense water temperature, pressure and salinity. A few toothed whales, such as some killer whales, feed on mammals, such as pinnipeds and other whales. Toothed whales have well-developed senses – their eyesight and hearing are adapted for both air and water, and they have advanced sonar capabilities using their melon. Their hearing is so well-adapted for both air and water that some blind specimens can survive. Some species, such as sperm whales, are well adapted for diving to great depths. Several species of toothed whales show sexual dimorphism, in which the males differ from the females, usually for purposes of sexual display or aggression. Cetacean bodies are generally similar to those of fish, which can be attributed to their lifestyle and the habitat conditions. Their body is well-adapted to their habitat, although they share essential characteristics with other higher mammals (Eutheria). They have a streamlined shape, and their forelimbs are flippers. Almost all have a dorsal fin on their backs that can take on many forms depending on the species. A few species, such as the beluga whale, lack them. Both the flipper and the fin are for stabilization and steering in the water. The male genitals and mammary glands of females are sunken into the body. 
The body is wrapped in a thick layer of fat, known as blubber, which serves as thermal insulation and gives cetaceans their smooth, streamlined body shape. In larger species, it can reach a thickness up to half a meter (1.6 ft). Sexual dimorphism evolved in many toothed whales. Sperm whales, narwhals, many members of the beaked whale family, several species of the porpoise family, killer whales, pilot whales, eastern spinner dolphins and northern right whale dolphins show this characteristic. Males in these species developed external features absent in females that are advantageous in combat or display. For example, male sperm whales are up to 63% larger than females, and many beaked whales possess tusks used in competition among males. Hind legs are not present in cetaceans, nor are any other external body attachments such as a pinna and hair. Whales have an elongated head, especially baleen whales, due to the wide overhanging jaw. Bowhead whale plates can be long. Their nostril(s) make up the blowhole, with one in toothed whales and two in baleen whales. The nostrils are located on top of the head above the eyes so that the rest of the body can remain submerged while surfacing for air. The back of the skull is significantly shortened and deformed. By shifting the nostrils to the top of the head, the nasal passages extend perpendicularly through the skull. The teeth or baleen in the upper jaw sit exclusively on the maxilla. The braincase is compressed toward the front by the nasal passage and is correspondingly higher, with individual cranial bones that overlap. In toothed whales, connective tissue exists in the melon as a head buckle. This is filled with air sacs and fat that aid in buoyancy and biosonar. The sperm whale has a particularly pronounced melon; this is called the spermaceti organ and contains the eponymous spermaceti, hence the name "sperm whale". Even the long tusk of the narwhal is a modified tooth. In many toothed whales, the depression in their skull is due to the formation of a large melon and multiple, asymmetric air bags. River dolphins, unlike most other cetaceans, can turn their head 90°. Other cetaceans have fused neck vertebrae and are unable to turn their head at all. The baleen of baleen whales consists of long, fibrous strands of keratin. Located in place of the teeth, it has the appearance of a huge fringe and is used to sieve the water for plankton and krill. The neocortex of many cetaceans is home to elongated spindle neurons that, prior to 2019, were known only in hominids. In humans, these cells are thought to be involved in social conduct, emotions, judgment and theory of mind. Cetacean spindle neurons are found in areas of the brain homologous to where they are found in humans, suggesting they perform a similar function. Brain size was previously considered a major indicator of intelligence. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for cognitive tasks. Allometric analysis indicates that mammalian brain size scales at approximately a two-thirds or three-quarters exponent of body mass. Comparison of a particular animal's brain size with the expected brain size based on such an analysis provides an encephalization quotient that can be used as an indication of animal intelligence. Sperm whales have the largest brain mass of any animal on earth. 
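As a worked illustration of the encephalization quotient just mentioned, the relationship can be written as follows; the constants in the numerical example are assumptions taken from commonly cited allometric fits for mammals, not figures given in this article.

$$\mathrm{EQ} = \frac{E_{\mathrm{observed}}}{E_{\mathrm{expected}}}, \qquad E_{\mathrm{expected}} = k\,M^{r}, \qquad r \approx \tfrac{2}{3}\ \text{to}\ \tfrac{3}{4}$$

Here E is brain mass, M is body mass, and k is a taxon-specific constant. Under one frequently used mammalian baseline (k ≈ 0.12, r = 2/3, masses in grams), a hypothetical 5,000 kg mammal with a 7 kg brain would have an expected brain mass of about 0.12 × (5 × 10⁶)^(2/3) ≈ 3.5 kg, and hence an EQ of roughly 7 / 3.5 = 2, i.e. twice the brain mass predicted for its body size.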
The brain to body mass ratio in some odontocetes, such as belugas and narwhals, is second only to humans. In some whales, however, it is less than half that of humans: 0.9% versus 2.1%. The sperm whale ("Physeter macrocephalus") is the largest of all toothed predatory animals and possesses the largest brain. The cetacean skeleton is largely made up of cortical bone, which stabilizes the animal in the water. For this reason, the usual terrestrial compact bones, which are finely woven cancellous bone, are replaced with lighter and more elastic material. In many places, bone elements are replaced by cartilage and even fat, thereby improving their hydrostatic qualities. The ear and the muzzle contain a bone shape that is exclusive to cetaceans with a high density, resembling porcelain. This conducts sound better than other bones, thus aiding biosonar. The number of vertebrae that make up the spine varies by species, ranging from forty to ninety-three. The cervical spine, found in all mammals, consists of seven vertebrae which, however, are reduced or fused. This fusion provides stability during swimming at the expense of mobility. The fins are carried by the thoracic vertebrae, ranging from nine to seventeen individual vertebrae. The sternum is cartilaginous. The last two to three pairs of ribs are not connected and hang freely in the body wall. The stable lumbar and tail include the other vertebrae. Below the caudal vertebrae is the chevron bone. The front limbs are paddle-shaped with shortened arms and elongated finger bones, to support movement. They are connected by cartilage. The second and third fingers display a proliferation of the finger members, a so-called hyperphalangy. The shoulder joint is the only functional joint in all cetaceans except for the Amazon river dolphin. The collarbone is completely absent. They have a cartilaginous fluke at the end of their tails that is used for propulsion. The fluke is set horizontally on the body, unlike fish, which have vertical tails. Cetaceans have powerful hearts. Blood oxygen is distributed effectively throughout the body. They are warm-blooded, i.e., they hold a nearly constant body temperature. Cetaceans have lungs, meaning they breathe air. An individual can last without a breath from a few minutes to over two hours depending on the species. Cetacea are deliberate breathers who must be awake to inhale and exhale. When stale air, warmed from the lungs, is exhaled, it condenses as it meets colder external air. As with a terrestrial mammal breathing out on a cold day, a small cloud of 'steam' appears. This is called the 'spout' and varies across species in shape, angle and height. Species can be identified at a distance using this characteristic. The structure of the respiratory and circulatory systems is of particular importance for the life of marine mammals. The oxygen balance is effective. Each breath can replace up to 90% of the total lung volume. For land mammals, in comparison, this value is usually about 15%. During inhalation, about twice as much oxygen is absorbed by the lung tissue as in a land mammal. As with all mammals, the oxygen is stored in the blood and the lungs, but in cetaceans, it is also stored in various tissues, mainly in the muscles. The muscle pigment, myoglobin, provides an effective bond. This additional oxygen storage is vital for deep diving, since beyond a depth around , the lung tissue is almost completely compressed by the water pressure. The stomach consists of three chambers. 
The first region is formed by a loose gland and a muscular forestomach (missing in beaked whales), which is then followed by the main stomach and the pylorus. Both are equipped with glands to help digestion. A bowel adjoins the stomachs, whose individual sections can only be distinguished histologically. The liver is large and separate from the gall bladder. The kidneys are long and flattened. The salt concentration in cetacean blood is lower than that in seawater, requiring kidneys to excrete salt. This allows the animals to drink seawater. Cetacean eyes are set on the sides rather than the front of the head. This means only species with pointed 'beaks' (such as dolphins) have good binocular vision forward and downward. Tear glands secrete greasy tears, which protect the eyes from the salt in the water. The lens is almost spherical, which is most efficient at focusing the minimal light that reaches deep water. Cetaceans are known to possess excellent hearing. At least one species, the tucuxi or Guiana dolphin, is able to use electroreception to sense prey. The external ear has lost the pinna (visible ear), but still retains a narrow external auditory meatus. To register sounds, instead, the posterior part of the mandible has a thin lateral wall (the pan bone) fronting a concavity that houses a fat pad. The pad passes anteriorly into the greatly enlarged mandibular foramen to reach in under the teeth and posteriorly to reach the thin lateral wall of the ectotympanic. The ectotympanic offers a reduced attachment area for the tympanic membrane. The connection between this auditory complex and the rest of the skull is reduced—to a single, small cartilage in oceanic dolphins. In odontocetes, the complex is surrounded by spongy tissue filled with air spaces, while in mysticetes, it is integrated into the skull as with land mammals. In odontocetes, the tympanic membrane (or ligament) has the shape of a folded-in umbrella that stretches from the ectotympanic ring and narrows off to the malleus (quite unlike the flat, circular membrane found in land mammals.) In mysticetes, it also forms a large protrusion (known as the "glove finger"), which stretches into the external meatus and the stapes are larger than in odontocetes. In some small sperm whales, the malleus is fused with the ectotympanic. The ear ossicles are pachyosteosclerotic (dense and compact) and differently shaped from land mammals (other aquatic mammals, such as sirenians and earless seals, have also lost their pinnae). The semicircular canals are much smaller relative to body size than in other mammals. The auditory bulla is separated from the skull and composed of two compact and dense bones (the periotic and tympanic) referred to as the tympanoperiotic complex. This complex is located in a cavity in the middle ear, which, in the Mysticeti, is divided by a bony projection and compressed between the exoccipital and squamosal, but in the Odontoceti, is large and completely surrounds the bulla (hence called "peribullar"), which is, therefore, not connected to the skull except in physeterids. In the Odontoceti, the cavity is filled with a dense foam in which the bulla hangs suspended in five or more sets of ligaments. The pterygoid and peribullar sinuses that form the cavity tend to be more developed in shallow water and riverine species than in pelagic Mysticeti. In Odontoceti, the composite auditory structure is thought to serve as an acoustic isolator, analogous to the lamellar construction found in the temporal bone in bats. 
Cetaceans use sound to communicate, using groans, moans, whistles, clicks or the 'singing' of the humpback whale. Odontoceti are generally capable of echolocation. They can discern the size, shape, surface characteristics, distance and movement of an object. They can search for, chase and catch fast-swimming prey in total darkness. Most Odontoceti can distinguish between prey and nonprey (such as humans or boats); captive Odontoceti can be trained to distinguish between, for example, balls of different sizes or shapes. Echolocation clicks also contain characteristic details unique to each animal, which may suggest that toothed whales can discern between their own click and that of others. Mysticeti have exceptionally thin, wide basilar membranes in their cochleae without stiffening agents, making their ears adapted for processing low to infrasonic frequencies. The initial karyotype is a set of 2n = 44 chromosomes. They have four pairs of telocentric chromosomes (whose centromeres sit at one of the telomeres), two to four pairs of subtelocentric and one or two large pairs of submetacentric chromosomes. The remaining chromosomes are metacentric—the centromere is approximately in the middle—and are rather small. Sperm whales, beaked whales and right whales converge to a reduction in the number of chromosomes to 2n = 42. Cetaceans are found in many aquatic habitats. While many marine species, such as the blue whale, the humpback whale and the killer whale, have a distribution area that includes nearly the entire ocean, some species occur only locally or in broken populations. These include the vaquita, which inhabits a small part of the Gulf of California, and Hector's dolphin, which lives in some coastal waters in New Zealand. River dolphin species live exclusively in fresh water. Many species inhabit specific latitudes, often in tropical or subtropical waters, such as Bryde's whale or Risso's dolphin. Others are found only in a specific body of water. The southern right whale dolphin and the hourglass dolphin live only in the Southern Ocean. The narwhal and the beluga live only in the Arctic Ocean. Sowerby's beaked whale and the Clymene dolphin exist only in the Atlantic, and the Pacific white-sided dolphin and the northern right whale dolphin live only in the North Pacific. Cosmopolitan species may be found in the Pacific, Atlantic and Indian Oceans. However, northern and southern populations become genetically separated over time. In some species, this separation leads eventually to a divergence of the species, such as the split that produced the southern right whale, North Pacific right whale and North Atlantic right whale. Migratory species' reproductive sites often lie in the tropics and their feeding grounds in polar regions. Thirty-two species are found in European waters, including twenty-five toothed and seven baleen species. Many species of whales migrate on a latitudinal basis to move between seasonal habitats. For example, the gray whale migrates 10,000 miles round trip. The journey begins at winter birthing grounds in warm lagoons along Baja California, and traverses 5,000-7,000 miles of coastline to summer feeding grounds in the Bering, Chukchi and Beaufort seas off the coast of Alaska. Conscious-breathing cetaceans sleep but cannot afford to be unconscious for long, because they may drown. 
While knowledge of sleep in wild cetaceans is limited, toothed cetaceans in captivity have been recorded to exhibit unihemispheric slow-wave sleep (USWS), which means they sleep with one side of their brain at a time, so that they may swim, breathe consciously and avoid both predators and social contact during their period of rest. A 2008 study found that sperm whales sleep in vertical postures just under the surface in passive shallow 'drift-dives', generally during the day, during which whales do not respond to passing vessels unless they are in contact, leading to the suggestion that whales possibly sleep during such dives. While diving, the animals reduce their oxygen consumption by lowering the heart activity and blood circulation; individual organs receive no oxygen during this time. Some rorquals can dive for up to 40 minutes, sperm whales between 60 and 90 minutes and bottlenose whales for two hours. Diving depths average about . Species such as sperm whales can dive to , although more commonly . Most cetaceans are social animals, although a few species live in pairs or are solitary. A group, known as a pod, usually consists of ten to fifty animals, but on occasion, such as mass availability of food or during mating season, groups may encompass more than one thousand individuals. Inter-species socialization can occur. Pods have a fixed hierarchy, with the priority positions determined by biting, pushing or ramming. The behavior in the group is aggressive only in situations of stress such as lack of food, but usually it is peaceful. Contact swimming, mutual fondling and nudging are common. The playful behavior of the animals, which is manifested in air jumps, somersaults, surfing, or fin hitting, occurs more often than not in smaller cetaceans, such as dolphins and porpoises. Males in some baleen species communicate via whale song, sequences of high pitched sounds. These "songs" can be heard for hundreds of kilometers. Each population generally shares a distinct song, which evolves over time. Sometimes, an individual can be identified by its distinctive vocals, such as the 52-hertz whale that sings at a higher frequency than other whales. Some individuals are capable of generating over 600 distinct sounds. In baleen species such as humpbacks, blues and fins, male-specific song is believed to be used to attract and display fitness to females. Pod groups also hunt, often with other species. Many species of dolphins accompany large tunas on hunting expeditions, following large schools of fish. The killer whale hunts in pods and targets belugas and even larger whales. Humpback whales, among others, form in collaboration bubble carpets to herd krill or plankton into bait balls before lunging at them. Cetacea are known to teach, learn, cooperate, scheme and grieve. Smaller cetaceans, such as dolphins and porpoises, engage in complex play behavior, including such things as producing stable underwater toroidal air-core vortex rings or "bubble rings". The two main methods of bubble ring production are rapid puffing of air into the water and allowing it to rise to the surface, forming a ring, or swimming repeatedly in a circle and then stopping to inject air into the helical vortex currents thus formed. They also appear to enjoy biting the vortex rings, so that they burst into many separate bubbles and then rise quickly to the surface. Whales produce bubble nets to aid in herding prey. Larger whales are also thought to engage in play. 
The southern right whale elevates its tail fluke above the water, remaining in the same position for a considerable time. This is known as "sailing". It appears to be a form of play and is most commonly seen off the coast of Argentina and South Africa. Humpback whales also display this behaviour. Self-awareness appears to be a sign of abstract thinking. Self-awareness, although not well-defined, is believed to be a precursor to more advanced processes such as metacognitive reasoning (thinking about thinking) that humans exploit. Cetaceans appear to possess self-awareness. The most widely used test for self-awareness in animals is the mirror test, in which a temporary dye is placed on an animal's body and the animal is then presented with a mirror. Researchers then explore whether the animal shows signs of self-recognition. Critics claim that the results of these tests are susceptible to the Clever Hans effect. This test is much less definitive than when used for primates. Primates can touch the mark or the mirror, while cetaceans cannot, making their alleged self-recognition behavior less certain. Skeptics argue that behaviors said to identify self-awareness resemble existing social behaviors, so researchers could be misinterpreting self-awareness for social responses. Advocates counter that the behaviors are different from normal responses to another individual. Cetaceans show less definitive behavior of self-awareness, because they have no pointing ability. In 1995, Marten and Psarakos used video to test dolphin self-awareness. They showed dolphins real-time footage of themselves, recorded footage and another dolphin. They concluded that their evidence suggested self-awareness rather than social behavior. While this particular study has not been replicated, dolphins later "passed" the mirror test. Most cetaceans sexually mature at seven to 10 years. An exception to this is the La Plata dolphin, which is sexually mature at two years, but lives only to about 20. The sperm whale reaches sexual maturity within about 20 years and a lifespan between 50 and 100 years. For most species, reproduction is seasonal. Ovulation coincides with male fertility. This cycle is usually coupled with seasonal movements that can be observed in many species. Most toothed whales have no fixed bonds. In many species, females choose several partners during a season. Baleen whales are largely monogamous within each reproductive period. Gestation ranges from 9 to 16 months. Duration is not necessarily a function of size. Porpoises and blue whales gestate for about 11 months. As with all mammals other than marsupials and monotremes, the embryo is fed by the placenta, an organ that draws nutrients from the mother’s bloodstream. Mammals without placentas either lay minuscule eggs (monotremes) or bear minuscule offspring (marsupials). Cetaceans usually bear one calf. In the case of twins, one usually dies, because the mother cannot produce sufficient milk for both. The fetus is positioned for a tail-first delivery, so that the risk of drowning during delivery is minimal. After birth, the mother carries the infant to the surface for its first breath. At birth they are about one-third of their adult length and tend to be independently active, comparable to terrestrial mammals. Like other placental mammals, cetaceans give birth to well-developed calves and nurse them with milk from their mammary glands. 
When suckling, the mother actively splashes milk into the mouth of the calf, using the muscles of her mammary glands, as the calf has no lips. This milk usually has a high fat content, ranging from 16 to 46%, causing the calf to increase rapidly in size and weight. In many small cetaceans, suckling lasts for about four months. In large species, it lasts for over a year and involves a strong bond between mother and offspring. The mother is solely responsible for the care of the young. In some species, so-called "aunts" occasionally suckle the young. This reproductive strategy provides a few offspring that have a high survival rate. Among cetaceans, whales are distinguished by an unusual longevity compared to other higher mammals. Some species, such as the bowhead whale ("Balaena mysticetus"), can reach over 200 years. Based on the annual rings of the bony otic capsule, the age of the oldest known specimen is a male determined to be 211 years at the time of death. Upon death, whale carcasses fall to the deep ocean and provide a substantial habitat for marine life. Evidence of whale falls in present-day and fossil records shows that deep-sea whale falls support a rich assemblage of creatures, with a global diversity of 407 species, comparable to other neritic biodiversity hotspots, such as cold seeps and hydrothermal vents. Deterioration of whale carcasses happens through three stages. Initially, organisms such as sharks and hagfish scavenge the soft tissues at a rapid rate over a period of months and as long as two years. This is followed by the colonization of bones and surrounding sediments (which contain organic matter) by enrichment opportunists, such as crustaceans and polychaetes, throughout a period of years. Finally, sulfophilic bacteria reduce the bones, releasing hydrogen sulfide and enabling the growth of chemoautotrophic organisms, which, in turn, support organisms such as mussels, clams, limpets and sea snails. This stage may last for decades and supports a rich assemblage of species, averaging 185 per site. Brucellosis affects almost all mammals. It is distributed worldwide, while fishing and pollution have created pockets of high porpoise population density, which risk further infection and disease spread. "Brucella ceti", most prevalent in dolphins, has been shown to cause chronic disease, increasing the chance of failed birth and miscarriages, male infertility, neurobrucellosis, cardiopathies, bone and skin lesions, strandings and death. Until 2008, no case had ever been reported in porpoises, but isolated populations have an increased risk and consequently a high mortality rate. Molecular biology and immunology show that cetaceans are phylogenetically closely related to the even-toed ungulates (Artiodactyla). Whales' direct lineage began in the early Eocene, more than 50 million years ago, with early artiodactyls. Fossil discoveries at the beginning of the 21st century confirmed this. Most molecular biological evidence suggests that hippos are the closest living relatives. Common anatomical features include similarities in the morphology of the posterior molars, and the bony ring on the temporal bone (bulla) and the involucre, a skull feature that was previously associated only with cetaceans. The fossil record, however, does not support this relationship, because the hippo lineage dates back only about 15 million years. The most striking common feature is the talus, a bone in the upper ankle. Early cetaceans, archaeocetes, show a double-pulley talus, a feature otherwise found only in even-toed ungulates. 
Corresponding findings are from Tethys Sea deposits in northern India and Pakistan. The Tethys Sea was a shallow sea between the Asian continent and northward-bound Indian plate. Mysticetes evolved baleen around 25 million years ago and lost their teeth. The direct ancestors of today's cetaceans are probably found within the Dorudontidae whose most famous member, "Dorudon", lived at the same time as "Basilosaurus". Both groups had already developed the typical anatomical features of today's whales, such as hearing. Life in the water for a formerly terrestrial creature required significant adjustments such as the fixed bulla, which replaces the mammalian eardrum, as well as sound-conducting elements for submerged directional hearing. Their wrists were stiffened and probably contributed to the typical build of flippers. The hind legs existed, however, but were significantly reduced in size and with a vestigial pelvis connection. The fossil record traces the gradual transition from terrestrial to aquatic life. The regression of the hind limbs allowed greater flexibility of the spine. This made it possible for whales to move around with the vertical tail hitting the water. The front legs transformed into flippers, costing them their mobility on land. One of the oldest members of ancient cetaceans (Archaeoceti) is "Pakicetus" from the Middle Eocene. This is an animal the size of a wolf, whose skeleton is known only partially. It had functioning legs and lived near the shore. This suggests the animal could still move on land. The long snout had carnivorous dentition. The transition from land to sea dates to about 49 million years ago, with the "Ambulocetus" ("running whale"), discovered in Pakistan. It was up to long. The limbs of this archaeocete were leg-like, but it was already fully aquatic, indicating that a switch to a lifestyle independent from land happened extraordinarily quickly. The snout was elongated with overhead nostrils and eyes. The tail was strong and supported movement through water. "Ambulocetus" probably lived in mangroves in brackish water and fed in the riparian zone as a predator of fish and other vertebrates. Dating from about 45 million years ago are species such as "Indocetus", "Kutchicetus", "Rodhocetus" and "Andrewsiphius", all of which were adapted to life in water. The hind limbs of these species were regressed and their body shapes resemble modern whales. Protocetidae family member "Rodhocetus" is considered the first to be fully aquatic. The body was streamlined and delicate with extended hand and foot bones. The merged pelvic lumbar spine was present, making it possible to support the floating movement of the tail. It was likely a good swimmer, but could probably move only clumsily on land, much like a modern seal. Since the late Eocene, about 40 million years ago, cetaceans populated the subtropical oceans and no longer emerged on land. An example is the 18-m-long "Basilosaurus", sometimes referred to as "Zeuglodon". The transition from land to water was completed in about 10 million years. The Wadi Al-Hitan ("Whale Valley") in Egypt contains numerous skeletons of "Basilosaurus", as well as other marine vertebrates. The two parvorders are baleen whales (Mysticeti) which owe their name to their baleen, and toothed whales (Odontoceti), which have teeth shaped like cones, spades, pegs or tusks, and can perceive their environment through biosonar. 
The terms whale and dolphin are informal: The term 'great whales' covers those currently regulated by the International Whaling Commission: the Odontoceti family Physeteridae (sperm whales); and the Mysticeti families Balaenidae (right and bowhead whales), Eschrichtiidae (grey whales), and some of the Balaenopteridae (minke, Bryde's, sei, blue and fin; not Eden's and Omura's whales). The primary threats to cetaceans come from people, both directly from whaling or drive hunting and indirectly from fishing and pollution. Whaling is the practice of hunting whales, mainly baleen and sperm whales. This activity has gone on since the Stone Age. In the Middle Ages, reasons for whaling included their meat, oil usable as fuel and the jawbone, which was used in house construction. At the end of the Middle Ages, early whaling fleets targeted baleen whales, such as bowheads. In the 16th and 17th centuries, the Dutch fleet had about 300 whaling ships with 18,000 crewmen. In the 18th and 19th centuries, baleen whales especially were hunted for their baleen, which was used as a replacement for wood, or in products requiring strength and flexibility such as corsets and crinoline skirts. In addition, the spermaceti found in the sperm whale was used as a machine lubricant and the ambergris as a material for the pharmaceutical and perfume industries. In the second half of the 19th century, the explosive harpoon was invented, leading to a massive increase in the catch size. Large ships were used as "mother" ships for the whale catchers. In the first half of the 20th century, whales were of great importance as a supplier of raw materials. Whales were intensively hunted during this time; in the 1930s, 30,000 whales were killed annually. This increased to over 40,000 animals per year up to the 1960s, when stocks of large baleen whales collapsed. Most hunted whales are now threatened, with some great whale populations exploited to the brink of extinction. Atlantic and Korean gray whale populations were completely eradicated and the North Atlantic right whale population fell to some 300-600. The blue whale population is estimated to be around 14,000. The first efforts to protect whales came in 1931. Some particularly endangered species, such as the humpback whale (which then numbered about 100 animals), were placed under international protection and the first protected areas were established. In 1946, the International Whaling Commission (IWC) was established, to monitor and secure whale stocks. Whaling of 14 large species for commercial purposes was prohibited worldwide by this organization from 1985 to 2005, though some countries do not honor the prohibition. The stocks of species such as humpback and blue whales have recovered, though they are still threatened. The United States Congress passed the Marine Mammal Protection Act of 1972 to sustain the marine mammal population. It prohibits the taking of marine mammals except for several hundred per year taken in Alaska. Japanese whaling ships are allowed to hunt whales of different species for ostensibly scientific purposes. Aboriginal whaling is still permitted. About 1,200 pilot whales were taken in the Faroe Islands in 2017, and about 900 narwhals and 800 belugas per year are taken in Alaska, Canada, Greenland, and Siberia. 
About 150 minke are taken in Greenland per year, 120 gray whales in Siberia and 50 bowheads in Alaska, as aboriginal whaling, besides the 600 minke taken commercially by Norway, 300 minke and 100 sei taken by Japan and up to 100 fin whales taken by Iceland. Iceland and Norway do not recognize the ban and operate commercial whaling. Norway and Japan are committed to ending the ban. Dolphins and other smaller cetaceans are sometimes hunted in an activity known as dolphin drive hunting. This is accomplished by driving a pod together with boats, usually into a bay or onto a beach. Their escape is prevented by closing off the route to the ocean with other boats or nets. Dolphins are hunted this way in several places around the world, including the Solomon Islands, the Faroe Islands, Peru and Japan (the most well-known practitioner). Dolphins are mostly hunted for their meat, though some end up in dolphinaria. Despite the controversy, thousands of dolphins are caught in drive hunts each year. Dolphin pods often reside near large tuna shoals. This is known to fishermen, who look for dolphins to catch tuna. Dolphins are much easier to spot from a distance than tuna, since they regularly surface to breathe. The fishermen pull their nets hundreds of meters wide in a circle around the dolphin groups, in the expectation that they will net a tuna shoal. When the nets are pulled together, the dolphins become entangled under water and drown. Line fisheries in larger rivers are threats to river dolphins. A greater threat than by-catch for small cetaceans is targeted hunting. In Southeast Asia, they are sold to locals as a replacement for fish, since the region's edible fish promise higher revenues as exports. In the Mediterranean, small cetaceans are targeted to ease pressure on edible fish. A stranding occurs when a cetacean leaves the water to lie on a beach. In some cases, groups of whales strand together. The best known are mass strandings of pilot whales and sperm whales. Stranded cetaceans usually die because their own body weight compresses their lungs or breaks their ribs. Smaller whales can die of heatstroke because of their thermal insulation. The causes are not clear, and several possible reasons for mass beachings have been proposed. Since 2000, whale strandings have frequently occurred following military sonar testing. In December 2001, the US Navy admitted partial responsibility for the beaching and the deaths of several marine mammals in March 2000. The coauthor of the interim report stated that the animals had been injured by the active sonar of some Navy ships. Generally, underwater noise, which is still on the increase, is increasingly tied to strandings, because it impairs communication and the sense of direction. Climate change influences the major wind systems and ocean currents, which also lead to cetacean strandings. Researchers studying strandings on the Tasmanian coast from 1920–2002 found that more strandings occurred at certain time intervals. Years with increased strandings were associated with severe storms, which initiated cold water flows close to the coast. In nutrient-rich, cold water, cetaceans expect to find large prey animals, so they follow the cold water currents into shallower waters, where the risk of stranding is higher. Whales and dolphins that live in pods may accompany sick or debilitated pod members into shallow water, stranding them at low tide. Once stranded, large whales are crushed by their own body weight, if they cannot quickly return to the water. In addition, body temperature regulation is compromised. 
Heavy metals, residues of many herbicides and insecticides, and plastic waste flotsam are not biodegradable. Sometimes, cetaceans consume these hazardous materials, mistaking them for food items. As a result, the animals are more susceptible to disease and have fewer offspring. Damage to the ozone layer reduces plankton reproduction because of the resulting increase in radiation. This shrinks the food supply for many marine animals, but the filter-feeding baleen whales are most impacted. The nekton, too, already subject to intensive exploitation, is damaged by the radiation. Food supplies are also reduced long-term by ocean acidification due to the increased absorption of atmospheric carbon dioxide. The CO2 reacts with water to form carbonic acid, which hampers the construction of the calcium carbonate skeletons of the zooplankton food supplies that baleen whales depend on. The military and resource extraction industries operate strong sonar and blasting operations. Marine seismic surveys use loud, low-frequency sounds that reveal what lies beneath the sea floor. Vessel traffic also increases noise in the oceans. Such noise can disrupt cetacean behavior such as their use of biosonar for orientation and communication. Severe instances can panic them, driving them to the surface. This leads to gas bubbles forming in the blood and can cause decompression sickness. Naval exercises with sonar regularly result in stranded cetaceans that wash up with fatal decompression sickness. Sounds can be disruptive at considerable distances. Damage varies across frequency and species. In Aristotle's time, the 4th century BCE, whales were regarded as fish due to their superficial similarity. Aristotle, however, observed many physiological and anatomical similarities with the terrestrial vertebrates, such as blood (circulation), lungs, uterus and fin anatomy. His detailed descriptions were assimilated by the Romans, but mixed with a more accurate knowledge of the dolphins, as mentioned by Pliny the Elder in his "Natural history". In the art of this and subsequent periods, dolphins are portrayed with a high-arched head (typical of porpoises) and a long snout. The harbour porpoise was one of the most accessible species for early cetologists, because it could be seen close to land, inhabiting shallow coastal areas of Europe. Many of the findings that apply to all cetaceans were first discovered in porpoises. One of the first anatomical descriptions of the airways of a harbor porpoise dates from 1671 by John Ray. It nevertheless referred to the porpoise as a fish. In the 10th edition of Systema Naturae (1758), Swedish biologist and taxonomist Carl Linnaeus asserted that cetaceans were mammals and not fish. His groundbreaking binomial system formed the basis of modern whale classification. Cetaceans play a role in human culture. Stone Age petroglyphs, such as those in Roddoy and Reppa (Norway), and the Bangudae Petroglyphs in South Korea, depict them. Whale bones were used for many purposes. In the Neolithic settlement of Skara Brae on Orkney, vessels were made from whale vertebrae. The whale was first mentioned in ancient Greece by Homer. There, it is called Ketos, a term that initially included all large marine animals. From this was derived the Roman word for whale, Cetus. Other names were phálaina (Aristotle; Latin form ballaena) for the female and, in an ironically characteristic style, musculus ("mouse") for the male. North Sea whales were called Physeter, a name properly belonging to the sperm whale "Physeter macrocephalus". 
Whales are described in particular by Aristotle, Pliny and Ambrose. All mention both live birth and suckling. Pliny describes the problems associated with the lungs and spray tubes, and Ambrose claimed that large whales would take their young into their mouths to protect them. In the Bible especially, the leviathan plays a role as a sea monster. This creature, variously pictured as a giant crocodile, a dragon or a whale, was, according to the Bible, created by God and is to be destroyed by him again. In the Book of Job, the leviathan is described in more detail. In the Book of Jonah there is a more recognizable description of a whale alongside the prophet Jonah, who, on his flight from the city of Nineveh, is swallowed by one. Dolphins are mentioned far more often than whales. Aristotle discusses the sacred animals of the Greeks in his "Historia Animalium" and gives details of their role as aquatic animals. The Greeks admired the dolphin as a "king of the aquatic animals" and erroneously referred to it as a fish. Its intelligence was apparent both in its ability to escape from fishnets and in its collaboration with fishermen. River dolphins are known from the Ganges and – erroneously – the Nile. In the latter case they were equated with sharks and catfish. Supposedly they even attacked crocodiles. Dolphins appear in Greek mythology, where, because of their intelligence, they rescue people from drowning. They were said to love music – probably not least because of their own song – and in the legends they saved famous musicians such as Arion of Lesbos from Methymna or Kairanos from Miletus. Because of their mental faculties, dolphins were considered sacred to the god Dionysus. Dolphins belong to the domain of Poseidon and led him to his wife Amphitrite. Dolphins are associated with other gods, such as Apollo, Dionysus and Aphrodite. The Greeks paid tribute to both whales and dolphins with their own constellation. The constellation of the Whale (Ketos, lat. Cetus) lies south of the zodiac, and that of the Dolphin (Delphis, lat. Delphinus) north of it. Ancient art often included dolphin representations, including that of the Cretan Minoans. Later they appeared on reliefs, gems, lamps, coins, mosaics and gravestones. A particularly popular representation is that of Arion or Taras riding on a dolphin. In early Christian art, the dolphin is a popular motif, at times used as a symbol of Christ. St. Brendan described in his travel story "Navigatio Sancti Brendani" an encounter with a whale, between the years 565–573. He described how he and his companions entered a treeless island, which turned out to be a giant whale, which he called Jasconicus. He met this whale seven years later and rested on its back. Most descriptions of large whales from this time until the whaling era, beginning in the 17th century, were of beached whales, which resembled no other animal. This was particularly true for the sperm whale, the species most frequently stranded in larger groups. Raymond Gilmore documented seventeen sperm whales in the estuary of the Elbe from 1723 to 1959 and thirty-one animals on the coast of Great Britain in 1784. In 1827, a blue whale beached itself off the coast of Ostend. Whales were used as attractions in museums and traveling exhibitions. Whalers from the 17th to 19th centuries depicted whales in drawings and recounted tales of their occupation. Although they knew that whales were harmless giants, they described battles with harpooned animals. 
These included descriptions of sea monsters, including huge whales, sharks, sea snakes, giant squid and octopuses. Among the first whalers who described their experiences on whaling trips was Captain William Scoresby from Great Britain, who published the book "Northern Whale Fishery", describing the hunt for northern baleen whales. This was followed by Thomas Beale, a British surgeon, in his book "Some observations on the natural history of the sperm whale" in 1835; and Frederick Debell Bennett's "The tale of a whale hunt" in 1840. Whales were described in narrative literature and paintings, most famously in the novels "Moby Dick" by Herman Melville and "20,000 Leagues Under the Sea" by Jules Verne. Baleen was used to make vessel components such as the bottom of a bucket in the Scottish National Museum. The Norsemen crafted ornamented plates from baleen, sometimes interpreted as ironing boards. In the Canadian Arctic (east coast), in the Punuk and Thule culture (1000–1600 C.E.), baleen was used to construct houses in place of wood, serving as roof support for winter houses, with half of the building buried under the ground. The actual roof was probably made of animal skins that were covered with soil and moss. In the 20th century perceptions of cetaceans changed. They transformed from monsters into creatures of wonder, as science revealed them to be intelligent and peaceful animals. Hunting was replaced by whale and dolphin tourism. This change is reflected in films and novels. For example, the protagonist of the series Flipper was a bottlenose dolphin. The TV series SeaQuest DSV (1993–1996), the Free Willy films, and the book series The Hitchhiker's Guide to the Galaxy by Douglas Adams are examples. The study of whale song also produced a popular album, "Songs of the Humpback Whale". Whales and dolphins have been kept in captivity for use in education, research and entertainment since the 19th century. Beluga whales were the first whales to be kept in captivity. Other species were too rare, too shy or too big. The first was shown at Barnum's Museum in New York City in 1861. For most of the 20th century, Canada was the predominant source. They were taken from the St. Lawrence River estuary until the late 1960s, after which they were predominantly taken from the Churchill River estuary until capture was banned in 1992. Russia then became the largest provider. Belugas are caught in the Amur River delta and along Russia's eastern coast and are transported domestically to aquaria or dolphinaria in Moscow, St. Petersburg and Sochi, or exported to countries such as Canada. They have not been domesticated. As of 2006, 30 belugas lived in Canada and 28 in the United States. Forty-two deaths in captivity had been reported. A single specimen can reportedly fetch up to US$100,000 (UK£64,160). The beluga's popularity is due to its unique color and its facial expressions. The latter is possible because while most cetacean "smiles" are fixed, the extra movement afforded by the beluga's unfused cervical vertebrae allows a greater range of apparent expression. The killer whale's intelligence, trainability, striking appearance, playfulness in captivity and sheer size have made it a popular exhibit at aquaria and aquatic theme parks. From 1976 to 1997, fifty-five whales were taken from the wild in Iceland, nineteen from Japan and three from Argentina. These figures exclude animals that died during capture. 
Live captures fell dramatically in the 1990s, and by 1999 about 40% of the forty-eight animals on display in the world were captive-born. Organizations such as World Animal Protection and the Whale and Dolphin Conservation campaign against the practice of keeping them in captivity. In captivity, they often develop pathologies, such as the dorsal fin collapse seen in 60–90% of captive males. Captives have reduced life expectancy, on average only living into their 20s, although some live longer, including several over 30 years old and two, Corky II and Lolita, in their mid-40s. In the wild, females who survive infancy live 46 years on average and up to 70–80 years. Wild males who survive infancy live 31 years on average and can reach 50–60 years. Captivity usually bears little resemblance to wild habitat and captive whales' social groups are foreign to those found in the wild. Critics claim captive life is stressful due to these factors and the requirement to perform circus tricks that are not part of wild killer whale behavior. Wild killer whales may travel great distances in a day, and critics say the animals are too big and intelligent to be suitable for captivity. Captives occasionally act aggressively towards themselves, their tankmates, or humans, which critics say is a result of stress. Killer whales are well known for their performances in shows, but the number of orcas kept in captivity is small, especially when compared to the number of bottlenose dolphins, with only forty-four captive orcas being held in aquaria as of 2012. Each country has its own tank requirements; in the US, the minimum enclosure size is set by the Code of Federal Regulations, 9 CFR E § 3.104, under the "Specifications for the Humane Handling, Care, Treatment and Transportation of Marine Mammals". Aggression among captive killer whales is common. They attack each other and their trainers as well. In 2013, SeaWorld's treatment of killer whales in captivity was the basis of the movie "Blackfish", which documents the history of Tilikum, a killer whale at SeaWorld Orlando, who had been involved in the deaths of three people. The film led to proposals by some lawmakers to ban captivity of cetaceans, and led SeaWorld to announce in 2016 that it would phase out its killer whale program after various unsuccessful attempts to restore its revenues, reputation, and stock price. Dolphins and porpoises are kept in captivity. Bottlenose dolphins are the most common, as they are relatively easy to train, have a long lifespan in captivity and have a friendly appearance. Bottlenose dolphins live in captivity across the world, though exact numbers are hard to determine. Other species kept in captivity are spotted dolphins, false killer whales, common dolphins, Commerson's dolphins and rough-toothed dolphins, but all in much lower numbers. There are also fewer than ten pilot whales, Amazon river dolphins, Risso's dolphins, spinner dolphins, or tucuxi in captivity. Two unusual and rare hybrid dolphins, known as wolphins, a cross between a bottlenose dolphin and a false killer whale, are kept at Sea Life Park in Hawaii. Also, two common/bottlenose hybrids reside in captivity at Discovery Cove and SeaWorld San Diego. In repeated attempts in the 1960s and 1970s, narwhals kept in captivity died within months. A breeding pair of pygmy right whales were retained in a netted area. They were eventually released in South Africa. In 1971, SeaWorld captured a California gray whale calf in Mexico at Scammon's Lagoon. 
The calf, later named Gigi, was separated from her mother using a form of lasso attached to her flukes. Gigi was displayed at SeaWorld San Diego for a year. She was then released with a radio beacon affixed to her back; however, contact was lost after three weeks. Gigi was the first captive baleen whale. JJ, another gray whale calf, was kept at SeaWorld San Diego. JJ was an orphaned calf that beached itself in April 1997 and was transported two miles to SeaWorld. The calf was a popular attraction and behaved normally, despite separation from its mother. A year later the whale, though still smaller than average, was too big to keep in captivity, and was released on April 1, 1998. A captive Amazon river dolphin housed at Acuario de Valencia is the only trained river dolphin in captivity.
https://en.wikipedia.org/wiki?curid=7626
The Canterbury Tales The Canterbury Tales () is a collection of 24 stories that runs to over 17,000 lines written in Middle English by Geoffrey Chaucer between 1387 and 1400. In 1386, Chaucer became Controller of Customs and Justice of Peace and, in 1389, Clerk of the King's work. It was during these years that Chaucer began working on his most famous text, "The Canterbury Tales". The tales (mostly written in verse, although some are in prose) are presented as part of a story-telling contest by a group of pilgrims as they travel together from London to Canterbury to visit the shrine of Saint Thomas Becket at Canterbury Cathedral. The prize for this contest is a free meal at the Tabard Inn at Southwark on their return. After a long list of works written earlier in his career, including "Troilus and Criseyde", "House of Fame", and "Parliament of Fowls", "The Canterbury Tales" is near-unanimously seen as Chaucer's "magnum opus". He uses the tales and descriptions of its characters to paint an ironic and critical portrait of English society at the time, and particularly of the Church. Chaucer's use of such a wide range of classes and types of people was without precedent in English. Although the characters are fictional, they still offer a variety of insights into customs and practices of the time. Often, such insight leads to a variety of discussions and disagreements among people in the 14th century. For example, although various social classes are represented in these stories and all of the pilgrims are on a spiritual quest, it is apparent that they are more concerned with worldly things than spiritual. Structurally, the collection resembles Boccaccio's "Decameron", which Chaucer may have read during his first diplomatic mission to Italy in 1372. It has been suggested that the greatest contribution of "The Canterbury Tales" to English literature was the popularisation of the English vernacular in mainstream literature, as opposed to French, Italian or Latin. English had, however, been used as a literary language centuries before Chaucer's time, and several of Chaucer's contemporaries—John Gower, William Langland, the Pearl Poet, and Julian of Norwich—also wrote major literary works in English. It is unclear to what extent Chaucer was seminal in this evolution of literary preference. While Chaucer clearly states the addressees of many of his poems, the intended audience of "The Canterbury Tales" is more difficult to determine. Chaucer was a courtier, leading some to believe that he was mainly a court poet who wrote exclusively for nobility. "The Canterbury Tales" is generally thought to have been incomplete at the end of Chaucer's life. In the General Prologue, some 30 pilgrims are introduced. According to the Prologue, Chaucer's intention was to write four stories from the perspective of each pilgrim, two each on the way to and from their ultimate destination, St. Thomas Becket's shrine (making for a total of about 120 stories). Although perhaps incomplete, "The Canterbury Tales" is revered as one of the most important works in English literature. It is also open to a wide range of interpretations. The question of whether "The Canterbury Tales" is a finished work has not been answered to date. There are 84 manuscripts and four incunabula (printed before 1500) editions of the work, dating from the late medieval and early Renaissance periods, more than for any other vernacular literary text with the exception of "The Prick of Conscience". 
This is taken as evidence of the "Tales"' popularity during the century after Chaucer's death. Fifty-five of these manuscripts are thought to have been originally complete, while 28 are so fragmentary that it is difficult to ascertain whether they were copied individually or as part of a set. The "Tales" vary in both minor and major ways from manuscript to manuscript; many of the minor variations are due to copyists' errors, while it is suggested that in other cases Chaucer both added to his work and revised it as it was being copied and possibly as it was being distributed. Determining the text of the work is complicated by the question of the narrator's voice which Chaucer made part of his literary structure. Even the oldest surviving manuscripts of the "Tales" are not Chaucer's originals. The very oldest is probably MS Peniarth 392 D (called "Hengwrt"), written by a scribe shortly after Chaucer's death. Another famous example is the Ellesmere Manuscript, a manuscript handwritten by one person with illustrations by several illustrators; the tales are put in an order that many later editors have followed for centuries. The first version of "The Canterbury Tales" to be published in print was William Caxton's 1476 edition. Only 10 copies of this edition are known to exist, including one held by the British Library and one held by the Folger Shakespeare Library. In 2004, Linne Mooney claimed that she was able to identify the scrivener who worked for Chaucer as an Adam Pinkhurst. Mooney, then a professor at the University of Maine and a visiting fellow at Corpus Christi College, Cambridge, said she could match Pinkhurst's signature, on an oath he signed, to his handwriting on a copy of "The Canterbury Tales" that might have been transcribed from Chaucer's working copy. Recent scholarship has cast severe doubt upon that identification. In the absence of consensus as to whether or not a complete version of the "Tales" exists, there is also no general agreement regarding the order in which Chaucer intended the stories to be placed. Textual and manuscript clues have been adduced to support the two most popular modern methods of ordering the tales. Some scholarly editions divide the "Tales" into ten "Fragments". The tales that make up a Fragment are closely related and contain internal indications of their order of presentation, usually with one character speaking to and then stepping aside for another character. However, between Fragments, the connection is less obvious. Consequently, there are several possible orders; the one most frequently seen in modern editions follows the numbering of the Fragments (ultimately based on the Ellesmere order). Victorians frequently used the nine "Groups", which was the order used by Walter William Skeat whose edition "Chaucer: Complete Works" was used by Oxford University Press for most of the twentieth century, but this order is now seldom followed. An alternative ordering (seen in an early manuscript containing "The Canterbury Tales", the early-fifteenth century Harley MS. 7334) places Fragment VIII before VI. Fragments I and II almost always follow each other, just as VI and VII, IX and X do in the oldest manuscripts. Fragments IV and V, by contrast, vary in location from manuscript to manuscript. Chaucer wrote in a London dialect of late Middle English, which has clear differences from Modern English. From philological research, some facts are known about the pronunciation of English during the time of Chaucer. 
Chaucer pronounced "-e" at the end of many words, so that "care" was , not as in Modern English. Other silent letters were also pronounced, so that the word "knight" was , with both the "k" and the "gh" pronounced, not . In some cases, vowel letters in Middle English were pronounced very differently from Modern English, because the Great Vowel Shift had not yet happened. For instance, the long "e" in "wepyng" "weeping" was pronounced as , as in modern German or Italian, not as . Below is an IPA transcription of the opening lines of "The Merchant's Prologue": Although no manuscript exists in Chaucer's own hand, two were copied around the time of his death by Adam Pinkhurst, a scribe with whom he may have worked closely before, giving a high degree of confidence that Chaucer himself wrote the "Tales". Because the final "-e" sound was lost soon after Chaucer's time, scribes did not accurately copy it, and this gave scholars the impression that Chaucer himself was inconsistent in using it. It has now been established, however, that "-e" was an important part of Chaucer's grammar, and helped to distinguish singular adjectives from plural and subjunctive verbs from indicative. No other work prior to Chaucer's is known to have set a collection of tales within the framework of pilgrims on a pilgrimage. It is obvious, however, that Chaucer borrowed portions, sometimes very large portions, of his stories from earlier stories, and that his work was influenced by the general state of the literary world in which he lived. Storytelling was the main entertainment in England at the time, and storytelling contests had been around for hundreds of years. In 14th-century England the English Pui was a group with an appointed leader who would judge the songs of the group. The winner received a crown and, as with the winner of "The Canterbury Tales", a free dinner. It was common for pilgrims on a pilgrimage to have a chosen "master of ceremonies" to guide them and organise the journey. Harold Bloom suggests that the structure is mostly original, but inspired by the "pilgrim" figures of Dante and Virgil in "The Divine Comedy". New research suggests that the General Prologue, in which the innkeeper and host Harry Bailey introduces each pilgrim, is a pastiche of the historical Harry Bailey's surviving 1381 poll-tax account of Southwark's inhabitants. "The Decameron" by Giovanni Boccaccio contains more parallels to "The Canterbury Tales" than any other work. Like the "Tales", it features a number of narrators who tell stories along a journey they have undertaken (to flee from the Black Death). It ends with an apology by Boccaccio, much like Chaucer's Retraction to the "Tales". A quarter of the tales in "The Canterbury Tales" parallel a tale in the "Decameron", although most of them have closer parallels in other stories. Some scholars thus find it unlikely that Chaucer had a copy of the work on hand, surmising instead that he may have merely read the "Decameron" at some point. Each of the tales has its own set of sources that have been suggested by scholars, but a few sources are used frequently over several tales. They include poetry by Ovid, the Bible in one of the many vulgate versions in which it was available at the time (the exact one is difficult to determine), and the works of Petrarch and Dante. Chaucer was the first author to use the work of these last two, both Italians. Boethius' "Consolation of Philosophy" appears in several tales, as the works of John Gower do. Gower was a known friend to Chaucer. 
Chaucer also seems to have borrowed from numerous religious encyclopaedias and liturgical writings, such as John Bromyard's "Summa praedicantium", a preacher's handbook, and Jerome's "Adversus Jovinianum". Many scholars say there is a good possibility Chaucer met Petrarch or Boccaccio. "The Canterbury Tales" is a collection of stories built around a frame narrative or frame tale, a common and already long-established genre of its period. Chaucer's "Tales" differs from most other story "collections" in this genre chiefly in its intense variation. Most story collections focused on a theme, usually a religious one. Even in the "Decameron", storytellers are encouraged to stick to the theme decided on for the day. The idea of a pilgrimage to get such a diverse collection of people together for literary purposes was also unprecedented, though "the association of pilgrims and storytelling was a familiar one". Introducing a competition among the tales encourages the reader to compare the tales in all their variety, and allows Chaucer to showcase the breadth of his skill in different genres and literary forms. While the structure of the "Tales" is largely linear, with one story following another, it is also much more than that. In the "General Prologue", Chaucer describes not the tales to be told, but the people who will tell them, making it clear that structure will depend on the characters rather than a general theme or moral. This idea is reinforced when the Miller interrupts to tell his tale after the Knight has finished his. Having the Knight go first gives one the idea that all will tell their stories by class, with the Monk following the Knight. However, the Miller's interruption makes it clear that this structure will be abandoned in favour of a free and open exchange of stories among all classes present. General themes and points of view arise as the characters tell their tales, which are responded to by other characters in their own tales, sometimes after a long lapse in which the theme has not been addressed. Lastly, Chaucer does not pay much attention to the progress of the trip, to the time passing as the pilgrims travel, or to specific locations along the way to Canterbury. His writing of the story seems focused primarily on the stories being told, and not on the pilgrimage itself. The variety of Chaucer's tales shows the breadth of his skill and his familiarity with many literary forms, linguistic styles, and rhetorical devices. Medieval schools of rhetoric at the time encouraged such diversity, dividing literature (as Virgil suggests) into high, middle, and low styles as measured by the density of rhetorical forms and vocabulary. Another popular method of division came from St. Augustine, who focused more on audience response and less on subject matter (a Virgilian concern). Augustine divided literature into "majestic persuades", "temperate pleases", and "subdued teaches". Writers were encouraged to write in a way that kept in mind the speaker, subject, audience, purpose, manner, and occasion. Chaucer moves freely between all of these styles, showing favouritism to none. He not only considers the readers of his work as an audience, but the other pilgrims within the story as well, creating a multi-layered rhetoric. With this, Chaucer avoids targeting any specific audience or social class of readers, focusing instead on the characters of the story and writing their tales with a skill proportional to their social status and learning. 
However, even the lowest characters, such as the Miller, show surprising rhetorical ability, although their subject matter is more lowbrow. Vocabulary also plays an important part, as those of the higher classes refer to a woman as a "lady", while the lower classes use the word "wenche", with no exceptions. At times the same word will mean entirely different things between classes. The word "pitee", for example, is a noble concept to the upper classes, while in the "Merchant's Tale" it refers to sexual intercourse. Again, however, tales such as the "Nun's Priest's Tale" show surprising skill with words among the lower classes of the group, while the "Knight's Tale" is at times extremely simple. Chaucer uses the same meter throughout almost all of his tales, with the exception of "Sir Thopas" and his prose tales. It is a decasyllable line, probably borrowed from French and Italian forms, with riding rhyme and, occasionally, a caesura in the middle of a line. His meter would later develop into the heroic meter of the 15th and 16th centuries and is an ancestor of iambic pentameter. He avoids allowing couplets to become too prominent in the poem, and four of the tales (the Man of Law's, Clerk's, Prioress', and Second Nun's) use rhyme royal. "The Canterbury Tales" was written during a turbulent time in English history. The Catholic Church was in the midst of the Western Schism and, although it was still the only Christian authority in Western Europe, it was the subject of heavy controversy. Lollardy, an early English religious movement led by John Wycliffe, is mentioned in the "Tales", which also mention a specific incident involving pardoners (sellers of indulgences, which were believed to relieve the temporal punishment due for sins that were already forgiven in the Sacrament of Confession) who nefariously claimed to be collecting for St. Mary Rouncesval hospital in England. "The Canterbury Tales" is among the first English literary works to mention paper, a relatively new invention that allowed dissemination of the written word never before seen in England. Political clashes, such as the 1381 Peasants' Revolt and clashes ending in the deposing of King Richard II, further reveal the complex turmoil surrounding Chaucer in the time of the "Tales"' writing. Many of his close friends were executed and he himself moved to Kent to get away from events in London. While some readers look to interpret the characters of "The Canterbury Tales" as historical figures, other readers choose to interpret its significance in less literal terms. After analysis of Chaucer's diction and historical context, his work appears to develop a critique of society during his lifetime. Within a number of his descriptions, his comments can appear complimentary in nature, but through clever language, the statements are ultimately critical of the pilgrim's actions. It is unclear whether Chaucer would intend for the reader to link his characters with actual persons. Instead, it appears that Chaucer creates fictional characters to be general representations of people in such fields of work. With an understanding of medieval society, one can detect subtle satire at work. The "Tales" reflect diverse views of the Church in Chaucer's England. After the Black Death, many Europeans began to question the authority of the established Church. 
Some turned to lollardy, while others chose less extreme paths, starting new monastic orders or smaller movements exposing church corruption in the behaviour of the clergy, false church relics or abuse of indulgences. Several characters in the "Tales" are religious figures, and the very setting of the pilgrimage to Canterbury is religious (although the prologue comments ironically on its merely seasonal attractions), making religion a significant theme of the work. Two characters, the Pardoner and the Summoner, whose roles apply the Church's secular power, are both portrayed as deeply corrupt, greedy, and abusive. Pardoners in Chaucer's day were those people from whom one bought Church "indulgences" for forgiveness of sins, who were guilty of abusing their office for their own gain. Chaucer's Pardoner openly admits the corruption of his practice while hawking his wares. Summoners were Church officers who brought sinners to the Church court for possible excommunication and other penalties. Corrupt summoners would write false citations and frighten people into bribing them to protect their interests. Chaucer's Summoner is portrayed as guilty of the very kinds of sins for which he is threatening to bring others to court, and is hinted as having a corrupt relationship with the Pardoner. In The Friar's Tale, one of the characters is a summoner who is shown to be working on the side of the devil, not God. Churchmen of various kinds are represented by the Monk, the Prioress, the Nun's Priest, and the Second Nun. Monastic orders, which originated from a desire to follow an ascetic lifestyle separated from the world, had by Chaucer's time become increasingly entangled in worldly matters. Monasteries frequently controlled huge tracts of land on which they made significant sums of money, while peasants worked in their employ. The Second Nun is an example of what a Nun was expected to be: her tale is about a woman whose chaste example brings people into the church. The Monk and the Prioress, on the other hand, while not as corrupt as the Summoner or Pardoner, fall far short of the ideal for their orders. Both are expensively dressed, show signs of lives of luxury and flirtatiousness and show a lack of spiritual depth. The Prioress's Tale is an account of Jews murdering a deeply pious and innocent Christian boy, a blood libel against Jews that became a part of English literary tradition. The story did not originate in the works of Chaucer and was well known in the 14th century. Pilgrimage was a very prominent feature of medieval society. The ultimate pilgrimage destination was Jerusalem, but within England Canterbury was a popular destination. Pilgrims would journey to cathedrals that preserved relics of saints, believing that such relics held miraculous powers. Saint Thomas Becket, Archbishop of Canterbury, had been murdered in Canterbury Cathedral by knights of Henry II during a disagreement between Church and Crown. Miracle stories connected to his remains sprang up soon after his death, and the cathedral became a popular pilgrimage destination. The pilgrimage in the work ties all of the stories together and may be considered a representation of Christians' striving for heaven, despite weaknesses, disagreement, and diversity of opinion. The upper class or nobility, represented chiefly by the Knight and his Squire, was in Chaucer's time steeped in a culture of chivalry and courtliness. 
Nobles were expected to be powerful warriors who could be ruthless on the battlefield yet mannerly in the King's Court and Christian in their actions. Knights were expected to form a strong social bond with the men who fought alongside them, but an even stronger bond with a woman whom they idealised to strengthen their fighting ability. Though the aim of chivalry was noble action, its conflicting values often degenerated into violence. Church leaders frequently tried to place restrictions on jousts and tournaments, which at times ended in the death of the loser. The Knight's Tale shows how the brotherly love of two fellow knights turns into a deadly feud at the sight of a woman whom both idealise. To win her, both are willing to fight to the death. Chivalry was on the decline in Chaucer's day, and it is possible that The Knight's Tale was intended to show its flaws, although this is disputed. Chaucer himself had fought in the Hundred Years' War under Edward III, who heavily emphasised chivalry during his reign. Two tales, "Sir Thopas" and "The Tale of Melibee", are told by Chaucer himself, who is travelling with the pilgrims in his own story. Both tales seem to focus on the ill-effects of chivalry—the first making fun of chivalric rules and the second warning against violence. The "Tales" constantly reflect the conflict between classes. One example is the division of the three estates: the characters are all divided into three distinct classes, the classes being "those who pray" (the clergy), "those who fight" (the nobility), and "those who work" (the commoners and peasantry). Most of the tales are interlinked by common themes, and some "quit" (reply to or retaliate against) other tales. Convention is followed when the Knight begins the game with a tale, as he represents the highest social class in the group. But when he is followed by the Miller, who represents a lower class, it sets the stage for the "Tales" to reflect both a respect for and a disregard for upper class rules. Helen Cooper, as well as Mikhail Bakhtin and Derek Brewer, calls this opposition "the ordered and the grotesque, Lent and Carnival, officially approved culture and its riotous, and high-spirited underside." Several works of the time contained the same opposition. Chaucer's characters each express different—sometimes vastly different—views of reality, creating an atmosphere of testing, empathy, and relativism. As Helen Cooper says, "Different genres give different readings of the world: the fabliau scarcely notices the operations of God, the saint's life focuses on those at the expense of physical reality, tracts and sermons insist on prudential or orthodox morality, romances privilege human emotion." The sheer number of varying persons and stories renders the "Tales" as a set unable to arrive at any definite truth or reality. The concept of liminality figures prominently within "The Canterbury Tales". A liminal space, which can be both geographical as well as metaphorical or spiritual, is the transitional or transformational space between a "real" (secure, known, limited) world and an unknown or imaginary space of both risk and possibility. The notion of a pilgrimage is itself a liminal experience, because it centres on travel between destinations and because pilgrims undertake it hoping to become more holy in the process. 
Thus, the structure of "The Canterbury Tales" itself is liminal; it not only covers the distance between London and Canterbury, but the majority of the tales refer to places entirely outside the geography of the pilgrimage. Jean Jost summarises the function of liminality in "The Canterbury Tales", Liminality is also evident in the individual tales. An obvious instance of this is the Friar's Tale in which the yeoman devil is a liminal figure because of his transitory nature and function; it is his purpose to issue souls from their current existence to hell, an entirely different one. The Franklin's Tale is a Breton Lai tale, which takes the tale into a liminal space by invoking not only the interaction of the supernatural and the mortal, but also the relation between the present and the imagined past. It is sometimes argued that the greatest contribution that this work made to English literature was in popularising the literary use of the vernacular English, rather than French or Latin. English had, however, been used as a literary language for centuries before Chaucer's life, and several of Chaucer's contemporaries—John Gower, William Langland, and the Pearl Poet—also wrote major literary works in English. It is unclear to what extent Chaucer was responsible for starting a trend rather than simply being part of it. Although Chaucer had a powerful influence in poetic and artistic terms, which can be seen in the great number of forgeries and mistaken attributions (such as "The Floure and the Leafe", which was translated by John Dryden), modern English spelling and orthography owe much more to the innovations made by the Court of Chancery in the decades during and after his lifetime. While Chaucer clearly states the addressees of many of his poems (the "Book of the Duchess" is believed to have been written for John of Gaunt on the occasion of his wife's death in 1368), the intended audience of "The Canterbury Tales" is more difficult to determine. Chaucer was a courtier, leading some to believe that he was mainly a court poet who wrote exclusively for the nobility. He is referred to as a noble translator and poet by Eustache Deschamps and by his contemporary John Gower. It has been suggested that the poem was intended to be read aloud, which is probable as this was a common activity at the time. However, it also seems to have been intended for private reading as well, since Chaucer frequently refers to himself as the writer, rather than the speaker, of the work. Determining the intended audience directly from the text is even more difficult, since the audience is part of the story. This makes it difficult to tell when Chaucer is writing to the fictional pilgrim audience or the actual reader. Chaucer's works may have been distributed in some form during his lifetime in part or in whole. Scholars speculate that manuscripts were circulated among his friends, but likely remained unknown to most people until after his death. However, the speed with which copyists strove to write complete versions of his tale in manuscript form shows that Chaucer was a famous and respected poet in his own day. The Hengwrt and Ellesmere manuscripts are examples of the care taken to distribute the work. More manuscript copies of the poem exist than for any other poem of its day except "The Prick of Conscience", causing some scholars to give it the medieval equivalent of bestseller status. 
Even the most elegant of the illustrated manuscripts, however, is not nearly as highly decorated as the work of authors of more respectable works such as John Lydgate's religious and historical literature. John Lydgate and Thomas Occleve were among the first critics of Chaucer's "Tales", praising the poet as the greatest English poet of all time and the first to show what the language was truly capable of poetically. This sentiment was universally agreed upon by later critics into the mid-15th century. Glosses included in "The Canterbury Tales" manuscripts of the time praised him highly for his skill with "sentence" and rhetoric, the two pillars by which medieval critics judged poetry. The most respected of the tales was at this time the Knight's, as it was full of both. The incompleteness of the "Tales" led several medieval authors to write additions and supplements to the tales to make them more complete. Some of the oldest existing manuscripts of the tales include new or modified tales, showing that even early on, such additions were being created. These emendations included various expansions of the "Cook's Tale", which Chaucer never finished, "The Plowman's Tale", "The Tale of Gamelyn", the "Siege of Thebes", and the "Tale of Beryn". The "Tale of Beryn", written by an anonymous author in the 15th century, is preceded by a lengthy prologue in which the pilgrims arrive at Canterbury and their activities there are described. While the rest of the pilgrims disperse throughout the town, the Pardoner seeks the affections of Kate the barmaid, but faces problems dealing with the man in her life and the innkeeper Harry Bailey. As the pilgrims turn back home, the Merchant restarts the storytelling with "Tale of Beryn". In this tale, a young man named Beryn travels from Rome to Egypt to seek his fortune only to be cheated by other businessmen there. He is then aided by a local man in getting his revenge. The tale comes from the French tale "Bérinus" and exists in a single early manuscript of the tales, although it was printed along with the tales in a 1721 edition by John Urry. John Lydgate wrote "The Siege of Thebes" in about 1420. Like the "Tale of Beryn", it is preceded by a prologue in which the pilgrims arrive in Canterbury. Lydgate places himself among the pilgrims as one of them and describes how he was a part of Chaucer's trip and heard the stories. He characterises himself as a monk and tells a long story about the history of Thebes before the events of the "Knight's Tale". John Lydgate's tale was popular early on and exists in old manuscripts both on its own and as part of the "Tales". It was first printed as early as 1561 by John Stow, and several editions for centuries after followed suit. There are actually two versions of "The Plowman's Tale", both of which are influenced by the story "Piers Plowman", a work written during Chaucer's lifetime. Chaucer describes a Plowman in the "General Prologue" of his tales, but never gives him his own tale. One tale, written by Thomas Occleve, describes the miracle of the Virgin and the Sleeveless Garment. Another tale features a pelican and a griffin debating church corruption, with the pelican taking a position of protest akin to John Wycliffe's ideas. "The Tale of Gamelyn" was included in an early manuscript version of the tales, Harley 7334, which is notorious for being one of the lower-quality early manuscripts in terms of editor error and alteration. 
It is now widely rejected by scholars as an authentic Chaucerian tale, although some scholars think he may have intended to rewrite the story as a tale for the Yeoman. Dates for its authorship vary from 1340 to 1370. Many literary works (both fiction and non-fiction alike) have used a similar frame narrative to "The Canterbury Tales" as an homage. Science-fiction writer Dan Simmons wrote his Hugo Award-winning novel "Hyperion" based on an extra-planetary group of pilgrims. Evolutionary biologist Richard Dawkins used "The Canterbury Tales" as a structure for his 2004 non-fiction book about evolution titled "The Ancestor's Tale". His animal pilgrims are on their way to find the common ancestor, each telling a tale about evolution. Henry Dudeney's book "The Canterbury Puzzles" contains a part reputedly lost from what modern readers know as Chaucer's tales. Historical-mystery novelist P.C. Doherty wrote a series of novels based on "The Canterbury Tales", making use of both the story frame and Chaucer's characters. Canadian author Angie Abdou translates "The Canterbury Tales" to a cross section of people, all snow-sports enthusiasts but from different social backgrounds, converging on a remote back-country ski cabin in British Columbia in the 2011 novel "The Canterbury Trail". "The Two Noble Kinsmen", by William Shakespeare and John Fletcher, a retelling of "The Knight's Tale", was first performed in 1613 or 1614 and published in 1634. In 1961, Erik Chisholm completed his opera, "The Canterbury Tales". The opera is in three acts: The Wyf of Bath's Tale, The Pardoner's Tale and The Nun's Priest's Tale. Nevill Coghill's modern English version formed the basis of a musical version that was first staged in 1964. "A Canterbury Tale", a 1944 film jointly written and directed by Michael Powell and Emeric Pressburger, is loosely based on the narrative frame of Chaucer's tales. The movie opens with a group of medieval pilgrims journeying through the Kentish countryside as a narrator speaks the opening lines of the "General Prologue". The scene then makes a now-famous transition to the time of World War II. From that point on, the film follows a group of strangers, each with his or her own story and in need of some kind of redemption, who are making their way to Canterbury together. The film's main story takes place in an imaginary town in Kent and ends with the main characters arriving at Canterbury Cathedral, bells pealing and Chaucer's words again resounding. "A Canterbury Tale" is recognised as one of the Powell-Pressburger team's most poetic and artful films. It was produced as wartime propaganda, using Chaucer's poetry, referring to the famous pilgrimage, and offering photography of Kent to remind the public of what made Britain worth fighting for. In one scene a local historian lectures an audience of British soldiers about the pilgrims of Chaucer's time and the vibrant history of England. Pier Paolo Pasolini's 1972 film "The Canterbury Tales" features several of the tales, some of which keep close to the original tale and some of which are embellished. The "Cook's Tale", for instance, which is incomplete in the original version, is expanded into a full story, and the "Friar's Tale" extends the scene in which the Summoner is dragged down to hell. The film includes these two tales as well as the "Miller's Tale", the "Summoner's Tale", the "Wife of Bath's Tale", and the "Merchant's Tale". 
On 26 April 1986, American radio personality Garrison Keillor opened "The News from Lake Wobegon" portion of the first live TV broadcast of his "A Prairie Home Companion" radio show with a reading of the original Middle English text of the General Prologue. He commented, "Although those words were written more than 600 years ago, they still describe spring." The 2001 film "A Knight's Tale" starring Heath Ledger takes its title from Chaucer's "The Knight's Tale" and features Chaucer as a character. Television adaptations include Alan Plater's 1975 re-telling of the stories in a series of plays for BBC2: "Trinity Tales". In 2003, the BBC again featured modern re-tellings of selected tales.
https://en.wikipedia.org/wiki?curid=7627
Christine de Pizan Christine de Pizan or Pisan (), born Cristina da Pizzano (1364 – c. 1430), was a poet and author at the court of King Charles VI of France. She is best remembered for defending women in "The Book of the City of Ladies" and "The Treasure of the City of Ladies". Venetian by birth, Christine was a prominent moralist and political thinker in medieval France. Christine's patrons included dukes Louis I of Orleans, Philip the Bold, and John the Fearless. Her books of advice to princesses, princes, and knights remained in print until the 16th century. In recent decades, Christine's work has been returned to prominence by the efforts of scholars Charity Cannon Willard, Earl Jeffrey Richards, Suzanne Solente, Mathilde Laigle and Marie-Josephe Pinet. Christine de Pizan was born in 1364 in Venice, Italy. She was the daughter of Tommaso di Benvenuto da Pizzano. Her father became known as Thomas de Pizan, named for the family's origins in the town of Pizzano, southeast of Bologna. Her father worked as a physician, court astrologer and Councillor of the Republic of Venice. Thomas de Pizan accepted an appointment to the court of Charles V of France as the king's astrologer and in 1368 Christine moved to Paris. In 1379 Christine de Pizan married the notary and royal secretary Etienne du Castel. She had three children. Her daughter became a nun at the Dominican Abbey in Poissy in 1397 as a companion to the King's daughter Marie. Christine's husband died of the plague in 1389, and her father had died the year before. Christine was left to support her mother and her children. When she tried to collect money from her husband's estate, she faced complicated lawsuits regarding the recovery of salary due her husband. On 4 June 1389, in a judgment concerning a lawsuit filed against her by the archbishop of Sens and François Chanteprime, councillors of the King, Christine was styled "damoiselle" and widow of "Estienne du Castel". In order to support herself and her family, Christine turned to writing. By 1393, she was writing love ballads, which caught the attention of wealthy patrons within the court. Christine became a prolific writer. Her involvement in the production of her books and her skillful use of patronage in turbulent political times has earned her the title of the first professional woman of letters in Europe. Although Italian by birth, Christine expressed a fervent nationalism for France. Affectively and financially she became attached to the French royal family, donating or dedicating her early ballads to its members, including Isabeau of Bavaria, Louis I, Duke of Orléans, and Marie of Berry. Of Queen Isabeau she wrote in 1402 "High, excellent crowned Queen of France, very redoubtable princess, powerful lady, born at a lucky hour". France was ruled by Charles VI who experienced a series of mental breakdowns, causing a crisis of leadership for the French monarchy. He was often absent from court and could eventually only make decisions with the approval of a royal council. Queen Isabeau was nominally in charge of governance when her husband was absent from court, but could not extinguish the quarrel between members of the royal family. In the past, Blanche of Castile had played a central role in the stability of the royal court and had acted as regent of France. Christine published a series of works on the virtues of women, referencing Queen Blanche and dedicating them to Queen Isabeau. 
Christine believed that France had been founded by the descendants of the Trojans and that its governance by the royal family adhered to the Aristotelian ideal. In 1400 Christine published "L'Épistre de Othéa a Hector" ("Letter of Othea to Hector"). When first published, the book was dedicated to Louis of Orléans, the brother of Charles VI, who was seen at court as a potential regent of France. In "L'Épistre de Othéa a Hector" Hector of Troy is tutored in statecraft and the political virtues by the goddess of wisdom Othéa. Christine produced richly illustrated luxury editions of "L'Épistre de Othéa a Hector" in 1400. Between 1408 and 1415 Christine produced further editions of the book. Throughout her career she produced rededicated editions of the book with customised prologues for patrons, including an edition for Philip the Bold in 1403, and editions for Jean of Berry and Henry IV of England in 1404. Patronage changed in the late Middle Ages. Texts were still produced and circulated as continuous roll manuscripts, but were increasingly replaced by the bound codex. Members of the royal family became patrons of writers by commissioning books. As materials became cheaper a book trade developed, so writers and bookmakers produced books for the French nobility, who could afford to establish their own libraries. Christine thus had no single patron who consistently supported her financially and became associated with the royal court and the different factions of the royal family – Burgundy, Orleans and Berry – each having their own respective courts. Throughout her career Christine undertook concurrent paid projects for individual patrons and subsequently published these works for dissemination among the nobility of France. In 1402 Christine became involved in a renowned literary controversy, the "Querelle du Roman de la Rose". Christine instigated this debate by questioning the literary merits of Jean de Meun's popular "Romance of the Rose". "Romance of the Rose" satirizes the conventions of courtly love while critically depicting women as nothing more than seducers. In the midst of the Hundred Years' War between French and English kings, Christine published the dream allegory "Le Chemin de long estude" in 1403. In the first-person narrative she and the Cumaean Sibyl travel together and witness a debate on the state of the world between the four allegories – Wealth, Nobility, Chivalry and Wisdom. Christine suggests that justice could be brought to earth by a single monarch who had the necessary qualities. In 1404 Christine chronicled the life of Charles V, portraying him as the ideal king and political leader, in "Le Livre des fais et bonnes meurs du sage roy Charles V". The chronicle had been commissioned by Philip the Bold and in the chronicle Christine passed judgement on the state of the royal court. When praising the efforts of Charles V in studying Latin, Christine lamented that her contemporaries had to resort to strangers to read the law to them. Before the book was completed, Philip the Bold died, and Christine offered the book to Jean of Berry in 1405, finding a new royal patron. She was paid 100 livres for the book by Philip's successor John the Fearless in 1406 and would receive payments from his court for books until 1412. In 1405 Christine published "Le Livre de la cité des dames" ("The Book of the City of Ladies") and "Le Livre des trois vertus" ("Book of Three Virtues", known as "The Treasure of the City of Ladies"). 
In "Le Livre de la cité des dames" Christine presented intellectual and royal female leaders, such as Queen Zenobia. Christine dedicated "Le Livre des trois vertus" to the dauphine Margaret of Nevers, advising the young princess on what she had to learn. As Queen Isabeau's oldest son Louis of Guyenne came of age Christine addressed three works to him with the intention of promoting wise and effective government. The earliest of the three works has been lost. In "Livre du Corps de policie" ("The Book of the Body Politic"), published in 1407 and dedicated to the dauphin, Christine set out a political treatise which analysed and described the customs and governments of late medieval European societies. Christine favoured hereditary monarchies, arguing in reference to Italian city-states that were governed by princes or trades, that "such governance is not profitable at all for the common good". Christine also devoted several chapters to the duties of a king as military leader and she described in detail the role of the military class in society. France was at the verge of all out civil war since 1405. In 1407 John I of Burgundy, also known as John the Fearless, plunged France into a crisis when he had Louis of Orléans assassinated. The Duke of Burgundy fled Paris when his complicity in the assassination became known, but was appointed regent of France on behalf of Charles VI in late 1408 after his military victory in the Battle of Othee. It is not certain who commissioned Christine to write a treatise on military warfare, but in 1410 Christine published the manual on chivalry, entitled "Livre des fais d'armes et de chevalerie" ("The Book of Feats of Arms and of Chivalry"). Christine received 200 livre from the royal treasury in early 1411 for the book. In the preface Christine explained that she published the manual in French so that it could be read by practitioners of war not well versed in Latin. The book opened with a discussion of the just war theory advanced by Honoré Bonet. Christine also referenced classical writers on military warfare, such as Vegetius, Frontinus and Valerius Maximus. Christine discussed contemporary matters relating to what she termed the "Laws of War", such as capital punishment, the payment of troops, as well as the treatment of noncombatants and prisoners of war. Christine opposed trial by combat, but articulated the medieval belief that God is the lord and governor of battle and that wars are the proper execution of justice. Nevertheless, she acknowledged that in a war "many great wrongs, extortions, and grievous deeds are committed, as well as raping, killings, forced executions, and arsons". Christine limited the right to wage war to sovereign kings because as head of states they were responsible for the welfare of their subjects. In 1411 the royal court published an edict prohibiting nobles from raising an army. After civil war had broken out in France, Christine in 1413 offered guidance to the young dauphin on how to govern well, publishing "Livre de la paix" ("The Book of Peace"). "Livre de la paix" would be Christine's last major work and contained detailed formulations of her thoughts on good governance. The period was marked by bouts of civil war and failed attempts to bring John the Fearless to justice for assassinating his cousin. Christine addressed Louis of Guyenne directly, encouraging him to continue the quest for peace in France. 
She argued that "Every kingdom divided in itself will be made desolate, and every city and house divided against itself will not stand". Christine was acquainted with William of Tignonville, an ambassador to the royal court, and referenced Tignonville's speeches on the Armagnac–Burgundian Civil War. Christine's drew a utopian vision of a just ruler, who could take advice from those older or wiser. In arguing that peace and justice were possible on earth as well as in heaven, Christine was influenced by Dante, who she had referenced in "Le Chemin de long estude". Christine encouraged the dauphin to deserve respect, by administering justice promptly and living by worthy example. Christine urged young princes to make themselves available to their subjects, avoid anger and cruelty, to act liberally, clement and truthful. Christine's interpretation of the virtuous Christian prince built on the advice to rulers by St Benedict, Peter Abelard and Cicero. In 1414 Christine presented Queen Isabeau with a lavishly decorated collection of her works (now known as "British Library Harley 4431"). The bound book contained 30 of Christine's writings and 130 miniatures. She had been asked by the queen to produce the book. Noted for its quality miniature illuminations, Christine herself and her past royal patrons were depicted. As a mark of ownership and authorship the opening frontispiece depicted Queen Isabeau being presented with the book by Christine. In 1418 Christine published a consolation for women who had lost family members in the Battle of Agincourt under the title "Epistre de la prison de vie Humaine" ("Letter Concerning the Prison of Human Life"). In it Christine did not express any optimism or hope that peace could be found on earth. Instead she expressed the view that the soul was trapped in the body and imprisoned in hell. The previous year she had presented the "Epistre de la prison de vie Humaine" to Marie of Berry, the administrator of the Duchy of Bourbon whose husband was held in English captivity. Historians assume that Christine spent the last ten years of her life in the Dominican Convent of Poissy because of the civil war and the occupation of Paris by the English. Away from the royal court her literary activity ceased. However, in 1429, after Joan of Arc's military victory over the English, Christine published the poem "Ditié de Jehanne d'Arc" ("The Tale of Joan of Arc"). Published just a few days after the coronation of Charles VII, Christine expressed renewed optimism. She cast Joan as the fulfilment of prophecies by Merlin, Cumaean Sibyl and Saint Bede, helping Charles VII to fulfill the predictions of Charlemagne. Christine is believed to have died in 1430, before Joan was tried and executed by the English. After her death the political crisis in France was resolved when Queen Isabeau's only surviving son Charles VII and John the Fearless' successor as Duke of Burgundy, Philip the Good, signed the Peace of Arras in 1435. Christine produced a large number of vernacular works, in both prose and verse. Her works include political treatises, mirrors for princes, epistles, and poetry. Christine's book "Le Dit de la Rose" ("The Tale of the Rose") was published in 1402 as a direct attack on Jean de Meun's extremely popular book "Romance of the Rose" which characterised women as seducers. Christine claimed that Meun's views were misogynistic, vulgar, immoral, and slanderous to women. 
The exchange between the two authors involved them sending each other their treatises, defending their respective views. At the height of the exchange Christine published "Querelle du Roman de la Rose" ("Letters on the Debate of the Rose"). In this particular apologetic response, Christine belittles her own writing style, employing a rhetorical strategy by writing against the grain of her meaning, also known as antiphrasis. By 1405 Christine had completed her most famous literary works, "The Book of the City of Ladies" ("Le Livre de la cité des dames") and "The Treasure of the City of Ladies" ("Le Livre des trois vertus"). The first of these shows the importance of women's past contributions to society, and the second strives to teach women of all estates how to cultivate useful qualities. In "The Book of the City of Ladies" Christine created a symbolic city in which women are appreciated and defended. She constructed three allegorical figures – Reason, Justice, and Rectitude – in the common pattern of literature in that era, when many books and poetry utilized stock allegorical figures to express ideas or emotions. She enters into a dialogue, a movement between question and answer, with these allegorical figures, conducted from a completely female perspective. Together, they create a forum to speak on issues of consequence to all women. Only female voices, examples and opinions provide evidence within this text. Through Lady Reason in particular Christine argues that stereotypes of women can be sustained only if women are prevented from entering into the conversation. In "City of Ladies" Christine deliberated on the debate whether the virtues of men and women differ, a frequently debated topic in late medieval Europe, particularly in the context of Aristotelian virtue ethics and his views on women. Christine repeatedly used the theological argument that men and women are created in God's image and both have souls capable of embracing God's goodness. Among the inhabitants of the "City of Ladies" are female saints, women from the Old Testament and virtuous women from pagan antiquity as portrayed by Giovanni Boccaccio. In "The Treasure of the City of Ladies" Christine addressed the "community" of women with the stated objective of instructing them in the means of achieving virtue. She took the position that all women were capable of humility, diligence and moral rectitude, and that, duly educated, all women could become worthy residents of the imaginary "City of Ladies". Drawing on her own life, Christine advised women on how to navigate the perils of early 15th-century French society. With reference to Augustine of Hippo and other saints Christine offered advice on how the noble lady could achieve the love of God. Christine speaks through the allegorical figures of God's daughters – Reason, Rectitude and Justice – who represent the Three Virtues most important to women's success. Through secular examples of these three virtues, Christine urged women to discover meaning and achieve worthy acts in their lives. Christine argued that women's success depends on their ability to manage and mediate by speaking and writing effectively. Christine specifically sought out other women to collaborate in the creation of her work. She makes special mention of a manuscript illustrator we know only as Anastasia, whom she described as the most talented of her day. Christine published 41 known pieces of poetry and prose in her lifetime and she gained fame across Europe as the first professional woman writer. 
She achieved such credibility that royalty commissioned her prose and contemporary intellectuals kept copies of her works in their libraries. After her death in 1430 Christine's influence was acknowledged by a variety of authors and her writings remained popular. Her book "Le Livre de la cité des dames" remained in print. Portuguese and Dutch editions of it exist from the 15th century, and French editions were still being printed in 1536. In 1521 "The Book of the City of Ladies" was published in English. Christine's "Le Livre des trois vertus" ("The Treasure of the City of Ladies") became an important reference point for royal women in the 15th and 16th centuries. Anne of France, who acted as regent of France, used it as a basis for her 1504 book of "Enseignemens", written for her daughter Suzanne, Duchess of Bourbon, who as agnatic heir to the Bourbon lands became co-regent. Christine's advice to princesses was translated and circulated as manuscript or printed book among the royal families of France and Portugal. The "City of Ladies" was acknowledged and referenced by 16th-century French women writers, including Anne de Beaujeu, Gabrielle de Bourbon, Marguerite de Navarre and Georgette de Montenay. Christine's political writings received some attention too. "Livre de la paix" was referenced by the humanist Gabriel Naudé and Christine was given large entries in encyclopedias by Denis Diderot, Louis Moréri and Prosper Marchand. In 1470 Jean V de Bueil reproduced Christine's detailed accounts of the armies and material needed to defend a castle or town against a siege in "Le Jouvence". "Livre des fais d'armes et de chevalerie" was published in its entirety by the book printer Antoine Vérard in 1488, but Vérard claimed that it was his translation of Vegetius. Philippe Le Noir authored an abridged version of Christine's book in 1527 under the title "L'Arbre des Batailles et fleur de chevalerie" ("The tree of battles and flower of chivalry"). "Livre des fais d'armes et de chevalerie" was translated into English by William Caxton for Henry VII in 1489 and was published in print one year later under the title "The Book of Feats of Arms and of Chivalry", attributing Christine as the author. English editions of "The Book of the City of Ladies" and "Livre du corps de policie" ("The Book of the Body Politic") were printed in 1521 without referencing Christine as the author. Elizabeth I had in her court library copies of "The Book of the City of Ladies", "L'Épistre de Othéa a Hector" ("Letter of Othea to Hector") and "The Book of Feats of Arms and of Chivalry". Among the possessions of the English queen were tapestries with scenes from the "City of Ladies". However, when in the early 19th century Raimond Thomassy published an overview of Christine's political writings, he noted that modern editions of these writings were not published and that as a political theorist Christine was descending into obscurity. Suzanne Solente, Mathilde Laigle and Marie-Josephe Pinet are credited with reviving, in the 20th century, the work of de Pizan, a writer who by then had been forgotten in France but noted elsewhere. Laigle noted, for instance, that de Pizan's work had not been translated into Spanish, but that other writers had borrowed extensively from it. While de Pizan's mixture of classical philosophy and humanistic ideals was in line with the style of other popular authors at the time, her outspoken defence of women was an anomaly. 
In her works she vindicated women against popular misogynist texts, such as Ovid's "Art of Love", Jean de Meun's "Romance of the Rose" and Matheolus's "Lamentations". Her activism has drawn the fascination of modern feminists. Simone de Beauvoir wrote in 1949 that "Épître au Dieu d'Amour" was "the first time we see a woman take up her pen in defence of her sex". The 1979 artwork "The Dinner Party" features a place setting for Christine de Pizan. In the 1980s Sandra Hindman published a study of the political events referenced in the illuminations of Christine's published works.
https://en.wikipedia.org/wiki?curid=7628
Catharism Catharism (; from the Greek: , "katharoi", "the pure [ones]") was a Christian dualist or Gnostic revival movement between the 12th and 14th centuries which thrived in Southern Europe, particularly what is now northern Italy and southern France. Followers were known as Cathars, or Good Christians, and are now mainly remembered for a prolonged period of persecution by the Catholic Church, which did not recognise their belief as being Christian. Catharism appeared in Europe in the Languedoc region of France in the 11th century, when the name first appeared. The adherents were sometimes known as Albigensians, after the city Albi in southern France where the movement first took hold. The belief may have originated in the Byzantine Empire. Catharism was initially taught by ascetic leaders who set few guidelines and so some Catharist practices and beliefs varied by region and over time. The Catholic Church denounced its practices, including the "consolamentum" ritual by which Cathar individuals were baptised and raised to the status of "perfect". Catharism may have had its roots in the Paulician movement in Armenia and eastern Byzantine Anatolia and certainly in the Bogomils of the First Bulgarian Empire, who were influenced by the Paulicians resettled in Thrace (Philipopolis) by the Byzantines. Though the term "Cathar" () has been used for centuries to identify the movement, whether it identified itself with the name is debated. In Cathar texts, the terms "Good Men" ("Bons Hommes"), "Good Women" ("Bonnes Femmes"), or "Good Christians" ("Bons Chrétiens") are the common terms of self-identification. The idea of two gods or deistic principles, one good and the other evil, was central to Cathar beliefs. This was antithetical to the monotheistic Catholic Church, whose fundamental principle was that there was only one God, who created all things visible and invisible. Cathars believed that the good God was the God of the New Testament, creator of the spiritual realm, whereas the evil God was the God of the Old Testament, creator of the physical world whom many Cathars identified as Satan. Cathars believed human spirits were the sexless spirits of angels trapped in the material realm of the evil god, destined to be reincarnated until they achieved salvation through the consolamentum, when they would return to the good god. From the beginning of his reign, Pope Innocent III attempted to end Catharism by sending missionaries and by persuading the local authorities to act against them. In 1208, Pierre de Castelnau, Innocent's papal legate, was murdered while returning to Rome after excommunicating Count Raymond VI of Toulouse, who, in his view, was too lenient with the Cathars. Pope Innocent III then abandoned the option of sending Catholic missionaries and jurists, declared Pierre de Castelnau a martyr and launched the Albigensian Crusade in 1209. The Crusade ended in 1229 with the defeat of the Cathars. Catharism underwent persecution by the Medieval Inquisition, which succeeded in eradicating it by 1350. The origins of the Cathars' beliefs are unclear, but most theories agree they came from the Byzantine Empire, mostly by the trade routes and spread from the First Bulgarian Empire to the Netherlands. The name of Bulgarians ("Bougres") was also applied to the Albigensians, and they maintained an association with the similar Christian movement of the Bogomils ("Friends of God") of Thrace. 
"That there was a substantial transmission of ritual and ideas from Bogomilism to Catharism is beyond reasonable doubt." Their doctrines have numerous resemblances to those of the Bogomils and the Paulicians, who influenced them, as well as the earlier Marcionites, who were found in the same areas as the Paulicians, the Manicheans and the Christian Gnostics of the first few centuries AD, although, as many scholars, most notably Mark Pegg, have pointed out, it would be erroneous to extrapolate direct, historical connections based on theoretical similarities perceived by modern scholars. John Damascene, writing in the 8th century AD, also notes of an earlier sect called the "Cathari", in his book "On Heresies", taken from the epitome provided by Epiphanius of Salamis in his "Panarion". He says of them: "They absolutely reject those who marry a second time, and reject the possibility of penance [that is, forgiveness of sins after baptism]". These are probably the same Cathari (actually Novations) who are mentioned in Canon 8 of the First Ecumenical Council of Nicaea in the year 325, which states "... [I]f those called Cathari come over [to the faith], let them first make profession that they are willing to communicate [share full communion] with the twice-married, and grant pardon to those who have lapsed ..." The writings of the Cathars were mostly destroyed because of the doctrine´s threat perceived by the Papacy; thus, the historical record of the Cathars is derived primarily from their opponents. Cathar ideology continues to be debated, with commentators regularly accusing opposing perspectives of speculation, distortion and bias. Only a few texts of the Cathars remain, as preserved by their opponents (such as the "Rituel Cathare de Lyon") which give a glimpse into the ideologies of their faith. One large text has survived, "The Book of Two Principles" ("Liber de duobus principiis"), which elaborates the principles of dualistic theology from the point of view of some Albanenses Cathars. It is now generally agreed by most scholars that identifiable historical Catharism did not emerge until at least 1143, when the first confirmed report of a group espousing similar beliefs is reported being active at Cologne by the cleric Eberwin of Steinfeld. A landmark in the "institutional history" of the Cathars was the Council, held in 1167 at Saint-Félix-Lauragais, attended by many local figures and also by the Bogomil "papa" Nicetas, the Cathar bishop of (northern) France and a leader of the Cathars of Lombardy. The Cathars were largely local, Western European/Latin Christian phenomena, springing up in the Rhineland cities (particularly Cologne) in the mid-12th century, northern France around the same time, and particularly the Languedoc—and the northern Italian cities in the mid-late 12th century. In the Languedoc and northern Italy, the Cathars attained their greatest popularity, surviving in the Languedoc, in much reduced form, up to around 1325 and in the Italian cities until the Inquisitions of the 14th century finally extirpated them. Cathar cosmology identified two twin, opposing deities. The first was a good God, portrayed in the New Testament and creator of the spirit, while the second was an evil God, depicted in the Old Testament and creator of matter and the physical world. The latter, often called "Rex Mundi" ("King of the World"), was identified as the God of Judaism, and was also either conflated with Satan or considered Satan's father, creator or seducer. 
They solved the problem of evil by stating that the good God's power to do good was limited by the evil God's works and vice versa. All visible matter, including the human body, was created by this "Rex Mundi"; matter was therefore tainted with sin. Under this view, humans were actually angels seduced by Satan before a war in heaven against the army of Michael, after which they would have been forced to spend an eternity trapped in the evil God's material realm. The Cathars taught that to regain angelic status one had to renounce the material self completely. Until one was prepared to do so, one would be stuck in a cycle of reincarnation, condemned to live on the corrupt Earth. Zoé Oldenbourg compared the Cathars to "Western Buddhists" because she considered that their view of the doctrine of "resurrection" taught by Christ was similar to the Buddhist doctrine of rebirth. Cathars venerated Jesus Christ and followed what they considered to be His true teachings, labelling themselves as "Good Christians." However, they denied His physical incarnation. Authors believe that their conception of Jesus resembled docetism, regarding Him as the human form of an angel, whose physical body was only an appearance. This illusory form would possibly have been given by the Virgin Mary, another angel in human form. Most did not accept the normative Trinitarian understanding of Jesus, instead resembling nontrinitarian modalistic monarchianism (Sabellianism) in the West and adoptionism in the East, which might or might not be combined with the docetism mentioned above. Bernard of Clairvaux's biographer and other sources accuse some Cathars of Arianism, and some scholars see Cathar Christology as having traces of earlier Arian roots. In any case, Cathars firmly rejected the Resurrection of Jesus, seeing it as representing reincarnation, and the Christian symbol of the cross, considering it to be no more than a material instrument of torture and evil. They also saw John the Baptist, identified also with Elijah, as an evil being sent to hinder Jesus's teaching through the false sacrament of baptism. However, those beliefs were far from unanimous. Some Cathar communities believed in a mitigated dualism similar to that of their Bogomil predecessors, stating that the evil god, Satan, had previously been the true God's servant before rebelling against him. Others, likely a majority given the influence reflected in the "Book of the Two Principles", believed in an absolute dualism, where the two gods were twin entities of the same power and importance. Along the same lines, some communities might have believed in the existence of a spirit realm created by the good God, the "Land of the Living", whose history and geography would have served as the basis for the evil god's corrupt creation. Under this view, the history of Jesus would have happened roughly as told, only in the spirit realm. The physical Jesus from the material world would have been evil, a false messiah and a lustful lover of the material Mary Magdalene. However, the true Jesus would have influenced the physical world in a way similar to the Harrowing of Hell, only by inhabiting the body of Paul. Cathars also possibly believed in a Day of Judgement that would come when the number of the just equalled that of the angels who fell, in which the believers would ascend to the spirit realm while the sinners would be thrown to everlasting fire along with Satan. The 13th-century chronicler Pierre des Vaux-de-Cernay recorded these views.
The alleged sacred texts of the Cathars, besides the New Testament, included the originally Bogomil text "The Gospel of the Secret Supper" (also called "John's Interrogation") and the original Cathar work "The Book of the Two Principles". They regarded the Old Testament as written by Satan, except for a few books which they accepted. Cathars, in general, formed an anti-sacerdotal party in opposition to the pre-Reformation Catholic Church, protesting against what they perceived to be the moral, spiritual and political corruption of the Church. In contrast, the Cathars had but one central rite, the Consolamentum, or Consolation. This involved a brief spiritual ceremony to remove all sin from the believer and to induct him into the next higher level as a perfect. Many believers would receive the Consolamentum as death drew near, performing the ritual of liberation at a moment when the heavy obligations of purity required of Perfecti would be of short duration. Some of those who received the sacrament of the consolamentum upon their death-beds may thereafter have shunned further food or drink and, in addition, exposed themselves to extreme cold, in order to speed death. This has been termed the "endura". It was claimed by some church writers that when a Cathar, after receiving the Consolamentum, began to show signs of recovery, he or she would be smothered in order to ensure his or her entry into paradise. Other than at such moments "in extremis", little evidence exists to suggest this was a common Cathar practice. The Cathars also refused the sacrament of the eucharist, saying that it could not possibly be the body of Christ. They also refused to partake in the practice of baptism by water. The Inquisitor Bernard Gui recorded such Cathar practices and beliefs from his own experience. Killing was abhorrent to the Cathars. Consequently, abstention from all animal food (sometimes exempting fish) was enjoined of the Perfecti. The Perfecti avoided eating anything considered to be a by-product of sexual reproduction. War and capital punishment were also condemned, an anomaly in medieval Europe. In a world where few could read, their rejection of oath-taking marked them as rebels against social order. To the Cathars, reproduction was a moral evil to be avoided, as it continued the chain of reincarnation and suffering in the material world. It was claimed by their opponents that, given this loathing for procreation, they generally resorted to sodomy. Such was the situation that a charge of heresy leveled against a suspected Cathar was usually dismissed if the accused could show he was legally married. When Bishop Fulk of Toulouse, a key leader of the anti-Cathar persecutions, excoriated the Languedoc Knights for not pursuing the heretics more diligently, he received the reply, "We cannot. We have been reared in their midst. We have relatives among them and we see them living lives of perfection." It has been alleged that the Cathar Church of the Languedoc had a relatively flat structure, distinguishing between the baptised "perfecti" (a term they did not use; they preferred "bonhommes") and ordinary unbaptised believers ("credentes"). By about 1140, a liturgy and a system of doctrine had been established. They created a number of bishoprics, first at Albi around 1165 and, after the 1167 council at Saint-Félix-Lauragais, at Toulouse, Carcassonne, and Agen, so that four bishoprics were in existence by 1200.
In about 1225, during a lull in the Albigensian Crusade, the bishopric of Razès was added. Bishops were supported by two assistants: a "filius maior" (typically the successor) and a "filius minor", who were further assisted by deacons. The "perfecti" were the spiritual elite, highly respected by many of the local people, leading a life of austerity and charity. In the apostolic fashion they ministered to the people and travelled in pairs. Catharism has been seen as giving women the greatest opportunities for independent action, since women could be believers as well as Perfecti, able to administer the sacrament of the "consolamentum". Cathars believed that one would be repeatedly reincarnated until one committed to the self-denial of the material world. A man could be reincarnated as a woman and vice versa. The spirit was of utmost importance to the Cathars and was described as being immaterial and sexless. Because of this belief, the Cathars saw women as equally capable of being spiritual leaders. Women accused of being heretics in early medieval Christianity included those labeled Gnostics, Cathars, and, later, the Beguines, as well as several other groups that were sometimes "tortured and executed". Cathars, like the Gnostics who preceded them, assigned more importance to the role of Mary Magdalene in the spread of early Christianity than the church previously did. Her vital role as a teacher contributed to the Cathar belief that women could serve as spiritual leaders. Women were included among the Perfecti in significant numbers, with many receiving the "consolamentum" after being widowed. Having reverence for the Gospel of John, the Cathars saw Mary Magdalene as perhaps even more important than Saint Peter, the founder of the church. Catharism attracted numerous women with the promise of a leadership role that the Catholic Church did not allow. Catharism allowed women to become perfects. These female perfects were required to adhere to a strict and ascetic lifestyle, but were still able to have their own houses. Although many women found something attractive in Catharism, not all found its teachings convincing. A notable example is Hildegard of Bingen, who in 1163 gave a widely renowned sermon against the Cathars in Cologne. During this sermon, Hildegard announced a state of eternal punishment and damnation for all who accepted Cathar beliefs. While women perfects rarely traveled to preach the faith, they still played a vital role in the spread of Catharism by establishing group homes for women. Though it was extremely uncommon, there were isolated cases of female Cathars leaving their homes to spread the faith. In Cathar communal homes (ostals), women were educated in the faith, and these women would go on to bear children who would then also become believers. Through this pattern, the faith grew through the efforts of women with each passing generation. Despite the role of women in the growth of the faith, Catharism was not completely egalitarian; for example, some held that one's last incarnation had to be experienced as a man in order to break the cycle of reincarnation. This belief was taught among later French Cathars, who held that women must be reborn as men in order to achieve salvation. Another example was the belief that the sexual allure of women impeded men's ability to reject the material world. Toward the end of the Cathar movement, Catharism became less egalitarian and began the practice of excluding women perfects.
However, this trend remained limited; later Italian perfects still included women. In 1147, Pope Eugene III sent a legate to the Cathar district in order to arrest the progress of the Cathars. The few isolated successes of Bernard of Clairvaux could not obscure the poor results of this mission, which clearly showed the power of the sect in the Languedoc at that period. The missions of Cardinal Peter of Saint Chrysogonus to Toulouse and the Toulousain in 1178, and of Henry of Marcy, cardinal-bishop of Albano, in 1180–81, obtained merely momentary successes. Henry's armed expedition, which took the stronghold at Lavaur, did not extinguish the movement. Decisions of Catholic Church councils—in particular, those of the Council of Tours (1163) and of the Third Council of the Lateran (1179)—had scarcely more effect upon the Cathars. When Pope Innocent III came to power in 1198, he was resolved to deal with them. At first Innocent tried peaceful conversion, and sent a number of legates into the Cathar regions. They had to contend not only with the Cathars, the nobles who protected them, and the people who respected them, but also with many of the bishops of the region, who resented the considerable authority the Pope had conferred upon his legates. In 1204, Innocent III suspended a number of bishops in Occitania; in 1205 he appointed a new and vigorous bishop of Toulouse, the former troubadour Foulques. In 1206 Diego of Osma and his canon, the future Saint Dominic, began a programme of conversion in Languedoc; as part of this, Catholic-Cathar public debates were held at Verfeil, Servian, Pamiers, Montréal and elsewhere. Dominic met and debated with the Cathars in 1203 during his mission to the Languedoc. He concluded that only preachers who displayed real sanctity, humility and asceticism could win over convinced Cathar believers. The institutional Church as a general rule did not possess these spiritual warrants. His conviction led eventually to the establishment of the Dominican Order in 1216. The order was to live up to the terms of his famous rebuke, "Zeal must be met by zeal, humility by humility, false sanctity by real sanctity, preaching falsehood by preaching truth." However, even Dominic managed only a few converts among the Cathari. In January 1208 the papal legate, Pierre de Castelnau—a Cistercian monk, theologian and canon lawyer—was sent to meet the ruler of the area, Raymond VI, Count of Toulouse. Known for excommunicating noblemen who protected the Cathars, Castelnau excommunicated Raymond for abetting heresy following an allegedly fierce argument during which Raymond supposedly threatened Castelnau with violence. Shortly thereafter, Castelnau was murdered as he returned to Rome, allegedly by a knight in the service of Count Raymond. His body was returned and laid to rest in the Abbey at Saint Gilles. As soon as he heard of the murder, the Pope ordered the legates to preach a crusade against the Cathars and wrote a letter to Philip Augustus, King of France, appealing for his intervention—or an intervention led by his son, Louis. This was not the first appeal, but some see the murder of the legate as a turning point in papal policy. The chronicler of the crusade which followed, Peter of Vaux de Cernay, portrays the sequence of events in such a way that, having failed in his effort to peaceably demonstrate the errors of Catharism, the Pope then called a formal crusade, appointing a series of leaders to head the assault.
The French King refused to lead the crusade himself and could not spare his son to do so either; despite his victory against John, King of England, there were still pressing issues with Flanders and the empire, and the threat of an Angevin revival. Philip did sanction the participation of some of his barons, notably Simon de Montfort and Bouchard de Marly. There followed twenty years of war against the Cathars and their allies in the Languedoc: the Albigensian Crusade. This war pitted the nobles of France against those of the Languedoc. The widespread northern enthusiasm for the Crusade was partially inspired by a papal decree permitting the confiscation of lands owned by Cathars and their supporters. This angered not only the lords of the south but also the French King, who was at least nominally the suzerain of the lords whose lands were now open to seizure. Philip Augustus wrote to Pope Innocent in strong terms to point this out, but the Pope did not change his policy. As the Languedoc was supposedly teeming with Cathars and Cathar sympathisers, the region became a target for northern French noblemen looking to acquire new fiefs. The barons of the north headed south to do battle. Their first target was the lands of the Trencavel, powerful lords of Carcassonne, Béziers, Albi and the Razes. Little was done to form a regional coalition, and the crusading army was able to take Carcassonne, the Trencavel capital, incarcerating Raymond Roger Trencavel in his own citadel, where he died within three months; champions of the Occitan cause claimed that he was murdered. Simon de Montfort was granted the Trencavel lands by the Pope and did homage for them to the King of France, thus incurring the enmity of Peter II of Aragon, who had held aloof from the conflict, even acting as a mediator at the time of the siege of Carcassonne. The remainder of the first of the two Cathar wars now focused on Simon's attempts to hold on to his gains through the winters, when he was left with only a small force of confederates operating from the main winter camp at Fanjeaux and faced the desertion of local lords who had sworn fealty to him out of necessity, and to enlarge his newfound domains in the summers, when his forces were greatly augmented by reinforcements from France, Germany and elsewhere. Summer campaigns saw him not only retake what he had lost in the "close" season but also seek to widen his sphere of operation; he campaigned in the Aveyron at St. Antonin and on the banks of the Rhône at Beaucaire. Simon's greatest triumph was the victory against superior numbers at the Battle of Muret, a battle which saw not only the defeat of Raymond of Toulouse and his Occitan allies but also the death of Peter of Aragon, and the effective end of the ambitions of the house of Aragon/Barcelona in the Languedoc. In the medium and longer term this was of much greater significance to the royal house of France than it was to de Montfort, and, together with the Battle of Bouvines, it secured the position of Philip Augustus vis-à-vis England and the Empire. The Battle of Muret was a massive step in the creation of the unified French kingdom and the country we know today, although Edward III, Edward the Black Prince and Henry V would later threaten to shake these foundations. The crusader army came under the command, both spiritually and militarily, of the papal legate Arnaud-Amaury, Abbot of Cîteaux. In the first significant engagement of the war, the town of Béziers was besieged on 22 July 1209.
The Catholic inhabitants of the city were granted the freedom to leave unharmed, but many refused and opted to stay and fight alongside the Cathars. The Cathars spent much of 1209 fending off the crusaders. The Béziers army attempted a sortie but was quickly defeated, then pursued by the crusaders back through the gates and into the city. Arnaud-Amaury, the Cistercian abbot-commander, is supposed to have been asked how to tell Cathars from Catholics. His reply, recalled by Caesarius of Heisterbach, a fellow Cistercian, thirty years later, was "Caedite eos. Novit enim Dominus qui sunt eius" ("Kill them all, the Lord will recognise His own"). The doors of the church of St Mary Magdalene were broken down and the refugees dragged out and slaughtered. Reportedly at least 7,000 men, women and children were killed there by Catholic forces. Elsewhere in the town, many more thousands were mutilated and killed. Prisoners were blinded, dragged behind horses, and used for target practice. What remained of the city was razed by fire. Arnaud-Amaury wrote to Pope Innocent III, "Today your Holiness, twenty thousand heretics were put to the sword, regardless of rank, age, or sex." "The permanent population of Béziers at that time was then probably no more than 5,000, but local refugees seeking shelter within the city walls could conceivably have increased the number to 20,000." After the success of his siege of Carcassonne, which followed the Massacre at Béziers in 1209, Simon de Montfort was designated as leader of the Crusader army. Prominent opponents of the Crusaders were Raymond Roger Trencavel, viscount of Carcassonne, and his feudal overlord Peter II of Aragon, who held fiefdoms and had a number of vassals in the region. Peter died fighting against the crusade on 12 September 1213 at the Battle of Muret. Simon de Montfort was killed on 25 June 1218 after maintaining a siege of Toulouse for nine months. The official war ended in the Treaty of Paris (1229), by which the king of France dispossessed the house of Toulouse of the greater part of its fiefs, and that of the Trencavels (Viscounts of Béziers and Carcassonne) of the whole of their fiefs. The independence of the princes of the Languedoc was at an end. But in spite of the wholesale massacre of Cathars during the war, Catharism was not yet extinguished, and Catholic forces would continue to pursue Cathars. In 1215, the bishops of the Catholic Church met at the Fourth Council of the Lateran under Pope Innocent III; part of the agenda was combating the Cathar heresy. The Inquisition was established in 1233 to uproot the remaining Cathars. Operating in the south at Toulouse, Albi, Carcassonne and other towns during the whole of the 13th century, and a great part of the 14th, it succeeded in crushing Catharism as a popular movement and driving its remaining adherents underground. Cathars who refused to recant were hanged or burnt at the stake. On Friday, 13 May 1239, 183 men and women convicted of Catharism were burned at the stake on the orders of Robert le Bougre. Mount Guimar had already been denounced as a place of heresy in a letter from the bishop of Liège to Pope Lucius II in 1144. Augustine, bishop of Hippo Regius, had expelled from that city a Fortunatus who fled Africa in 392; a Fortunatus is later reported as a monk from Africa protected by the lord of Widomarum. From May 1243 to March 1244, the Cathar fortress of Montségur was besieged by the troops of the seneschal of Carcassonne and the archbishop of Narbonne.
On 16 March 1244, a large and symbolically important massacre took place, in which over 200 Cathar Perfects were burnt in an enormous pyre at the "prat dels cremats" ("field of the burned") near the foot of the castle. Moreover, at the 1235 Council of Narbonne, the church decreed lesser chastisements against laymen suspected of sympathy with Cathars. A popular though as yet unsubstantiated theory holds that a small party of Cathar Perfects escaped from the fortress before the massacre at the "prat dels cremats". It is widely held in the Cathar region to this day that the escapees took with them "le trésor cathar". What this treasure consisted of has been a matter of considerable speculation: claims range from sacred Gnostic texts to the Cathars' accumulated wealth, which might have included the Holy Grail (see the discussion of historical scholarship below). Hunted by the Inquisition and deserted by the nobles of their districts, the Cathars became more and more scattered fugitives, meeting surreptitiously in forests and mountain wilds. Later insurrections broke out under the leadership of Roger-Bernard II, Count of Foix, Aimery III of Narbonne, and Bernard Délicieux, a Franciscan friar later prosecuted for his adherence to another heretical movement, that of the Spiritual Franciscans, at the beginning of the 14th century. But by this time the Inquisition had grown very powerful. Consequently, many presumed to be Cathars were summoned to appear before it. Precise indications of this are found in the registers of the Inquisitors Bernard of Caux, Jean de St Pierre, Geoffroy d'Ablis, and others. The parfaits, it was said, only rarely recanted, and hundreds were burnt. Repentant lay believers were punished, but their lives were spared as long as they did not relapse. Having recanted, they were obliged to sew yellow crosses onto their outdoor clothing and to live apart from other Catholics, at least for a while. After several decades of harassment and re-proselytising, and, perhaps even more important, the systematic destruction of their religious texts, the sect was exhausted and could find no more adepts. The leader of a Cathar revival in the Pyrenean foothills, Peire Autier, was captured and executed in April 1310 in Toulouse. After 1330, the records of the Inquisition contain very few proceedings against Cathars. The last known Cathar perfectus in the Languedoc, Guillaume Bélibaste, was executed in the autumn of 1321. From the mid-13th century onwards, Italian Catharism came under increasing pressure from the Pope and the Inquisition, "spelling the beginning of the end". Other movements, such as the Waldensians and the pantheistic Brethren of the Free Spirit, which suffered persecution in the same area, survived in remote areas and in small numbers into the 14th and 15th centuries. Some Waldensian ideas were absorbed into other proto-Protestant sects, such as the Hussites, Lollards, and the Moravian Church (Herrnhuters of Germany). The Cathars were in no way Protestant, and very few if any Protestants consider them their forerunners (as opposed to groups like the Waldensians, Hussites, Lollards and Arnoldists). After the suppression of Catharism, the descendants of Cathars were discriminated against and at times required to live outside towns and their defences. They retained their Cathar identity, despite their reintegration into Catholicism.
As such, any use of the term "Cathar" to refer to people after the suppression of Catharism in the 14th century is a cultural or ancestral reference and has no religious implication. Nevertheless, interest in the Cathars and their history, legacy and beliefs continues. The French term "Pays cathare", meaning "Cathar Country", is used to highlight the Cathar heritage and history of the region in which Catharism was traditionally strongest. The area is centred around fortresses such as Montségur and Carcassonne; the French département of the Aude also uses the title "Pays cathare" in tourist brochures. The area has ruins from the wars against the Cathars that are still visible today. Some criticise the promotion of the "Pays cathare" identity as an exaggeration for tourism purposes. Many of the promoted Cathar castles were not built by Cathars but by local lords, and many of them were later rebuilt and extended for strategic purposes. Good examples are the magnificent castles of Quéribus and Peyrepertuse, which are both perched on the side of precipitous drops on the last folds of the Corbières mountains. They were for several hundred years frontier fortresses belonging to the French crown, and most of what is still there dates from a post-Cathar era. Many consider the County of Foix to be the actual historical centre of Catharism. In an effort to find the few remaining heretics in and around the village of Montaillou, Jacques Fournier, Bishop of Pamiers and future Pope Benedict XII, had those suspected of heresy interrogated in the presence of scribes who recorded their conversations. The late 13th- to early 14th-century document, discovered in the Vatican archives in the 1960s and edited by Jean Duvernoy, is the basis for Emmanuel Le Roy Ladurie's work "Montaillou: The Promised Land of Error". The publication of the early scholarly book "Crusade Against the Grail" by the young German Otto Rahn in the 1930s rekindled interest in the connection between the Cathars and the Holy Grail, especially in Germany. Rahn was convinced that the 13th-century work "Parzival" by Wolfram von Eschenbach was a veiled account of the Cathars. The philosopher and Nazi government official Alfred Rosenberg speaks favourably of the Cathars in "The Myth of the Twentieth Century". Academic books in English first appeared at the beginning of the millennium: for example, Malcolm Lambert's "The Cathars" and Malcolm Barber's "The Cathars". Starting in the 1990s and continuing to the present day, historians like R. I. Moore have radically challenged the extent to which Catharism, as an institutionalized religion, actually existed. Building on the work of French historians such as Monique Zerner and Uwe Brunn, Moore's "The War on Heresy" argues that Catharism was "contrived from the resources of [the] well-stocked imaginations" of churchmen, "with occasional reinforcement from miscellaneous and independent manifestations of local anticlericalism or apostolic enthusiasm". In short, Moore claims that the men and women persecuted as Cathars were not the followers of a secret religion imported from the East; instead, they were part of a broader spiritual revival taking place in the later twelfth and early thirteenth centuries. Moore's work is indicative of a larger historiographical trend towards examination of how heresy was constructed by the church.
The principal legacy of the Cathar movement survives in the poems and songs of the Cathar troubadours, though this artistic legacy is only a small part of the wider Occitan linguistic and artistic heritage. The Occitan song "Lo Boièr" is particularly associated with Catharism. Recent artistic projects concentrating on the Cathar element in Provençal and troubadour art include commercial recording projects by Thomas Binkley, the electric hurdy-gurdy artist Valentin Clastrier (whose CD "Hérésie" is dedicated to the church of the Cathars), La Nef, and Jordi Savall. The Cathars are depicted in Jacques Tissinier's cement sculpture "Les Chevaliers Cathares", alongside the autoroute des Deux Mers in Narbonne. In recent popular culture, Catharism has been linked with the Knights Templar, a monastic military order founded in the aftermath of the First Crusade (1095–1099). This link has given rise to fringe theories about the Cathars and the possibility of their possession of the Holy Grail, such as those in the pseudohistorical "The Holy Blood and the Holy Grail".
https://en.wikipedia.org/wiki?curid=7630
Cerebrospinal fluid Cerebrospinal fluid (CSF) is a clear, colorless body fluid found in the brain and spinal cord. It is produced by specialised ependymal cells in the choroid plexuses of the ventricles of the brain, and absorbed in the arachnoid granulations. There is about 125 mL of CSF at any one time, and about 500 mL is generated every day. CSF acts as a cushion or buffer, providing basic mechanical and immunological protection to the brain inside the skull. CSF also serves a vital function in the cerebral autoregulation of cerebral blood flow. CSF occupies the subarachnoid space (between the arachnoid mater and the pia mater) and the ventricular system around and inside the brain and spinal cord. It fills the ventricles of the brain, cisterns, and sulci, as well as the central canal of the spinal cord. There is also a connection from the subarachnoid space to the bony labyrinth of the inner ear via the perilymphatic duct, where the perilymph is continuous with the cerebrospinal fluid. The ependymal cells of the choroid plexuses have multiple motile cilia on their apical surfaces that beat to move the CSF through the ventricles. A sample of CSF can be taken via lumbar puncture. This can reveal the intracranial pressure, as well as indicate diseases including infections of the brain or its surrounding meninges. Although CSF was noted by Hippocrates, its rediscovery in the 18th century is credited to Emanuel Swedenborg, and it was not until 1914 that Harvey Cushing demonstrated that CSF is secreted by the choroid plexus. There is about 125–150 mL of CSF at any one time. This CSF circulates within the ventricular system of the brain. The ventricles are a series of cavities filled with CSF. The majority of CSF is produced from within the two lateral ventricles. From here, CSF passes through the interventricular foramina to the third ventricle, then through the cerebral aqueduct to the fourth ventricle. From the fourth ventricle, the fluid passes into the subarachnoid space through four openings: the central canal of the spinal cord, the median aperture, and the two lateral apertures. CSF is present within the subarachnoid space, which covers the brain and spinal cord and stretches below the end of the spinal cord to the sacrum. There is a connection from the subarachnoid space to the bony labyrinth of the inner ear making the cerebrospinal fluid continuous with the perilymph in 93% of people. CSF moves in a single outward direction from the ventricles, but multidirectionally in the subarachnoid space. Fluid movement is pulsatile, matching the pressure waves generated in blood vessels by the beating of the heart. Some authors dispute this, positing that there is no unidirectional CSF circulation but rather cardiac cycle-dependent, bidirectional, to-and-fro craniospinal CSF movements. CSF is derived from blood plasma and is largely similar to it, except that CSF is nearly protein-free compared with plasma and has some different electrolyte levels. Due to the way it is produced, CSF has a higher chloride level than plasma, and an equivalent sodium level. CSF contains approximately 0.3% plasma proteins, or approximately 15 to 40 mg/dL, depending on sampling site. In general, globular proteins and albumin are in lower concentration in ventricular CSF compared to lumbar or cisternal fluid. This continuous flow into the venous system dilutes the concentration of larger, lipid-insoluble molecules penetrating the brain and CSF.
CSF is normally free of red blood cells and contains fewer than 5 white blood cells per mm³. Any white blood cell count higher than this constitutes pleocytosis. CSF contains nucleic acids, in particular cell-free DNA. At around the third week of development, the embryo is a three-layered disc consisting of ectoderm, mesoderm and endoderm. A tube-like formation develops in the midline, called the notochord. The notochord releases extracellular molecules that affect the transformation of the overlying ectoderm into nervous tissue. The neural tube, forming from the ectoderm, contains CSF prior to the development of the choroid plexuses. The open neuropores of the neural tube close after the first month of development, and CSF pressure gradually increases. As the brain develops, by the fourth week of embryological development three swellings have formed within the embryo around the canal, near where the head will develop. These swellings represent different components of the central nervous system: the prosencephalon, mesencephalon and rhombencephalon. Subarachnoid spaces are first evident around the 32nd day of development near the rhombencephalon; circulation is visible from the 41st day. At this time, the first choroid plexus can be seen, found in the fourth ventricle, although the time at which it first secretes CSF is not yet known. The developing forebrain surrounds the neural cord. As the forebrain develops, the neural cord within it becomes a ventricle, ultimately forming the lateral ventricles. Along the inner surface of both ventricles, the ventricular wall remains thin, and a choroid plexus develops, producing and releasing CSF. CSF quickly fills the neural canal. Arachnoid villi are formed around the 35th week of development, with arachnoid granulations noted around the 39th, and these continue to develop until 18 months of age. The subcommissural organ secretes SCO-spondin, which forms Reissner's fiber within CSF, assisting movement through the cerebral aqueduct. It is present in early intrauterine life but disappears during early development. CSF serves several purposes, including the mechanical and immunological protection of the brain and a role in the autoregulation of cerebral blood flow. The brain produces roughly 500 mL of cerebrospinal fluid per day, at a rate of about 25 mL an hour. This transcellular fluid is constantly reabsorbed, so that only 125–150 mL is present at any one time. CSF volume is higher on a mL/kg basis in children compared to adults. Infants have a CSF volume of 4 mL/kg, children have a CSF volume of 3 mL/kg, and adults have a CSF volume of 1.5–2 mL/kg. The higher CSF volume is why a larger dose of local anesthetic, on a mL/kg basis, is needed in infants. Additionally, the larger CSF volume may be one reason why children have lower rates of postdural puncture headache. Most (about two-thirds to 80%) of CSF is produced by the choroid plexus. The choroid plexus is a network of blood vessels present within sections of the four ventricles of the brain. It is present throughout the ventricular system except for the cerebral aqueduct and the frontal and occipital horns of the lateral ventricles. CSF is also produced by the single layer of column-shaped ependymal cells which line the ventricles, by the lining surrounding the subarachnoid space, and, in a small amount, directly from the tiny spaces surrounding blood vessels around the brain. CSF is produced by the choroid plexus in two steps.
Firstly, a filtered form of plasma moves from fenestrated capillaries in the choroid plexus into an interstitial space, with movement guided by a difference in pressure between the blood in the capillaries and the interstitial fluid. This fluid then needs to pass through the epithelial cells lining the choroid plexus into the ventricles, an active process requiring the transport of sodium, potassium and chloride that draws water into the CSF by creating osmotic pressure. Unlike the blood passing from the capillaries into the choroid plexus, the epithelial cells lining the choroid plexus are joined by tight junctions, which act to prevent most substances from flowing freely into the CSF. Cilia on the apical surfaces of the ependymal cells beat to help transport the CSF. Water and carbon dioxide from the interstitial fluid diffuse into the epithelial cells. Within these cells, carbonic anhydrase converts these substances into bicarbonate and hydrogen ions. These are exchanged for sodium and chloride on the cell surface facing the interstitium. Sodium, chloride, bicarbonate and potassium are then actively secreted into the ventricular lumen. This creates osmotic pressure and draws water into the CSF, facilitated by aquaporins. Chloride, with a negative charge, moves with the positively charged sodium to maintain electroneutrality. Potassium and bicarbonate are also transported out of the CSF. As a result, CSF contains a higher concentration of sodium and chloride than blood plasma, but less potassium, calcium, glucose and protein. Choroid plexuses also secrete growth factors, iodine, vitamins B1, B12 and C, folate, beta-2 microglobulin, arginine vasopressin and nitric oxide into the CSF. A Na-K-Cl cotransporter and a Na/K ATPase found on the surface of the choroid epithelium appear to play a role in regulating CSF secretion and composition. Orešković and Klarica hypothesise that CSF is not primarily produced by the choroid plexus, but is produced continuously throughout the entire CSF system, as a consequence of water filtration through the capillary walls into the interstitial fluid of the surrounding brain tissue, regulated by AQP-4. There are circadian variations in CSF secretion, with the mechanisms not fully understood, but potentially relating to differences in the activation of the autonomic nervous system over the course of the day. The choroid plexus of the lateral ventricle produces CSF from the arterial blood provided by the anterior choroidal artery. In the fourth ventricle, CSF is produced from the arterial blood of the anterior inferior cerebellar artery (cerebellopontine angle and the adjacent part of the lateral recess), the posterior inferior cerebellar artery (roof and median opening), and the superior cerebellar artery. CSF returns to the vascular system by entering the dural venous sinuses via arachnoid granulations. These are outpouchings of the arachnoid mater into the venous sinuses around the brain, with valves to ensure one-way drainage. This occurs because of a pressure difference between the arachnoid mater and the venous sinuses. CSF has also been seen to drain into lymphatic vessels, particularly those surrounding the nose, via drainage along the olfactory nerve through the cribriform plate. The pathway and extent are currently not known, but may involve CSF flow along some cranial nerves and be more prominent in the neonate. CSF turns over at a rate of three to four times a day.
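As a rough consistency check on the figures above (a minimal arithmetic sketch, using only the production rate of about 500 mL per day and the standing volume of 125–150 mL quoted in this article), the daily turnover can be computed directly, and the result matches the stated three to four exchanges per day.

# Rough CSF turnover estimate from the figures quoted above (illustrative only).
production_ml_per_day = 500.0          # stated production rate
for volume_ml in (125.0, 150.0):       # stated range of standing CSF volume
    turnovers_per_day = production_ml_per_day / volume_ml
    print(f"standing volume {volume_ml:.0f} mL -> about {turnovers_per_day:.1f} exchanges per day")
# Prints roughly 4.0 and 3.3, consistent with "three to four times a day".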
CSF has also been seen to be reabsorbed through the sheaths of the cranial and spinal nerves, and through the ependyma. The composition and rate of CSF generation are influenced by hormones and by the content and pressure of blood and CSF. For example, when CSF pressure is higher, there is less of a pressure difference between the capillary blood in the choroid plexuses and the CSF, decreasing the rate at which fluid moves into the choroid plexus and hence the rate of CSF generation. The autonomic nervous system influences choroid plexus CSF secretion, with activation of the sympathetic nervous system increasing secretion and the parasympathetic nervous system decreasing it. Changes in the pH of the blood can affect the activity of carbonic anhydrase, and some drugs (such as frusemide, acting on the Na-K-Cl cotransporter) have the potential to impact membrane channels. CSF pressure, as measured by lumbar puncture, is 10–18 cmH2O (8–15 mmHg or 1.1–2 kPa) with the patient lying on the side and 20–30 cmH2O (16–24 mmHg or 2.1–3.2 kPa) with the patient sitting up. In newborns, CSF pressure ranges from 8 to 10 cmH2O (4.4–7.3 mmHg or 0.78–0.98 kPa). Most variations are due to coughing or internal compression of the jugular veins in the neck. When lying down, the CSF pressure as estimated by lumbar puncture is similar to the intracranial pressure. Hydrocephalus is an abnormal accumulation of CSF in the ventricles of the brain. Hydrocephalus can occur because of obstruction of the passage of CSF, such as from an infection, injury, mass, or congenital abnormality. Hydrocephalus without obstruction, associated with normal CSF pressure, may also occur. Symptoms can include problems with gait and coordination, urinary incontinence, nausea and vomiting, and progressively impaired cognition. In infants, hydrocephalus can cause an enlarged head, as the bones of the skull have not yet fused, as well as seizures, irritability and drowsiness. A CT scan or MRI scan may reveal enlargement of one or both lateral ventricles, or causative masses or lesions, and lumbar puncture may be used to demonstrate and in some circumstances relieve high intracranial pressure. Hydrocephalus is usually treated through the insertion of a shunt, such as a ventriculo-peritoneal shunt, which diverts fluid to another part of the body. Idiopathic intracranial hypertension is a condition of unknown cause characterized by a rise in CSF pressure. It is associated with headaches, double vision, difficulties seeing, and a swollen optic disc. It can occur in association with the use of vitamin A and tetracycline antibiotics, or without any identifiable cause at all, particularly in younger obese women. Management may include ceasing any known causes, a carbonic anhydrase inhibitor such as acetazolamide, repeated drainage via lumbar puncture, or the insertion of a shunt such as a ventriculoperitoneal shunt. CSF can leak from the dura as a result of different causes such as physical trauma or a lumbar puncture, or from no known cause, when it is termed a spontaneous cerebrospinal fluid leak. It is usually associated with intracranial hypotension: low CSF pressure. It can cause headaches, made worse by standing, moving and coughing, as the low CSF pressure causes the brain to "sag" downwards and put pressure on its lower structures. If a leak is identified, a beta-2 transferrin test of the leaking fluid, when positive, is highly specific and sensitive for the detection of CSF leakage.
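Since the pressure figures above are quoted in three unit systems, a small conversion helper makes the relationship explicit. This is a sketch only; the factors 1 cmH2O ≈ 0.7355 mmHg and 1 cmH2O ≈ 0.0981 kPa are standard physical conversion factors rather than values taken from this article, and the parenthetical mmHg and kPa ranges quoted in the text are rounded clinical approximations rather than exact conversions.

# Convert the quoted CSF pressure ranges between cmH2O, mmHg and kPa (illustrative sketch).
CMH2O_TO_MMHG = 0.7355   # standard conversion factor, not from the article
CMH2O_TO_KPA = 0.0981    # standard conversion factor, not from the article

def convert(cm_h2o):
    """Return (mmHg, kPa) for a pressure given in cmH2O."""
    return cm_h2o * CMH2O_TO_MMHG, cm_h2o * CMH2O_TO_KPA

ranges_cmh2o = {"lying on the side": (10, 18), "sitting up": (20, 30)}  # cmH2O values from the text
for posture, (low, high) in ranges_cmh2o.items():
    lo_mmhg, lo_kpa = convert(low)
    hi_mmhg, hi_kpa = convert(high)
    print(f"{posture}: {low}-{high} cmH2O ~ {lo_mmhg:.1f}-{hi_mmhg:.1f} mmHg ~ {lo_kpa:.2f}-{hi_kpa:.2f} kPa")
# Yields about 7.4-13.2 mmHg (1.0-1.8 kPa) lying down and 14.7-22.1 mmHg (2.0-2.9 kPa) sitting up.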
Medical imaging such as CT scans and MRI scans can be used to investigate a presumed CSF leak when no obvious leak is found but low CSF pressure is identified. Caffeine, given either orally or intravenously, often offers symptomatic relief. Treatment of an identified leak may include injection of a person's blood into the epidural space (an epidural blood patch), spinal surgery, or fibrin glue. CSF can be tested for the diagnosis of a variety of neurological diseases, usually obtained by a procedure called lumbar puncture. Lumbar puncture is carried out under sterile conditions by inserting a needle into the subarachnoid space, usually between the third and fourth lumbar vertebrae. CSF is extracted through the needle and tested. About one third of people experience a headache after lumbar puncture, and pain or discomfort at the needle entry site is common. Rarer complications may include bruising, meningitis or ongoing post-lumbar-puncture leakage of CSF. Testing often includes observing the colour of the fluid, measuring CSF pressure, counting and identifying white and red blood cells within the fluid, measuring protein and glucose levels, and culturing the fluid. The presence of red blood cells and xanthochromia may indicate subarachnoid hemorrhage, whereas central nervous system infections such as meningitis may be indicated by elevated white blood cell levels. A CSF culture may yield the microorganism that has caused the infection, or PCR may be used to identify a viral cause. Investigation of the total amount and nature of proteins can point to specific diseases, including multiple sclerosis, paraneoplastic syndromes, systemic lupus erythematosus, neurosarcoidosis and cerebral angiitis; and specific antibodies, such as those against aquaporin 4, may be tested for to assist in the diagnosis of autoimmune conditions. A lumbar puncture that drains CSF may also be used as part of treatment for some conditions, including idiopathic intracranial hypertension and normal pressure hydrocephalus. Lumbar puncture can also be performed to measure the intracranial pressure, which might be increased in certain types of hydrocephalus. However, a lumbar puncture should never be performed if increased intracranial pressure is suspected due to certain situations such as a tumour, because it can lead to fatal brain herniation. Some anaesthetics and chemotherapy agents are injected intrathecally into the subarachnoid space, where they spread through the CSF, meaning substances that cannot cross the blood-brain barrier can still be active throughout the central nervous system. Baricity refers to the density of a substance compared to the density of human cerebrospinal fluid and is used in regional anesthesia to determine the manner in which a particular drug will spread in the intrathecal space. Various comments by ancient physicians have been read as referring to CSF. Hippocrates discussed "water" surrounding the brain when describing congenital hydrocephalus, and Galen referred to "excremental liquid" in the ventricles of the brain, which he believed was purged into the nose. But for some 16 intervening centuries of ongoing anatomical study, CSF remained unmentioned in the literature. This is perhaps because of the prevailing autopsy technique, which involved cutting off the head, thereby removing evidence of CSF before the brain was examined. The modern rediscovery of CSF is credited to Emanuel Swedenborg.
In a manuscript written between 1741 and 1744, unpublished in his lifetime, Swedenborg referred to CSF as "spirituous lymph" secreted from the roof of the fourth ventricle down to the medulla oblongata and spinal cord. This manuscript was eventually published in translation in 1887. Albrecht von Haller, a Swiss physician and physiologist, made note in his 1747 book on physiology that the "water" in the brain was secreted into the ventricles and absorbed in the veins, and that when secreted in excess, it could lead to hydrocephalus. Francois Magendie studied the properties of CSF by vivisection. He discovered the foramen of Magendie, the opening in the roof of the fourth ventricle, but mistakenly believed that CSF was secreted by the pia mater. Thomas Willis (noted as the discoverer of the circle of Willis) made note of the fact that the consistency of CSF is altered in meningitis. In 1869 Gustav Schwalbe proposed that CSF drainage could occur via lymphatic vessels. In 1891, W. Essex Wynter began treating tubercular meningitis by removing CSF from the subarachnoid space, and Heinrich Quincke began to popularize lumbar puncture, which he advocated for both diagnostic and therapeutic purposes. In 1912, the neurologist William Mestrezat gave the first accurate description of the chemical composition of CSF. In 1914, Harvey W. Cushing published conclusive evidence that CSF is secreted by the choroid plexus. During phylogenesis, CSF is present within the neuraxis before it circulates. The CSF of Teleostei fish is contained within the ventricles of the brain; these fish lack a subarachnoid space. In mammals, where a subarachnoid space is present, CSF is present in it. Absorption of CSF is seen in amniotes and more complex species, and as species become progressively more complex, the system of absorption becomes progressively more enhanced, and the spinal epidural veins play a progressively smaller role in absorption. The amount of cerebrospinal fluid varies by size and species. In other mammals, cerebrospinal fluid is produced, circulated, and reabsorbed in a manner similar to that in humans, serves a similar function, and turns over at a rate of 3–5 times a day. Problems with CSF circulation leading to hydrocephalus occur in other animals.
https://en.wikipedia.org/wiki?curid=7632
Charles F. Hockett Charles Francis Hockett (January 17, 1916 – November 3, 2000) was an American linguist who developed many influential ideas in American structuralist linguistics. He represents the post-Bloomfieldian phase of structuralism often referred to as "distributionalism" or "taxonomic structuralism". His academic career spanned over half a century at Cornell and Rice universities. Hockett was also a firm believer in linguistics as a branch of anthropology, making contributions that were significant to that field as well. At the age of 16, Hockett enrolled at Ohio State University in Columbus, Ohio, where he received a Bachelor of Arts and Master of Arts in ancient history. While enrolled at Ohio State, Hockett became interested in the work of Leonard Bloomfield, a leading figure in the field of structural linguistics. Hockett continued his education at Yale University, where he studied anthropology and linguistics and received his PhD in anthropology in 1939. While studying at Yale, Hockett studied with several other influential linguists such as Edward Sapir, George P. Murdock, and Benjamin Whorf. Hockett's dissertation was based on his fieldwork in Potawatomi; his paper on Potawatomi syntax was published in "Language" in 1939. In 1948 his dissertation was published as a series in the International Journal of American Linguistics. Following fieldwork in Kickapoo and Michoacán, Mexico, Hockett did two years of postdoctoral study with Leonard Bloomfield in Chicago and Michigan. Hockett began his teaching career in 1946 as an assistant professor of linguistics in the Division of Modern Languages at Cornell University, where he was responsible for directing the Chinese language program. In 1957, Hockett became a member of Cornell's anthropology department and continued to teach anthropology and linguistics until he retired to emeritus status in 1982. In 1986, he took up an adjunct post at Rice University in Houston, Texas, where he remained active until his death in 2000. Charles Hockett held membership in many academic institutions, such as the National Academy of Sciences, the American Academy of Arts and Sciences, and the Society of Fellows at Harvard University. He served as president of both the Linguistic Society of America and the Linguistic Association of Canada and the United States. In addition to making many contributions to the field of structural linguistics, Hockett also considered such things as Whorfian theory, jokes, the nature of writing systems, slips of the tongue, and animal communication and their relation to speech. Outside the realm of linguistics and anthropology, Hockett practiced musical performance and composition. Hockett composed a full-length opera called "The Love of Doña Rosita", which was based on a play by Federico García Lorca and premiered at Ithaca College by the Ithaca Opera. Hockett and his wife Shirley were vital leaders in the development of the Cayuga Chamber Orchestra in Ithaca, New York. In appreciation of the Hocketts' hard work and dedication to the Ithaca community, Ithaca College established the Charles F. Hockett Music Scholarship, the Shirley and Chas Hockett Chamber Music Concert Series, and the Hockett Family Recital Hall. In his paper "A Note on Structure", he proposes that linguistics can be seen as "a game and as a science."
A linguist as a player in the game of languages has the freedom to experiment on all utterances of a language, but must ensure that "all the utterances of the corpus must be taken into account." Late in his career, he was known for his stinging criticism of Chomskyan linguistics. After carefully examining the generative school's proposed innovations in linguistics, Hockett decided that this approach was of little value. His book "The State of the Art" outlined his criticisms of the generative approach. In his paraphrase, a key principle of the Chomskyan paradigm is that there are an infinite number of grammatical sentences in any particular language. The grammar of a language is a finite system that characterizes an infinite set of (well-formed) sentences. More specifically, the grammar of a language is a "well-defined system", by definition not more powerful than a universal Turing machine (and, in fact, surely a great deal weaker). The crux of Hockett's rebuttal is that the set of grammatical sentences in a language is not infinite, but rather ill-defined. Hockett proposes that "no physical system is well-defined". Later, in "Where the tongue slips, there slip I", he writes as follows. It is currently fashionable to assume that, underlying the actual more or less bumbling speech behavior of any human being, there is a subtle and complicated but determinate linguistic "competence": a sentence-generating device whose design can only be roughly guessed at by any techniques so far available to us. This point of view makes linguistics very hard and very erudite, so that anyone who actually does discover facts about underlying "competence" is entitled to considerable kudos. Within this popular frame of reference, a theory of "performance" -- of the "generation of speech" -- must take more or less the following form. If a sentence is to be uttered aloud, or even thought silently to oneself, it must first be built by the internal "competence" of the speaker, the functioning of which is by definition such that the sentence will be legal ("grammatical") in every respect. But that is not enough; the sentence as thus constructed must then be "performed", either overtly so that others may hear it, or covertly so that it is perceived only by the speaker himself. It is in this second step that blunders may appear. That which is generated by the speaker's internal "competence" is what the speaker "intends to say," and is the only real concern of linguistics: blunders in actually performed speech are intrusions from elsewhere. Just if there are no such intrusions is what is performed an instance of "smooth speech". I believe this view is unmitigated nonsense, unsupported by any empirical evidence of any sort. In its place, I propose the following. "All" speech, smooth as well as blunderful, can be and must be accounted for essentially in terms of the three mechanisms we have listed: analogy, blending, and editing. An individual's language, at a given moment, is a set of habits--that is, of analogies. Where different analogies are in conflict, one may appear as a constraint on the working of another. Speech actualizes habits--and changes the habits as it does so. Speech reflects awareness of norms; but norms are themselves entirely a matter of analogy (that is, of habit), not some different kind of thing. Despite his criticisms, Hockett always expressed gratitude to the generative school for seeing real problems in the preexisting approaches.
There are many situations in which bracketing does not serve to disambiguate. As already noted, words that belong together cannot always be spoken together, and when they are not, bracketing is difficult or impossible. In the 1950s this drove some grammarians to drink and others to transformations, but both are only anodynes, not answers. One of Hockett's most important contributions was his development of the design-feature approach to comparative linguistics. He attempted to distinguish the similarities and differences among animal communication systems and human language. Hockett initially developed seven features, which were published in the 1959 paper "Animal 'Languages' and Human Language." However, after many revisions, he settled on 13 design features in the "Scientific American" article "The Origin of Speech." Hockett argued that while every communication system has some of the 13 design features, only human spoken language has all 13 features. In turn, that differentiates human spoken language from animal communication and from other human communication systems such as written language. While Hockett believed that all communication systems, animal and human alike, share many of these features, only human language contains all 13 design features. Additionally, traditional transmission and duality of patterning are key to human language. Foraging honey bees communicate with other members of their hive when they have discovered a relevant source of pollen, nectar, or water. In an effort to convey information about the location and the distance of such resources, honeybees participate in a particular figure-eight dance known as the waggle dance. In "The Origin of Speech", Hockett determined that the honeybee communication system of the waggle dance exhibits a number of the design features. Gibbons are small apes in the family Hylobatidae. While they share the same kingdom, phylum, class, and order as humans and are relatively close to man, Hockett distinguishes between the gibbon communication system and human language by noting that gibbons are devoid of the last four design features. Gibbons possess the first nine design features, but do not possess the last four (displacement, productivity, traditional transmission, and duality of patterning). In a report published in 1968 with the anthropologist and scientist Stuart A. Altmann, Hockett derived three more design features, bringing the total to 16. The cognitive scientist and linguist Larry Trask of the University of Sussex offered an alternative term and definition for number 14, prevarication. One more feature has since been added to the list by Dr. William Taft Stuart, a director of the undergraduate studies program in anthropology at the University of Maryland, College Park, part of the College of Behavioral and Social Sciences. His "extra" feature follows the definition of grammar and syntax given by Merriam-Webster's Dictionary. Additionally, Dr. Stuart defends his postulation with references to the linguist Noam Chomsky and the New York University psychologist Gary Marcus. Chomsky theorized that humans are unique in the animal world because of their ability to utilize Design Feature 5: Total Feedback, or recursive grammar. This includes being able to correct oneself and insert explanatory or even non sequitur statements into a sentence, without breaking stride, and keeping proper grammar throughout.
While there have been studies attempting to disprove Chomsky, Marcus states that, "An intriguing possibility is that the capacity to recognize recursion might be found only in species that can acquire new patterns of vocalization, for example, songbirds, humans and perhaps some cetaceans." This is in response to a study performed by psychologist Timothy Gentner of the University of California at San Diego. Gentner's study found that starling songbirds use recursive grammar to identify “odd” statements within a given “song.” However, the study does not necessarily debunk Chomsky's observation because it has not yet been proven that songbirds have the semantic ability to generalize from patterns. It is also thought that symbolic thought is necessary for grammar-based speech, and thus Homo erectus and all preceding “humans” would have been unable to comprehend modern speech. Rather, their utterances would have been halting and even quite confusing to us today. The University of Oxford's Phonetics Laboratory (Faculty of Linguistics, Philology and Phonetics) published a chart detailing how Hockett's (and Altmann's) design features fit into other forms of animal communication.
https://en.wikipedia.org/wiki?curid=7635
Consilience In science and history, consilience (also convergence of evidence or concordance of evidence) is the principle that evidence from independent, unrelated sources can "converge" on strong conclusions. That is, when multiple sources of evidence are in agreement, the conclusion can be very strong even when none of the individual sources of evidence is significantly so on its own. Most established scientific knowledge is supported by a convergence of evidence: if not, the evidence is comparatively weak, and there will not likely be a strong scientific consensus. The principle is based on the unity of knowledge; measuring the same result by several different methods should lead to the same answer. For example, it should not matter whether one measures distances within the Giza pyramid complex by laser rangefinding, by satellite imaging, or with a meter stick – in all three cases, the answer should be approximately the same. For the same reason, different dating methods in geochronology should concur, a result in chemistry should not contradict a result in geology, etc. The word "consilience" was originally coined as the phrase "consilience of inductions" by William Whewell ("consilience" refers to a "jumping together" of knowledge). The word comes from Latin "com-" "together" and "-siliens" "jumping" (as in resilience). Consilience requires the use of independent methods of measurement, meaning that the methods have few shared characteristics. That is, the mechanism by which the measurement is made is different; each method is dependent on an unrelated natural phenomenon. For example, the accuracy of laser rangefinding measurements is based on the scientific understanding of lasers, while satellite pictures and meter sticks rely on different phenomena. Because the methods are independent, when one of several methods is in error, it is very unlikely to be in error in the "same way" as any of the other methods, and a difference between the measurements will be observed. If the scientific understanding of the properties of lasers were inaccurate, then the laser measurement would be inaccurate but the others would not. As a result, when several different methods agree, this is strong evidence that "none" of the methods are in error and the conclusion is correct. This is because of a greatly reduced likelihood of errors: for a consensus estimate from multiple measurements to be wrong, the errors would have to be similar for all samples and all methods of measurement, which is extremely unlikely. Random errors will tend to cancel out as more measurements are made, due to regression to the mean; systematic errors will be detected by differences between the measurements (and will also tend to cancel out since the direction of the error will still be random). This is how scientific theories reach high confidence – over time, they build up a large degree of evidence which converges on the same conclusion. When results from different strong methods do appear to conflict, this is treated as a serious problem to be reconciled. For example, in the 19th century, the Sun appeared to be no more than 20 million years old, but the Earth appeared to be no less than 300 million years old (a conflict resolved by the discovery of nuclear fusion and radioactivity, and the theory of quantum mechanics); or current attempts to resolve theoretical differences between quantum mechanics and general relativity.
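The role of independence can be made concrete with a toy simulation. The Python sketch below uses entirely invented numbers and method names: within each hypothetical method, random noise averages out over repeated measurements, while a systematic bias in one method shows up as a discrepancy between methods rather than contaminating all of them, which is how consilience exposes a faulty technique.

```python
import random

random.seed(42)
TRUE_VALUE = 230.0  # hypothetical "true" quantity being measured (arbitrary units)

def measure(n, random_sd, systematic_bias=0.0):
    """Simulate n measurements by one method: independent random noise plus
    an optional systematic bias specific to that method."""
    return [TRUE_VALUE + systematic_bias + random.gauss(0, random_sd) for _ in range(n)]

methods = {
    "laser rangefinder": measure(50, random_sd=0.05),
    "satellite imagery": measure(50, random_sd=0.50),
    "meter stick":       measure(50, random_sd=0.20, systematic_bias=1.5),  # a flawed method
}

for name, samples in methods.items():
    mean = sum(samples) / len(samples)
    print(f"{name:18s} mean = {mean:7.3f}")

# Random errors average out within each method, so the two unbiased methods
# converge on the same answer; the biased method stands out as a discrepancy
# between methods, rather than silently shifting the shared conclusion.
```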
Because of consilience, the strength of evidence for any particular conclusion is related to how many independent methods are supporting the conclusion, as well as how different these methods are. Those techniques with the fewest (or no) shared characteristics provide the strongest consilience and result in the strongest conclusions. This also means that confidence is usually strongest when considering evidence from different fields, because the techniques are usually very different. For example, the theory of evolution is supported by a convergence of evidence from genetics, molecular biology, paleontology, geology, biogeography, comparative anatomy, comparative physiology, and many other fields. In fact, the evidence within each of these fields is itself a convergence providing evidence for the theory. (As a result, to disprove evolution, most or all of these independent lines of evidence would have to be found to be in error.) The strength of the evidence, considered together as a whole, results in the strong scientific consensus that the theory is correct. In a similar way, evidence about the history of the universe is drawn from astronomy, astrophysics, planetary geology, and physics. Finding similar conclusions from multiple independent methods is also evidence for the reliability of the methods themselves, because consilience eliminates the possibility of all potential errors that do not affect all the methods equally. This is also used for the validation of new techniques through comparison with the consilient ones. If only partial consilience is observed, this allows for the detection of errors in methodology; any weaknesses in one technique can be compensated for by the strengths of the others. Alternatively, if using more than one or two techniques for every experiment is infeasible, some of the benefits of consilience may still be obtained if it is well-established that these techniques usually give the same result. Consilience is important across all of science, including the social sciences, and is often used as an argument for scientific realism by philosophers of science. Each branch of science studies a subset of reality that depends on factors studied in other branches. Atomic physics underlies the workings of chemistry, which studies emergent properties that in turn are the basis of biology. Psychology is not separate from the study of properties emergent from the interaction of neurons and synapses. Sociology, economics, and anthropology are each, in turn, studies of properties emergent from the interaction of countless individual humans. The concept that all the different areas of research are studying one real, existing universe is an apparent explanation of why scientific knowledge determined in one field of inquiry has often helped in understanding other fields. Consilience does not forbid deviations: in fact, since not all experiments are perfect, some deviations from established knowledge are expected. However, when the convergence is strong enough, then new evidence inconsistent with the previous conclusion is not usually enough to outweigh that convergence. Without an equally strong convergence on the new result, the weight of evidence will still favor the established result. This means that the new evidence is most likely to be wrong. Science denialism (for example, AIDS denialism) is often based on a misunderstanding of this property of consilience. 
A denier may promote small gaps not yet accounted for by the consilient evidence, or small amounts of evidence contradicting a conclusion without accounting for the pre-existing strength resulting from consilience. More generally, to insist that all evidence converge precisely with no deviations would be naïve falsificationism, equivalent to considering a single contrary result to falsify a theory when another explanation, such as equipment malfunction or misinterpretation of results, is much more likely. Historical evidence also converges in an analogous way. For example: if five ancient historians, none of whom knew each other, all claim that Julius Caesar seized power in Rome in 49 BCE, this is strong evidence in favor of that event occurring even if each individual historian is only partially reliable. By contrast, if the same historian had made the same claim five times in five different places (and no other types of evidence were available), the claim is much weaker because it originates from a single source. The evidence from the ancient historians could also converge with evidence from other fields, such as archaeology: for example, evidence that many senators fled Rome at the time, that the battles of Caesar’s civil war occurred, and so forth. Consilience has also been discussed in reference to Holocaust denial. That is, individually the evidence may underdetermine the conclusion, but together they overdetermine it. A similar way to state this is that to ask for one "particular" piece of evidence in favor of a conclusion is a flawed question. In addition to the sciences, consilience can be important to the arts, ethics and religion. Both artists and scientists have identified the importance of biology in the process of artistic innovation. Consilience has its roots in the ancient Greek concept of an intrinsic orderliness that governs our cosmos, inherently comprehensible by logical process, a vision at odds with mystical views in many cultures that surrounded the Hellenes. The rational view was recovered during the high Middle Ages, separated from theology during the Renaissance and found its apogee in the Age of Enlightenment. Whewell’s definition was that: More recent descriptions include: Although the concept of consilience in Whewell's sense was widely discussed by philosophers of science, the term was unfamiliar to the broader public until the end of the 20th century, when it was revived in "Consilience: The Unity of Knowledge," a 1998 book by the author and biologist E.O. Wilson, as an attempt to bridge the culture gap between the sciences and the humanities that was the subject of C. P. Snow's "The Two Cultures and the Scientific Revolution" (1959). Wilson held that with the rise of the modern sciences, the sense of unity gradually was lost in the increasing fragmentation and specialization of knowledge in the last two centuries. He asserted that the sciences, humanities, and arts have a common goal: to give a purpose to understanding the details, to lend to all inquirers "a conviction, far deeper than a mere working proposition, that the world is orderly and can be explained by a small number of natural laws." An important point made by Wilson is that hereditary human nature and evolution itself profoundly affect the evolution of culture, in essence a sociobiological concept.
Wilson's concept is a much broader notion of consilience than that of Whewell, who was merely pointing out that generalizations invented to account for one set of phenomena often account for others as well. A parallel view lies in the term universology, which literally means "the science of the universe." Universology was first promoted for the study of the interconnecting principles and truths of all domains of knowledge by Stephen Pearl Andrews, a 19th-century utopian futurist and anarchist.
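The independent-historians example earlier in this article is, at bottom, an exercise in compounding probabilities. The toy calculation below (Python; the 70% reliability figure is invented purely for the arithmetic) contrasts five genuinely independent sources with five repetitions of a single source.

```python
# Toy illustration of the independent-historians example above.
# The 70% reliability figure is an invented number, used only for arithmetic.
p_single_wrong = 0.30

# Five genuinely independent sources all asserting the same false claim:
p_all_independent_wrong = p_single_wrong ** 5
print(f"one source wrong:               {p_single_wrong:.2%}")
print(f"five independent sources wrong: {p_all_independent_wrong:.4%}")

# Five copies of the SAME account are not independent, so repeating the claim
# five times leaves the probability of error unchanged at 30%.
```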
https://en.wikipedia.org/wiki?curid=7638
Clarence Brown Clarence Leon Brown (May 10, 1890 – August 17, 1987) was an American film director. Born in Clinton, Massachusetts, to Larkin Harry Brown, a cotton manufacturer, and Katherine Ann Brown (née Gaw), Brown moved to Tennessee when he was 11 years old. He attended Knoxville High School and the University of Tennessee, both in Knoxville, Tennessee, graduating from the university at the age of 19 with two degrees in engineering. An early fascination with automobiles led Brown to a job with the Stevens-Duryea Company, then to his own Brown Motor Car Company in Alabama. He later abandoned the car dealership after developing an interest in motion pictures around 1913. He was hired by the Peerless Studio at Fort Lee, New Jersey, and became an assistant to the French-born director Maurice Tourneur. After serving in World War I, Brown was given his first co-directing credit (with Tourneur) for "The Great Redeemer" (1920). Later that year, he directed a major portion of "The Last of the Mohicans" after Tourneur was injured in a fall. Brown moved to Universal in 1924, and then to MGM, where he stayed until the mid-1950s. At MGM he was one of the main directors of their female stars; he directed Joan Crawford six times and Greta Garbo seven. He was nominated five times for the Academy Award as a director and once as a producer, but he never received an Oscar. However, he won the Best Foreign Film award for "Anna Karenina", starring Garbo, at the 1935 Venice International Film Festival. Brown's films gained a total of 38 Academy Award nominations and earned nine Oscars. Brown himself received six Academy Award nominations and in 1949, he won the British Academy Award for the film version of William Faulkner's "Intruder in the Dust". In 1957, Brown was awarded The George Eastman Award, given by George Eastman House for distinguished contribution to the art of film. Brown retired a wealthy man due to his real estate investments, but refused to watch new movies, as he feared they might cause him to restart his career. The Clarence Brown Theater, on the campus of the University of Tennessee, is named in his honor. He holds the record for most nominations for the Academy Award for Best Director without a win, with six. Clarence Brown was married four times. His first marriage was to Paula Herndon Pratt in 1913, which lasted until their divorce in 1920. The couple produced a daughter, Adrienne Brown, in 1917. His second marriage was to Ona Wilson, which lasted from 1922 until their divorce in 1927. He was engaged to Dorothy Sebastian and Mona Maris, although he did not marry either of them, with Maris later saying she ended their relationship because she had her "own ideas of marriage then." He married his third wife, Alice Joyce, in 1933 and they divorced in 1945. His last marriage was to Marian Spies in 1946, which lasted until his death in 1987. Brown died at the Saint John's Health Center in Santa Monica, California, from kidney failure on August 17, 1987, at the age of 97. He is interred at Forest Lawn Memorial Park in Glendale, California. On February 8, 1960, Brown received a star on the Hollywood Walk of Fame at 1752 Vine Street, for his contributions to the motion picture industry.
https://en.wikipedia.org/wiki?curid=7642
Conciliation Conciliation is an alternative dispute resolution (ADR) process whereby the parties to a dispute use a conciliator, who meets with the parties both separately and together in an attempt to resolve their differences. They do this by lowering tensions, improving communications, interpreting issues, encouraging parties to explore potential solutions and assisting parties in finding a mutually acceptable outcome. Conciliation differs from arbitration in that the conciliation process, in and of itself, has no legal standing, and the conciliator usually has no authority to seek evidence or call witnesses, usually writes no decision, and makes no award. Conciliation differs from mediation in that in conciliation, often the parties are in need of restoring or repairing a relationship, either personal or business. A conciliator assists each of the parties to independently develop a list of all of their objectives (the outcomes which they desire to obtain from the conciliation). The conciliator then has each of the parties separately prioritize their own list from most to least important. He/She then goes back and forth between the parties and encourages them to "give" on the objectives one at a time, starting with the least important and working toward the most important for each party in turn. The parties rarely place the same priorities on all objectives, and usually have some objectives that are not listed by the other party. Thus the conciliator can quickly build a string of successes and help the parties create an atmosphere of trust which the conciliator can continue to develop. Most successful conciliators are highly skilled negotiators. Some conciliators operate under the auspices of any one of several non-governmental entities, or for governmental agencies such as the Federal Mediation and Conciliation Service in the United States. Historical conciliation is an applied conflict resolution approach that utilizes historical narratives to positively transform relations between societies in conflicts. Historical conciliation can utilize many different methodologies, including mediation, sustained dialogues, apologies, acknowledgement, support of public commemoration activities, and public diplomacy. Historical conciliation is not an excavation of objective facts. The point of facilitating historical questions is not to discover all the facts in regard to who was right or wrong. Rather, the objective is to discover the complexity, ambiguity, and emotions surrounding both dominant and non-dominant cultural and individual narratives of history. It is also not a rewriting of history. The goal is not to create a combined narrative that everyone agrees upon. Instead, the aim is to create room for critical thinking and more inclusive understanding of the past and conceptions of “the other.” Conflicts that are addressed through historical conciliation have their roots in conflicting identities of the people involved. Whether the identity at stake is their ethnicity, religion or culture, it requires a comprehensive approach that takes people's needs, hopes, fears, and concerns into account. Japanese law makes extensive use of conciliation in civil disputes. The most common forms are civil conciliation and domestic conciliation, both of which are managed under the auspices of the court system by one judge and two non-judge "conciliators." Civil conciliation is a form of dispute resolution for small lawsuits, and provides a simpler and cheaper alternative to litigation.
Depending on the nature of the case, non-judge experts (doctors, appraisers, actuaries, and so on) may be called by the court as conciliators to help decide the case. Domestic conciliation is most commonly used to handle contentious divorces, but may apply to other domestic disputes such as the annulment of a marriage or acknowledgment of paternity. Parties in such cases are required to undergo conciliation proceedings and may only bring their case to court once conciliation has failed.
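The back-and-forth procedure described earlier in this article (each party ranks its objectives, and the conciliator asks for concessions starting from the least important items) can be sketched as a simple ordering exercise. The Python sketch below uses hypothetical party names and objectives; it only illustrates the sequencing idea, and is not a model of real negotiation.

```python
from itertools import zip_longest

# Party names, objectives, and rankings are hypothetical.
party_a = ["apology", "repair schedule", "payment amount"]   # ranked least -> most important
party_b = ["payment amount", "confidentiality", "apology"]   # ranked least -> most important

# The conciliator alternates between the parties, asking each in turn to
# "give" on its currently least important remaining objective, so early
# concessions are cheap and build trust before the hard issues are reached.
for round_no, (a_item, b_item) in enumerate(zip_longest(party_a, party_b), start=1):
    if a_item:
        print(f"round {round_no}: ask Party A to give on '{a_item}'")
    if b_item:
        print(f"round {round_no}: ask Party B to give on '{b_item}'")
```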
https://en.wikipedia.org/wiki?curid=7643
Cyclone (programming language) The Cyclone programming language is intended to be a safe dialect of the C language. Cyclone is designed to avoid buffer overflows and other vulnerabilities that are possible in C programs, without losing the power and convenience of C as a tool for system programming. Cyclone development was started as a joint project of AT&T Labs Research and Greg Morrisett's group at Cornell in 2001. Version 1.0 was released on May 8, 2006. Cyclone attempts to avoid some of the common pitfalls of C, while still maintaining its look and performance. To this end, Cyclone places a number of limits on programs and, to maintain the tool set that C programmers are used to, provides a set of extensions; a fuller high-level introduction to Cyclone and the reasoning behind these choices can be found in the paper introducing the language. Cyclone looks, in general, much like C, but it should be viewed as a C-like language. Cyclone implements three kinds of pointer: normal pointers (written *), never-NULL pointers (written @), and bounds-carrying "fat" pointers (written ?), which support pointer arithmetic. The purpose of introducing these new pointer types is to avoid common problems when using pointers. Take for instance a function foo that takes a pointer to an int. Although the person who wrote the function foo could have inserted NULL checks, let us assume that for performance reasons they did not. Calling foo(NULL) will result in undefined behavior (typically, although not necessarily, a SIGSEGV signal being sent to the application). To avoid such problems, Cyclone introduces the @ pointer type, which can never be NULL. Thus, the "safe" version of foo takes an int @ argument rather than an int *. This tells the Cyclone compiler that the argument to foo should never be NULL, avoiding the aforementioned undefined behavior. The simple change of * to @ saves the programmer from having to write NULL checks and the operating system from having to trap NULL pointer dereferences. This extra limit, however, can be a rather large stumbling block for most C programmers, who are used to being able to manipulate their pointers directly with arithmetic. Although this is desirable, it can lead to buffer overflows and other "off-by-one"-style mistakes. To avoid this, the ? pointer type is delimited by a known bound, the size of the array. Although this adds overhead due to the extra information stored about the pointer, it improves safety and security. Take for instance a simple (and naïve) strlen function, written in C. This function assumes that the string being passed in is terminated by the NUL character ('\0'). However, what would happen if a character array with no terminating NUL were passed in? This is perfectly legal in C, yet would cause strlen to iterate through memory not necessarily associated with the string. There are functions, such as strnlen, which can be used to avoid such problems, but these functions are not standard with every implementation of ANSI C. The Cyclone version of strlen is not so different from the C version: it takes a ? pointer and bounds itself by the length of the array passed to it, thus never going past the actual length. Each of the kinds of pointer type can be safely cast to each of the others, and arrays and strings are automatically cast to ? pointers by the compiler. (Casting from ? to * invokes a bounds check, and casting from ? to @ invokes both a NULL check and a bounds check. Casting from * to ? results in no checks whatsoever; the resulting ? pointer has a size of 1.)
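Python has no raw pointers, so the following is only a loose analogy of what the bounds-carrying ? pointer buys, not Cyclone syntax: the value travels together with its bounds, every access is checked, and a length scan over a buffer with no terminator stops at the bound instead of running off the end. The class and function names here are invented for illustration.

```python
class FatPointer:
    """Toy analogy of a bounds-carrying pointer: the position and the bound
    travel together, and every dereference is checked."""

    def __init__(self, buffer, offset=0):
        self.buffer = buffer          # the underlying array
        self.offset = offset          # current position within it

    def deref(self):
        if not (0 <= self.offset < len(self.buffer)):
            raise IndexError("out-of-bounds dereference caught")
        return self.buffer[self.offset]

    def advance(self, n=1):
        # "Pointer arithmetic" is allowed, but the bounds come along with it.
        return FatPointer(self.buffer, self.offset + n)

def strlen(p):
    """Length of a NUL-terminated buffer, never reading past the bound."""
    n, q = 0, p
    while q.offset < len(q.buffer) and q.deref() != "\0":
        n += 1
        q = q.advance()
    return n

buf = list("hello!")                  # note: no terminating "\0"
print(strlen(FatPointer(buf)))        # 6 -- stops at the bound instead of running off
```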
Consider a C function that allocates an array of chars, buf, on the stack and returns a pointer to the start of buf. The memory used on the stack for buf is deallocated when the function returns, so the returned value cannot be used safely outside of the function. While gcc and other compilers will warn about code that directly returns the address of a local array, variants of the same mistake will typically compile without warnings: gcc can produce warnings for such code as a side-effect of option -O2 or -O3, but there are no guarantees that all such errors will be detected. Cyclone does regional analysis of each segment of code, preventing dangling pointers, such as the one returned from this version of the function. All of the local variables in a given scope are considered to be part of the same region, separate from the heap or any other local region. Thus, when analyzing the function, the Cyclone compiler would see that the returned value is a pointer into the local stack, and would report an error.
https://en.wikipedia.org/wiki?curid=7645
Counter (digital) In digital logic and computing, a counter is a device which stores (and sometimes displays) the number of times a particular event or process has occurred, often in relationship to a clock. The most common type is a sequential digital logic circuit with an input line called the "clock" and multiple output lines. The values on the output lines represent a number in the binary or BCD number system. Each pulse applied to the clock input increments or decrements the number in the counter. A counter circuit is usually constructed of a number of flip-flops connected in cascade. Counters are a very widely used component in digital circuits, and are manufactured as separate integrated circuits and also incorporated as parts of larger integrated circuits. In electronics, counters can be implemented quite easily using register-type circuits such as the flip-flop, and a wide variety of designs exist, which can be classified into types such as the asynchronous, synchronous, decade, ring, and Johnson counters discussed below. Each is useful for different applications. Usually, counter circuits are digital in nature, and count in natural binary. Many types of counter circuits are available as digital building blocks, for example a number of chips in the 4000 and 4500 series implement different counters. Occasionally there are advantages to using a counting sequence other than the natural binary sequence—such as the binary coded decimal counter, a linear-feedback shift register counter, or a Gray-code counter. Counters are useful for digital clocks and timers, and in oven timers, VCR clocks, etc. An asynchronous (ripple) counter is a single D-type flip-flop, with its D (data) input fed from its own inverted output. This circuit can store one bit, and hence can count from zero to one before it overflows (starts over from 0). This counter will increment once for every clock cycle and takes two clock cycles to overflow, so every cycle it will alternate between a transition from 0 to 1 and a transition from 1 to 0. Notice that this creates a new clock with a 50% duty cycle at exactly half the frequency of the input clock. If this output is then used as the clock signal for a similarly arranged D flip-flop (remembering to invert the output to the input), one will get another 1-bit counter that counts half as fast. Putting them together yields a two-bit counter. You can continue to add additional flip-flops, always inverting the output to its own input, and using the output from the previous flip-flop as the clock signal. The result is called a ripple counter, which can count up to 2^"n" − 1, where "n" is the number of bits (flip-flop stages) in the counter. Ripple counters suffer from unstable outputs as the overflows "ripple" from stage to stage, but they do find frequent application as dividers for clock signals, where the instantaneous count is unimportant, but the division ratio overall is (to clarify this, a 1-bit counter is exactly equivalent to a divide by two circuit; the output frequency is exactly half that of the input when fed with a regular train of clock pulses). The use of flip-flop outputs as clocks leads to timing skew between the count data bits, making this ripple technique incompatible with normal synchronous circuit design styles. In synchronous counters, the clock inputs of all the flip-flops are connected together and are triggered by the input pulses. Thus, all the flip-flops change state simultaneously (in parallel). The circuit below is a 4-bit synchronous counter. The J and K inputs of FF0 are connected to HIGH.
FF1 has its J and K inputs connected to the output of FF0, and the J and K inputs of FF2 are connected to the output of an AND gate that is fed by the outputs of FF0 and FF1. A simple way of implementing the logic for each bit of an ascending counter (which is what is depicted in the adjacent image) is for each bit to toggle when all of the less significant bits are at a logic high state. For example, bit 1 toggles when bit 0 is logic high; bit 2 toggles when both bit 1 and bit 0 are logic high; bit 3 toggles when bit 2, bit 1 and bit 0 are all high; and so on. Synchronous counters can also be implemented with hardware finite-state machines, which are more complex but allow for smoother, more stable transitions. A decade counter is one that counts in decimal digits, rather than binary. A decade counter may have each digit binary encoded (that is, it may count in binary-coded decimal, as the 7490 integrated circuit did) or may use other binary encodings. "A decade counter is a binary counter that is designed to count to 1010 (decimal 10). An ordinary four-stage counter can be easily modified to a decade counter by adding a NAND gate as in the schematic to the right. Notice that FF2 and FF4 provide the inputs to the NAND gate. The NAND gate outputs are connected to the CLR input of each of the FFs." Such a counter counts from 0 to 9 and then resets to zero. The counter output can be set to zero by pulsing the reset line low. The count then increments on each clock pulse until it reaches 1001 (decimal 9). When it increments to 1010 (decimal 10) both inputs of the NAND gate go high. The result is that the NAND output goes low, and resets the counter to zero. D going low can be a CARRY OUT signal, indicating that there has been a count of ten. A ring counter is a circular shift register which is initiated such that only one of its flip-flops is in the state one while the others are in their zero states. A ring counter is a shift register (a cascade connection of flip-flops) with the output of the last one connected to the input of the first, that is, in a ring. Typically, a pattern consisting of a single bit is circulated so the state repeats every n clock cycles if n flip-flops are used. A Johnson counter (or switch-tail ring counter, twisted ring counter, walking ring counter, or Möbius counter) is a modified ring counter, where the output from the last stage is inverted and fed back as input to the first stage. The register cycles through a sequence of bit-patterns, whose length is equal to twice the length of the shift register, continuing indefinitely. These counters find specialist applications, including those similar to the decade counter, digital-to-analog conversion, etc. They can be implemented easily using D- or JK-type flip-flops. In computability theory, a counter is considered a type of memory. A counter stores a single natural number (initially zero) and can be arbitrarily long. A counter is usually considered in conjunction with a finite-state machine (FSM), which can perform the following operations on the counter: The following machines are listed in order of power, with each one being strictly more powerful than the one below it: For the first and last, it doesn't matter whether the FSM is a deterministic finite automaton or a nondeterministic finite automaton. They have the same power. The first two and the last one are levels of the Chomsky hierarchy. The first machine, an FSM plus two counters, is equivalent in power to a Turing machine.
See the article on counter machines for a proof. A web counter or hit counter is a computer software program that indicates the number of visitors, or hits, a particular webpage has received. Once set up, these counters will be incremented by one every time the web page is accessed in a web browser. The number is usually displayed as an inline digital image or in plain text, or on a physical counter such as a mechanical counter. Images may be presented in a variety of fonts, or styles; the classic example is the wheels of an odometer. Web counters were popular in the mid to late 1990s and early 2000s, but were later replaced by more detailed and complete web traffic measures. Many automation systems use PCs and laptops to monitor different parameters of machines and production data. Counters may count parameters such as the number of pieces produced, the production batch number, and measurements of the amounts of material used. Long before electronics became common, mechanical devices were used to count events. These are known as tally counters. They typically consist of a series of disks mounted on an axle, with the digits zero through nine marked on their edge. The rightmost disk moves one increment with each event. Each disk except the leftmost has a protrusion that, after the completion of one revolution, moves the next disk to the left one increment. Such counters were used as odometers for bicycles and cars and in tape recorders, fuel dispensers, and production machinery, as well as in other machinery. One of the largest manufacturers was the Veeder-Root company, and their name was often used for this type of counter. Hand-held tally counters are used mainly for stocktaking and for counting people attending events. Electromechanical counters were used to accumulate totals in tabulating machines that pioneered the data processing industry.
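The ripple counter and decade counter described above can be sketched behaviourally in a few lines. The Python sketch below is not a gate-level model: it simply toggles each stage on the falling edge of the stage below it, and emulates the NAND-gate reset of the 7490-style decade counter when the count reaches binary 1010.

```python
class RippleCounter:
    """Behavioural sketch of an n-bit asynchronous (ripple) counter: bit 0
    toggles on every clock pulse, and each higher bit toggles when the bit
    below it falls from 1 to 0 (the 'ripple')."""

    def __init__(self, n_bits):
        self.bits = [0] * n_bits       # bits[0] is the least significant bit

    def clock(self):
        for i in range(len(self.bits)):
            self.bits[i] ^= 1          # toggle this stage
            if self.bits[i] == 1:      # no falling edge, so the ripple stops here
                break

    def value(self):
        return sum(bit << i for i, bit in enumerate(self.bits))

counter = RippleCounter(4)
for pulse in range(12):
    counter.clock()
print(counter.value())                 # 12 after twelve pulses

# A decade counter adds a reset: when the count reaches binary 1010 (ten),
# a NAND of the 2s and 8s bits clears every stage back to zero.
def decade_clock(counter):
    counter.clock()
    if counter.bits[1] and counter.bits[3]:   # count == 10 -> reset
        counter.bits = [0] * len(counter.bits)

decade = RippleCounter(4)
for pulse in range(12):
    decade_clock(decade)
print(decade.value())                  # 2: counts 0-9, resets, then counts up to 2
```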
https://en.wikipedia.org/wiki?curid=7647
Clay Mathematics Institute The Clay Mathematics Institute (CMI) is a private, non-profit foundation, based in Peterborough, New Hampshire, United States. CMI's scientific activities are managed from the President's office in Oxford, United Kingdom. The institute is "dedicated to increasing and disseminating mathematical knowledge." It gives out various awards and sponsorships to promising mathematicians. The institute was founded in 1998 through the sponsorship of Boston businessman Landon T. Clay. Harvard mathematician Arthur Jaffe was the first president of CMI. While the institute is best known for its Millennium Prize Problems, it carries out a wide range of activities, including a postdoctoral program (ten Clay Research Fellows are supported currently), conferences, workshops, and summer schools. The institute is run according to a standard structure comprising a scientific advisory committee that decides on grant-awarding and research proposals, and a board of directors that oversees and approves the committee's decisions. The board is made up of members of the Clay family, whereas the advisory committee is composed of leading authorities in mathematics, namely Sir Andrew Wiles, Michael Hopkins, Carlos Kenig, Andrei Okounkov, and Simon Donaldson. Martin R. Bridson is the current president of CMI. The institute is best known for establishing the Millennium Prize Problems on May 24, 2000. These seven problems are considered by CMI to be "important classic questions that have resisted solution over the years." For each problem, the first person to solve it will be awarded $1,000,000 by the CMI. In announcing the prize, CMI drew a parallel to Hilbert's problems, which were proposed in 1900, and had a substantial impact on 20th century mathematics. Of the initial 23 Hilbert problems, most of which have been solved, only the Riemann hypothesis (formulated in 1859) is included in the seven Millennium Prize Problems. For each problem, the Institute had a professional mathematician write up an official statement of the problem, which will be the main standard against which a given solution will be measured. The seven problems are: Some of the mathematicians who were involved in the selection and presentation of the seven problems were Michael Atiyah, Enrico Bombieri, Alain Connes, Pierre Deligne, Charles Fefferman, John Milnor, David Mumford, Andrew Wiles, and Edward Witten. In recognition of major breakthroughs in mathematical research, the institute has an annual prize — the Clay Research Award. Its recipients to date are Ian Agol, Manindra Agrawal, Yves Benoist, Manjul Bhargava, Tristan Buckmaster, Danny Calegari, Alain Connes, Nils Dencker, Alex Eskin, David Gabai, Ben Green, Mark Gross, Larry Guth, Christopher Hacon, Richard S. Hamilton, Michael Harris, Philip Isett, Jeremy Kahn, Nets Katz, Laurent Lafforgue, Gérard Laumon, Aleksandr Logunov, Eugenia Malinnikova, Vladimir Markovic, James McKernan, Jason Miller, Maryam Mirzakhani, Ngô Bảo Châu, Rahul Pandharipande, Jonathan Pila, Jean-François Quint, Peter Scholze, Oded Schramm, Scott Sheffield, Bernd Siebert, Stanislav Smirnov, Terence Tao, Clifford Taubes, Richard Taylor, Maryna Viazovska, Vlad Vicol, Claire Voisin, Jean-Loup Waldspurger, Andrew Wiles, Geordie Williamson, Edward Witten and Wei Zhang.
Besides the Millennium Prize Problems, the Clay Mathematics Institute supports mathematics via the awarding of research fellowships (which range from two to five years, and are aimed at younger mathematicians), as well as shorter-term scholarships for programs, individual research, and book writing. The institute also has a yearly Clay Research Award, recognizing major breakthroughs in mathematical research. Finally, the institute organizes a number of summer schools, conferences, workshops, public lectures, and outreach activities aimed primarily at junior mathematicians (from the high school to postdoctoral level). CMI publications are available in PDF form at most six months after they appear in print. The episode of the television series "Elementary" entitled "Solve for X" (Season 2, Episode 2) mentions the Clay Mathematics Institute in reference to their involvement in the P versus NP problem.
https://en.wikipedia.org/wiki?curid=7655
Cerebral arteriovenous malformation A cerebral arteriovenous malformation (cerebral AVM, CAVM, cAVM) is an abnormal connection between the arteries and veins in the brain—specifically, an arteriovenous malformation in the cerebrum. The most frequently observed problems related to an AVM are headaches and seizures, cranial nerve deficits, backaches, neckaches and eventual nausea, as coagulated blood makes its way down to be dissolved in the individual's spinal fluid. An estimated 15% of affected individuals have no symptoms at all at the time of detection. Other common symptoms are a pulsing noise in the head, progressive weakness and numbness and vision changes as well as debilitating, excruciating pain. In serious cases, the blood vessels rupture and there is bleeding within the brain (intracranial hemorrhage). In fact, in more than half of patients with AVM, hemorrhage is the first symptom. Symptoms due to bleeding include loss of consciousness, sudden and severe headache, nausea, vomiting, incontinence, and blurred vision, amongst others. Impairments caused by local brain tissue damage at the bleed site are also possible, including seizure, one-sided weakness (hemiparesis), a loss of touch sensation on one side of the body and deficits in language processing (aphasia). Ruptured AVMs are responsible for considerable mortality and morbidity. AVMs in certain critical locations may stop the circulation of the cerebrospinal fluid, causing accumulation of the fluid within the skull and giving rise to a clinical condition called hydrocephalus. A stiff neck can occur as the result of increased pressure within the skull and irritation of the meninges. AVMs are an abnormal connection between the arteries and veins in the human brain. Arteriovenous malformations are most commonly of prenatal origin. In a normal brain, oxygen-enriched blood from the heart travels in sequence through smaller blood vessels, going from arteries, to arterioles and then capillaries. Oxygen is removed in the latter vessels to be used by the brain. After the oxygen is removed, blood reaches venules and later veins, which will take it back to the heart and lungs. When there is an AVM, by contrast, blood goes directly from arteries to veins through the abnormal vessels, disrupting the normal circulation of blood. An AVM diagnosis is established by neuroimaging studies after a complete neurological and physical examination. Three main techniques are used to visualize the brain and search for AVM: computed tomography (CT), magnetic resonance imaging (MRI), and cerebral angiography. A CT scan of the head is usually performed first when the subject is symptomatic. It can suggest the approximate site of the bleed. MRI is more sensitive than CT in the diagnosis of AVMs and provides better information about the exact location of the malformation. More detailed pictures of the tangle of blood vessels that compose an AVM can be obtained by using contrast agents injected into the blood stream. If a CT scan is used in conjunction with an angiogram, this is called a computed tomography angiogram; if MRI is used, it is called a magnetic resonance angiogram. The best images of an AVM are obtained through cerebral angiography. This procedure involves using a catheter, threaded through an artery up to the head, to deliver a contrast agent into the AVM. As the contrast agent flows through the AVM structure, a sequence of X-ray images are obtained. A common method of grading cerebral AVMs is the Spetzler-Martin (SM) grade.
This system was designed to assess the patient's risk of neurological deficit after open surgical resection (surgical morbidity), based on characteristics of the AVM itself. Based on this system, AVMs may be classified as grades 1 - 5. This system was not intended to characterize risk of hemorrhage. "Eloquent" is defined as areas within the brain that, if removed, will result in loss of sensory processing or linguistic ability, minor paralysis, or paralysis. These include the sensorimotor cortices, deep cerebellar nuclei, cerebral peduncles, thalamus, hypothalamus, internal capsule, brainstem, and the visual cortex. The risk of post-surgical neurological deficit (difficulty with language, motor weakness, vision loss) increases with increasing Spetzler-Martin grade. A limitation of the Spetzler-Martin grading system is that it does not include the following factors: patient age, hemorrhage, diffuseness of nidus, and arterial supply. In 2010 a new supplemented Spetzler-Martin system (SM-supp, Lawton-Young) was devised, adding these variables to the SM system. Under this new system AVMs are classified from grades 1 - 10. It has since been determined to have greater predictive accuracy than Spetzler-Martin grades alone. Treatment depends on the location and size of the AVM and whether there is bleeding or not. The treatment in the case of sudden bleeding is focused on restoration of vital function. Medical management: Anticonvulsant medications such as phenytoin are often used to control seizure; medications or procedures may be employed to relieve intracranial pressure. Eventually, curative treatment may be required to prevent recurrent hemorrhage. However, any type of intervention may also carry a risk of creating a neurological deficit. Preventive treatment of as yet unruptured brain AVMs has been controversial, as several studies suggested favorable long-term outcome for unruptured AVM patients not undergoing intervention. The NIH-funded longitudinal ARUBA study ("A Randomized trial of Unruptured Brain AVMs") compares the risk of stroke and death in patients with preventive AVM eradication versus those followed without intervention. Interim results suggest that fewer strokes occur as long as patients with unruptured AVM do not undergo intervention. Because of the higher than expected event rate in the interventional arm of the ARUBA study, NIH/NINDS stopped patient enrollment in April 2013, while continuing to follow all participants to determine whether the difference in stroke and death in the two arms changes over time. Surgical management: Surgical elimination of the blood vessels involved is the preferred curative treatment for many types of AVM. Surgery is performed by a neurosurgeon who temporarily removes part of the skull (craniotomy), separates the AVM from surrounding brain tissue, and resects the abnormal vessels. While surgery can result in an immediate, complete removal of the AVM, risks exist depending on the size and the location of the malformation. The AVM must be resected en bloc, for partial resection will likely cause severe hemorrhage. The preferred treatment of Spetzler-Martin grade 1 and 2 AVMs in young, healthy patients is surgical resection due to the relatively small risk of neurological damage compared to the high lifetime risk of hemorrhage. Grade 3 AVMs may or may not be amenable to surgery. Grade 4 and 5 AVMs are not usually surgically treated. Radiosurgical management: Radiosurgery has been widely used on small AVMs with considerable success.
The Gamma Knife is an apparatus used to precisely apply a controlled radiation dosage to the volume of the brain occupied by the AVM. While this treatment does not require an incision and craniotomy (with their own inherent risks), three or more years may pass before the complete effects are known, during which time patients are at risk of bleeding. Complete obliteration of the AVM may or may not occur after several years, and repeat treatment may be needed. Radiosurgery is itself not without risk. In one large study, nine percent of patients had transient neurological symptoms, including headache, after radiosurgery for AVM. However, most symptoms resolved, and the long-term rate of neurological symptoms was 3.8%. Neuroendovascular therapy: Embolization is performed by interventional neuroradiologists, and the occlusion of blood vessels is most commonly obtained with ethylene-vinyl alcohol copolymer (Onyx) or N-butyl cyanoacrylate (NBCA). These substances are introduced by a radiographically guided catheter, and block the vessels responsible for blood flow into the AVM. Embolization is frequently used as an adjunct to either surgery or radiation treatment. Embolization reduces the size of the AVM, and during surgery it reduces the risk of bleeding. However, embolization alone may completely obliterate some AVMs. In high-flow intranidal fistulas, balloons can also be used to reduce the flow so that embolization can be done safely. The main risk is intracranial hemorrhage. This risk is difficult to quantify since many patients with asymptomatic AVMs will never come to medical attention. Small AVMs tend to bleed more often than do larger ones, the opposite of cerebral aneurysms. If a rupture or bleeding incident occurs, the blood may penetrate either into the brain tissue (cerebral hemorrhage) or into the subarachnoid space, which is located between the sheaths (meninges) surrounding the brain (subarachnoid hemorrhage). Bleeding may also extend into the ventricular system (intraventricular hemorrhage). Cerebral hemorrhage appears to be most common. One long-term study (mean follow up greater than 20 years) of over 150 symptomatic AVMs (either presenting with bleeding or seizures) found the risk of cerebral hemorrhage to be approximately 4% per year, slightly higher than the 2-3% seen in other studies. A simple, rough approximation of a patient's lifetime bleeding risk is 105 - (patient age in years), assuming a 3% bleed risk annually. For example, a healthy 30-year-old patient would have approximately a 75% lifetime risk of at least one bleeding event. Ruptured AVMs are a significant source of morbidity and mortality; post rupture, as many as 29% of patients will die, and only 55% will be able to live independently. The new detection rate of AVMs is approximately 1 per 100,000 per year. The point prevalence in adults is approximately 18 per 100,000. AVMs are more common in males than females, although in females pregnancy may start or worsen symptoms due to the increase in blood flow and volume it usually brings. There is a significant preponderance (15-20%) of AVM in patients with hereditary hemorrhagic telangiectasia (Osler-Weber-Rendu syndrome). No randomized, controlled clinical trial has established a survival benefit for treating patients (either with open surgery or radiosurgery) with AVMs that have not yet bled.
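The "105 minus age" rule of thumb quoted above is a shortcut for compounding a constant annual risk over the patient's remaining life expectancy. The short Python sketch below uses the 3% annual figure from the text; the remaining-lifespan values are chosen only to show where the approximation comes from.

```python
# With a constant 3% annual bleed risk (the figure used in the text), the
# chance of at least one bleed over y remaining years is 1 - 0.97**y.
annual_risk = 0.03

def risk_over_years(years):
    return 1 - (1 - annual_risk) ** years

# The article's example of ~75% lifetime risk for a healthy 30-year-old
# corresponds to roughly 45 years of assumed remaining life expectancy:
for years in (40, 45, 50):
    print(f"{years} remaining years -> {risk_over_years(years):.0%} chance of at least one bleed")
```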
https://en.wikipedia.org/wiki?curid=7659
Comparative method In linguistics, the comparative method is a technique for studying the development of languages by performing a feature-by-feature comparison of two or more languages with common descent from a shared ancestor and then extrapolating backwards to infer the properties of that ancestor. The comparative method may be contrasted with the method of internal reconstruction in which the internal development of a single language is inferred by the analysis of features within that language. Ordinarily, both methods are used together to reconstruct prehistoric phases of languages; to fill in gaps in the historical record of a language; to discover the development of phonological, morphological and other linguistic systems and to confirm or to refute hypothesised relationships between languages. The comparative method was developed over the 19th century. Key contributions were made by the Danish scholars Rasmus Rask and Karl Verner and the German scholar Jacob Grimm. The first linguist to offer reconstructed forms from a proto-language was August Schleicher, in his "Compendium der vergleichenden Grammatik der indogermanischen Sprachen", originally published in 1861. Here is Schleicher's explanation of why he offered reconstructed forms: In the present work an attempt is made to set forth the inferred Indo-European original language side by side with its really existent derived languages. Besides the advantages offered by such a plan, in setting immediately before the eyes of the student the final results of the investigation in a more concrete form, and thereby rendering easier his insight into the nature of particular Indo-European languages, there is, I think, another of no less importance gained by it, namely that it shows the baselessness of the assumption that the non-Indian Indo-European languages were derived from Old-Indian (Sanskrit). The aim of the comparative method is to highlight and interpret systematic phonological and semantic correspondences between two or more attested languages. If those correspondences cannot be rationally explained as the result of language contact (borrowings, areal influence, etc.), and if they are spread across language aspects (lexicon, morphology, phonetics) in such a way that they cannot be dismissed as random mutations, then it must be assumed that they descend from a single proto-language. A sequence of regular sound changes (along with their underlying sound laws) can then be postulated to explain the correspondences between the attested forms, which eventually allows for the reconstruction of a proto-language by the methodical comparison of "linguistic facts" within a generalized system of correspondences. Relation is deemed certain only if at least a partial reconstruction of the common ancestor is feasible, and regular sound correspondences can be established, with chance similarities ruled out. "Descent" is defined as transmission across the generations: children learn a language from the parents' generation and, after being influenced by their peers, transmit it to the next generation, and so on. For example, a continuous chain of speakers across the centuries links Vulgar Latin to all of its modern descendants. Two languages are "genetically related" if they descended from the same ancestor language. For example, Italian and French both come from Latin and therefore belong to the same family, the Romance languages. 
Having a large component of vocabulary from a certain origin is not sufficient to establish relatedness; for example, heavy borrowing from Arabic into Persian has caused more of Modern Persian's vocabulary to be from Arabic than from Persian's direct ancestor, Proto-Indo-Iranian, but Persian remains a member of the Indo-Iranian family and is not considered "related" to Arabic. However, it is possible for languages to have different degrees of relatedness. English, for example, is related to both German and Russian but is more closely related to the former than to the latter. Although all three languages share a common ancestor, Proto-Indo-European, English and German also share a more recent common ancestor, Proto-Germanic, but Russian does not. Therefore, English and German are considered to belong to a different subgroup, the Germanic languages. "Shared retentions" from the parent language are not sufficient evidence of a sub-group. For example, German and Russian both retain from Proto-Indo-European a contrast between the dative case and the accusative case, which English has lost. However, that similarity between German and Russian is not evidence that German is more closely related to Russian than to English but means only that the "innovation" in question, the loss of the accusative/dative distinction, happened more recently in English than the divergence of English from German. The division of related languages into sub-groups is accomplished more certainly by finding "shared linguistic innovations" that differentiate them from the parent language, rather than shared features that are retained from the parent language. In Antiquity, Romans were aware of the similarities between Greek and Latin, but did not study them systematically. They sometimes explained them mythologically, as the result of Rome being a Greek colony speaking a debased dialect. Even though grammarians of Antiquity had access to other languages around them (Oscan, Umbrian, Etruscan, Gaulish, Egyptian, Parthian…), they showed little interest in comparing, studying, or just documenting them. Comparison between languages really began after Antiquity. In the 9th or 10th century AD, Yehuda Ibn Quraysh compared the phonology and morphology of Hebrew, Aramaic and Arabic but attributed the resemblance to the Biblical story of Babel, with Abraham, Isaac and Joseph retaining Adam's language, with other languages at various removes becoming more altered from the original Hebrew. In publications of 1647 and 1654, Marcus van Boxhorn first described a rigid methodology for historical linguistic comparisons and proposed the existence of an Indo-European proto-language, which he called "Scythian", unrelated to Hebrew but ancestral to Germanic, Greek, Romance, Persian, Sanskrit, Slavic, Celtic and Baltic languages. The Scythian theory was further developed by Andreas Jäger (1686) and William Wotton (1713), who made early forays to reconstruct the primitive common language. In 1710 and 1723, Lambert ten Kate first formulated the regularity of sound laws, introducing, among others, the term root vowel. Another early systematic attempt to prove the relationship between two languages on the basis of similarity of grammar and lexicon was made by the Hungarian János Sajnovics in 1770, when he attempted to demonstrate the relationship between Sami and Hungarian. That work was later extended to all Finno-Ugric languages in 1799 by his countryman Samuel Gyarmathi.
However, the origin of modern historical linguistics is often traced back to Sir William Jones, an English philologist living in India, who in 1786 made his famous observation: The Sanscrit language, whatever be its antiquity, is of a wonderful structure; more perfect than the Greek, more copious than the Latin, and more exquisitely refined than either, yet bearing to both of them a stronger affinity, both in the roots of verbs and the forms of grammar, than could possibly have been produced by accident; so strong indeed, that no philologer could examine them all three, without believing them to have sprung from some common source, which, perhaps, no longer exists. There is a similar reason, though not quite so forcible, for supposing that both the Gothick and the Celtick, though blended with a very different idiom, had the same origin with the Sanscrit; and the old Persian might be added to the same family. The comparative method developed out of attempts to reconstruct the proto-language mentioned by Jones, which he did not name but subsequent linguists have labelled Proto-Indo-European (PIE). The first professional comparison between the Indo-European languages that were then known was made by the German linguist Franz Bopp in 1816. He did not attempt a reconstruction but demonstrated that Greek, Latin and Sanskrit shared a common structure and a common lexicon. In 1808, Friedrich Schlegel first stated the importance of using the oldest possible form of a language when trying to prove its relationships; in 1818, Rasmus Christian Rask developed the principle of regular sound-changes to explain his observations of similarities between individual words in the Germanic languages and their cognates in Greek. Jacob Grimm, better known for his "Fairy Tales", used the comparative method in "Deutsche Grammatik" (published 1819–1837 in four volumes), which attempted to show the development of the Germanic languages from a common origin, which was the first systematic study of diachronic language change. Both Rask and Grimm were unable to explain apparent exceptions to the sound laws that they had discovered. Although Hermann Grassmann explained one of the anomalies with the publication of Grassmann's law in 1862, Karl Verner made a methodological breakthrough in 1875, when he identified a pattern now known as Verner's law, the first sound-law based on comparative evidence showing that a phonological change in one phoneme could depend on other factors within the same word (such as neighbouring phonemes and the position of the accent), which is now called "conditioning environments". Similar discoveries made by the "Junggrammatiker" (usually translated as "Neogrammarians") at the University of Leipzig in the late 19th century led them to conclude that all sound changes were ultimately regular, resulting in the famous statement by Karl Brugmann and Hermann Osthoff in 1878 that "sound laws have no exceptions". That idea is fundamental to the modern comparative method since it necessarily assumes regular correspondences between sounds in related languages and thus regular sound changes from the proto-language. The "Neogrammarian Hypothesis" led to the application of the comparative method to reconstruct Proto-Indo-European since Indo-European was then by far the most well-studied language family. Linguists working with other families soon followed suit, and the comparative method quickly became the established method for uncovering linguistic relationships.
There is no fixed set of steps to be followed in the application of the comparative method, but some steps are suggested by Lyle Campbell and Terry Crowley, who are both authors of introductory texts in historical linguistics. This abbreviated summary is based on their concepts of how to proceed. This step involves making lists of words that are likely cognates among the languages being compared. If there is a regularly-recurring match between the phonetic structure of basic words with similar meanings, a genetic kinship can probably then be established. For example, linguists looking at the Polynesian family might come up with a list of likely cognates covering glosses such as 'one', 'three', 'man' and 'taboo' (their actual list would be much longer). Borrowings or false cognates can skew or obscure the correct data. For example, English "taboo" is like the six Polynesian forms because of borrowing from Tongan into English, not because of a genetic similarity. That problem can usually be overcome by using basic vocabulary, such as kinship terms, numbers, body parts and pronouns. Nonetheless, even basic vocabulary can sometimes be borrowed. Finnish, for example, borrowed the word for "mother", "äiti", from Proto-Germanic *aiþį̄ (compare to Gothic "aiþei"). English borrowed the pronouns "they", "them", and "their(s)" from Norse. Thai and various other East Asian languages borrowed their numbers from Chinese. An extreme case is represented by Pirahã, a Muran language of South America, which has been controversially claimed to have borrowed all of its pronouns from Nheengatu. The next step involves determining the regular sound-correspondences exhibited by the lists of potential cognates. For example, in the Polynesian data above, it is apparent that words that contain "t" in most of the languages listed have cognates in Hawaiian with "k" in the same position. That is visible in multiple cognate sets: the words glossed as 'one', 'three', 'man' and 'taboo' all show the relationship. The situation is called a "regular correspondence" between "k" in Hawaiian and "t" in the other Polynesian languages. Similarly, a regular correspondence can be seen between Hawaiian and Rapanui "h", Tongan and Samoan "f", Maori "ɸ", and Rarotongan "ʔ". Mere phonetic similarity, as between English "day" and Latin "dies" (both with the same meaning), has no probative value. English initial "d-" does not "regularly" match Latin "d-", since a large set of English and Latin non-borrowed cognates cannot be assembled such that English "d" repeatedly and consistently corresponds to Latin "d" at the beginning of a word, and whatever sporadic matches can be observed are due either to chance (as in the above example) or to borrowing (for example, Latin "diabolus" and English "devil", both ultimately of Greek origin). However, English and Latin exhibit a regular correspondence of "t-" : "d-" (in which "A : B" means "A corresponds to B"), as in pairs like English "two" : Latin "duo" and English "ten" : Latin "decem". If there are many regular correspondence sets of this kind (the more, the better), a common origin becomes a virtual certainty, particularly if some of the correspondences are non-trivial or unusual. During the late 18th to late 19th century, two major developments improved the method's effectiveness. First, it was found that many sound changes are conditioned by a specific "context". 
For example, in both Greek and Sanskrit, an aspirated stop evolved into an unaspirated one, but only if a second aspirate occurred later in the same word; this is Grassmann's law, first described for Sanskrit by Sanskrit grammarian Pāṇini and promulgated by Hermann Grassmann in 1863. Second, it was found that sometimes sound changes occurred in contexts that were later lost. For instance, in Sanskrit velars ("k"-like sounds) were replaced by palatals ("ch"-like sounds) whenever the following vowel was "*i" or "*e". Subsequent to this change, all instances of "*e" were replaced by "a". The situation could be reconstructed only because the original distribution of "e" and "a" could be recovered from the evidence of other Indo-European languages. For instance, the Latin suffix "-que", "and", preserves the original "*e" vowel that caused the consonant shift in its Sanskrit cognate "ca". Verner's Law, discovered by Karl Verner in 1875, provides a similar case: the voicing of consonants in Germanic languages underwent a change that was determined by the position of the old Indo-European accent. Following the change, the accent shifted to initial position. Verner solved the puzzle by comparing the Germanic voicing pattern with Greek and Sanskrit accent patterns. This stage of the comparative method, therefore, involves examining the correspondence sets discovered in step 2 and seeing which of them apply only in certain contexts. If two (or more) sets occur in complementary distribution, they can be assumed to reflect a single original phoneme: "some sound changes, particularly conditioned sound changes, can result in a proto-sound being associated with more than one correspondence set". For example, a potential cognate list can be established for Romance languages, which descend from Latin. Such a list evidences two correspondence sets, "k : k" and "k : ʃ". Since French "ʃ" occurs only before "a" where the other languages also have "a", and French "k" occurs elsewhere, the difference is caused by different environments (being before "a" conditions the change), and the sets are complementary. They can, therefore, be assumed to reflect a single proto-phoneme (in this case "*k", spelled |c| in Latin). The original Latin words are "corpus", "crudus", "catena" and "captiare", all with an initial "k". If more evidence along those lines were given, one might conclude that an alteration of the original "k" took place because of a different environment. A more complex case involves consonant clusters in Proto-Algonquian. The Algonquianist Leonard Bloomfield used the reflexes of the clusters in four of the daughter languages to reconstruct five correspondence sets. Although all five correspondence sets overlap with one another in various places, they are not in complementary distribution and so Bloomfield recognised that a different cluster must be reconstructed for each set. His reconstructions were, respectively, "*hk", "*xk", "*čk", "*šk", and "*çk" (in which "x" and "ç" are arbitrary symbols, rather than attempts to guess the phonetic value of the proto-phonemes). Typology assists in deciding what reconstruction best fits the data. For example, the voicing of voiceless stops between vowels is common, but the devoicing of voiced stops in that environment is rare. If a correspondence "-t-" : "-d-" between vowels is found in two languages, the proto-phoneme is more likely to be "*-t-", with a development to the voiced form in the second language. The opposite reconstruction would represent a rare type. 
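As a rough illustration, the complementary-distribution check on the Romance forms cited above can be carried out mechanically. The following is a toy sketch in Python; the transcriptions, segmentations and conditioning environments are deliberately simplified and partly hypothetical:

    from collections import defaultdict

    # Toy data: word-initial consonants of the Romance cognates cited above,
    # together with the segment that followed Latin *k in the source word.
    cognate_sets = [
        # (gloss, Italian initial, French initial, following segment in Latin)
        ("body",  "k", "k", "o"),   # corpo  / corps   < Latin corpus
        ("raw",   "k", "k", "r"),   # crudo  / cru     < Latin crudus
        ("chain", "k", "ʃ", "a"),   # catena / chaîne  < Latin catena
        ("hunt",  "k", "ʃ", "a"),   # caccia / chasser < Vulgar Latin captiare
    ]

    # Record the environments in which each Italian : French correspondence occurs.
    environments = defaultdict(set)
    for gloss, italian, french, env in cognate_sets:
        environments[(italian, french)].add(env)

    for pair, envs in sorted(environments.items()):
        print(pair, "occurs before", sorted(envs))

    # If "k : k" and "k : ʃ" never occur in the same environment, they are in
    # complementary distribution and may reflect a single proto-phoneme (*k).
    print("complementary:", environments[("k", "k")].isdisjoint(environments[("k", "ʃ")]))

On this toy data the two sets never share an environment, mirroring the conclusion that both descend from a single "*k".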
However, unusual sound changes occur. The Proto-Indo-European word for "two", for example, is reconstructed as "*dwō", which is reflected in Classical Armenian as "erku". Several other cognates demonstrate a regular change "*dw-" → "erk-" in Armenian. Similarly, in Bearlake, a dialect of the Athabaskan language of Slavey, there has been a sound change of Proto-Athabaskan "*ts" → Bearlake "kʷ". It is very unlikely that "*dw-" changed directly into "erk-" and "*ts" into "kʷ", but they probably instead went through several intermediate steps before they arrived at the later forms. It is not phonetic similarity that matters for the comparative method but rather regular sound correspondences. By the principle of economy, the reconstruction of a proto-phoneme should require as few sound changes as possible to arrive at the modern reflexes in the daughter languages. For example, Algonquian languages exhibit a correspondence set in which most of the languages show "m" but one, Arapaho, shows "b". The simplest reconstruction for this set would be either "*m" or "*b". Both "*m" → "b" and "*b" → "m" are likely. Because "m" occurs in five of the languages and "b" in only one of them, if "*b" is reconstructed, it is necessary to assume five separate changes of "*b" → "m", but if "*m" is reconstructed, it is necessary to assume only one change of "*m" → "b" and so "*m" would be most economical. That argument assumes the languages other than Arapaho to be at least partly independent of one another. If they all formed a common subgroup, the development "*b" → "m" would have to be assumed to have occurred only once. In the final step, the linguist checks to see how the proto-phonemes fit the known typological constraints. For example, suppose a hypothetical reconstructed system has only one voiced stop, "*b", and although it has an alveolar and a velar nasal, "*n" and "*ŋ", there is no corresponding labial nasal. However, languages generally maintain symmetry in their phonemic inventories. In this case, a linguist might attempt to investigate the possibilities that either what was earlier reconstructed as "*b" is in fact "*m" or that the "*n" and "*ŋ" are in fact "*d" and "*g". Even a symmetrical system can be typologically suspicious. For example, the traditional Proto-Indo-European stop inventory comprises three series: plain voiceless, plain voiced, and voiced aspirated (breathy-voiced) stops; an earlier voiceless aspirated row was removed on grounds of insufficient evidence. Since the mid-20th century, a number of linguists have argued that this phonology is implausible and that it is extremely unlikely for a language to have a voiced aspirated (breathy voice) series without a corresponding voiceless aspirated series. Thomas Gamkrelidze and Vyacheslav Ivanov provided a potential solution and argued that the series that are traditionally reconstructed as plain voiced should be reconstructed as glottalized: either implosive or ejective. The plain voiceless and voiced aspirated series would thus be replaced by just voiceless and voiced, with aspiration being a non-distinctive quality of both. That example of the application of linguistic typology to linguistic reconstruction has become known as the glottalic theory. It has a large number of proponents but is not generally accepted. The reconstruction of proto-sounds logically precedes the reconstruction of grammatical morphemes (word-forming affixes and inflectional endings), patterns of declension and conjugation and so on. The full reconstruction of an unrecorded protolanguage is an open-ended task. 
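The economy argument above can be expressed as a simple count. The following Python fragment is illustrative only; the daughter-language labels other than Arapaho are placeholders, since the original correspondence table is not reproduced here:

    # Reflexes of one correspondence set in six daughter languages. Only Arapaho
    # is named in the discussion above; the other labels are hypothetical.
    correspondence_set = {
        "Language A": "m", "Language B": "m", "Language C": "m",
        "Language D": "m", "Language E": "m", "Arapaho": "b",
    }

    def changes_needed(proto, reflexes):
        # Count each daughter whose reflex differs from the candidate proto-sound
        # as one independent change (a simplification that ignores subgrouping).
        return sum(1 for reflex in reflexes.values() if reflex != proto)

    for candidate in sorted(set(correspondence_set.values())):
        print(f"*{candidate}: {changes_needed(candidate, correspondence_set)} change(s) needed")
    # "*b" would require five separate changes, "*m" only one, so "*m" is preferred.

As the text notes, such a tally is meaningful only if the "m"-languages are at least partly independent of one another; if they formed a single subgroup, one "*b" → "m" change would suffice and the count would mislead.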
The limitations of the comparative method were recognized by the very linguists who developed it, but it is still seen as a valuable tool. In the case of Indo-European, the method seemed at least a partial validation of the centuries-old search for an Ursprache, the original language. The daughter languages were presumed to be ordered in a family tree, which was the tree model of the neogrammarians. The archaeologists followed suit and attempted to find archaeological evidence of a culture or cultures that could be presumed to have spoken a proto-language, such as Vere Gordon Childe's "The Aryans: a study of Indo-European origins", 1926. Childe was a philologist turned archaeologist. Those views culminated in the "Siedlungsarchäologie", or "settlement-archaeology", of Gustaf Kossinna, becoming known as "Kossinna's Law". Kossinna asserted that cultures represent ethnic groups, including their languages, but his law was rejected after World War II. The fall of Kossinna's Law removed the temporal and spatial framework previously applied to many proto-languages. Fox concludes: The Comparative Method "as such" is not, in fact, historical; it provides evidence of linguistic relationships to which we may give a historical interpretation... [Our increased knowledge about the historical processes involved] has probably made historical linguists less prone to equate the idealizations required by the method with historical reality... Provided we keep [the interpretation of the results and the method itself] apart, the Comparative Method can continue to be used in the reconstruction of earlier stages of languages. Proto-languages can be verified in many historical instances, such as Latin. Although no longer a law, settlement-archaeology is known to be essentially valid for some cultures that straddle history and prehistory, such as the Celtic Iron Age (mainly Celtic) and Mycenaean civilization (mainly Greek). None of those models can be or have been completely rejected, but none is sufficient alone. The foundation of the comparative method, and of comparative linguistics in general, is the Neogrammarians' fundamental assumption that "sound laws have no exceptions". When it was initially proposed, critics of the Neogrammarians proposed an alternative position summarised by the maxim "each word has its own history". Several types of change actually alter words in irregular ways. Unless identified, they may hide or distort laws and cause false perceptions of relationship. All languages borrow words from other languages in various contexts. They are likely to have followed the laws of the languages from which they were borrowed, rather than the laws of the borrowing language. Therefore, studying borrowed words will probably mislead the investigator since they reflect the customs of the donor language, which is the source of the word. Borrowing on a larger scale occurs in areal diffusion, when features are adopted by contiguous languages over a geographical area. The borrowing may be phonological, morphological or lexical. A false proto-language over the area may be reconstructed for them or may be taken to be a third language serving as a source of diffused features. Several areal features and other influences may converge to form a Sprachbund, a wider region sharing features that appear to be related but are diffusional. For instance, the Mainland Southeast Asia linguistic area, before it was recognised, suggested several false classifications of such languages as Chinese, Thai and Vietnamese. 
Sporadic changes, such as irregular inflections, compounding and abbreviation, do not follow any laws. For example, the Spanish words "palabra" ('word'), "peligro" ('danger') and "milagro" ('miracle') would have been "parabla", "periglo", "miraglo" by regular sound changes from the Latin "parabŏla", "perīcŭlum" and "mīrācŭlum", but the "r" and "l" changed places by sporadic metathesis. Analogy is the sporadic change of a feature to be like another feature in the same or a different language. It may affect a single word or be generalized to an entire class of features, such as a verb paradigm. An example is the Russian word for "nine". The word, by regular sound changes from Proto-Slavic, should have been "nevyat'", but it is in fact "devyat'". It is believed that the initial "n-" changed to "d-" under the influence of the word for "ten" in Russian, "desyat'". Those who study contemporary language changes, such as William Labov, acknowledge that even a systematic sound change is applied at first unsystematically, with the percentage of its occurrence in a person's speech dependent on various social factors. The sound change gradually spreads in a process known as lexical diffusion. While it does not invalidate the Neogrammarians' axiom that "sound laws have no exceptions", the gradual application of the sound laws themselves shows that they do not always apply to all lexical items at the same time. Hock notes, "While it probably is true in the long run every word has its own history, it is not justified to conclude as some linguists have, that therefore the Neogrammarian position on the nature of linguistic change is falsified". The comparative method cannot recover aspects of a language that were not inherited in its daughter idioms. For instance, the Latin declension pattern was lost in the Romance languages, making it impossible to fully reconstruct such a feature via systematic comparison. The comparative method is used to construct a tree model (German "Stammbaum") of language evolution, in which daughter languages are seen as branching from the proto-language, gradually growing more distant from it through accumulated phonological, morpho-syntactic, and lexical changes. The tree model features nodes that are presumed to be distinct proto-languages existing independently in distinct regions during distinct historical times. The reconstruction of unattested proto-languages lends itself to that illusion since they cannot be verified, and the linguist is free to select whatever definite times and places seem best. Right from the outset of Indo-European studies, however, Thomas Young said: It is not, however, very easy to say what the definition should be that should constitute a separate language, but it seems most natural to call those languages distinct, of which the one cannot be understood by common persons in the habit of speaking the other... Still, however, it may remain doubtfull whether the Danes and the Swedes could not, in general, understand each other tolerably well... nor is it possible to say if the twenty ways of pronouncing the sounds, belonging to the Chinese characters, ought or ought not to be considered as so many languages or dialects... But... the languages so nearly allied must stand next to each other in a systematic order… The assumption of uniformity in a proto-language, implicit in the comparative method, is problematic. Even small language communities always have differences in dialect, whether they are based on area, gender, class or other factors. 
The Pirahã language of Brazil is spoken by only several hundred people but has at least two different dialects, one spoken by men and one by women. Campbell points out: It is not so much that the comparative method 'assumes' no variation; rather, it is just that there is nothing built into the comparative method which would allow it to address variation directly... This assumption of uniformity is a reasonable idealization; it does no more damage to the understanding of the language than, say, modern reference grammars do which concentrate on a language's general structure, typically leaving out consideration of regional or social variation. Different dialects, as they evolve into separate languages, remain in contact with and influence one another. Even after they are considered distinct, languages near one another continue to influence one another and often share grammatical, phonological, and lexical innovations. A change in one language of a family may spread to neighboring languages, and multiple waves of change are communicated like waves across language and dialect boundaries, each with its own randomly delimited range. If a language is divided into an inventory of features, each with its own time and range (isoglosses), they do not all coincide. History and prehistory may not offer a time and place for a distinct coincidence, as may be the case for proto-Italic, for which the proto-language is only a concept. However, Hock observes: The discovery in the late nineteenth century that isoglosses can cut across well-established linguistic boundaries at first created considerable attention and controversy. And it became fashionable to oppose a wave theory to a tree theory... Today, however, it is quite evident that the phenomena referred to by these two terms are complementary aspects of linguistic change... The reconstruction of unknown proto-languages is inherently subjective. In the Proto-Algonquian example above, the choice of "*m" as the parent phoneme is only "likely", not "certain". It is conceivable that a Proto-Algonquian language with "*b" in those positions split into two branches, one that preserved "*b" and one that changed it to "*m" instead, and while the first branch developed only into Arapaho, the second spread out more widely and developed into all the other Algonquian languages. It is also possible that the nearest common ancestor of the Algonquian languages used some other sound instead, such as "*p", which eventually mutated to "*b" in one branch and to "*m" in the other. Examples of strikingly complicated and even circular developments are indeed known to have occurred (such as Proto-Indo-European "*t" > Pre-Proto-Germanic "*þ" > Proto-Germanic "*ð" > Proto-West-Germanic "*d" > Old High German "t" in "fater" > Modern German "Vater"), but in the absence of any evidence or other reason to postulate a more complicated development, the preference of a simpler explanation is justified by the principle of parsimony, also known as Occam's razor. Since reconstruction involves many such choices, some linguists prefer to view the reconstructed features as abstract representations of sound correspondences, rather than as objects with a historical time and place. The existence of proto-languages and the validity of the comparative method are verifiable if the reconstruction can be matched to a known language, which may be known only as a shadow in the loanwords of another language. 
For example, Finnic languages such as Finnish have borrowed many words from an early stage of Germanic, and the shape of the loans matches the forms that have been reconstructed for Proto-Germanic. Finnish "kuningas" 'king' and "kaunis" 'beautiful' match the Germanic reconstructions *"kuningaz" and *"skauniz" (> German "König" 'king', "schön" 'beautiful'). The wave model was developed in the 1870s as an alternative to the tree model to represent the historical patterns of language diversification. Both the tree-based and the wave-based representations are compatible with the comparative method. By contrast, some approaches are incompatible with the comparative method, including glottochronology and mass lexical comparison, both of which are considered by most historical linguists to be flawed and unreliable.
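One way to picture such verification is to apply a proposed chain of sound changes forward from a reconstructed form and compare the result with an attested word. The rules in the following Python sketch are invented for illustration only and are not the actual Germanic sound laws, which involve several further developments:

    import re

    def derive(proto_form, rules):
        """Apply ordered (pattern, replacement) rules to a proto-form, in sequence."""
        form = proto_form
        for pattern, replacement in rules:
            form = re.sub(pattern, replacement, form)
        return form

    # Purely hypothetical toy rules chosen so that *kuningaz comes out as "könig";
    # they stand in for real, independently motivated sound laws.
    toy_rules = [
        (r"az$", ""),      # loss of the final syllable
        (r"^ku", "kö"),    # fronting of the root vowel
        (r"ning", "nig"),  # simplification of the medial cluster
    ]

    result = derive("kuningaz", toy_rules)
    print(result, "| matches attested 'könig':", result == "könig")

If a reconstruction, pushed through independently motivated rules, fails to yield the attested reflexes or the shapes of early loanwords, either the reconstruction or the rules must be revised.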
https://en.wikipedia.org/wiki?curid=7660
Council of Constance The Council of Constance was a 15th-century ecumenical council recognized by the Catholic Church, held from 1414 to 1418 in the Bishopric of Constance in present-day Germany. The council ended the Western Schism by deposing or accepting the resignation of the remaining papal claimants and by electing Pope Martin V. The council also condemned Jan Hus as a heretic and facilitated his execution by the civil authority, and ruled on issues of national sovereignty, the rights of pagans and just war, in response to a conflict between the Grand Duchy of Lithuania, Kingdom of Poland and the Order of the Teutonic Knights. The council is also important for its relationship to ecclesial conciliarism and Papal supremacy. The council's main purpose was to end the Papal schism which had resulted from the confusion following the Avignon Papacy. Pope Gregory XI's return to Rome in 1377, followed by his death (in 1378) and the controversial election of his successor, Pope Urban VI, resulted in the defection of a number of cardinals and the election of a rival pope based at Avignon in 1378. After thirty years of schism, the rival courts convened the Council of Pisa seeking to resolve the situation by deposing the two claimant popes and electing a new one. The council claimed that in such a situation, a council of bishops had greater authority than just one bishop, even if he were the bishop of Rome. Though the elected Antipope Alexander V and his successor, Antipope John XXIII (not to be confused with the 20th-century Pope John XXIII), gained widespread support, especially at the expense of the Avignon antipope, the schism remained, now involving not two but three claimants: Gregory XII at Rome, Benedict XIII at Avignon, and John XXIII. Therefore, many voices, including Sigismund, King of the Romans and of Hungary (and later Holy Roman Emperor), pressed for another council to resolve the issue. That council was called by John XXIII and was held from 16 November 1414 to 22 April 1418 in Constance, Germany. According to Joseph McCabe, the council was attended by roughly 29 cardinals, 100 "learned doctors of law and divinity", 134 abbots, and 183 bishops and archbishops. Sigismund arrived on Christmas Eve 1414 and exercised a profound and continuous influence on the course of the council in his capacity as imperial protector of the church. An innovation at the council was that instead of voting as individuals, the bishops voted in national blocs. The vote by nations was in great measure the initiative of the English, German, and French members. The legality of this measure, in imitation of the "nations" of the universities, was more than questionable, but during February 1415 it carried and thenceforth was accepted in practice, though never authorized by any formal decree of the council. The four "nations" consisted of England, France, Italy, and Germany, with Poles, Hungarians, Danes, and Scandinavians counted with the Germans. While the Italian representatives made up half of those in attendance, they were equal in influence to the English, who sent twenty deputies and three bishops. Many members of the new assembly (comparatively few bishops, but many doctors of theology and of canon and civil law, procurators of bishops, deputies of universities, cathedral chapters, provosts, etc., agents and representatives of princes, etc.) strongly favored the voluntary abdication of all three popes, as did King Sigismund. 
Although the Italian bishops who had accompanied John XXIII in large numbers supported his legitimacy, he grew increasingly suspicious of the council. Partly in response to a fierce anonymous attack on his character from an Italian source, on 2 March 1415 he promised to resign. However, on 20 March he secretly fled the city and took refuge at Schaffhausen in the territory of his friend Frederick, Duke of Austria-Tyrol. The famous decree "Haec Sancta Synodus," which gave primacy to the authority of the council and thus became a source for ecclesial conciliarism, was promulgated in the fifth session, on 6 April 1415. "Haec Sancta Synodus" marks the high-water mark of the Conciliar movement of reform. This decree, however, is not considered valid by the Magisterium of the Catholic Church, since it was never approved by Pope Gregory XII or his successors, and was passed by the council in a session before his confirmation. The church declared the first sessions of the Council of Constance an invalid and illicit assembly of bishops, gathered under the authority of John XXIII. The acts of the council were not made public until 1442, at the behest of the Council of Basel; they were printed in 1500. The council also ordered the creation of a book on how to die, which was accordingly written in 1415 under the title "Ars moriendi". With the support of King Sigismund, enthroned before the high altar of the cathedral of Constance, the Council of Constance recommended that all three papal claimants abdicate, and that another be chosen. In part because of the constant presence of the King, other rulers demanded that they have a say in who would be pope. Gregory XII then sent representatives to Constance, to whom he granted full powers to summon, open, and preside over an Ecumenical Council; he also empowered them to present his resignation to the Papacy. This would pave the way for the end of the Western Schism. The legates were received by King Sigismund and by the assembled Bishops, and the King yielded the presidency of the proceedings to the papal legates, Cardinal Giovanni Dominici of Ragusa and Prince Carlo Malatesta. On 4 July 1415 the Bull of Gregory XII which appointed Dominici and Malatesta as his proxies at the council was formally read before the assembled Bishops. The cardinal then read a decree of Gregory XII which convoked the council and authorized its succeeding acts. Thereupon, the Bishops voted to accept the summons. Prince Malatesta immediately informed the council that he was empowered by a commission from Pope Gregory XII to resign the Papal Throne on the Pontiff's behalf. He asked the council whether they would prefer to receive the abdication at that point or at a later date. The Bishops voted to receive the Papal abdication immediately. Thereupon the commission by Gregory XII authorizing his proxy to resign the Papacy on his behalf was read and Malatesta, acting in the name of Gregory XII, pronounced the resignation of the papacy by Gregory XII and handed a written copy of the resignation to the assembly. Former Pope Gregory XII was then created titular Cardinal Bishop of Porto and Santa Ruffina by the council, with rank immediately below the Pope (which made him the highest-ranking person in the church, since, due to his abdication, the See of Peter in Rome was vacant). Gregory XII's cardinals were accepted as true cardinals by the council, but the members of the council delayed electing a new pope for fear that a new pope would restrict further discussion of pressing issues in the church. 
By the time the anti-popes were all deposed and the new Pope, Martin V, was elected, two years had passed since Gregory XII's abdication, and Gregory was already dead. The council took great care to protect the legitimacy of the succession and ratified all of his acts, and a new pontiff was chosen. The new pope, Martin V, elected in November 1417, soon asserted the absolute authority of the papal office. A second goal of the council was to continue the reforms begun at the Council of Pisa (1409). The reforms were largely directed against John Wycliffe, mentioned in the opening session and condemned in the eighth on 4 May 1415, and Jan Hus, along with their followers. Hus, summoned to Constance under a letter of safe conduct, was found guilty of heresy by the council and turned over to the secular court. "This holy synod of Constance, seeing that God's church has nothing more that it can do, relinquishes Jan Hus to the judgment of the secular authority and decrees that he is to be relinquished to the secular court." (Council of Constance Session 15 – 6 July 1415). The secular court sentenced him to the stake. Jerome of Prague, a supporter of Hus, came to Constance to offer assistance but was similarly arrested, judged, found guilty of heresy and turned over to the same secular court, with the same outcome as Hus. Poggio Bracciolini attended the council and related the unfairness of the process against Jerome. Paweł Włodkowic and the other Polish representatives to the Council of Constance publicly defended Hus. In 1411, the First Peace of Thorn ended the Polish–Lithuanian–Teutonic War, in which the Teutonic Knights fought the Kingdom of Poland and Grand Duchy of Lithuania. However, the peace was not stable and further conflicts arose regarding demarcation of the Samogitian borders. The tensions erupted into the brief Hunger War in summer 1414. It was concluded that the disputes would be mediated by the Council of Constance. The Polish-Lithuanian position was defended by Paulus Vladimiri, rector of the Jagiellonian University, who challenged the legality of the Teutonic crusade against Lithuania. He argued that a forced conversion was incompatible with free will, which was an essential component of a genuine conversion. Therefore, the Knights could only wage a defensive war if pagans violated the natural rights of Christians. Vladimiri further stipulated that infidels had rights which had to be respected, and neither the Pope nor the Holy Roman Emperor had the authority to violate them. Lithuanians also brought a group of Samogitian representatives to testify to atrocities committed by the Knights. The Dominican theologian John of Falkenberg proved to be the fiercest opponent of the Poles. In his "Liber de doctrina", Falkenberg argued that "the Emperor has the right to slay even peaceful infidels simply because they are pagans. ... The Poles deserve death for defending infidels, and should be exterminated even more than the infidels; they should be deprived of their sovereignty and reduced to slavery." In "Satira", he attacked Polish-Lithuanian King Jogaila, calling him a "mad dog" unworthy to be king. Falkenberg was condemned and imprisoned for such libel. Other opponents included the Grand Master's proctor Peter Wormditt, Dominic of San Gimignano, John Urbach, Ardecino de Porta of Novara, and Bishop of Ciudad Rodrigo Andrew Escobar. They argued that the Knights were perfectly justified in their crusade as it was a sacred duty of Christians to spread the true faith. 
Cardinal Pierre d'Ailly published an independent opinion that attempted to somewhat balance both Polish and Teutonic positions. The council did not make any political decisions. It established the Diocese of Samogitia, with its seat in Medininkai and subordinated to Lithuanian dioceses, and appointed Matthias of Trakai as the first bishop. Pope Martin V appointed Polish-Lithuanian King Jogaila and Lithuanian Grand Duke Vytautas as vicars general in Pskov and Veliky Novgorod in recognition of their Catholicism. After another round of futile negotiations, the Gollub War broke out in 1422. It ended with the Treaty of Melno. Polish-Lithuanian-Teutonic wars continued for another hundred years.
https://en.wikipedia.org/wiki?curid=7661
Canadian Unitarian Council The Canadian Unitarian Council (CUC) was formed on May 14, 1961 to be the national organization for Canadians who belong to the Unitarian Universalist Association (UUA) (the UUA formed a day later, on May 15, 1961). Until 2002, almost all member congregations of the CUC were also members of the UUA, and most services to CUC member congregations were provided by the UUA. However, after an agreement between the CUC and the UUA, most services since 2002 have been provided by the CUC to its own member congregations, with the UUA continuing to provide ministerial settlement services. Some Canadian congregations have continued to be members of both the CUC and UUA, while others are members of only the CUC. The CUC is currently the only national body for Unitarian Universalist congregations in Canada, and is a member of the International Council of Unitarians and Universalists. The CUC is made up of 46 member congregations and emerging groups, who are the legal owners of the organization, and who are, for governance and service delivery, divided into four regions: "BC" (British Columbia), "Western" (Alberta to Thunder Bay), "Central" (between Thunder Bay and Kingston), and "Eastern" (Kingston, Ottawa and everything east of that). However, for youth ministry, the "Central" and "Eastern" regions are combined to form a youth region known as "QuOM" (Quebec, Ontario and the Maritimes), giving the youth only three regions for their activities. The organization as a whole is governed by the CUC Board of Trustees (Board), whose mandate it is to govern in the best interests of the CUC's owners. The Board is made up of 8 members who are elected by congregational delegates at the CUC's Annual General Meeting. This consists of two Trustees from each region, who are eligible to serve a maximum of two three-year terms. Board meetings also include Official Observers to the Board, who participate without a vote and represent UU Youth and Ministers. As members of the CUC, congregations and emerging groups are served by volunteer Service Consultants, Congregational Networks, and a series of other committees. There are two directors of regional services, one for the two Western regions and one for the two Eastern regions. The Director of Lifespan Learning oversees development of religious exploration programming, and youth and young adults are served by a Youth and Young Adult Ministry Development staff person. Policies and business of the CUC are determined at the Annual Conference and Meeting (ACM), consisting of the Annual Conference, in which workshops are held, and the Annual General Meeting, in which business matters and plenary meetings are conducted. The ACM features two addresses, a Keynote and a Confluence Lecture. The Confluence Lecture is comparable to the UUA's Ware Lecture in prestige. In its early days, this event consisted simply of the Annual General Meeting component, as the Annual Conference component was not added until much later. Starting in 2017, the conference portion takes place only every second year. 
Past ACMs have been held in locations across Canada; one such gathering was not an ACM but an "Annual General Meeting" and "Symposium", and unlike ACMs it was organized by the CUC and the Unitarian Universalist Ministers of Canada instead of a local congregation, with a symposium "Provocateur" in place of a keynote presenter or lecturer. The CUC does not have a central creed in which members are required to believe, but they have found it useful to articulate their common values in what has become known as The Principles and Sources of our Religious Faith, which are currently based on the UUA's Principles and Purposes. The CUC had a task force whose mandate was to consider revising them. The principles and sources are published in church literature and on the CUC website. The CUC formed on May 14, 1961 to be the national organization for Canadians within the about-to-form UUA (which formed a day later, on May 15, 1961). Until 2002, almost all member congregations of the CUC were also members of the UUA, and most services to CUC member congregations were provided by the UUA. However, after an agreement between the UUA and the CUC, since 2002 most services have been provided by the CUC to its own member congregations, with the UUA continuing to provide ministerial settlement services. Also since 2002, some Canadian congregations have continued to be members of both the UUA and the CUC, while others are members of only the CUC. The Canadian Unitarian Universalist youth of the day disapproved of the 2002 change in relationship between the CUC and UUA. This is quite evident in the words of a statement adopted by the attendees of the 2001 youth conference held at the Unitarian Church of Montreal: We the youth of Canada are deeply concerned about the direction the CUC seems to be taking. As stewards of our faith, adults have a responsibility to take into consideration the concerns of youth. We are opposed to making this massive jump in our evolutionary progress. While the name of the organization is the Canadian Unitarian Council, the CUC includes congregations with Unitarian, Universalist, Unitarian Universalist, and Universalist Unitarian in their names. Changing the name of the CUC has occasionally been debated, but there have been no successful motions. To recognize this diversity, some members of the CUC abbreviate Unitarian Universalist as U*U (and playfully read it as "You star, you"). Not all CUC members like this playful reading, however, and those who do not leave out the star when writing the abbreviation, writing simply UU instead.
https://en.wikipedia.org/wiki?curid=7663
Charles Mingus Charles Mingus Jr. (April 22, 1922 – January 5, 1979) was an American jazz double bassist, pianist, composer and bandleader. A major proponent of collective improvisation, he is considered to be one of the greatest jazz musicians and composers in history, with a career spanning three decades and collaborations with other jazz legends such as Louis Armstrong, Duke Ellington, Charlie Parker, Dizzy Gillespie, Dannie Richmond, and Herbie Hancock. Mingus' compositions continue to be played by contemporary musicians ranging from the repertory bands Mingus Big Band, Mingus Dynasty, and Mingus Orchestra, to the high school students who play the charts and compete in the Charles Mingus High School Competition. In 1993, the Library of Congress acquired Mingus's collected papers—including scores, sound recordings, correspondence and photos—in what they described as "the most important acquisition of a manuscript collection relating to jazz in the Library's history". Charles Mingus was born in Nogales, Arizona. His father, Charles Mingus Sr., was a sergeant in the U.S. Army. Mingus was largely raised in the Watts area of Los Angeles. His maternal grandfather was a Chinese British subject from Hong Kong, and his maternal grandmother was an African-American from the southern United States. Mingus was the third great-grandson of the family's founding patriarch, who was, by most accounts, a German immigrant. His ancestry included German American, African American, and Native American heritage. In Mingus's autobiography "Beneath the Underdog", his mother was described as "the daughter of an English/Chinese man and a South-American woman", and his father was the son "of a black farm worker and a Swedish woman". Charles Mingus Sr. claims to have been raised by his mother and her husband as a white person until he was fourteen, when his mother revealed to her family that the child's true father was a black slave, after which he had to run away from his family and live on his own. The autobiography does not confirm whether Charles Mingus Sr. or Mingus himself believed this story was true, or whether it was merely an embellished version of the Mingus family's lineage. His mother allowed only church-related music in their home, but Mingus developed an early love for other music, especially Duke Ellington. He studied trombone, and later cello, although he was unable to follow the cello professionally because, at the time, it was nearly impossible for a black musician to make a career of classical music, and the cello was not yet accepted as a jazz instrument. Despite this, Mingus was still attached to the cello; as he studied bass with Red Callender in the late 1930s, Callender even commented that the cello was still Mingus's main instrument. In "Beneath the Underdog", Mingus states that he did not actually start learning bass until Buddy Collette accepted him into his swing band under the stipulation that he be the band's bass player. Due to a poor education, the young Mingus could not read musical notation quickly enough to join the local youth orchestra. This had a serious impact on his early musical experiences, leaving him feeling ostracized from the classical music world. These early experiences, in addition to his lifelong confrontations with racism, were reflected in his music, which often focused on themes of racism, discrimination and (in)justice. Much of the cello technique he learned was applicable to double bass when he took up the instrument in high school. 
He studied for five years with Herman Reinshagen, principal bassist of the New York Philharmonic, and compositional techniques with Lloyd Reese. Throughout much of his career, he played a bass made in 1927 by the German maker Ernst Heinrich Roth. Beginning in his teen years, Mingus was writing quite advanced pieces; many are similar to Third Stream because they incorporate elements of classical music. A number of them were recorded in 1960 with conductor Gunther Schuller, and released as "Pre-Bird", referring to Charlie "Bird" Parker; Mingus was one of many musicians whose perspectives on music were altered by Parker into "pre- and post-Bird" eras. Mingus gained a reputation as a bass prodigy. His first major professional job was playing with former Ellington clarinetist Barney Bigard. He toured with Louis Armstrong in 1943, and by early 1945 was recording in Los Angeles in a band led by Russell Jacquet, which also included Teddy Edwards, Maurice Simon, Bill Davis, and Chico Hamilton, and in May that year, in Hollywood, again with Teddy Edwards, in a band led by Howard McGhee. He then played with Lionel Hampton's band in the late 1940s; Hampton performed and recorded several of Mingus's pieces. A popular trio of Mingus, Red Norvo and Tal Farlow in 1950 and 1951 received considerable acclaim, but Mingus's race caused problems with club owners and he left the group. Mingus was briefly a member of Ellington's band in 1953, as a substitute for bassist Wendell Marshall. Mingus's notorious temper led to his being one of the few musicians personally fired by Ellington (Bubber Miley and drummer Bobby Durham are among the others), after a back-stage fight between Mingus and Juan Tizol. Also in the early 1950s, before attaining commercial recognition as a bandleader, Mingus played gigs with Charlie Parker, whose compositions and improvisations greatly inspired and influenced him. Mingus considered Parker the greatest genius and innovator in jazz history, but he had a love-hate relationship with Parker's legacy. Mingus blamed the Parker mythology for a derivative crop of pretenders to Parker's throne. He was also conflicted and sometimes disgusted by Parker's self-destructive habits and the romanticized lure of drug addiction they offered to other jazz musicians. In response to the many sax players who imitated Parker, Mingus titled a song, "If Charlie Parker were a Gunslinger, There'd be a Whole Lot of Dead Copycats" (released on "Mingus Dynasty" as "Gunslinging Bird"). Mingus was married four times. His wives were Jeanne Gross, Lucille (Celia) Germanis, Judy Starkey, and Susan Graham Ungaro. In 1961, Mingus spent time staying at the house of his mother's sister (Louise) and her husband, Fess Williams in Jamaica, Queens. Subsequently, Mingus invited Williams to play at the 1962 Town Hall Concert. In 1952 Mingus co-founded Debut Records with Max Roach so he could conduct his recording career as he saw fit. The name originated from his desire to document unrecorded young musicians. Despite this, the best-known recording the company issued was of the most prominent figures in bebop. On May 15, 1953, Mingus joined Dizzy Gillespie, Parker, Bud Powell, and Roach for a concert at Massey Hall in Toronto, which is the last recorded documentation of Gillespie and Parker playing together. After the event, Mingus chose to overdub his barely audible bass part back in New York; the original version was issued later. 
The two 10" albums of the Massey Hall concert (one featured the trio of Powell, Mingus and Roach) were among Debut Records' earliest releases. Mingus may have objected to the way the major record companies treated musicians, but Gillespie once commented that he did not receive any royalties "for years and years" for his Massey Hall appearance. The records, however, are often regarded as among the finest live jazz recordings. One story has it that Mingus was involved in a notorious incident while playing a 1955 club date billed as a "reunion" with Parker, Powell, and Roach. Powell, who suffered from alcoholism and mental illness (possibly exacerbated by a severe police beating and electroshock treatments), had to be helped from the stage, unable to play or speak coherently. As Powell's incapacitation became apparent, Parker stood in one spot at a microphone, chanting "Bud Powell...Bud Powell..." as if beseeching Powell's return. Allegedly, Parker continued this incantation for several minutes after Powell's departure, to his own amusement and Mingus's exasperation. Mingus took another microphone and announced to the crowd, "Ladies and Gentlemen, please don't associate me with any of this. This is not jazz. These are sick people." This was Parker's last public performance; about a week later he died after years of substance abuse. Mingus often worked with a mid-sized ensemble (around 8–10 members) of rotating musicians known as the Jazz Workshop. Mingus broke new ground, constantly demanding that his musicians be able to explore and develop their perceptions on the spot. Those who joined the Workshop (or Sweatshops as they were colorfully dubbed by the musicians) included Pepper Adams, Jaki Byard, Booker Ervin, John Handy, Jimmy Knepper, Charles McPherson and Horace Parlan. Mingus shaped these musicians into a cohesive improvisational machine that in many ways anticipated free jazz. Some musicians dubbed the workshop a "university" for jazz. The decade that followed is generally regarded as Mingus's most productive and fertile period. Over a ten-year period, he made 30 records for a number of labels (Atlantic, Candid, Columbia, Impulse and others), a pace perhaps unmatched by any other musicians except Ellington. Mingus had already recorded around ten albums as a bandleader, but 1956 was a breakthrough year for him, with the release of "Pithecanthropus Erectus", arguably his first major work as both a bandleader and composer. Like Ellington, Mingus wrote songs with specific musicians in mind, and his band for "Erectus" included adventurous musicians: piano player Mal Waldron, alto saxophonist Jackie McLean and the Sonny Rollins-influenced tenor of J. R. Monterose. The title song is a ten-minute tone poem, depicting the rise of man from his hominid roots ("Pithecanthropus erectus") to an eventual downfall. A section of the piece was free improvisation, free of structure or theme. Another album from this period, "The Clown" (1957 also on Atlantic Records), the title track of which features narration by humorist Jean Shepherd, was the first to feature drummer Dannie Richmond, who remained his preferred drummer until Mingus's death in 1979. The two men formed one of the most impressive and versatile rhythm sections in jazz. Both were accomplished performers seeking to stretch the boundaries of their music while staying true to its roots. When joined by pianist Jaki Byard, they were dubbed "The Almighty Three". 
In 1959 Mingus and his jazz workshop musicians recorded one of his best-known albums, "Mingus Ah Um". Even in a year of standout masterpieces, including Dave Brubeck's "Time Out", Miles Davis's "Kind of Blue", John Coltrane's "Giant Steps", and Ornette Coleman's prophetic "The Shape of Jazz to Come", this was a major achievement, featuring such classic Mingus compositions as "Goodbye Pork Pie Hat" (an elegy to Lester Young) and the vocal-less version of "Fables of Faubus" (a protest against segregationist Arkansas governor Orval Faubus that features double-time sections). Also during 1959, Mingus recorded the album "Blues & Roots", which was released the following year. As Mingus explained in his liner notes: "I was born swinging and clapped my hands in church as a little boy, but I've grown up and I like to do things other than just swing. But blues can do more than just swing." Mingus witnessed Ornette Coleman's legendary—and controversial—1960 appearances at New York City's Five Spot jazz club. He initially expressed rather mixed feelings for Coleman's innovative music: "...if the free-form guys could play the same tune twice, then I would say they were playing something...Most of the time they use their fingers on the saxophone and they don't even know what's going to come out. They're experimenting." That same year, however, Mingus formed a quartet with Richmond, trumpeter Ted Curson and multi-instrumentalist Eric Dolphy. This ensemble featured the same instruments as Coleman's quartet, and is often regarded as Mingus rising to the challenging new standard established by Coleman. The quartet recorded on both "Charles Mingus Presents Charles Mingus" and "Mingus". The former also features the version of "Fables of Faubus" with lyrics, aptly titled "Original Faubus Fables". Only one misstep occurred in this era: "The Town Hall Concert" in October 1962, a "live workshop"/recording session. With an ambitious program, the event was plagued with troubles from its inception. Mingus's vision, now known as "Epitaph", was finally realized by conductor Gunther Schuller in a concert in 1989, a decade after Mingus died. In 1963, Mingus released "The Black Saint and the Sinner Lady", described as "one of the greatest achievements in orchestration by any composer in jazz history." The album was also unique in that Mingus asked his psychotherapist, Dr. Edmund Pollock, to provide notes for the record. Mingus also released "Mingus Plays Piano", an unaccompanied album featuring some fully improvised pieces, in 1963. In addition, 1963 saw the release of "Mingus Mingus Mingus Mingus Mingus", an album praised by critic Nat Hentoff. In 1964 Mingus put together one of his best-known groups, a sextet including Dannie Richmond, Jaki Byard, Eric Dolphy, trumpeter Johnny Coles, and tenor saxophonist Clifford Jordan. The group was recorded frequently during its short existence; Coles fell ill and left during a European tour. Dolphy stayed in Europe after the tour ended, and died suddenly in Berlin on June 28, 1964. 1964 was also the year that Mingus met his future wife, Sue Graham Ungaro. The couple were married in 1966 by Allen Ginsberg. Facing financial hardship, Mingus was evicted from his New York home in 1966. Mingus's pace slowed somewhat in the late 1960s and early 1970s. In 1974, after his 1970 sextet with Charles McPherson, Eddie Preston and Bobby Jones disbanded, he formed a quintet with Richmond, pianist Don Pullen, trumpeter Jack Walrath and saxophonist George Adams. 
They recorded two well-received albums, "Changes One" and "Changes Two". Mingus also played with Charles McPherson in many of his groups during this time. "Cumbia and Jazz Fusion" in 1976 sought to blend Colombian music (the "Cumbia" of the title) with more traditional jazz forms. In 1971, Mingus taught for a semester at the University at Buffalo, The State University of New York as the Slee Professor of Music. By the mid-1970s, Mingus was suffering from amyotrophic lateral sclerosis (ALS). His once formidable bass technique declined until he could no longer play the instrument. He continued composing, however, and supervised a number of recordings before his death. At the time of his death, he was working with Joni Mitchell on an album eventually titled "Mingus", which included lyrics added by Mitchell to his compositions, including "Goodbye Pork Pie Hat". The album featured the talents of Wayne Shorter, Herbie Hancock, and another influential bassist and composer, Jaco Pastorius. Mingus died, aged 56, in Cuernavaca, Mexico, where he had traveled for treatment and convalescence. His ashes were scattered in the Ganges River. His compositions retained the hot and soulful feel of hard bop, drawing heavily from black gospel music and blues, while sometimes containing elements of Third Stream, free jazz, and classical music. He once cited Duke Ellington and church as his main influences. Mingus espoused collective improvisation, similar to the old New Orleans jazz parades, paying particular attention to how each band member interacted with the group as a whole. In creating his bands, he looked not only at the skills of the available musicians, but also their personalities. Many musicians passed through his bands and later went on to impressive careers. He recruited talented and sometimes little-known artists, whom he utilized to assemble unconventional instrumental configurations. As a performer, Mingus was a pioneer in double bass technique, widely recognized as one of the instrument's most proficient players. Because of his brilliant writing for midsize ensembles, and his catering to and emphasizing the strengths of the musicians in his groups, Mingus is often considered the heir of Duke Ellington, for whom he expressed great admiration and collaborated on the record "Money Jungle". Indeed, Dizzy Gillespie had once claimed Mingus reminded him "of a young Duke", citing their shared "organizational genius." Nearly as well known as his ambitious music was Mingus's often fearsome temperament, which earned him the nickname "The Angry Man of Jazz". His refusal to compromise his musical integrity led to many onstage eruptions, exhortations to musicians, and dismissals. Although respected for his musical talents, Mingus was sometimes feared for his occasionally violent onstage temper, which was at times directed at members of his band and other times aimed at the audience. He was physically large, prone to obesity (especially in his later years), and was by all accounts often intimidating and frightening when expressing anger or displeasure. When confronted with a nightclub audience talking and clinking ice in their glasses while he performed, Mingus stopped his band and loudly chastised the audience, stating: "Isaac Stern doesn't have to put up with this shit." Mingus reportedly destroyed a $20,000 bass in response to audience heckling at the Five Spot in New York City. Guitarist and singer Jackie Paris was a first-hand witness to Mingus's irascibility. 
Paris recalls his time in the Jazz Workshop: "He chased everybody off the stand except [drummer] Paul Motian and me... The three of us just wailed on the blues for about an hour and a half before he called the other cats back." On October 12, 1962, Mingus punched Jimmy Knepper in the mouth while the two men were working together at Mingus' apartment on a score for his upcoming concert at The Town Hall in New York, after Knepper refused to take on more work. Mingus' blow broke off a crowned tooth and its underlying stub. According to Knepper, this ruined his embouchure and resulted in the permanent loss of the top octave of his range on the trombone – a significant handicap for any professional trombonist. This attack temporarily ended their working relationship, and Knepper was unable to perform at the concert. Charged with assault, Mingus appeared in court in January 1963 and was given a suspended sentence. Knepper did work with Mingus again in 1977 and played extensively with the Mingus Dynasty, formed after Mingus' death in 1979. In addition to bouts of ill temper, Mingus was prone to clinical depression and tended to have brief periods of extreme creative activity intermixed with fairly long stretches of greatly decreased output, such as the five-year period following the death of Eric Dolphy. In 1966, Mingus was evicted from his apartment at 5 Great Jones Street in New York City for nonpayment of rent, captured in the 1968 documentary film "Mingus: Charlie Mingus 1968", directed by Thomas Reichman. The film also features Mingus performing in clubs and in the apartment, firing a .410 shotgun indoors, composing at the piano, playing with and taking care of his young daughter Caroline, and discussing love, art, politics, and the music school he had hoped to create. Charles Mingus' music is currently being performed and reinterpreted by the Mingus Big Band, which in October 2008 began playing every Monday at Jazz Standard in New York City, and often tours the rest of the U.S. and Europe. The Mingus Big Band, the Mingus Orchestra, and the Mingus Dynasty band are managed by Jazz Workshop, Inc. and run by Mingus' widow Sue Graham Mingus. Elvis Costello has written lyrics for a few Mingus pieces. He once sang lyrics for one piece, "Invisible Lady", backed by the Mingus Big Band on the album "Tonight at Noon: Three of Four Shades of Love". "Epitaph" is considered one of Charles Mingus' masterpieces. The composition is 4,235 measures long, requires two hours to perform, and is one of the longest jazz pieces ever written. "Epitaph" was only completely discovered by musicologist Andrew Homzy during the cataloging process after Mingus' death. With the help of a grant from the Ford Foundation, the score and instrumental parts were copied, and the piece itself was premiered by a 30-piece orchestra, conducted by Gunther Schuller. This concert was produced by Mingus' widow, Sue Graham Mingus, at Alice Tully Hall on June 3, 1989, 10 years after Mingus' death. It was performed again at several concerts in 2007. The performance at Walt Disney Concert Hall is available on NPR. Hal Leonard published the complete score in 2008. Mingus wrote the sprawling, exaggerated quasi-autobiography, "Beneath the Underdog: His World as Composed by Mingus", throughout the 1960s, and it was published in 1971. Its "stream of consciousness" style covered several aspects of his life that had previously been off-record. 
In addition to recounting his musical and intellectual development, Mingus goes into great detail about his perhaps overstated sexual exploits. He claims to have had more than 31 affairs in the course of his life (including 26 prostitutes in one sitting). This does not include any of his five wives (he claims to have been married to two of them simultaneously). In addition, he asserts that he briefly worked as a pimp, though this has never been confirmed. Mingus's autobiography also serves as an insight into his psyche, as well as his attitudes about race and society. It includes accounts of abuse at the hands of his father from an early age, being bullied as a child, his removal from a white musicians' union, and grappling with disapproval of his marriages to white women, among other examples of hardship and prejudice. The work of Charles Mingus has also received attention in academia. According to Ashon Crawley, the musicianship of Charles Mingus provides a salient example of the power of music to unsettle the dualistic, categorical distinction of sacred from profane through otherwise epistemologies. Crawley offers a reading of Mingus that examines the deep imbrication uniting Holiness-Pentecostal aesthetic practices and jazz. Mingus recognized the importance and impact of the midweek gathering of black folks at the Holiness-Pentecostal Church at 79th and Watts in Los Angeles that he would attend with his stepmother or his friend Britt Woodman. Crawley goes on to argue that these visits were the impetus for the song "Wednesday Night Prayer Meeting." Emphasis is placed on the felt and experienced ethical demand of the prayer meeting, which, according to Crawley, Mingus attempts to capture. In many ways, "Wednesday Night Prayer Meeting" was Mingus's homage to black sociality. By exploring Mingus' homage to black Pentecostal aesthetics, Crawley expounds on how Mingus figured out that those Holiness-Pentecostal gatherings were the constant repetition of the ongoing, deep, intense mode of study, a kind of study wherein the aesthetic forms created could not be severed from the intellectual practice because they were one and also, but not, the same. Gunther Schuller has suggested that Mingus should be ranked among the most important American composers, jazz or otherwise. In 1988, a grant from the National Endowment for the Arts made possible the cataloging of Mingus compositions, which were then donated to the Music Division of the New York Public Library for public use. In 1993, the Library of Congress acquired Mingus's collected papers, including scores, sound recordings, correspondence and photos, in what they described as "the most important acquisition of a manuscript collection relating to jazz in the Library's history". Considering the number of compositions that Charles Mingus wrote, his works have not been recorded as often as those of comparable jazz composers. The only Mingus tribute albums recorded during his lifetime were baritone saxophonist Pepper Adams's album, "Pepper Adams Plays the Compositions of Charlie Mingus", in 1963, and Joni Mitchell's album "Mingus", in 1979. Of all his works, his elegy for Lester Young, "Goodbye Pork Pie Hat" (from "Mingus Ah Um"), has probably had the most recordings. The song has been covered by both jazz and non-jazz artists, such as Jeff Beck, Andy Summers, Eugene Chadbourne, and Bert Jansch and John Renbourn with and without Pentangle. Joni Mitchell sang a version with lyrics that she wrote for it. 
Elvis Costello has recorded "Hora Decubitus" (from "Mingus Mingus Mingus Mingus Mingus") on "My Flame Burns Blue" (2006). "Better Git It in Your Soul" was covered by Davey Graham on his album "Folk, Blues, and Beyond." Trumpeter Ron Miles performs a version of "Pithecanthropus Erectus" on his CD "Witness." New York Ska Jazz Ensemble has done a cover of Mingus's "Haitian Fight Song", as have the British folk rock group Pentangle and others. Hal Willner's 1992 tribute album "Weird Nightmare: Meditations on Mingus" (Columbia Records) contains idiosyncratic renditions of Mingus's works involving numerous popular musicians including Chuck D, Keith Richards, Henry Rollins and Dr. John. The Italian band Quintorigo recorded an entire album devoted to Mingus's music, titled "Play Mingus". Gunther Schuller's edition of Mingus's "Epitaph" which premiered at Lincoln Center in 1989 was subsequently released on Columbia/Sony Records. One of the most elaborate tributes to Mingus came on September 29, 1969, at a festival honoring him. Duke Ellington performed "The Clown", with Ellington reading Jean Shepherd's narration. It was long believed that no recording of this performance existed; however, one was discovered and premiered on July 11, 2013, by Dry River Jazz host Trevor Hodgkins for NPR member station KRWG-FM with re-airings on July 13, 2013, and July 26, 2014. Mingus's elegy for Duke, "Duke Ellington's Sound Of Love", was recorded by Kevin Mahogany on "Double Rainbow" (1993) and Anita Wardell on "Why Do You Cry?" (1995). On June 25, 2019, "The New York Times Magazine" listed Charles Mingus among hundreds of artists whose material was reportedly destroyed in the 2008 Universal fire.
https://en.wikipedia.org/wiki?curid=7668
Committee on Data for Science and Technology CODATA, the Committee on Data of the International Science Council, was established as the ICSU Committee on Data for Science and Technology in 1966. CODATA exists to promote global collaboration to advance Open Science and to improve the availability and usability of data for all areas of research. CODATA supports the principle that data produced by research and usable for research should be as open as possible and as closed as necessary. CODATA also works to advance the interoperability and usability of such data: research data should be FAIR (Findable, Accessible, Interoperable and Reusable). By promoting the policy, technological and cultural changes that are essential to promote Open Science, CODATA helps advance ISC’s vision and mission of advancing science as a global public good. The CODATA Strategic Plan 2015 and Prospectus of Strategy and Achievement 2016 identify three priority areas. CODATA pursues these objectives through a number of standing committees and strategic, executive-led initiatives, and through its task groups and working groups. CODATA supports the Data Science Journal and collaborates on major data conferences like SciDataCon and International Data Week. In October 2020 CODATA is co-organising an International FAIR Symposium together with the GO FAIR initiative to provide a forum for advancing international and cross-domain convergence around FAIR. The event will bring together a global data community with an interest in combining data across domains for a host of research issues, including major global challenges such as those relating to the Sustainable Development Goals. Outcomes will directly link to the CODATA Decadal Programme Data for the Planet: making data work for cross-domain grand challenges and to the developments of the GO FAIR community towards the Internet of FAIR data and services. One of CODATA's strategic initiatives and task groups concentrates on fundamental physical constants. Established in 1969, its purpose is to periodically provide the international scientific and technological communities with an internationally accepted set of values of the fundamental physical constants and closely related conversion factors for use worldwide. The first such CODATA set was published in 1973. Later versions are named based on the year of the data incorporated; the 1986 CODATA (published April 1987) used data up to 1 January 1986. All subsequent releases use data up to the "end" of the stated year, and are necessarily published a year or two later: 1998 (April 2000), 2002 (January 2005), 2006 (June 2008) and the sixth in 2010 (November 2012). Version 7.0, the "2014 CODATA", was published on 25 June 2015. The CODATA recommended values of fundamental physical constants are published at the NIST Reference on Constants, Units, and Uncertainty. Since 1998, the task group has produced a new version every four years, incorporating results published up to the end of the specified year. In order to support the redefinition of the SI base units, adopted at the 26th General Conference on Weights and Measures on 16 November 2018, CODATA made a special release that was published in October 2017. It incorporates all data up to 1 July 2017, and determines the final numerical values of the Planck constant, the elementary charge, the Boltzmann constant, and the Avogadro constant that are to be used for the new SI definitions. 
The last regular version, with a closing date of 31 December 2018, was used to produce the new 2018 CODATA values that were made available by the time the revised SI came into force on 20 May 2019. This was necessary because the redefinitions have a significant (mostly beneficial) effect on the uncertainties and correlation coefficients reported by CODATA.
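For programmatic work, the CODATA recommended values are bundled with common scientific libraries. The short sketch below assumes a Python environment with SciPy installed and looks up a few constants through the scipy.constants module; which CODATA adjustment is returned depends on the installed SciPy version, so the output is illustrative rather than a statement about any particular release.

```python
# Minimal sketch: reading CODATA recommended values from SciPy's bundled tables.
# The CODATA adjustment used depends on the installed SciPy version.
from scipy import constants

names = (
    "Planck constant",
    "elementary charge",
    "Boltzmann constant",
    "Avogadro constant",
)

for name in names:
    value = constants.value(name)          # recommended numerical value
    unit = constants.unit(name)            # SI unit string
    rel_unc = constants.precision(name)    # relative standard uncertainty
    print(f"{name}: {value} {unit} (relative uncertainty {rel_unc})")
```

In SciPy builds that ship the 2018 adjustment or later, the relative uncertainty reported for these four constants is zero, reflecting their exact status under the revised SI definitions described above.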
https://en.wikipedia.org/wiki?curid=7671
Chuck Jones Charles Martin Jones (September 21, 1912 – February 22, 2002) was an American animated filmmaker and cartoonist, best known for his work with Warner Bros. Cartoons on the "Looney Tunes" and "Merrie Melodies" shorts. He wrote, produced, and/or directed many classic animated cartoon shorts starring Bugs Bunny, Daffy Duck, Wile E. Coyote and the Road Runner, Pepé Le Pew, Porky Pig, Michigan J. Frog, the Three Bears, and a slew of other Warner characters. After his career at Warner Bros. ended in 1962, Jones started Sib Tower 12 Productions, and began producing cartoons for Metro-Goldwyn-Mayer, including a new series of "Tom and Jerry" cartoons and the television adaptation of Dr. Seuss' "How the Grinch Stole Christmas!". He later started his own studio, Chuck Jones Enterprises, which created several one-shot specials, and periodically worked on "Looney Tunes"-related projects. Jones was nominated for eight Academy Awards, winning three times. He won for the cartoons "For Scent-imental Reasons", "So Much for So Little", and "The Dot and the Line". Robin Williams presented Jones with an Honorary Academy Award in 1996 for his iconic work in the animation industry. Film historian Leonard Maltin has praised Jones' work at Warner Bros., MGM and Chuck Jones Enterprises. He also said that the "feud" that may have existed between Jones and colleague Bob Clampett was mainly because they were so different from each other. In Jerry Beck's "The 50 Greatest Cartoons", ten of the entries were directed by Jones, with four out of the five top cartoons (including first place) being Jones shorts. Jones was born on September 21, 1912, in Spokane, Washington, the son of Mabel McQuiddy (Martin) and Charles Adams Jones. He later moved with his parents and three siblings to the Los Angeles, California area. In his autobiography, "Chuck Amuck", Jones credits his artistic bent to circumstances surrounding his father, who was an unsuccessful businessman in California in the 1920s. His father, Jones recounts, would start every new business venture by purchasing new stationery and new pencils with the company name on them. When the business failed, his father would quietly turn the huge stacks of useless stationery and pencils over to his children, requiring them to use up all the material as fast as possible. Armed with an endless supply of high-quality paper and pencils, the children drew constantly. Later, in one art school class, the professor gravely informed the students that they each had 100,000 bad drawings in them that they must first get past before they could possibly draw anything worthwhile. Jones recounted years later that this pronouncement came as a great relief to him, as he was well past the 200,000 mark, having used up all that stationery. Jones and several of his siblings went on to artistic careers. During his artistic education, he worked part-time as a janitor. After graduating from Chouinard Art Institute, Jones got a phone call from a friend named Fred Kopietz, who had been hired by the Ub Iwerks studio and offered him a job. He worked his way up in the animation industry, starting as a cel washer; "then I moved up to become a painter in black and white, some color. Then I went on to take animator's drawings and traced them onto the celluloid. Then I became what they call an in-betweener, which is the guy that does the drawing between the drawings the animator makes". While at Iwerks, he met a cel painter named Dorothy Webster, who later became his first wife. 
Jones joined Leon Schlesinger Productions, the independent studio that produced "Looney Tunes" and "Merrie Melodies" for Warner Bros., in 1933 as an assistant animator. In 1935, he was promoted to animator and assigned to work with a new Schlesinger director Tex Avery. There was no room for the new Avery unit in Schlesinger's small studio, so Avery, Jones, and fellow animators Bob Clampett, Virgil Ross, and Sid Sutherland were moved into a small adjacent building they dubbed "Termite Terrace". When Clampett was promoted to director in 1937, Jones was assigned to his unit; the Clampett unit was briefly assigned to work with Jones' old employer, Ub Iwerks when Iwerks subcontracted four cartoons to Schlesinger in 1937. Jones became a director (or "supervisor", the original title for an animation director in the studio) himself in 1938 when Frank Tashlin left the studio. The following year Jones created his first major character, Sniffles, a cute Disney-style mouse, who went on to star in twelve Warner Bros. cartoons. He was actively involved in efforts to unionize the staff of Leon Schlesinger Studios. He was responsible for recruiting animators, layout men, and background people. Almost all animators joined, in reaction to salary cuts imposed by Leon Schlesinger. The Metro-Goldwyn-Mayer cartoon studio had already signed a union contract, encouraging their counterparts under Schlesinger. In a meeting with his staff, Schlesinger talked for a few minutes, then turned over the meeting to his attorney. His insulting manner had a unifying effect on the staff. Jones gave a pep talk at the union headquarters. As negotiations broke down, the staff decided to go on strike. Schlesinger locked them out of the studio for a few days, before agreeing to sign the contract. A Labor-Management Committee was formed and Jones served as a moderator. Because of his role as a supervisor in the studio, he could not himself join the union. Jones created many of his lesser-known characters during this period, including Charlie Dog, Hubie and Bertie, and The Three Bears. During World War II, Jones worked closely with Theodor Geisel, better known as Dr. Seuss, to create the "Private Snafu" series of Army educational cartoons (the character was created by director Frank Capra). Jones later collaborated with Seuss on animated adaptations of Seuss' books, including "How the Grinch Stole Christmas!" in 1966. Jones directed such shorts as "The Weakly Reporter", a 1944 short that related to shortages and rationing on the home front. During the same year, he directed "Hell-Bent for Election", a campaign film for Franklin D. Roosevelt. Jones created characters through the late 1940s and the 1950s, which include Claude Cat, Marc Antony and Pussyfoot, Charlie Dog, Michigan J. Frog, Gossamer, and his four most popular creations, Marvin the Martian, Pepé Le Pew, Wile E. Coyote and the Road Runner. Jones and writer Michael Maltese collaborated on the Road Runner cartoons, "Duck Amuck", "One Froggy Evening", and "What's Opera, Doc?". Other staff at Unit A that Jones collaborated with include layout artist, background designer, co-director Maurice Noble; animator and co-director Abe Levitow; and animators Ken Harris and Ben Washam. Jones remained at Warner Bros. throughout the 1950s, except for a brief period in 1953 when Warner closed the animation studio. During this interim, Jones found employment at Walt Disney Productions, where he teamed with Ward Kimball for a four-month period of uncredited work on "Sleeping Beauty" (1959). 
Upon the reopening of the Warner animation department, Jones was rehired and reunited with most of his unit. In the early 1960s, Jones and his wife Dorothy wrote the screenplay for the animated feature "Gay Purr-ee". The finished film would feature the voices of Judy Garland, Robert Goulet and Red Buttons as cats in Paris, France. The feature was produced by UPA and directed by his former Warner Bros. collaborator, Abe Levitow. Jones moonlighted to work on the film since he had an exclusive contract with Warner Bros. UPA completed the film and made it available for distribution in 1962; it was picked up by Warner Bros. When Warner Bros. discovered that Jones had violated his exclusive contract with them, they terminated him. Jones' former animation unit was laid off after completing the final cartoon in their pipeline, "The Iceman Ducketh", and the rest of the Warner Bros. Cartoons studio was closed in early 1963. With business partner Les Goldman, Jones started an independent animation studio, Sib Tower 12 Productions, and brought on most of his unit from Warner Bros., including Maurice Noble and Michael Maltese. In 1963, Metro-Goldwyn-Mayer contracted with Sib Tower 12 to have Jones and his staff produce new "Tom and Jerry" cartoons as well as a television adaptation of all "Tom and Jerry" theatricals produced to that date. This included major editing, including writing out the African-American maid, Mammy Two-Shoes, and replacing her with one of Irish descent voiced by June Foray. In 1964, Sib Tower 12 was absorbed by MGM and was renamed MGM Animation/Visual Arts. His animated short film, "The Dot and the Line: A Romance in Lower Mathematics", won the 1965 Academy Award for Best Animated Short Film. Jones directed the classic animated short "The Bear That Wasn't". As the "Tom and Jerry" series wound down (it was discontinued in 1967), Jones produced more for television. In 1966, he produced and directed the TV special "How the Grinch Stole Christmas!", featuring the voice and facial models based on the readings by Boris Karloff. Jones continued to work on other TV specials such as "Horton Hears a Who!" (1970), but his main focus during this time was producing the feature film "The Phantom Tollbooth", which did lukewarm business when MGM released it in 1970. Jones co-directed 1969's "The Pogo Special Birthday Special", based on the Walt Kelly comic strip, and voiced the characters of Porky Pine and Bun Rab. It was at this point that he decided to start ST Incorporated. MGM closed the animation division in 1970, and Jones once again started his own studio, Chuck Jones Enterprises. He produced a Saturday morning children's TV series for the American Broadcasting Company called "The Curiosity Shop" in 1971. In 1973, he produced an animated version of the George Selden book "The Cricket in Times Square" and would go on to produce two sequels. Three of his works during this period were animated TV adaptations of short stories from Rudyard Kipling's "Mowgli's Brothers", "The White Seal" and "Rikki-Tikki-Tavi". During this period, Jones began to experiment with more realistically designed characters, most of which having larger eyes, leaner bodies, and altered proportions, such as those of the Looney Tunes characters. Jones resumed working with Warner Bros. in 1976 with the animated TV adaptation of "The Carnival of the Animals" with Bugs Bunny and Daffy Duck. 
Jones also produced "The Bugs Bunny/Road Runner Movie" (1979), which was a compilation of Jones' best theatrical shorts; Jones produced new Road Runner shorts for "The Electric Company" series and "Bugs Bunny's Looney Christmas Tales" (1979). New shorts were made for "Bugs Bunny's Bustin' Out All Over" (1980). From 1977 to 1978, Jones wrote and drew the newspaper comic strip "Crawford" (also known as "Crawford & Morgan") for the Chicago Tribune-NY News Syndicate. In 2011 IDW Publishing collected Jones' strip as part of their Library of American Comic Strips. In 1978, Jones' wife Dorothy died; three years later, he married Marian Dern, the writer of the comic strip "Rick O'Shay". On December 11, 1975, shortly after the release of "Bugs Bunny Superstar", which prominently featured Bob Clampett, Jones wrote a letter to Tex Avery, accusing Clampett of taking credit for ideas that were not his, and for characters created by other directors (notably Jones' Sniffles and Friz Freleng's Yosemite Sam). Their correspondence was never published in the media. It was forwarded to Michael Barrier, who had conducted the interview with Clampett, and was distributed by Jones to multiple people concerned with animation over the years. Robert McKimson claimed in an interview that many animators but mostly Clampett contributed to the crazy personality of Bugs, while others like Chuck Jones concentrated more on calmer gags. As far as plagiarism is concerned, McKimson claimed the animators would always be looking at each other's sheets to see if they could borrow some punchlines and cracks. Through the 1980s and 1990s, Jones was painting cartoon and parody art, sold through animation galleries by his daughter's company, Linda Jones Enterprises. Jones was the creative consultant and character designer for two Raggedy Ann animated specials and the first "Alvin and the Chipmunks" Christmas special "A Chipmunk Christmas". He made a cameo appearance in the film "Gremlins" (1984) and directed the Bugs Bunny/Daffy Duck animated sequences that bookend its 1990 sequel, "Gremlins 2: The New Batch". Jones directed animated sequences for various features such as a lengthy sequence in the film "Stay Tuned" (1992) and a shorter one seen at the start of the Robin Williams vehicle "Mrs. Doubtfire" (1993). Also during the 1980s and 1990s, Jones served on the advisory board of the National Student Film Institute. Jones' final Looney Tunes cartoon was "From Hare to Eternity" (1997), which starred Bugs Bunny and Yosemite Sam, with Greg Burson voicing Bugs. The cartoon was dedicated to Friz Freleng, who had died in 1995. Jones' final animation project was a series of 13 shorts starring Thomas Timber Wolf, a timber wolf character he had designed in the 1960s. The series was released online by Warner Bros. in 2000. From 2001 until 2004, Cartoon Network aired "The Chuck Jones Show", which featured shorts directed by him. The show won the Annie Award for Outstanding Achievement in an Animated Special Project. In 1997, Jones was awarded the Edward MacDowell Medal. In 1999, he founded the non-profit Chuck Jones Center for Creativity, in Costa Mesa, California, an art education "gymnasium for the brain" dedicated to teaching creative skills, primarily to children and seniors, which is still in operation. In his later years, he recovered from skin cancer and underwent hip and ankle replacements. Jones died of heart failure on February 22, 2002, at the age of 89. He was cremated and his ashes were scattered at sea. 
After his death, the Looney Tunes cartoon "Daffy Duck for President", based on the book that Jones had written and using Jones' style for the characters, originally scheduled to be released in 2000, was released in 2004 as part of disc three of a "Looney Tunes" DVD collection. Jones received an Honorary Academy Award in 1996 from the board of governors of the Academy of Motion Picture Arts and Sciences, for "the creation of classic cartoons and cartoon characters whose animated lives have brought joy to our real ones for more than half a century." At that year's awards show, Robin Williams, a self-confessed "Jones-aholic," presented the honorary award to Jones, calling him "the Orson Welles of cartoons", and the audience gave Jones a standing ovation as he walked onto the stage. A flattered Jones wryly remarked in his acceptance speech, "Well, what can I say in the face of such humiliating evidence? I stand guilty before the world of directing over three hundred cartoons in the last fifty or sixty years. Hopefully, this means you've forgiven me." He received the Lifetime Achievement Award at the World Festival of Animated Film – Animafest Zagreb in 1988. Jones was a historical authority as well as a major contributor to the development of animation throughout the 20th century. In 1990, Jones received the Golden Plate Award of the American Academy of Achievement. He received an honorary degree from Oglethorpe University in 1993. For his contribution to the motion picture industry, Jones has a star on the Hollywood Walk of Fame at 7011 Hollywood Blvd. Jones' life and legacy were celebrated on January 12, 2012, with the official grand opening of "The Chuck Jones Experience" at Circus Circus Las Vegas. Many of Jones' family members welcomed celebrities, animation aficionados and visitors when they opened the attraction in an appropriately unconventional way. Among those in attendance were Jones' widow, Marian Jones; daughter Linda Clough; and grandchildren Craig, Todd and Valerie Kausen.
https://en.wikipedia.org/wiki?curid=7672
Costume Costume is the distinctive style of dress of an individual or group that reflects class, gender, profession, ethnicity, nationality, activity or epoch. The term also was traditionally used to describe typical appropriate clothing for certain activities, such as riding costume, swimming costume, dance costume, and evening costume. Appropriate and acceptable costume is subject to changes in fashion and local cultural norms. This general usage has gradually been replaced by the terms "dress", "attire", "robes" or "wear" and usage of "costume" has become more limited to unusual or out-of-date clothing and to attire intended to evoke a change in identity, such as theatrical, Halloween, and mascot costumes. Before the advent of ready-to-wear apparel, clothing was made by hand. When made for commercial sale it was made, as late as the beginning of the 20th century, by "costumiers", often women who ran businesses that met the demand for complicated or intimate female costume, including millinery and corsetry. Costume comes from the same Italian word, inherited via French, which means fashion or custom. National costume or regional costume expresses local (or exiled) identity and emphasizes a culture's unique attributes. Such costumes are often a source of national pride. Examples include the Scottish kilt or Japanese kimono. In Bhutan there is a traditional national dress prescribed for men and women, including the monarchy. These have been in vogue for thousands of years and have developed into a distinctive dress style. The dress worn by men, known as the Gho, is a knee-length robe fastened at the waist by a band called the Kera. The front part of the dress, which is formed like a pouch, was in olden days used to hold baskets of food and a short dagger, but is now used to carry a cell phone, purse and the betel nut called "Doma". The dress worn by women consists of three pieces known as Kira, Tego and Wonju. The long dress which extends to the ankle is the Kira; the jacket worn above it is the Tego, which is worn with the Wonju, the inner jacket. However, while visiting a Dzong or monastery, men wear a long scarf or stole, called a Kabney, across the shoulder, in colours appropriate to their rank. Women also wear scarves or stoles, called Rachus, made of embroidered raw silk, over the shoulder; these are not indicative of rank. "Costume" often refers to a particular style of clothing worn to portray the wearer as a character or type of character at a social event, in a theatrical performance on the stage, or in film or television. In combination with other aspects of stagecraft, theatrical costumes can help actors portray characters and their contexts as well as communicate information about the historical period/era, geographic location and time of day, season or weather of the theatrical performance. Some stylized theatrical costumes, such as Harlequin and Pantaloon in the Commedia dell'arte, exaggerate an aspect of a character. "Costume technician" is the term for a person who constructs and/or alters costumes. The costume technician is responsible for taking the two-dimensional sketch and translating it to create a garment that resembles the designer's rendering. It is important for a technician to keep the ideas of the designer in mind when building the garment. Draping is the art of manipulating the fabric using pins and hand stitching to create structure on a body. This is usually done on a dress form to get the right shape for the performer. 
Cutting is the act of laying out fabric on a flat surface, using scissors to cut and follow along a pattern. These pieces are put together to create a final costume. The job of a costume designer is to design and create a concept for the costumes for the play or performance. The job of a costume technician is to construct and pattern the costumes for the play or performance. The wardrobe supervisor oversees the wardrobe crew and run of the show from backstage. They are responsible for maintaining the good condition of the costumes. Millinery, also known as hatmaking, is the manufacture of hats and headwear. The wearing of costumes is an important part of holidays developed from religious festivals such as Mardi Gras (in the lead up to Easter), and Halloween (related to All Hallow's Eve). Mardi Gras costumes usually take the form of jesters and other fantasy characters; Halloween costumes traditionally take the form of supernatural creatures such as ghosts, vampires, pop-culture icons and angels. In modern times, Christmas costumes typically portray characters such as Santa Claus (developed from Saint Nicholas). In Australia, the United Kingdom and the United States, the American version of a Santa suit and beard is popular; in the Netherlands, the costume of Zwarte Piet is customary. Easter costumes are associated with the Easter Bunny or other animal costumes. In Judaism, a common practice is to dress up on Purim. During this holiday, Jews celebrate the change of their destiny. They were delivered from being the victims of an evil decree against them and were instead allowed by the King to destroy their enemies. A quote from the Book of Esther, which says "on the contrary", is given as the reason that wearing a costume has become customary for this holiday. Buddhist religious festivals in Tibet, Bhutan, Mongolia, and in Lhasa and Sikkim in India feature the Cham dance, a popular dance form utilising masks and costumes. Parades and processions provide opportunities for people to dress up in historical or imaginative costumes. For example, in 1879 the artist Hans Makart designed costumes and scenery to celebrate the wedding anniversary of the Austro-Hungarian Emperor and Empress and led the people of Vienna in a costume parade that became a regular event until the mid-twentieth century. Uncle Sam costumes are worn on Independence Day in the United States. The Lion Dance, which is part of Chinese New Year celebrations, is performed in costume. Some costumes, such as the ones used in the Dragon Dance, need teams of people to create the required effect. Public sporting events such as fun runs also provide opportunities for wearing costumes, as do private masquerade balls and fancy dress parties. Costumes are popularly employed at sporting events, during which fans dress as their team's representative mascot to show their support. Businesses use mascot costumes to attract customers, either by placing the mascot in the street outside the business or by sending it out to sporting events, festivals, national celebrations, fairs, and parades. Mascots appear at organizations wanting to raise awareness of their work. Children's book authors create mascots of their main characters to present at book signings. Animal costumes that are visually very similar to mascot costumes are also popular among the members of the furry fandom, where the costumes are referred to as fursuits and match one's animal persona, or "fursona". 
Costumes also serve as an avenue for children to explore and role-play. For example, children may dress up as characters from history or fiction, such as pirates, princesses, cowboys, or superheroes. They may also dress in uniforms used in common jobs, such as nurses, police officers, or firefighters, or as zoo or farm animals. Young boys tend to prefer costumes that reinforce stereotypical ideas of being male, and young girls tend to prefer costumes that reinforce stereotypical ideas of being female. Cosplay, a word of Japanese origin that in English is short for "costume play", is a performance art in which participants wear costumes and accessories to represent a specific character or idea that is usually identified with a unique name (as opposed to a generic word). These costume wearers often interact to create a subculture centered on role play, so they can be seen most often in play groups, or at a gathering or convention. A significant number of these costumes are homemade and unique, and depend on the character, idea, or object the costume wearer is attempting to imitate or represent. The costumes themselves are often artistically judged according to how well they represent the subject or object that the costume wearer is attempting to portray. Costume design is the envisioning of clothing and the overall appearance of a character or performer. Costume may refer to the style of dress particular to a nation, a class, or a period. In many cases, it may contribute to the fullness of the artistic, visual world that is unique to a particular theatrical or cinematic production. The most basic designs are produced to denote status, provide protection or modesty, or provide visual interest to a character. Costumes may be for, but not limited to, theater, cinema, or musical performances. Costume design should not be confused with costume coordination, which merely involves altering existing clothing, although both processes are used to create stage clothes. The Costume Designers Guild's international membership includes motion picture, television, and commercial costume designers, assistant costume designers and costume illustrators, and totals over 750 members. "The Costume Designer" is a quarterly magazine devoted to the costume design industry. Notable costume designers include recipients of the Academy Award for Best Costume Design, Tony Award for Best Costume Design, and Drama Desk Award for Outstanding Costume Design. Edith Head and Orry-Kelly, both of whom were born late in 1897, were two of Hollywood's most notable costume designers. In the 20th century, contemporary fabric stores offered commercial patterns that could be bought and used to make a costume from raw materials. Some companies also began producing catalogs with great numbers of patterns. More recently, and particularly with the advent of the Internet, the DIY movement has ushered in a new era of DIY costumes and pattern sharing. Sites such as YouTube, Pinterest, and Mashable also feature many DIY costume ideas. Professional-grade costumes are typically designed and produced by artisan crafters, often specifically for a particular character or setting. Specialty shops may also carry common costumes of this caliber. Some high-end costumes may even be designed by the costume's wearer. The costume industry includes vendors such as the American company Spirit Halloween, which opens consumer-oriented stores seasonally with pre-made Halloween costumes.
https://en.wikipedia.org/wiki?curid=7673
Cable car (railway) A cable car (usually known as a cable tram outside North America) is a type of cable railway used for mass transit where rail cars are hauled by a continuously moving cable running at a constant speed. Individual cars stop and start by releasing and gripping this cable as required. Cable cars are distinct from funiculars, where the cars are permanently attached to the cable. The first cable-operated railway, employing a moving rope that could be picked up or released by a grip on the cars, was the Fawdon Wagonway in 1826, a colliery railway line. The London and Blackwall Railway, which opened for passengers in east London, England, in 1840, used such a system. The rope available at the time proved too susceptible to wear and the system was abandoned in favour of steam locomotives after eight years. In America, the first cable car installation in operation was probably the West Side and Yonkers Patent Railway in New York City, which was also the city's first elevated railway and ran from 1 July 1868 to 1870. The cable technology used in this elevated railway involved collar-equipped cables and claw-equipped cars, which proved cumbersome. The line was closed and rebuilt, reopening with steam locomotives. In 1869 P. G. T. Beauregard demonstrated a cable car at New Orleans and was issued a patent for it. Other cable cars to use grips were those of the Clay Street Hill Railroad, which later became part of the San Francisco cable car system. The building of this line was promoted by Andrew Smith Hallidie with design work by William Eppelsheimer, and it was first tested in 1873. The success of these grips ensured that this line became the model for other cable car transit systems, and this model is often known as the "Hallidie Cable Car". In 1881 the Dunedin cable tramway system opened in Dunedin, New Zealand and became the first such system outside San Francisco. For Dunedin, George Smith Duncan further developed the Hallidie model, introducing the pull curve and the slot brake; the former was a way to pull cars through a curve, since Dunedin's curves were too sharp to allow coasting, while the latter forced a wedge down into the cable slot to stop the car. Both of these innovations were generally adopted by other cities, including San Francisco. In Australia, the Melbourne cable tramway system operated from 1885 to 1940. It was one of the most extensive in the world with 1200 trams and trailers operating over 15 routes with 103 km (64 miles) of track. Sydney also had a couple of cable tram routes. Cable cars rapidly spread to other cities, although the major attraction for most was the ability to displace horsecar (or mule-drawn) systems rather than the ability to climb hills. Many people at the time viewed horse-drawn transit as unnecessarily cruel, and the fact that a typical horse could work only four or five hours per day necessitated the maintenance of large stables of draft animals that had to be fed, housed, groomed, medicated and rested. Thus, for a period, economics worked in favour of cable cars even in relatively flat cities. For example, the Chicago City Railway, also designed by Eppelsheimer, opened in Chicago in 1882 and went on to become the largest and most profitable cable car system. As with many cities, the problem in flat Chicago was not one of incline, but of transportation capacity. This led to a different approach to the combination of grip car and trailer. 
Rather than using a grip car and single trailer, as many cities did, or combining the grip and trailer into a single car, like San Francisco's "California Cars", Chicago used grip cars to pull trains of up to three trailers. In 1883 the New York and Brooklyn Bridge Railway was opened, which had a most curious feature: though it was a cable car system, it used steam locomotives to get the cars into and out of the terminals. After 1896 the system was changed to one on which a motor car was added to each train to maneuver at the terminals, while en route, the trains were still propelled by the cable. On 25 September 1883, a test of a cable car system was held by Liverpool United Tramways and Omnibus Company in Kirkdale, Liverpool. This would have been the first cable car system in Europe, but the company decided against implementing it. Instead, the distinction went to the 1884 route from Archway to Highgate, north London, which used a continuous cable and grip system on the 1 in 11 (9%) climb of Highgate Hill. The installation was not reliable and was replaced by electric traction in 1909. Other cable car systems were implemented in Europe, though, among which was the Glasgow District Subway, the first underground cable car system, in 1896. (London, England's first deep-level tube railway, the City & South London Railway, had earlier also been built for cable haulage but had been converted to electric traction before opening in 1890.) A few more cable car systems were built in the United Kingdom, Portugal, and France. European cities, having many more curves in their streets, were ultimately less suitable for cable cars than American cities. Though some new cable car systems were still being built, by 1890 the cheaper to construct and simpler to operate electrically-powered trolley or tram started to become the norm, and eventually started to replace existing cable car systems. For a while hybrid cable/electric systems operated, for example in Chicago where electric cars had to be pulled by grip cars through the loop area, due to the lack of trolley wires there. Eventually, San Francisco became the only street-running manually operated system to survive—Dunedin, the second city with such cars, was also the second-last city to operate them, closing down in 1957. In the last decades of the 20th-century, cable traction in general has seen a limited revival as automatic people movers, used in resort areas, airports (for example, Toronto Airport), huge hospital centers and some urban settings. While many of these systems involve cars permanently attached to the cable, the Minimetro system from Poma/Leitner Group and the Cable Liner system from DCC Doppelmayr Cable Car both have variants that allow the cars to be automatically decoupled from the cable under computer control, and can thus be considered a modern interpretation of the cable car. The cable is itself powered by a stationary motor or engine situated in a cable house or power house. The speed at which it moves is relatively constant depending on the number of units gripping the cable at any given time. The cable car begins moving when a clamping device attached to the car, called a "grip", applies pressure to ("grips") the moving cable. Conversely, the car is stopped by releasing pressure on the cable (with or without completely detaching) and applying the brakes. 
This gripping and releasing action may be manual, as was the case in all early cable car systems, or automatic, as is the case in some recent cable-operated people mover systems. Gripping must be an even and gradual process in order to avoid bringing the car to cable speed too quickly and unacceptably jarring passengers. In the case of manual systems, the grip resembles a very large pair of pliers, and considerable strength and skill are required to operate the car. As many early cable car operators discovered the hard way, if the grip is not applied properly, it can damage the cable, or even worse, become entangled in the cable. In the latter case, the cable car may not be able to stop and can wreak havoc along its route until the cable house realizes the mishap and halts the cable. One apparent advantage of the cable car is its relative energy efficiency. This is due to the economy of centrally located power stations, and the ability of descending cars to transfer energy to ascending cars. However, this advantage is totally negated by the relatively large energy consumption required to simply move the cable over and under the numerous guide rollers and around the many sheaves. Approximately 95% of the tractive effort in the San Francisco system is expended in simply moving the four cables at 9.5 miles per hour. Electric cars with regenerative braking offer these advantages without the problem of moving a cable. In the case of steep grades, however, cable traction has the major advantage of not depending on adhesion between wheels and rails. There is also the advantage that keeping the car gripped to the cable limits the downhill speed to that of the cable. Because of the constant and relatively low speed, a cable car's potential to cause harm in an accident can be underestimated. Even with a cable car traveling at only 9 miles per hour, the mass of the cable car and the combined strength and speed of the cable can cause extensive damage in a collision. A cable car is superficially similar to a funicular, but differs from such a system in that its cars are not permanently attached to the cable and can stop independently, whereas a funicular has cars that are permanently attached to the propulsion cable, which is itself stopped and started. A cable car cannot climb as steep a grade as a funicular, but many more cars can be operated with a single cable, making it more flexible, and allowing a higher capacity. During the rush hour on San Francisco's Market Street Railway in 1883, a car would leave the terminal every 15 seconds. A few funicular railways operate in street traffic, and because of this operation they are often incorrectly described as cable cars. Even more confusingly, a hybrid cable car/funicular line once existed in the form of the original Wellington Cable Car, in the New Zealand city of Wellington. This line had both a continuous loop haulage cable that the cars gripped using a cable car gripper, and a balance cable permanently attached to both cars over an undriven pulley at the top of the line. The descending car gripped the haulage cable and was pulled downhill, in turn pulling the ascending car (which remained ungripped) uphill by the balance cable. This line was rebuilt in 1979 and is now a standard funicular, although it retains its old cable car name. The best known existing cable car system is the San Francisco cable car system in the city of San Francisco, California. 
San Francisco's cable cars constitute the oldest and largest such system in permanent operation, and it is the only one to still operate in the traditional manner, with manually operated cars running in street traffic. Several cities operate a modern version of the cable car system. These systems are fully automated and run on their own reserved right of way. They are commonly referred to as people movers, although that term is also applied to systems with other forms of propulsion, including funicular-style cable propulsion.
https://en.wikipedia.org/wiki?curid=7674
Creaky voice In linguistics, creaky voice (sometimes called laryngealisation, pulse phonation, vocal fry, or glottal fry) is a special kind of phonation in which the arytenoid cartilages in the larynx are drawn together; as a result, the vocal folds are compressed rather tightly, becoming relatively slack and compact. They normally vibrate irregularly at 20–50 pulses per second, about two octaves below the frequency of modal voicing, and the airflow through the glottis is very slow. Although creaky voice may occur with very low pitch, as at the end of a long intonation unit, it can also occur with a higher pitch. Researcher Ikuko Patricia Yuasa found that "college-age Americans ... perceive female creaky voice as hesitant, nonaggressive, and informal but also educated, urban-oriented, and upwardly mobile." However, according to a 2012 study in "PLOS ONE", young women using creaky voice are viewed as less competent, less educated, less trustworthy, less attractive and less employable. Some suggest that creaky voice can function as a marker of parentheticals in conversations; phrases uttered with creaky voice may contain less central information. In some languages, such as Jalapa Mazatec, creaky voice has a phonemic status; that is, the presence or absence of creaky voice can change the meaning of a word. In the International Phonetic Alphabet, creaky voice of a phone is represented by a tilde diacritic placed below the symbol for that phone. The Danish prosodic feature "stød" is an example of a form of laryngealisation that has a phonemic function. A slight degree of laryngealisation, occurring in some Korean consonants for example, is called "stiff voice".
https://en.wikipedia.org/wiki?curid=7676
Computer monitor A computer monitor is an output device that displays information in pictorial form. A monitor usually comprises the visual display, circuitry, casing, and power supply. The display device in modern monitors is typically a thin film transistor liquid crystal display (TFT-LCD) with LED backlighting, which has replaced cold-cathode fluorescent lamp (CCFL) backlighting. Older monitors used a cathode ray tube (CRT). Monitors are connected to the computer via VGA, Digital Visual Interface (DVI), HDMI, DisplayPort, Thunderbolt, low-voltage differential signaling (LVDS) or other proprietary connectors and signals. Originally, computer monitors were used for data processing while television sets were used for entertainment. From the 1980s onwards, computers (and their monitors) have been used for both data processing and entertainment, while televisions have implemented some computer functionality. The common aspect ratio of televisions and computer monitors has changed from 4:3 to 16:10 and then to 16:9. Modern computer monitors are easily interchangeable with conventional television sets. However, as computer monitors do not necessarily include integrated speakers, it may not be possible to use a computer monitor without external components. Early electronic computers were fitted with a panel of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a very limited amount of information and were very transient, they were rarely considered for program output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the program's operation. As technology developed, engineers realized that the output of a CRT display was more flexible than a panel of light bulbs; eventually, by giving the program itself control of what was displayed, the monitor became a powerful output device in its own right. Computer monitors were formerly known as visual display units (VDU), but this term had mostly fallen out of use by the 1990s. Multiple technologies have been used for computer monitors. Until the 21st century most used cathode ray tubes, but they have largely been superseded by LCD monitors. The first computer monitors used cathode ray tubes (CRTs). Prior to the advent of home computers in the late 1970s, it was common for a video display terminal (VDT) using a CRT to be physically integrated with a keyboard and other components of the system in a single large chassis. The display was monochrome and far less sharp and detailed than on a modern flat-panel monitor, necessitating the use of relatively large text and severely limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for specialized military, industrial and scientific applications but they were far too costly for general use. Some of the earliest home computers (such as the TRS-80 and Commodore PET) were limited to monochrome CRT displays, but color display capability was already a standard feature of the pioneering Apple II, introduced in 1977, and the specialty of the more graphically sophisticated Atari 800, introduced in 1979. 
Either computer could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality. Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors with a resolution of 320 x 200 pixels, or it could produce 640 x 200 pixels with two colors. In 1984 IBM introduced the Enhanced Graphics Adapter which was capable of producing 16 colors and had a resolution of 640 x 350. By the end of the 1980s color CRT monitors that could clearly display 1024 x 768 pixels were widely available and increasingly affordable. During the following decade, maximum display resolutions gradually increased and prices continued to fall. CRT technology remained dominant in the PC monitor market into the new millennium partly because it was cheaper to produce and offered viewing angles close to 180 degrees. CRTs still offer some image quality advantages over LCDs but improvements to the latter have made them much less obvious. The dynamic range of early LCD panels was very poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry. There are multiple technologies that have been used to implement liquid crystal displays (LCD). Throughout the 1990s, the primary use of LCD technology as computer monitors was in laptops where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. Commonly, the same laptop would be offered with an assortment of display options at increasing price points: (active or passive) monochrome, passive color, or active matrix color (TFT). As volume and manufacturing capability improved, the monochrome and passive color technologies were dropped from most product lines. TFT-LCD is a variant of LCD which is now the dominant technology used for computer monitors. The first standalone LCDs appeared in the mid-1990s selling for high prices. As prices declined over a period of years they became more popular, and by 1997 were competing with CRT monitors. Among the first desktop LCD computer monitors were the Eizo L66 in the mid-1990s, the Apple Studio Display in 1998, and the Apple Cinema Display in 1999. In 2003, TFT-LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors. The main advantages of LCDs over CRT displays are that LCDs consume less power, take up much less space, and are considerably lighter. The now common active matrix TFT-LCD technology also has less flickering than CRTs, which reduces eye strain. On the other hand, CRT monitors have superior contrast, have a superior response time, are able to use multiple screen resolutions natively, and there is no discernible flicker if the refresh rate is set to a sufficiently high value. LCD monitors now have very high temporal accuracy and can be used for vision research. High dynamic range (HDR) has been implemented into high-end LCD monitors to improve color accuracy. Since around the late 2000s, widescreen LCD monitors have become popular, in part due to television series, motion pictures and video games transitioning to high-definition (HD), which makes standard-width monitors unable to display them correctly as they either stretch or crop HD content. 
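The trade-off in the early CGA graphics modes mentioned above (four colors at 320 x 200, or two colors at 640 x 200) follows directly from a fixed amount of video memory: doubling the horizontal resolution requires halving the bits per pixel. The following minimal sketch of that arithmetic is illustrative only; the helper name is made up for this example.

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Bytes of video memory needed to store one full frame."""
    return width * height * bits_per_pixel // 8

# Four colors require 2 bits per pixel; two colors require 1 bit per pixel.
print(framebuffer_bytes(320, 200, 2))  # 16000 bytes
print(framebuffer_bytes(640, 200, 1))  # 16000 bytes
```

Both modes consume the same 16,000 bytes per frame, which is why the higher-resolution mode had to give up color depth rather than demand more memory.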
These types of monitors may also display it at the proper width; however, they usually fill the extra space at the top and bottom of the image with black bars. Other advantages of widescreen monitors over standard-width monitors are that they make work more productive by displaying more of a user's documents and images, and allow toolbars to be displayed alongside documents. They also have a larger viewing area, with a typical widescreen monitor having a 16:9 aspect ratio, compared to the 4:3 aspect ratio of a typical standard-width monitor. Organic light-emitting diode (OLED) monitors provide higher contrast and better viewing angles than LCDs but they require more power when displaying documents with white or bright backgrounds and have a severe problem known as burn-in. The performance of a monitor is measured by a number of parameters, several of which are discussed below. On two-dimensional display devices such as computer monitors, the display size or viewable image size is the actual amount of screen space that is available to display a picture, video or working space, without obstruction from the case or other aspects of the unit's design. The main measurements for display devices are: width, height, total area and the diagonal. The size of a display is usually given by monitor manufacturers as the diagonal, i.e. the distance between two opposite screen corners. This method of measurement is inherited from the method used for the first generation of CRT television, when picture tubes with circular faces were in common use. Being circular, it was the external diameter of the glass envelope that described their size. Since these circular tubes were used to display rectangular images, the diagonal measurement of the rectangular image was smaller than the diameter of the tube's face (due to the thickness of the glass). This method continued even when cathode ray tubes were manufactured as rounded rectangles; it had the advantage of being a single number specifying the size, and was not confusing when the aspect ratio was universally 4:3. With the introduction of flat panel technology, the diagonal measurement became the actual diagonal of the visible display. This meant that an eighteen-inch LCD had a larger visible area than an eighteen-inch cathode ray tube. The estimation of the monitor size by the distance between opposite corners does not take into account the display aspect ratio, so that, for example, a 16:9 widescreen display has less area than a 4:3 screen with the same diagonal. Until about 2003, most computer monitors had a 4:3 aspect ratio and some had 5:4. Between 2003 and 2006, monitors with 16:9 and mostly 16:10 (8:5) aspect ratios became commonly available, first in laptops and later also in standalone monitors. Reasons for this transition included productive uses for such monitors: besides widescreen computer game play and movie viewing, they allow the word-processor display of two standard letter pages side by side, as well as CAD displays of large-size drawings and CAD application menus at the same time. In 2008, 16:10 became the most commonly sold aspect ratio for LCD monitors and the same year 16:10 was the mainstream standard for laptops and notebook computers. In 2010, the computer industry started to move over from 16:10 to 16:9 because 16:9 was chosen to be the standard high-definition television display size, and because 16:9 panels were cheaper to manufacture. In 2011 non-widescreen displays with 4:3 aspect ratios were only being manufactured in small quantities. 
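The relationship between diagonal, aspect ratio, width, height and area described above can be made concrete with a short calculation. The sketch below is illustrative only: the 21-inch diagonal is an arbitrary example, not a figure taken from the text, and the function name is made up for this example.

```python
import math

def screen_dimensions(diagonal, aspect_w, aspect_h):
    """Return (width, height, area) for a given diagonal and aspect ratio,
    all in the same length unit as the diagonal."""
    scale = diagonal / math.hypot(aspect_w, aspect_h)
    width, height = scale * aspect_w, scale * aspect_h
    return width, height, width * height

for ratio in ((4, 3), (16, 10), (16, 9)):
    w, h, a = screen_dimensions(21.0, *ratio)
    print(f"21-inch {ratio[0]}:{ratio[1]} -> {w:.1f} x {h:.1f} in, {a:.0f} sq in")
```

For the same 21-inch diagonal, the 16:9 panel comes out roughly a tenth smaller in area than the 4:3 panel, which is why the diagonal alone is a poor basis for comparing screens of different aspect ratios.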
According to Samsung, this was because "Demand for the old 'Square monitors' has decreased rapidly over the last couple of years," and "I predict that by the end of 2011, production on all 4:3 or similar panels will be halted due to a lack of demand."

The resolution of computer monitors has increased over time, from 320x200 in the early 1980s to 1024x768 in the late 1990s. Since 2009, the most commonly sold resolution for computer monitors has been 1920x1080. Before 2013, top-end consumer LCD monitors were limited to 2560x1600, excluding Apple products and CRT monitors. Apple introduced a 2880x1800 display with the Retina MacBook Pro on June 12, 2012, and a 5120x2880 Retina iMac on October 16, 2014. By 2015 most major display manufacturers had released 3840x2160 resolution displays.

Every RGB monitor has its own color gamut, bounded in chromaticity by a color triangle. Some of these triangles are smaller than the sRGB triangle, some are larger. Colors are typically encoded by 8 bits per primary color. The RGB value [255, 0, 0] represents red, but it denotes slightly different colors in different color spaces such as AdobeRGB and sRGB. Displaying sRGB-encoded data on wide-gamut devices can give an unrealistic result. The gamut is a property of the monitor; the image color space can be forwarded as Exif metadata in the picture. As long as the monitor gamut is wider than the color space gamut, correct display is possible if the monitor is calibrated. A picture that uses colors outside the sRGB color space will display on an sRGB monitor with limitations. Even today, many monitors that can display the sRGB color space are not factory adjusted to display it correctly. Color management is needed both in electronic publishing (via the Internet for display in browsers) and in desktop publishing targeted to print.

Most modern monitors will switch to a power-saving mode if no video-input signal is received. This allows modern operating systems to turn off a monitor after a specified period of inactivity, which also extends the monitor's service life. Some monitors will also switch themselves off after a time period on standby. Most modern laptops provide a method of screen dimming after periods of inactivity or when the battery is in use. This extends battery life and reduces wear.

Many monitors have other accessories (or connections for them) integrated. This places standard ports within easy reach and eliminates the need for a separate hub, camera, microphone, or set of speakers. These monitors have advanced microprocessors which contain codec information, Windows interface drivers and other small pieces of software that help these integrated accessories function properly.

Some displays, especially newer LCD monitors, replace the traditional anti-glare matte finish with a glossy one. This increases color saturation and sharpness, but reflections from lights and windows are very visible. Anti-reflective coatings are sometimes applied to help reduce reflections, although this only mitigates the effect.

In about 2009, NEC/Alienware together with Ostendo Technologies (based in Carlsbad, CA) offered a curved (concave) monitor that allows better viewing angles near the edges, covering 75% of peripheral vision in the horizontal direction. This monitor had a 2880x900 resolution and an LED backlight, and it was marketed as suitable both for gaming and office work, although at $6,499 it was rather expensive.
While this particular monitor is no longer in production, most PC manufacturers now offer some sort of curved desktop display. Narrow viewing angle screens are used in some security-conscious applications. Newer monitors are able to display a different image for each eye, often with the help of special glasses, giving the perception of depth; an autostereoscopic screen can generate 3D images without headgear. Touch screen monitors use touching of the screen as an input method: items can be selected or moved with a finger, and finger gestures may be used to convey commands. The screen will need frequent cleaning due to image degradation from fingerprints. Tablet screens combine a monitor with a graphics tablet. Such devices are typically unresponsive to touch without the pressure of one or more special tools, although newer models are able to detect touch from any pressure and often have the ability to detect tilt and rotation as well. Touch and tablet screens are used on LCDs as a substitute for the light pen, which can only work on CRTs. Ultrawide monitors feature an aspect ratio of 21:9, as opposed to the more common 16:9.

Computer monitors are provided with a variety of methods for mounting them depending on the application and environment. A desktop monitor is typically provided with a stand from the manufacturer which lifts the monitor up to a more ergonomic viewing height. The stand may be attached to the monitor using a proprietary method or may use, or be adaptable to, a Video Electronics Standards Association (VESA) standard mount. Using a VESA standard mount allows the monitor to be used with an after-market stand once the original stand is removed. Stands may be fixed or offer a variety of features such as height adjustment, horizontal swivel, and landscape or portrait screen orientation. The Flat Display Mounting Interface (FDMI), also known as the VESA Mounting Interface Standard (MIS) or colloquially as a VESA mount, is a family of standards defined by the Video Electronics Standards Association for mounting flat panel monitors, TVs, and other displays to stands or wall mounts. It is implemented on most modern flat-panel monitors and TVs. For computer monitors, the VESA mount typically consists of four threaded holes on the rear of the display that mate with an adapter bracket.

Rack mount computer monitors are available in two styles and are intended to be mounted into a 19-inch rack. A fixed rack mount monitor is mounted directly to the rack with the LCD visible at all times. The height of the unit is measured in rack units (RU), and 8U or 9U units are most common, to fit 17-inch or 19-inch LCDs (a rough sizing calculation is sketched below). The front sides of the unit are provided with flanges to mount to the rack, providing appropriately spaced holes or slots for the rack mounting screws. A 19-inch diagonal LCD is the largest size that will fit within the rails of a 19-inch rack; larger LCDs may be accommodated but are 'mount-on-rack' and extend forward of the rack. There are smaller display units, typically used in broadcast environments, which fit multiple smaller LCDs side by side into one rack mount. A stowable rack mount monitor is 1U, 2U or 3U high and is mounted on rack slides, allowing the display to be folded down and the unit slid into the rack for storage. The display is visible only when it is pulled out of the rack and deployed. These units may include only a display or may be equipped with a keyboard, creating a KVM (Keyboard Video Monitor).
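As a rough check on the 8U/9U figure mentioned above, rack units and panel heights can be compared directly. The sketch below assumes the standard 1.75-inch rack unit and a 5:4 panel aspect ratio (common for 17- and 19-inch industrial LCDs); the bezel-and-hardware allowance is an illustrative guess, not a figure from the text.

```python
import math

RACK_UNIT_IN = 1.75  # height of one rack unit (1U) in inches

def rack_units_needed(diagonal_in, aspect_w=5, aspect_h=4, allowance_in=2.5):
    """Estimate how many whole rack units a fixed rack mount LCD occupies.

    aspect_w:aspect_h is the panel aspect ratio; allowance_in is an assumed
    total height budget for bezel, controls and mounting hardware above and
    below the glass (a guess, not a specification).
    """
    ratio = aspect_w / aspect_h
    glass_height = diagonal_in / math.sqrt(1 + ratio ** 2)   # visible panel height
    return math.ceil((glass_height + allowance_in) / RACK_UNIT_IN)

print(rack_units_needed(17))  # -> 8, i.e. about 8U for a 17-inch panel
print(rack_units_needed(19))  # -> 9, i.e. about 9U for a 19-inch panel
```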
Most common are systems with a single LCD, but there are systems providing two or three displays in a single rack mount system. A panel mount computer monitor is intended for mounting into a flat surface with the front of the display unit protruding just slightly. They may also be mounted to the rear of the panel. A flange is provided around the LCD, on the sides, top and bottom, to allow mounting. This contrasts with a rack mount display, where the flanges are only on the sides. The flanges will be provided with holes for thru-bolts or may have studs welded to the rear surface to secure the unit in the hole in the panel. Often a gasket is supplied to form a water-tight seal to the panel, and the front of the LCD will be sealed to the back of the front panel to prevent water and dirt contamination. An open frame monitor provides the LCD monitor and enough supporting structure to hold associated electronics and to minimally support the LCD. Provision will be made for attaching the unit to some external structure for support and protection. Open frame LCDs are intended to be built into some other piece of equipment; an arcade video game would be a good example, with the display mounted inside the cabinet. There is usually an open frame display inside all end-use displays, with the end-use display simply providing an attractive protective enclosure. Some rack mount LCD manufacturers will purchase desktop displays, take them apart, and discard the outer plastic parts, keeping the inner open-frame LCD for inclusion into their product.

According to an NSA document leaked to Der Spiegel, the NSA sometimes swaps the monitor cables on targeted computers with a bugged monitor cable in order to allow the NSA to remotely see what is being displayed on the targeted computer monitor. Van Eck phreaking is the process of remotely displaying the contents of a CRT or LCD by detecting its electromagnetic emissions. It is named after Dutch computer researcher Wim van Eck, who in 1985 published the first paper on it, including proof of concept. Phreaking more generally is the process of exploiting telephone networks.
https://en.wikipedia.org/wiki?curid=7677
ClearType ClearType is Microsoft's implementation of subpixel rendering technology in rendering text in a font system. ClearType attempts to improve the appearance of text on certain types of computer display screens by sacrificing color fidelity for additional intensity variation. This trade-off is asserted to work well on LCD flat panel monitors. ClearType was first announced at the November 1998 COMDEX exhibition. The technology was first introduced in software in January 2000 as an always-on feature of Microsoft Reader, which was released to the public in August 2000. ClearType was significantly changed with the introduction of DirectWrite in Windows 7. Word 2013 stopped using ClearType because it "depends critically on the color of the background pixels", which made it difficult to give good animation performance on arbitrary backgrounds.

Computer displays where the positions of individual pixels are permanently fixed, such as most modern flat panel displays, can show saw-tooth edges when displaying small, high-contrast graphic elements, such as text. ClearType uses spatial anti-aliasing at the subpixel level to reduce visible artifacts on such displays when text is rendered, making the text appear "smoother" and less jagged. ClearType also uses very heavy font hinting to force the font to fit into the pixel grid. This increases edge contrast and readability of small fonts at the expense of font rendering fidelity and has been criticized by graphic designers for making different fonts look similar. Like most other types of subpixel rendering, ClearType involves a compromise, sacrificing one aspect of image quality (color or "chrominance" detail) for another (light and dark or "luminance" detail). The compromise can improve text appearance when luminance detail is more important than chrominance.

ClearType applies only to text as it is being rendered to the screen by user and system applications; it does not alter other graphic display elements (including text already in bitmaps). For example, text on the screen in Microsoft Word is rendered with ClearType enhancement, but text placed in a bitmapped image in a program such as Adobe Photoshop is not. In theory, the method (called "RGB Decimation" internally) can enhance the anti-aliasing of any digital image. ClearType is not used when printing text: most printers already use such small pixels that aliasing is rarely a problem, and they don't have the addressable fixed subpixels ClearType requires. Nor does ClearType affect text stored in files; it applies its processing only while text is being rendered onto the screen. ClearType was invented in the Microsoft e-Books team by Bert Keely and Greg Hitchcock. It was then analyzed by researchers in the company, and signal processing expert John Platt designed an improved version of the algorithm. Dick Brass, a Vice President at Microsoft from 1997 to 2004, complained that the company was slow in moving ClearType to market in the portable computing field.

Normally, the software in a computer treats the computer’s display screen as a rectangular array of square, indivisible pixels, each of which has an intensity and color that are determined by the blending of three primary colors: red, green, and blue. However, actual display hardware usually implements each pixel as a group of three adjacent, independent "subpixels," each of which displays a different primary color. Thus, on a real computer display, each pixel is actually composed of separate red, green, and blue subpixels.
For example, if a flat-panel display is examined under a magnifying glass, each pixel resolves into separate red, green, and blue stripes: a block of nine pixels, for instance, contains 27 subpixels. If the computer controlling the display knows the exact position and color of all the subpixels on the screen, it can take advantage of this to improve the apparent resolution in certain situations. If each pixel on the display actually contains three rectangular subpixels of red, green, and blue, in that fixed order, then things on the screen that are smaller than one full pixel in size can be rendered by lighting only one or two of the subpixels. For example, if a diagonal line with a width smaller than a full pixel must be rendered, then this can be done by lighting only the subpixels that the line actually touches. If the line passes through the leftmost portion of the pixel, only the red subpixel is lit; if it passes through the rightmost portion of the pixel, only the blue subpixel is lit. This effectively triples the horizontal resolution of the image at normal viewing distances; the drawback is that the line thus drawn will show color fringes (at some points it might look green, at other points it might look red or blue). ClearType uses this method to improve the smoothness of text. When the elements of a type character are smaller than a full pixel, ClearType lights only the appropriate subpixels of each full pixel in order to more closely follow the outlines of that character. Text rendered with ClearType looks “smoother” than text rendered without it, provided that the pixel layout of the display screen exactly matches what ClearType expects.

In a 4× enlargement of the word "Wikipedia" rendered with ClearType in a Times New Roman 12 pt font, it becomes clear that, while the overall smoothness of the text seems to improve, there is also color fringing of the text. An extreme close-up of a color display comparing text rendered without and with ClearType shows the changes in subpixel intensity that are used to increase effective resolution when ClearType is enabled; without ClearType, all subpixels of a given pixel have the same intensity. Animated comparisons of the same lines of text alternate between ClearType (RGB subpixel rendering) and normal (full-pixel greyscale) anti-aliasing to make the difference visible. ClearType and similar technologies work on the theory that variations in intensity are more noticeable than variations in color. In an MSDN article, Microsoft acknowledges that "[t]ext that is rendered with ClearType can also appear significantly different when viewed by individuals with varying levels of color sensitivity. Some individuals can detect slight differences in color better than others." This opinion is shared by font designer Thomas Phinney (former CEO of FontLab, also formerly with Adobe Systems): "There is also considerable variation between individuals in their sensitivity to color fringing. Some people just notice it and are bothered by it a lot more than others."
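The subpixel idea described above can be illustrated with a toy renderer. The sketch below is a simplified illustration of RGB-decimation-style rendering, not Microsoft's actual algorithm: it samples a glyph coverage function at three times the horizontal pixel resolution and maps each triple of samples onto the red, green, and blue channels of one pixel (real ClearType additionally filters these samples to limit the color fringing discussed here).

```python
def subpixel_render(coverage, width, height):
    """Toy RGB-decimation renderer (bright strokes on a black background).

    coverage(sx, y) -> coverage in [0.0, 1.0], sampled on a grid three times
    finer horizontally than the pixel grid. Each output pixel's R, G and B
    channels take the coverage of its left, middle and right subpixel sample,
    assuming an RGB-ordered panel (a BGR panel would need the triple reversed).
    Returns rows of (r, g, b) tuples in 0..255.
    """
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            samples = [coverage(3 * x + i, y) for i in range(3)]  # left, mid, right
            row.append(tuple(round(255 * s) for s in samples))
        image.append(row)
    return image

# A stroke narrower than one pixel, passing through the leftmost third of the
# middle pixel in a 3 x 1 pixel row:
stroke = lambda sx, y: 1.0 if sx == 3 else 0.0
print(subpixel_render(stroke, 3, 1))
# -> [[(0, 0, 0), (255, 0, 0), (0, 0, 0)]]
# Only the red subpixel of the middle pixel is lit, as in the diagonal-line
# example above; the price is exactly the color fringe the text describes.
```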
Software developer Melissa Elliott has written about finding ClearType rendering uncomfortable to read, saying that "instead of seeing black text, I see blue text, and rendered over it but offset by a pixel or two, I see orange text, and someone reached into a bag of purple pixel glitter and just tossed it on...I’m not the only person in the world with this problem, and yet, every time it comes up, people are quick to assure me it works for them as if that’s supposed to make me feel better." Hinting expert Beat Stamm, who worked on ClearType at Microsoft, agrees that ClearType may look blurry at 96 dpi, which was a typical resolution for LCDs in 2008, but adds that higher resolution displays improve on this aspect: "WPF [Windows Presentation Foundation] uses method C [ClearType with fractional pixel positioning], but few display devices have a sufficiently high resolution to make the potential blur a moot point for everybody. . . . Some people are ok with the blur in Method C, some aren’t. Anecdotal evidence suggests that some people are fine with Method C when reading continuous text at 96 dpi (e.g. Times Reader, etc.) but not in UI scenarios. Many people are fine with the colors of ClearType, even at 96 dpi, but a few aren’t… To my eyes and at 96 dpi, Method C doesn’t read as well as Method A. It reads “blurrily” to me. Conversely, at 144 dpi, I don’t see a problem with Method C. It looks and reads just fine to me." One illustration of the potential problem is a comparison in which the same portion of text is shown in the upper half without and in the lower half with ClearType rendering (as opposed to Standard and ClearType in the previous comparison); together with the earlier alternating example, it demonstrates the blurring introduced.

A 2001 study conducted by researchers from Clemson University and The University of Pennsylvania on "18 users who spent 60 minutes reading fiction from each of three different displays" found that "When reading from an LCD display, users preferred text rendered with ClearType™. ClearType also yielded higher readability judgments and lower ratings of mental fatigue." A 2002 study on 24 users conducted by the same researchers from Clemson University also found that "Participants were significantly more accurate at identifying words with ClearType™ than without ClearType™." According to a 2006 study at the University of Texas at Austin by Dillon et al., ClearType "may not be universally beneficial". The study notes that maximum benefit may be seen when the information worker is spending large proportions of their time reading text (which is not necessarily the case for the majority of computer users today). Additionally, over one third of the study participants experienced some disadvantage when using ClearType. Whether ClearType or another rendering should be used is very subjective; it must be the choice of the individual, with the report recommending "to allow users to disable [ClearType] if they find it produces effects other than improved performance". Another 2007 empirical study found that "while ClearType rendering does not improve text legibility, reading speed or comfort compared to perceptually-tuned grayscale rendering, subjects prefer text with moderate ClearType rendering to text with grayscale or higher-level ClearType contrast."
A 2007 survey of the literature by Microsoft researcher Kevin Larson presented a different picture: "Peer-reviewed studies have consistently found that using ClearType boosts reading performance compared with other text-rendering systems. In a 2004 study, for instance, Lee Gugerty, a psychology professor at Clemson University, in South Carolina, measured a 17 percent improvement in word recognition accuracy with ClearType. Gugerty’s group also showed, in a sentence comprehension study, that ClearType boosted reading speed by 5 percent and comprehension by 2 percent. Similarly, in a study published in 2007, psychologist Andrew Dillon at the University of Texas at Austin found that when subjects were asked to scan a spreadsheet and pick out certain information, they did those tasks 7 percent faster with ClearType."

ClearType and allied technologies require display hardware with fixed pixels and subpixels. More precisely, the positions of the pixels and subpixels on the screen must be exactly known to the computer to which it is connected. This is the case for flat-panel displays, on which the positions of the pixels are permanently fixed by the design of the screen itself. Almost all flat panels have a perfectly rectangular array of square pixels, each of which contains three rectangular subpixels in the three primary colors, with the normal ordering being red, green, and blue, arranged in vertical bands. ClearType assumes this arrangement of pixels when rendering text. ClearType does not work properly with flat-panel displays that are operated at resolutions other than their "native" resolutions, since only the native resolution corresponds exactly to the actual positions of pixels on the screen of the display. If a display does not have the type of fixed pixels that ClearType expects, text rendered with ClearType enabled actually looks worse than type rendered without it. Some flat panels have unusual pixel arrangements, with the colors in a different order, or with the subpixels positioned differently (in three horizontal bands, or in other ways). ClearType needs to be manually tuned for use with such displays (see below). ClearType will not work as intended on displays that have no fixed pixel positions, such as CRT displays; however, it will still have some antialiasing effect and may be preferable to some users compared with non-anti-aliased type.

Because ClearType utilizes the physical layout of the red, green and blue pigments of the LCD screen, it is sensitive to the orientation of the display. ClearType in Windows XP currently supports the RGB and BGR subpixel structures. Rotated displays, in which the subpixels are stacked vertically rather than arranged horizontally, are "not" currently supported; using ClearType on these display configurations will actually reduce the display quality. The best option for users of Windows XP with rotated LCD displays (Tablet PCs or swivel-stand LCD displays) is to use regular anti-aliasing, or to switch off font-smoothing altogether. The software developer documentation for Windows CE states that ClearType for rotated screens is supported on that platform, but vertical subpixel structures are not supported in Windows XP. ClearType is also an integrated component of the Windows Presentation Foundation text-rendering engine. As part of the Vista release, Microsoft released a set of fonts, known as the ClearType Font Collection, thought to work well with the ClearType system. ClearType can be globally enabled or disabled for GDI applications.
A control panel applet is available to let users tune the GDI ClearType settings. The GDI implementation of ClearType does not support sub-pixel positioning. Some versions of Microsoft Windows, as supplied, allow ClearType to be turned on or off with no adjustment; other versions allow tuning of the ClearType parameters. A Microsoft ClearType tuner utility is available as a free download for Windows versions lacking this facility. If ClearType is disabled in the operating system, applications with their own ClearType controls can still support it. Microsoft Reader (for e-books) has its own ClearType tuner.

All text in Windows Presentation Foundation is anti-aliased and rendered using ClearType. There are separate ClearType registry settings for GDI and WPF applications, but by default the WPF entries are absent and the GDI values are used instead. WPF registry entries can be tuned using the instructions from the MSDN WPF Text Blog. ClearType in WPF supports sub-pixel positioning, natural advance widths, Y-direction anti-aliasing and hardware acceleration. WPF supports aggressive caching of pre-rendered ClearType text in video memory. The extent to which this is supported depends on the video card. DirectX 10 cards will be able to cache the font glyphs in video memory, then perform the composition (assembling of character glyphs in the correct order, with the correct spacing), alpha blending (application of anti-aliasing), and RGB blending (ClearType's sub-pixel color calculations) entirely in hardware. This means that only the original glyphs need to be stored in video memory once per font (Microsoft estimates that this would require 2 MB of video memory per font), and other operations, such as the display of anti-aliased text on top of other graphics including video, can also be done with no computation effort on the part of the CPU. DirectX 9 cards will only be able to cache the alpha-blended glyphs in memory, thus requiring the CPU to handle glyph composition and alpha-blending before passing this to the video card. Caching these partially rendered glyphs requires significantly more memory (Microsoft estimates 5 MB per process). Cards that don't support DirectX 9 have no hardware-accelerated text rendering capabilities.

As pixel densities of displays improved and more high-DPI screens became available, colored subpixel rendering became less of a necessity, according to Microsoft. Windows tablet user interfaces also evolved to support vertical screen orientations, where the LCD color stripes would run horizontally. The original colored ClearType subpixel rendering was tuned to work optimally with horizontal-orientation LCD displays where RGB or BGR stripes run vertically. For these reasons, DirectWrite, the next-generation text rendering API from Microsoft, moved away from color-aware ClearType. The font rendering engine in DirectWrite supports a different version of ClearType with only greyscale anti-aliasing, not color subpixel rendering, as demonstrated at PDC 2008. This version is sometimes called "Natural ClearType" but is often referred to simply as DirectWrite rendering (with the term "ClearType" reserved for the RGB/BGR color subpixel rendering version). The improvements have been confirmed by independent sources, such as Firefox developers; they were particularly noticeable for OpenType fonts in Compact Font Format (CFF).
Many Office 2013 apps, including Word 2013, Excel 2013 and parts of Outlook 2013, stopped using ClearType and switched to this DirectWrite greyscale antialiasing. The reasons given are, in the words of Murray Sargent: "There is a problem with ClearType: it depends critically on the color of the background pixels. This isn’t a problem if you know a priori that those pixels are white, which is usually the case for text. But the general case involves calculating what the colors should be for an arbitrary background and that takes time. Meanwhile, Word 2013 enjoys cool animations and smooth zooming. Nothing jumps any more. Even the caret (the blinking vertical line at the text insertion point) glides from one position to the next as you type. Jerking movement just isn’t considered cool any more. Well animations and zooms have to be faster than human response times in order to appear smooth. And that rules out ClearType in animated scenarios at least with present generation hardware. And in future scenarios, screens will have sufficiently high resolution that gray-scale anti-aliasing should suffice." For the same reasons related to animation performance and vertical screen orientations, where the colored RGB/BGR ClearType antialiasing would be a problem, the color-aware version of ClearType was abandoned in the Metro-style app platform of Windows 8 (and the Universal Windows Platform of Windows 10), including the Start menu and everything not using classic Win32 APIs (GDI/GDI+). ClearType is a registered trademark, and Microsoft claims protection under several U.S. patents. The ClearType name was also used to refer to the screens of Microsoft Surface tablets: ClearType HD Display indicates a 1366×768 screen, while ClearType Full HD Display indicates a 1920×1080 screen.
https://en.wikipedia.org/wiki?curid=7681
Centriole In cell biology, a centriole is a cylindrical organelle composed mainly of a protein called tubulin. Centrioles are found in most eukaryotic cells. A bound pair of centrioles, surrounded by a shapeless mass of dense material called the pericentriolar material (PCM), makes up a structure called a centrosome. Centrioles are not present in all eukaryotes; for example, they are absent from conifers (pinophyta), flowering plants (angiosperms) and most fungi, and are only present in the male gametes of charophytes, bryophytes, seedless vascular plants, cycads, and ginkgo. Centrioles are typically made up of nine sets of short microtubule triplets, arranged in a cylinder. Deviations from this structure include crabs and "Drosophila melanogaster" embryos, with nine doublets, and "Caenorhabditis elegans" sperm cells and early embryos, with nine singlets. The main function of centrioles is to produce cilia during interphase and the aster and the spindle during cell division.

Edouard Van Beneden made the first observation of centrosomes (which are composed of two orthogonal centrioles) in 1883. In 1895, Theodor Boveri named the organelle a "centrosome". The pattern of centriole duplication was first worked out independently by Étienne de Harven and Joseph G. Gall c. 1950. Centrioles are involved in the organization of the mitotic spindle and in the completion of cytokinesis. Centrioles were previously thought to be required for the formation of a mitotic spindle in animal cells. However, more recent experiments have demonstrated that cells whose centrioles have been removed via laser ablation can still progress through the G1 stage of interphase before centrioles can be synthesized later in a de novo fashion. Additionally, mutant flies lacking centrioles develop normally, although the adult flies' cells lack flagella and cilia, and as a result they die shortly after birth. Centrioles can self-replicate during cell division.

Centrioles are a very important part of centrosomes, which are involved in organizing microtubules in the cytoplasm. The position of the centriole determines the position of the nucleus and plays a crucial role in the spatial arrangement of the cell. Sperm centrioles are important for two functions: (1) forming the sperm flagellum, which drives sperm movement, and (2) the development of the embryo after fertilization. The sperm supplies the centriole that creates the centrosome and microtubule system of the zygote. In flagellates and ciliates, the position of the flagellum or cilium is determined by the mother centriole, which becomes the basal body. An inability of cells to use centrioles to make functional flagella and cilia has been linked to a number of genetic and developmental diseases. In particular, the inability of centrioles to properly migrate prior to ciliary assembly has recently been linked to Meckel–Gruber syndrome. Proper orientation of cilia via centriole positioning toward the posterior of embryonic node cells is critical for establishing left–right asymmetry during mammalian development.

Before DNA replication, cells contain two centrioles: an older "mother centriole" and a younger "daughter centriole". During cell division, a new centriole grows at the proximal end of both mother and daughter centrioles. After duplication, the two centriole pairs (the freshly assembled centriole is now a daughter centriole in each pair) will remain attached to each other orthogonally until mitosis.
At that point the mother and daughter centrioles separate, in a manner dependent on an enzyme called separase. The two centrioles in the centrosome are tied to one another. The mother centriole has radiating appendages at the distal end of its long axis and is attached to its daughter at the proximal end. Each daughter cell formed after cell division will inherit one of these pairs. Centrioles start duplicating when DNA replicates.

The last common ancestor of all eukaryotes was a ciliated cell with centrioles. Some lineages of eukaryotes, such as land plants, do not have centrioles except in their motile male gametes. Centrioles are completely absent from all cells of conifers and flowering plants, which do not have ciliate or flagellate gametes. It is unclear if the last common ancestor had one or two cilia. Important genes, such as the centrins required for centriole growth, are found only in eukaryotes, not in bacteria or archaea.

The word "centriole" uses combining forms of "centri-" and "-ole", yielding "little central part", which describes a centriole's typical location near the center of the cell. Typical centrioles are made of 9 triplets of microtubules organized with radial symmetry. Centrioles can vary in the number of microtubules and can be made of 9 doublets of microtubules (as in "Drosophila melanogaster") or 9 singlets of microtubules (as in "C. elegans"). Atypical centrioles are centrioles that do not have microtubules, such as the Proximal Centriole-Like found in "D. melanogaster" sperm, or that have microtubules with no radial symmetry, such as the distal centriole of the human spermatozoon.
https://en.wikipedia.org/wiki?curid=7682
Creation science Creation science or scientific creationism is a pseudoscience, a form of creationism presented without obvious Biblical language but with the claim that special creation and flood geology based on the Genesis creation narrative in the Book of Genesis have validity as science. Creationists also claim it disproves or reexplains a variety of scientific facts, theories and paradigms of geology, cosmology, biological evolution, archaeology, history, and linguistics. However, the overwhelming consensus of the scientific community is that creation science fails to qualify as scientific because it lacks empirical support, supplies no tentative hypotheses, and resolves to describe natural history in terms of scientifically untestable supernatural causes. Courts, most often in the United States where the question has been asked in the context of teaching the subject in public schools, have consistently ruled since the 1980s that creation science is a religious view rather than a scientific one. Its scientific and skeptical critics assess creation science as a pseudoscientific attempt to map the Bible into scientific facts. Professional biologists have criticized creation science for being unscholarly, and even as a dishonest and misguided sham, with extremely harmful educational consequences.

Creation science began in the 1960s as a fundamentalist Christian effort in the United States to prove Biblical inerrancy and nullify the scientific evidence for evolution. It has since developed a sizable religious following in the United States, with creation science ministries branching worldwide. The main ideas in creation science are: the belief in creation "ex nihilo" (Latin: out of nothing); the conviction that the Earth was created within the last 6,000–10,000 years; the belief that humans and other life on Earth were created as distinct fixed "baraminological" "kinds"; and the idea that fossils found in geological strata were deposited during a cataclysmic flood which completely covered the entire Earth. As a result, creation science also challenges the commonly accepted geologic and astrophysical theories for the age and origins of the Earth and universe, which creationists believe are irreconcilable with the account in the Book of Genesis. Creation science proponents often refer to the theory of evolution as "Darwinism" or as "Darwinian evolution."

The creation science texts and curricula that first emerged in the 1960s focused upon concepts derived from a literal interpretation of the Bible and were overtly religious in nature, most notably linking Noah's flood in the Biblical Genesis account to the geological and fossil record. These works attracted little notice beyond the schools and congregations of conservative fundamentalist and Evangelical Christians until the 1970s, when its followers challenged the teaching of evolution in the public schools and other venues in the United States, bringing it to the attention of the public-at-large and the scientific community. Many school boards and lawmakers were persuaded to include the teaching of creation science alongside evolution in the science curriculum. Creation science texts and curricula used in churches and Christian schools were revised to eliminate their Biblical and theological references, and less explicitly sectarian versions of creation science education were introduced in public schools in Louisiana, Arkansas, and other regions in the United States. The 1982 ruling in "McLean v.
Arkansas" found that creation science fails to meet the essential characteristics of science and that its chief intent is to advance a particular religious view. The teaching of creation science in public schools in the United States effectively ended in 1987 following the United States Supreme Court decision in "Edwards v. Aguillard". The court affirmed that a statute requiring the teaching of creation science alongside evolution when evolution is taught in Louisiana public schools was unconstitutional because its sole true purpose was to advance a particular religious belief. In response to this ruling, drafts of the creation science school textbook "Of Pandas and People" were edited to change references of creation to intelligent design before its publication in 1989. The intelligent design movement promoted this version. Requiring intelligent design to be taught in public school science classes was found to be unconstitutional in the 2005 "Kitzmiller v. Dover Area School District" federal court case. Creation science is based largely upon chapters 1–11 of the Book of Genesis. These describe how God calls the world into existence through the power of speech ("And God said, Let there be light," etc.) in six days, calls all the animals and plants into existence, and molds the first man from clay and the first woman from a rib taken from the man's side; a worldwide flood destroys all life except for Noah and his family and representatives of the animals, and Noah becomes the ancestor of the 70 "nations" of the world; the nations live together until the incident of the Tower of Babel, when God disperses them and gives them their different languages. Creation science attempts to explain history and science within the span of Biblical chronology, which places the initial act of creation some six thousand years ago. Most creation science proponents hold fundamentalist or Evangelical Christian beliefs in Biblical literalism or Biblical inerrancy, as opposed to the higher criticism supported by liberal Christianity in the Fundamentalist–Modernist Controversy. However, there are also examples of Islamic and Jewish scientific creationism that conform to the accounts of creation as recorded in their religious doctrines. The Seventh-day Adventist Church has a history of support for creation science. This dates back to George McCready Price, an active Seventh-day Adventist who developed views of flood geology, which formed the basis of creation science. This work was continued by the Geoscience Research Institute, an official institute of the Seventh-day Adventist Church, located on its Loma Linda University campus in California. Creation science is generally rejected by the Church of England as well as the Roman Catholic Church. The Pontifical Gregorian University has officially discussed intelligent design as a "cultural phenomenon" without scientific elements. The Church of England's official website cites Charles Darwin's local work assisting people in his religious parish. Creation science rejects evolution and the common descent of all living things on Earth. Instead, it asserts that the field of evolutionary biology is itself pseudoscientific or even a religion. Creationists argue instead for a system called baraminology, which considers the living world to be descended from uniquely created kinds or "baramins." 
Creation science incorporates the concept of catastrophism to reconcile current landforms and fossil distributions with Biblical interpretations, proposing the remains resulted from successive cataclysmic events, such as a worldwide flood and subsequent ice age. It rejects one of the fundamental principles of modern geology (and of modern science generally), uniformitarianism, which applies the same physical and geological laws observed on the Earth today to interpret the Earth's geological history. Sometimes creationists attack other scientific concepts, like the Big Bang cosmological model or methods of scientific dating based upon radioactive decay. Young Earth creationists also reject current estimates of the age of the universe and the age of the Earth, arguing for creationist cosmologies with timescales much shorter than those determined by modern physical cosmology and geological science, typically less than 10,000 years.

The scientific community has overwhelmingly rejected the ideas put forth in creation science as lying outside the boundaries of a legitimate science. The foundational premises underlying scientific creationism disqualify it as a science because the answers to all inquiry therein are preordained to conform to Bible doctrine, and because that inquiry is constructed upon theories which are not empirically testable in nature. Scientists also deem creation science's attacks against biological evolution to be without scientific merit. The views of the scientific community were accepted in two significant court decisions in the 1980s, which found the field of creation science to be a religious mode of inquiry, not a scientific one.

The teaching of evolution was gradually introduced into more and more public high school textbooks in the United States after 1900, but in the aftermath of the First World War the growth of fundamentalist Christianity gave rise to a creationist opposition to such teaching. Legislation prohibiting the teaching of evolution was passed in certain regions, most notably Tennessee's Butler Act of 1925. The Soviet Union's successful launch of "Sputnik 1" in 1957 sparked national concern that the science education in public schools was outdated. In 1958, the United States passed the National Defense Education Act, which introduced new education guidelines for science instruction. With federal grant funding, the Biological Sciences Curriculum Study (BSCS) drafted new standards for the public schools' science textbooks which included the teaching of evolution. Almost half the nation's high schools were using textbooks based on the guidelines of the BSCS soon after they were published in 1963. The Tennessee legislature did not repeal the Butler Act until 1967.

Creation science (dubbed "scientific creationism" at the time) emerged as an organized movement during the 1960s. It was strongly influenced by the earlier work of armchair geologist George McCready Price, who wrote works such as "The New Geology" (1923) to advance what he termed "new catastrophism" and dispute the current geological time frames and explanations of geologic history. Price's work was cited at the Scopes Trial of 1925, yet although he frequently solicited feedback from geologists and other scientists, they consistently disparaged his work. Price's "new catastrophism" also went largely unnoticed by other creationists until its revival with the 1961 publication of "The Genesis Flood" by John C. Whitcomb and Henry M.
Morris, a work which quickly became an important text on the issue to fundamentalist Christians and expanded the field of creation science beyond critiques of geology into biology and cosmology as well. Soon after its publication, a movement was underway to have the subject taught in United States public schools. The various state laws prohibiting the teaching of evolution were overturned in 1968 when the United States Supreme Court ruled in "Epperson v. Arkansas" that such laws violated the Establishment Clause of the First Amendment to the United States Constitution. This ruling inspired a new creationist movement to promote laws requiring that schools give balanced treatment to creation science when evolution is taught.

The 1981 Arkansas Act 590 was one such law that carefully detailed the principles of creation science that were to receive equal time in public schools alongside evolutionary principles. The act defined creation science as follows: "'Creation-science' means the scientific evidences for creation and inferences from those evidences. Creation-science includes the scientific evidences and related inferences that indicate" a specific list of creationist claims. This legislation was examined in "McLean v. Arkansas", and the ruling handed down on January 5, 1982, concluded that creation-science as defined in the act "is simply not science". The judgement set out the essential characteristics of science, ruled that creation science failed to meet them, and identified specific reasons after examining the key concepts from creation science. The court further noted that no recognized scientific journal had published any article espousing the creation science theory as described in the Arkansas law, and stated that the testimony presented by the defense attributing the absence to censorship was not credible. In its ruling, the court wrote that for any theory to qualify as scientific, the theory must be tentative, and open to revision or abandonment as new facts come to light. It wrote that any methodology which begins with an immutable conclusion which cannot be revised or rejected, regardless of the evidence, is not a scientific theory. The court found that creation science does not culminate in conclusions formed from scientific inquiry, but instead begins with the conclusion, one taken from a literal wording of the Book of Genesis, and seeks only scientific evidence to support it. The law in Arkansas adopted the same two-model approach as that put forward by the Institute for Creation Research, one allowing only two possible explanations for the origins of life and existence of man, plants and animals: it was either the work of a creator or it was not. Scientific evidence that failed to support the theory of evolution was posed as necessarily scientific evidence in support of creationism, but in its judgment the court ruled this approach to be no more than a "contrived dualism which has no scientific factual basis or legitimate educational purpose." The judge concluded that "Act 590 is a religious crusade, coupled with a desire to conceal this fact," and that it violated the First Amendment's Establishment Clause. The decision was not appealed to a higher court, but had a powerful influence on subsequent rulings. Louisiana's 1982 Balanced Treatment for Creation-Science and Evolution-Science Act, authored by State Senator Bill P. Keith, was judged in the 1987 United States Supreme Court case "Edwards v. Aguillard" and was handed a similar ruling.
It found that the law requiring the balanced teaching of creation science with evolution had a particular religious purpose and was therefore unconstitutional. In 1984, "The Mystery of Life's Origin" was first published. It was co-authored by chemist and creationist Charles B. Thaxton with Walter L. Bradley and Roger L. Olsen, the foreword written by Dean H. Kenyon, and sponsored by the Christian-based Foundation for Thought and Ethics (FTE). The work presented scientific arguments against current theories of abiogenesis and offered a hypothesis of special creation instead. While the focus of creation science had until that time centered primarily on the criticism of the fossil evidence for evolution and validation of the creation myth of the Bible, this new work posed the question whether science reveals that even the simplest living systems were far too complex to have developed by natural, unguided processes.

Kenyon later co-wrote with creationist Percival Davis a book intended as a "scientific brief for creationism" to use as a supplement to public high school biology textbooks. Thaxton was enlisted as the book's editor, and the book received publishing support from the FTE. Prior to its release, the 1987 Supreme Court ruling in "Edwards v. Aguillard" barred the teaching of creation science and creationism in public school classrooms. The book, originally titled "Biology and Creation" but renamed "Of Pandas and People", was released in 1989 and became the first published work to promote the anti-evolutionist design argument under the name intelligent design. The contents of the book later became a focus of evidence in the federal court case "Kitzmiller v. Dover Area School District", when a group of parents filed suit to halt the teaching of intelligent design in Dover, Pennsylvania, public schools. School board officials there had attempted to include "Of Pandas and People" in their biology classrooms, and testimony given during the trial revealed the book was originally written as a creationist text but, following the adverse decision in the Supreme Court, underwent simple cosmetic editing to remove the explicit allusions to "creation" or "creator" and replace them instead with references to "design" or "designer."

By the mid-1990s, intelligent design had become a separate movement. The creation science movement is distinguished from the intelligent design movement, or neo-creationism, because most advocates of creation science accept scripture as a literal and inerrant historical account, and their primary goal is to corroborate the scriptural account through the use of science. In contrast, as a matter of principle, neo-creationism eschews references to scripture altogether in its polemics and stated goals (see Wedge strategy). By so doing, intelligent design proponents have attempted to succeed where creation science has failed in securing a place in public school science curricula. Carefully avoiding any reference to the identity of the intelligent designer as God in their public arguments, intelligent design proponents sought to reintroduce the creationist ideas into science classrooms while sidestepping the First Amendment's prohibition against religious infringement. However, the intelligent design curriculum was struck down as a violation of the Establishment Clause in "Kitzmiller v. Dover Area School District", the judge in the case ruling "that ID is nothing less than the progeny of creationism."
Today, creation science as an organized movement is primarily centered within the United States. Creation science organizations are also known in other countries, most notably Creation Ministries International, which was founded (under the name Creation Science Foundation) in Australia. Proponents are usually aligned with a Christian denomination, primarily with those characterized as evangelical, conservative, or fundamentalist. While creationist movements also exist in Islam and Judaism, these movements do not use the phrase "creation science" to describe their beliefs.

Creation science has its roots in the work of young Earth creationist George McCready Price disputing modern science's account of natural history, focusing particularly on geology and its concept of uniformitarianism, and his efforts instead to furnish an alternative empirical explanation of observable phenomena which was compatible with strict Biblical literalism. Price's work was later discovered by civil engineer Henry M. Morris, who is now considered to be the father of creation science. Morris and later creationists expanded the scope with attacks against the broad spectrum of scientific findings that point to the antiquity of the Universe and common ancestry among species, including the growing body of evidence from the fossil record, absolute dating techniques, and cosmogony.

The proponents of creation science often say that they are concerned with religious and moral questions as well as natural observations and predictive hypotheses. Many state that their opposition to scientific evolution is primarily based on religion. The overwhelming majority of scientists are in agreement that the claims of science are necessarily limited to those that develop from natural observations and experiments which can be replicated and substantiated by other scientists, and that claims made by creation science do not meet those criteria. Duane Gish, a prominent creation science proponent, has similarly claimed, "We do not know how the creator created, what processes He used, "for He used processes which are not now operating anywhere in the natural universe." This is why we refer to creation as special creation. We cannot discover by scientific investigation anything about the creative processes used by the Creator." But he also makes the same claim against science's evolutionary theory, maintaining that on the subject of origins, scientific evolution is a religious theory which cannot be validated by science.

Creation science makes the "a priori" metaphysical assumption that there exists a creator of the life whose origin is being examined. Christian creation science holds that the description of creation is given in the Bible, that the Bible is inerrant in this description (and elsewhere), and therefore empirical scientific evidence must correspond with that description. Creationists also view the preclusion of all supernatural explanations within the sciences as a doctrinaire commitment to exclude the supreme being and miracles. They claim this to be the motivating factor in science's acceptance of Darwinism, a term used in creation science to refer to evolutionary biology and which is often used as a disparagement. Critics argue that creation science is religious rather than scientific because it stems from faith in a religious text rather than from the application of the scientific method. The United States National Academy of Sciences (NAS) has stated unequivocally, "Evolution pervades all biological phenomena.
To ignore that it occurred or to classify it as a form of dogma is to deprive the student of the most fundamental organizational concept in the biological sciences. No other biological concept has been more extensively tested and more thoroughly corroborated than the evolutionary history of organisms." Anthropologist Eugenie Scott has noted further, "Religious opposition to evolution propels antievolutionism. Although antievolutionists pay lip service to supposed scientific problems with evolution, what motivates them to battle its teaching is apprehension over the implications of evolution for religion." Creation science advocates argue that scientific theories of the origins of the Universe, Earth, and life are rooted in "a priori" presumptions of methodological naturalism and uniformitarianism, each of which is disputed. In some areas of science such as chemistry, meteorology or medicine, creation science proponents do not challenge the application of naturalistic or uniformitarian assumptions. Traditionally, creation science advocates have singled out those scientific theories judged to be in conflict with held religious beliefs, and it is against those theories that they concentrate their efforts. Many mainstream Christian churches criticize creation science on theological grounds, asserting either that religious faith alone should be a sufficient basis for belief in the truth of creation, or that efforts to prove the Genesis account of creation on scientific grounds are inherently futile because reason is subordinate to faith and cannot thus be used to prove it. Many Christian theologies, including Liberal Christianity, consider the Genesis creation narrative to be a poetic and allegorical work rather than a literal history, and many Christian churches—including the Eastern Orthodox Church, the Roman Catholic, Anglican and the more liberal denominations of the Lutheran, Methodist, Congregationalist and Presbyterian faiths—have either rejected creation science outright or are ambivalent to it. Belief in non-literal interpretations of Genesis is often cited as going back to Saint Augustine. Theistic evolution and evolutionary creationism are theologies that reconcile belief in a creator with biological evolution. Each holds the view that there is a creator but that this creator has employed the natural force of evolution to unfold a divine plan. Religious representatives from faiths compatible with theistic evolution and evolutionary creationism have challenged the growing perception that belief in a creator is inconsistent with the acceptance of evolutionary theory. Spokespersons from the Catholic Church have specifically criticized biblical creationism for relying upon literal interpretations of biblical scripture as the basis for determining scientific fact. The National Academy of Sciences states that "the claims of creation science lack empirical support and cannot be meaningfully tested" and that "creation science is in fact not science and should not be presented as such in science classes." According to Joyce Arthur writing for "Skeptic" magazine, the "creation 'science' movement gains much of its strength through the use of distortion and scientifically unethical tactics" and "seriously misrepresents the theory of evolution." Scientists have considered the hypotheses proposed by creation science and have rejected them because of a lack of evidence. 
Furthermore, the claims of creation science do not refer to natural causes and cannot be subject to meaningful tests, so they do not qualify as scientific hypotheses. In 1987, the United States Supreme Court ruled that creationism is religion, not science, and cannot be advocated in public school classrooms. Most mainline Christian denominations have concluded that the concept of evolution is not at odds with their descriptions of creation and human origins. A summary of the objections to creation science by scientists follows: By invoking claims of "abrupt appearance" of species as a miraculous act, creation science is unsuited for the tools and methods demanded by science, and it cannot be considered scientific in the way that the term "science" is currently defined. Scientists and science writers commonly characterize creation science as a pseudoscience. Historically, the debate over whether creationism is compatible with science can be traced back to 1874, the year science historian John William Draper published his "History of the Conflict between Religion and Science". In it Draper portrayed the entire history of scientific development as a war against religion. This presentation of history was propagated further by followers such as Andrew Dickson White in his two-volume "A History of the Warfare of Science with Theology in Christendom" (1896). Their conclusions have been disputed. In the United States, the principal focus of creation science advocates is on the government-supported public school systems, which are prohibited by the Establishment Clause from promoting specific religions. Historical communities have argued that Biblical translations contain many translation errors and errata, and therefore that the use of biblical literalism in creation science is self-contradictory. Creationist biology centers on an idea derived from Genesis that states that life was created by God, in a finite number of "created kinds," rather than through biological evolution from a common ancestor. Creationists consider that any observable speciation descends from these distinctly created kinds through inbreeding, deleterious mutations and other genetic mechanisms. Whereas evolutionary biologists and creationists share similar views of microevolution, creationists disagree that the process of macroevolution can explain common ancestry among organisms far beyond the level of common species. Creationists contend that there is no empirical evidence for new plant or animal species, and deny that fossil evidence documenting the process has ever been found. Popular arguments against evolution have changed since the publication of Henry M. Morris' first book on the subject, "Scientific Creationism" (1974), but some consistent themes remain: that missing links or gaps in the fossil record are proof against evolution; that the increased complexity of organisms over time through evolution is not possible due to the law of increasing entropy; that it is impossible that the mechanism of natural selection could account for common ancestry; and that evolutionary theory is untestable. The origin of the human species is particularly hotly contested; the fossil remains of purported hominid ancestors are not considered by advocates of creation biology to be evidence for a speciation event involving "Homo sapiens". Creationists also assert that early hominids are either apes or humans. 
Richard Dawkins has explained evolution as "a theory of gradual, incremental change over millions of years, which starts with something very simple and works up along slow, gradual gradients to greater complexity," and described the existing fossil record as entirely consistent with that process. Biologists emphasize that transitional gaps between those fossils recovered are to be expected, that the existence of any such gaps cannot be invoked to disprove evolution, and that instead the fossil evidence that could be used to disprove the theory would be those fossils which are found and which are entirely inconsistent with what can be predicted or anticipated by the evolutionary model. One example given by Dawkins was, "If there were a single hippo or rabbit in the Precambrian, that would completely blow evolution out of the water. None have ever been found." Flood geology is a concept based on the belief that most of Earth's geological record was formed by the Great Flood described in the story of Noah's Ark. Fossils and fossil fuels are believed to have formed from animal and plant matter which was buried rapidly during this flood, while submarine canyons are explained as having formed during a rapid runoff from the continents at the end of the flood. Sedimentary strata are also claimed to have been predominantly laid down during or after Noah's flood and orogeny. Flood geology is a variant of catastrophism and is contrasted with geological science in that it rejects standard geological principles such as uniformitarianism and radiometric dating. For example, the Creation Research Society argues that "uniformitarianism is wishful thinking." Geologists conclude that no evidence for such a flood is observed in the preserved rock layers and moreover that such a flood is physically impossible, given the current layout of land masses. For instance, since Mount Everest currently is approximately 8.8 kilometres in elevation and the Earth's surface area is 510,065,600 km2, the volume of water required to cover Mount Everest to a depth of 15 cubits (6.8 m), as indicated by Genesis 7:20, would be 4.6 billion cubic kilometres. Measurements of the amount of precipitable water vapor in the atmosphere have yielded results indicating that condensing all water vapor in a column of atmosphere would produce liquid water with a depth ranging between zero and approximately 70mm, depending on the date and the location of the column. Nevertheless, there continue to be adherents to the belief in flood geology, and in recent years new theories have been introduced such as catastrophic plate tectonics and catastrophic orogeny. Creationists point to experiments they have performed, which they claim demonstrate that 1.5 billion years of nuclear decay took place over a short period of time, from which they infer that "billion-fold speed-ups of nuclear decay" have occurred, a massive violation of the principle that radioisotope decay rates are constant, a core principle underlying nuclear physics generally, and radiometric dating in particular. The scientific community points to numerous flaws in the creationists' experiments, to the fact that their results have not been accepted for publication by any peer-reviewed scientific journal, and to the fact that the creationist scientists conducting them were untrained in experimental geochronology. They have also been criticised for widely publicising the results of their research as successful despite their own admission of insurmountable problems with their hypothesis. 
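The arithmetic behind these figures can be checked directly. The short Python sketch below is an illustration added here (not drawn from the cited sources); it treats the required floodwater as a thin shell over the Earth's surface, using the 8.8 km elevation, 6.8 m (15 cubit) extra depth, 510,065,600 km2 surface area, and roughly 70 mm maximum precipitable-water column quoted above, and yields roughly 4.5 billion cubic kilometres, consistent with the figure quoted, and several orders of magnitude more water than the entire atmosphere could supply.

```python
# Back-of-the-envelope check of the flood-geology figures quoted above.
# All inputs are the approximate values given in the text.

EARTH_SURFACE_AREA_KM2 = 510_065_600   # Earth's surface area (km^2)
EVEREST_ELEVATION_KM = 8.8             # Mount Everest elevation (km)
EXTRA_DEPTH_KM = 6.8 / 1000            # 15 cubits ~ 6.8 m, expressed in km

# The water layer (~9 km) is thin compared with Earth's radius (~6371 km),
# so surface area x depth is an adequate approximation of the shell volume.
depth_km = EVEREST_ELEVATION_KM + EXTRA_DEPTH_KM
flood_volume_km3 = EARTH_SURFACE_AREA_KM2 * depth_km
print(f"Water required: ~{flood_volume_km3:.2e} km^3")   # ~4.5e9 km^3

# Compare with the liquid-water equivalent of the atmosphere's vapour column,
# taking the ~70 mm upper bound quoted above as if it applied everywhere.
MAX_PRECIPITABLE_DEPTH_KM = 70e-6      # 70 mm expressed in km
atmospheric_water_km3 = EARTH_SURFACE_AREA_KM2 * MAX_PRECIPITABLE_DEPTH_KM
print(f"Atmospheric upper bound: ~{atmospheric_water_km3:.2e} km^3")

# Shortfall factor between required and available water.
print(f"Shortfall: ~{flood_volume_km3 / atmospheric_water_km3:,.0f}x")
```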
The constancy of the decay rates of isotopes is well supported in science. Evidence for this constancy includes the correspondences of date estimates taken from different radioactive isotopes as well as correspondences with non-radiometric dating techniques such as dendrochronology, ice core dating, and historical records. Although scientists have noted slight increases in the decay rate for isotopes subject to extreme pressures, those differences were too small to significantly impact date estimates. The constancy of the decay rates is also governed by first principles in quantum mechanics, wherein any deviation in the rate would require a change in the fundamental constants. According to these principles, a change in the fundamental constants could not influence different elements uniformly, and a comparison between each of the elements' resulting unique chronological timescales would then give inconsistent time estimates. In refutation of young Earth claims of inconstant decay rates affecting the reliability of radiometric dating, Roger C. Wiens, a physicist specializing in isotope dating states: In the 1970s, young Earth creationist Robert V. Gentry proposed that radiohaloes in certain granites represented evidence for the Earth being created instantaneously rather than gradually. This idea has been criticized by physicists and geologists on many grounds including that the rocks Gentry studied were not primordial and that the radionuclides in question need not have been in the rocks initially. Thomas A. Baillieul, a geologist and retired senior environmental scientist with the United States Department of Energy, disputed Gentry's claims in an article entitled, "'Polonium Haloes' Refuted: A Review of 'Radioactive Halos in a Radio-Chronological and Cosmological Perspective' by Robert V. Gentry." Baillieul noted that Gentry was a physicist with no background in geology and given the absence of this background, Gentry had misrepresented the geological context from which the specimens were collected. Additionally, he noted that Gentry relied on research from the beginning of the 20th century, long before radioisotopes were thoroughly understood; that his assumption that a polonium isotope caused the rings was speculative; and that Gentry falsely argued that the half-life of radioactive elements varies with time. Gentry claimed that Baillieul could not publish his criticisms in a reputable scientific journal, although some of Baillieul's criticisms rested on work previously published in reputable scientific journals. Several attempts have been made by creationists to construct a cosmology consistent with a young Universe rather than the standard cosmological age of the universe, based on the belief that Genesis describes the creation of the Universe as well as the Earth. The primary challenge for young-universe cosmologies is that the accepted distances in the Universe require millions or billions of years for light to travel to Earth (the "starlight problem"). An older creationist idea, proposed by creationist astronomer Barry Setterfield, is that the speed of light has decayed in the history of the Universe. More recently, creationist physicist Russell Humphreys has proposed a hypothesis called "white hole cosmology" which suggests that the Universe expanded out of a white hole less than 10,000 years ago; the apparent age of the universe results from relativistic effects. 
Humphreys' theory is advocated by creationist organisations such as Answers in Genesis; however because the predictions of Humphreys' cosmology conflict with current observations, it is not accepted by the scientific community. Various claims are made by creationists concerning alleged evidence that the age of the Solar System is of the order of thousands of years, in contrast to the scientifically accepted age of 4.6 billion years. It is commonly argued that the number of comets in the Solar System is much higher than would be expected given its supposed age. Creationist astronomers express scepticism about the existence of the Kuiper belt and Oort cloud. Creationists also argue that the recession of the Moon from the Earth is incompatible with either the Moon or the Earth being billions of years old. These claims have been refuted by planetologists. In response to increasing evidence suggesting that Mars once possessed a wetter climate, some creationists have proposed that the global flood affected not only the Earth but also Mars and other planets. People who support this claim include creationist astronomer Wayne Spencer and Russell Humphreys. An ongoing problem for creationists is the presence of impact craters on nearly all Solar System objects, which is consistent with scientific explanations of solar system origins but creates insuperable problems for young Earth claims. Creationists Harold Slusher and Richard Mandock, along with Glenn Morton (who later repudiated this claim) asserted that impact craters on the Moon are subject to rock flow, and so cannot be more than a few thousand years old. While some creationist astronomers assert that different phases of meteoritic bombardment of the Solar System occurred during creation week and during the subsequent Great Flood, others regard this as unsupported by the evidence and call for further research. Notable creationist museums in the United States:
https://en.wikipedia.org/wiki?curid=7683
Cirth The Cirth (, meaning "runes"; sing. certh ) is a semi‑artificial script, based on real‑life runic alphabets, invented by J. R. R. Tolkien for the constructed languages he devised and used in his works. "Cirth" is written with a capital letter when referring to the writing system; the runes themselves can be called "cirth". In the fictional history of Middle-earth, the original Certhas was created by the Grey Elves for their language, Sindarin. Its extension and elaboration was known as the Angerthas Daeron, as it was attributed to the Sindar Daeron, although it was most probably expanded by the Noldor in order to represent the sounds of other languages like Quenya. Although the Cirth was later largely replaced by the Tengwar, it was adopted by Dwarves to write down both their Khuzdul language (Angerthas Moria) and the languages of Men (Angerthas Erebor). The Cirth was also adapted, in its oldest and simplest form, by various races including Men and even Orcs. Many letters have shapes also found in the historical runic alphabets, but their sound values are only similar in a few of the vowels. Rather, the system of assignment of sound values is much more systematic in the Cirth than in the historical runes (e.g., voiced variants of a voiceless sound are expressed by an additional stroke). A similar system has been proposed for a few historical runes but is in any case much more obscure. The division between the older Cirth of Daeron and their adaptation by Dwarves and Men has been interpreted as a parallel drawn by Tolkien to the development of the Fuþorc to the Younger Fuþark. The original Elvish Cirth "as supposed products of a superior culture" are focused on logical arrangement and a close connection between form and value whereas the adaptations by mortal races introduced irregularities. Similar to the Germanic tribes who had no written literature and used only simple runes before their conversion to Christianity, the Sindarin Elves of Beleriand with their Cirth were introduced to the more elaborate Tengwar of Fëanor when the Noldorin Elves returned to Middle-earth from the lands of the divine Valar. In the Appendix E of "The Return of the King", Tolkien writes that the Sindar of Beleriand first developed an alphabet for their language some time between the invention of the Tengwar by Fëanor and their introduction to Middle-earth by the exiled Noldor. This alphabet was devised to represent only the sounds of their Sindarin language and its letters were entirely used for inscribing names or brief memorials on wood, stone or metal, hence their angular forms and straight lines. In Sindarin these letters were named "cirth" (sing. "certh"), from the Elvish root "*kir-" meaning "to cleave, to cut". An abecedarium of cirth, consisting of the runes listed in due order, was commonly known as Certhas (, meaning "rune-rows" in Sindarin and loosely translated as "runic alphabet"). The cirth used for voiceless stop consonants were constructed systematically by the combination of a "stem" and a "branch". The attachment of the branch was usually made on the right side. The reverse was not infrequent, but had no phonetic significance (this means that would just be an alternative form of ). Other consonants were formed following two basic principles: The cirth constructed in this way can therefore be grouped into series. Each series corresponds to a place of articulation. This earliest system had three series: There are also additional cirth that do not have regular shapes. 
These include liquid consonants and , the voiceless glottal transition , the voiceless alveolar fricative , and vowels. The original layout of the Cirth would have been as follows: The known ancient cirth do not cover all the sounds of Sindarin: there is no certh for , , , or . Perhaps this system had been devised for Old Sindarin, since the above-mentioned sounds do not exist in that language. However, still frequent and are missing, too. This indicates that some ancient, unknown cirth could have existed, but did not make it to the later systems. Therefore, a fuller table cannot be reconstructed. Long vowels were evidently indicated by doubling. Before the end of the First Age the "Certhas" was rearranged and further developed, partly under the influence of the Tengwar. This reorganisation of the Cirth was commonly attributed to the Elf Daeron, minstrel and loremaster of King Thingol of Doriath. Thus, the new system became known as the Angerthas Daeron (where "angerthas" is a compound of the Sindarin words "an(d)" and "certhas", meaning "long rune-rows"). Unlike the previous system, the flipped form of a certh now had phonemic significance: it signalled the lenition of the original rune. These new cirth were needed in order to represent fricatives that developed at one point in Sindarin (e.g., → ). Some new runes were introduced in the "Angerthas" with the purpose of writing: However, the principal additions to the former "Certhas" were two entirely new series of regularly-formed cirth: Since these new series represent sounds which do not occur in Sindarin but are present in Quenya, they were most probably invented by the Exiled Noldor who spoke Quenya as a language of knowledge. By loan-translation, the Cirth became known in Quenya as "Certar", while a single certh was called "certa". Returning to the fictional history: after the introduction of the Tengwar in Middle-earth, the "Angerthas Daeron" was relegated primarily to carved inscriptions. The Elves abandoned the Cirth altogether, with the exception of the Noldor dwelling in Eregion, who maintained it and made it known as Angerthas Eregion. Note that, in this article, the primitive "Certhas" is transliterated using the regular Sindarin spelling, whereas the "Angerthas" is rendered using its own peculiar transliteration, introduced by Tolkien in Appendix E, given that this script was meant to cover a much larger set of sounds than its primitive form. For example, the Sindarin spelling for is ; in the transliteration of the "Angerthas" instead, the sound is spelled , while represents the sound . In this article, each certh of the -series presents two IPA transcriptions. The reason is that the palatal consonants of Noldorin Quenya are realised as palato-alveolar consonants in Vanyarin Quenya. For example, the Quenya word is pronounced in the Noldorin variety, but in Vanyarin. The very name of the language, whose archaic form is , is spelled in Noldorin (reflecting its pronunciation as ), but retains the spelling in Vanyarin (where it is realised as due to assibilation). Although in the fictional history of Middle-earth this series of consonants was introduced by the Noldor, it is deemed necessary to show the Vanyarin pronunciation as well, given that the very transliteration used by Tolkien is more similar to the Vanyarin phonotactics than the Noldorin. According to Tolkien's "legendarium", the Dwarves first came to know the runes of the Noldor at the beginning of the Second Age. 
The Dwarves "introduced a number of unsystematic changes in value, as well as certain new cirth". They modified the previous system to suit the specific needs of their language, Khuzdul. The Dwarves spread their revised alphabet to Moria, where it came to be known as Angerthas Moria, and developed both carved and pen-written forms of these runes. Many cirth here represent sounds not occurring in Khuzdul (at least in published words of Khuzdul: of course, our corpus is very limited to judge the necessity or not, of these sounds). Here they are marked with a black star (★). In "Angerthas Moria" the cirth and were dropped. Thus and were adopted for and , although they were used for and in Elvish languages. Subsequently, this script used the certh for , which had the sound in the Elvish systems. Therefore, the certh (which was previously used for the sound , useless in Khuzdul) was adopted for the sound . A totally new introduction was the certh , used as an alternative, simplified and, maybe, weaker form of . Because of the visual relation of these two cirth, the certh was given the sound to relate better with that, in this script, had the sound . At the beginning of the Third Age the Dwarves were driven out of Moria, and some migrated to Erebor. As the Dwarves of Erebor would trade with the Men of the nearby towns of Dale and Lake-town, they needed a script to write in Westron (the "lingua franca" of Middle-earth, usually rendered in English by Tolkien in his works). The "Angerthas Moria" was adapted accordingly: some new cirth were added, while some were restored to their Elvish usage, thus creating the Angerthas Erebor. While the "Angerthas Moria" was still used to write down Khuzdul, this new script was primarily used for Mannish languages. It is also the script used in the Book of Mazarbul. Angerthas Erebor also features combining diacritics: The "Angerthas Erebor" is used twice in "The Lord of the Rings" to write in English: The Book of Mazarbul shows some additional cirth used in "Angerthas Erebor": one for a double ligature, one for the definite article, and six for the representation of the same number of English diphthongs: The Cirth is not the only runic writing system devised by Tolkien for Middle-earth. In fact, he invented a great number of runic alphabets, of which only a few others have been published. Many of these runic scripts have been included in the "Appendix on Runes" of "The Treason of Isengard" ("The History of Middle-earth", vol. VII), edited by Christopher Tolkien. According to Tolkien, those used in "The Hobbit" are a form of "our ancient runes" deployed in the book to transliterate the actual Dwarvish runes. They can be interpreted as an attempt made by Tolkien to adapt the Fuþorc (i.e., the Old English runic alphabet) to the Modern English language. These runes are basically the same found in Fuþorc, but their sound may change according to their position, just as the Latin script letters do: the writing mode adopted by Tolkien for these runes is mainly orthographic. This system has one rune for each letter, regardless of pronunciation. For example, the rune can sound either (in the word ) or (in the word ) or even (in the word ) and (in the digraph ). A few sounds are instead written with the same rune, regardless of the way it is spelled with the Latin script. For example, the sound is always written with the rune either if in English it is written as in , as in , or as in . The only letters that are subject to this phonemic spelling are and . 
In addition, there are also some runes which stand for particular English digraphs and diphthongs. Here the runes used in "The Hobbit" are displayed along with their corresponding English grapheme and Fuþorc counterpart: Two other runes, not attested in "The Hobbit", were added by Tolkien in order to represent additional English graphemes: Not all the runes mentioned in "The Hobbit" are Dwarf-runes. The swords found in the Trolls' cave (which were from the ancient kingdom of Gondolin) bore runes that Gandalf allegedly could not read. In fact, the swords Glamdring and Orcrist, forged in Gondolin, bore a type of letters known as Gondolinic runes. These seem to have become obsolete and been forgotten by the Third Age, which is supported by the fact that Tolkien writes that only Elrond could still read the inscriptions on the swords. Tolkien devised this runic alphabet at a very early stage of his shaping of Middle-earth. Nevertheless, it is known to us only from a slip of paper written by J.R.R. Tolkien, a photocopy of which Christopher Tolkien sent to Paul Nolan Hyde in February 1992. Hyde then published it, together with an extensive analysis, in the 1992 Summer issue of Mythlore, no. 69. The system provides sounds not found in any of the known Elven languages of the First Age, but perhaps it was designed for a variety of languages. However, the consonants seem to be, more or less, the same as those found in Welsh phonology, a theory supported by the fact that Tolkien was heavily influenced by Welsh when creating Elven languages. Equivalents for some (but not all) cirth can be found in the Runic block of Unicode. Tolkien's mode of writing Modern English in Anglo-Saxon runes received explicit recognition with the introduction of his three additional runes to the Runic block with the release of Unicode 7.0, in June 2014. The three characters represent the English , and graphemes, as follows: A formal Unicode proposal to encode Cirth as a separate script was made in September 1997 by Michael Everson. No action was taken by the Unicode Technical Committee (UTC), but Cirth appears in the Roadmap to the SMP. Unicode Private Use Area layouts for Cirth are defined at the ConScript Unicode Registry (CSUR) and the Under-ConScript Unicode Registry (UCSUR). Two different layouts are defined by the CSUR/UCSUR: 
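For readers working with these characters programmatically, the following Python sketch (illustrative only) looks the three Anglo-Saxon-mode additions up in the Unicode character database; it assumes they occupy code points U+16F1–U+16F3 in the Runic block, which is how the Unicode 7.0 Tolkien additions are usually identified. Cirth itself has no official assignment, so Cirth text depends on Private Use Area fonts following the CSUR/UCSUR layouts.

```python
import unicodedata

# Tolkien's three Anglo-Saxon-mode runes are assumed here to sit at
# U+16F1..U+16F3 in the Runic block (U+16A0..U+16FF), added in Unicode 7.0.
# Querying the character database avoids hard-coding the character names.
ASSUMED_TOLKIEN_RUNES = [0x16F1, 0x16F2, 0x16F3]

for cp in ASSUMED_TOLKIEN_RUNES:
    ch = chr(cp)
    name = unicodedata.name(ch, "<not assigned in this Unicode database>")
    print(f"U+{cp:04X}  {ch}  {name}")

# Cirth has no official Unicode assignment; CSUR/UCSUR fonts map it into the
# Private Use Area, so plain-text Cirth is only legible with such a font.
```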
https://en.wikipedia.org/wiki?curid=7689
Lockheed C-130 Hercules The Lockheed C-130 Hercules is an American four-engine turboprop military transport aircraft designed and built originally by Lockheed (now Lockheed Martin). Capable of using unprepared runways for takeoffs and landings, the C-130 was originally designed as a troop, medevac, and cargo transport aircraft. The versatile airframe has found uses in a variety of other roles, including as a gunship (AC-130), for airborne assault, search and rescue, scientific research support, weather reconnaissance, aerial refueling, maritime patrol, and aerial firefighting. It is now the main tactical airlifter for many military forces worldwide. More than 40 variants of the Hercules, including civilian versions marketed as the Lockheed L-100, operate in more than 60 nations. The C-130 entered service with the U.S. in 1956, followed by Australia and many other nations. During its years of service, the Hercules family has participated in numerous military, civilian and humanitarian aid operations. In 2007, the C-130 became the fifth aircraft to mark 50 years of continuous service with its original primary customer, which for the C-130 is the United States Air Force. The C-130 Hercules is the longest continuously produced military aircraft at over 60 years, with the updated Lockheed Martin C-130J Super Hercules currently being produced. The Korean War showed that World War II-era piston-engine transports—Fairchild C-119 Flying Boxcars, Douglas C-47 Skytrains and Curtiss C-46 Commandos—were no longer adequate. Thus, on 2 February 1951, the United States Air Force issued a General Operating Requirement (GOR) for a new transport to Boeing, Douglas, Fairchild, Lockheed, Martin, Chase Aircraft, North American, Northrop, and Airlifts Inc. The new transport would have a capacity of 92 passengers, 72 combat troops or 64 paratroopers in a cargo compartment that was approximately long, high, and wide. Unlike transports derived from passenger airliners, it was to be designed specifically as a combat transport with loading from a hinged loading ramp at the rear of the fuselage. A key feature was the introduction of the Allison T56 turboprop powerplant, which was developed for the C-130. The turboprop offered greater range at propeller-driven speeds compared to pure turbojets, which were faster but consumed more fuel. They also produced much more power for their weight than piston engines. The Hercules resembled a larger four-engine brother to the C-123 Provider with a similar wing and cargo ramp layout that evolved from the Chase XCG-20 Avitruc, which in turn, was first designed and flown as a cargo glider in 1947. The Boeing C-97 Stratofreighter had rear ramps, which made it possible to drive vehicles onto the airplane (also possible with forward ramp on a C-124). The ramp on the Hercules was also used to airdrop cargo, which included a Low-altitude parachute-extraction system for Sheridan tanks and even dropping large improvised "daisy cutter" bombs. The new Lockheed cargo plane design possessed a range of , takeoff capability from short and unprepared strips, and the ability to fly with one engine shut down. Fairchild, North American, Martin, and Northrop declined to participate. The remaining five companies tendered a total of ten designs: Lockheed two, Boeing one, Chase three, Douglas three, and Airlifts Inc. one. The contest was a close affair between the lighter of the two Lockheed (preliminary project designation L-206) proposals and a four-turboprop Douglas design. 
The Lockheed design team was led by Willis Hawkins, starting with a 130-page proposal for the "Lockheed L-206". Hall Hibbard, Lockheed vice president and chief engineer, saw the proposal and directed it to Kelly Johnson, who did not care for the low-speed, unarmed aircraft, and remarked, "If you sign that letter, you will destroy the Lockheed Company." Both Hibbard and Johnson signed the proposal and the company won the contract for the now-designated Model 82 on 2 July 1951. The first flight of the "YC-130" prototype was made on 23 August 1954 from the Lockheed plant in Burbank, California. The aircraft, serial number "53-3397", was the second prototype, but the first of the two to fly. The YC-130 was piloted by Stanley Beltz and Roy Wimmer on its 61-minute flight to Edwards Air Force Base; Jack Real and Dick Stanton served as flight engineers. Kelly Johnson flew chase in a Lockheed P2V Neptune. After the two prototypes were completed, production began in Marietta, Georgia, where over 2,300 C-130s have been built through 2009. The initial production model, the "C-130A", was powered by Allison T56-A-9 turboprops with three-blade propellers and originally equipped with the blunt nose of the prototypes. Deliveries began in December 1956, continuing until the introduction of the "C-130B" model in 1959. Some A-models were equipped with skis and re-designated "C-130D". As the C-130A became operational with Tactical Air Command (TAC), the C-130's lack of range became apparent and additional fuel capacity was added with wing pylon-mounted tanks outboard of the engines; this added 6,000 lb (2,720 kg) of fuel capacity for a total capacity of 40,000 lb (18,140 kg). The C-130B model was developed to complement the A-models that had previously been delivered, and incorporated new features, particularly increased fuel capacity in the form of auxiliary tanks built into the center wing section and an AC electrical system. Four-bladed Hamilton Standard propellers replaced the Aeroproducts three-blade propellers that distinguished the earlier A-models. The C-130B had ailerons operated by hydraulic pressure that was increased from to , as well as uprated engines and four-blade propellers that were standard until the J-model. The B model was originally intended to have "blown controls", a system which blows high pressure air over the control surfaces in order to improve their effectiveness during slow flight. It was tested on a NC-130B prototype aircraft with a pair of T-56 turbines providing high pressure air through a duct system to the control surfaces and flaps during landing. This greatly reduced landing speed to just 63 knots, and cut landing distance in half. The system never entered service because it did not improve takeoff performance by the same margin, making the landing performance pointless if the aircraft could not also take off from where it had landed. An electronic reconnaissance variant of the C-130B was designated C-130B-II. A total of 13 aircraft were converted. The C-130B-II was distinguished by its false external wing fuel tanks, which were disguised signals intelligence (SIGINT) receiver antennas. These pods were slightly larger than the standard wing tanks found on other C-130Bs. Most aircraft featured a swept blade antenna on the upper fuselage, as well as extra wire antennas between the vertical fin and upper fuselage not found on other C-130s. Radio call numbers on the tail of these aircraft were regularly changed so as to confuse observers and disguise their true mission. 
The extended-range "C-130E" model entered service in 1962 after it was developed as an interim long-range transport for the Military Air Transport Service. Essentially a B-model, the new designation was the result of the installation of 1,360 US gal (5,150 L) "Sargent Fletcher" external fuel tanks under each wing's midsection and more powerful Allison T56-A-7A turboprops. The hydraulic boost pressure to the ailerons was reduced back to as a consequence of the external tanks' weight in the middle of the wingspan. The E model also featured structural improvements, avionics upgrades and a higher gross weight. Australia took delivery of 12 C130E Hercules during 1966–67 to supplement the 12 C-130A models already in service with the RAAF. Sweden and Spain fly the TP-84T version of the C-130E fitted for aerial refueling capability. The "KC-130" tankers, originally "C-130F" procured for the US Marine Corps (USMC) in 1958 (under the designation "GV-1") are equipped with a removable 3,600 US gal (13,626 L) stainless steel fuel tank carried inside the cargo compartment. The two wing-mounted hose and drogue aerial refueling pods each transfer up to 300 US gal per minute (1,136 L per minute) to two aircraft simultaneously, allowing for rapid cycle times of multiple-receiver aircraft formations, (a typical tanker formation of four aircraft in less than 30 minutes). The US Navy's "C-130G" has increased structural strength allowing higher gross weight operation. The "C-130H" model has updated Allison T56-A-15 turboprops, a redesigned outer wing, updated avionics and other minor improvements. Later "H" models had a new, fatigue-life-improved, center wing that was retrofitted to many earlier H-models. For structural reasons, some models are required to land with reduced amounts of fuel when carrying heavy cargo, reducing usable range. The H model remains in widespread use with the United States Air Force (USAF) and many foreign air forces. Initial deliveries began in 1964 (to the RNZAF), remaining in production until 1996. An improved C-130H was introduced in 1974, with Australia purchasing 12 of type in 1978 to replace the original 12 C-130A models, which had first entered Royal Australian Air Force (RAAF) service in 1958. The U.S. Coast Guard employs the HC-130H for long-range search and rescue, drug interdiction, illegal migrant patrols, homeland security, and logistics. C-130H models produced from 1992 to 1996 were designated as C-130H3 by the USAF. The "3" denoting the third variation in design for the H series. Improvements included ring laser gyros for the INUs, GPS receivers, a partial glass cockpit (ADI and HSI instruments), a more capable APN-241 color radar, night vision device compatible instrument lighting, and an integrated radar and missile warning system. The electrical system upgrade included Generator Control Units (GCU) and Bus Switching units (BSU) to provide stable power to the more sensitive upgraded components. The equivalent model for export to the UK is the "C-130K", known by the Royal Air Force (RAF) as the "Hercules C.1". The "C-130H-30" ("Hercules C.3" in RAF service) is a stretched version of the original Hercules, achieved by inserting a 100 in (2.54 m) plug aft of the cockpit and an 80 in (2.03 m) plug at the rear of the fuselage. A single C-130K was purchased by the Met Office for use by its Meteorological Research Flight, where it was classified as the "Hercules W.2". 
This aircraft was heavily modified (with its most prominent feature being the long red and white striped atmospheric probe on the nose and the move of the weather radar into a pod above the forward fuselage). This aircraft, named "Snoopy", was withdrawn in 2001 and was then modified by Marshall of Cambridge Aerospace as flight-testbed for the A400M turbine engine, the TP400. The C-130K is used by the RAF Falcons for parachute drops. Three C-130Ks (Hercules C Mk.1P) were upgraded and sold to the Austrian Air Force in 2002. The "MC-130E Combat Talon" was developed for the USAF during the Vietnam War to support special operations missions in Southeast Asia, and led to both the "MC-130H Combat Talon II" as well as a family of other special missions aircraft. 37 of the earliest models currently operating with the Air Force Special Operations Command (AFSOC) are scheduled to be replaced by new-production MC-130J versions. The EC-130 Commando Solo is another special missions variant within AFSOC, albeit operated solely by an AFSOC-gained wing in the Pennsylvania Air National Guard, and is a psychological operations/information operations (PSYOP/IO) platform equipped as an aerial radio station and television stations able to transmit messaging over commercial frequencies. Other versions of the EC-130, most notably the EC-130H Compass Call, are also special variants, but are assigned to the Air Combat Command (ACC). The AC-130 gunship was first developed during the Vietnam War to provide close air support and other ground-attack duties. The "HC-130" is a family of long-range search and rescue variants used by the USAF and the U.S. Coast Guard. Equipped for deep deployment of Pararescuemen (PJs), survival equipment, and (in the case of USAF versions) aerial refueling of combat rescue helicopters, HC-130s are usually the on-scene command aircraft for combat SAR missions (USAF only) and non-combat SAR (USAF and USCG). Early USAF versions were also equipped with the Fulton surface-to-air recovery system, designed to pull a person off the ground using a wire strung from a helium balloon. The John Wayne movie "The Green Berets" features its use. The Fulton system was later removed when aerial refueling of helicopters proved safer and more versatile. The movie "The Perfect Storm" depicts a real life SAR mission involving aerial refueling of a New York Air National Guard HH-60G by a New York Air National Guard HC-130P. The "C-130R" and "C-130T" are U.S. Navy and USMC models, both equipped with underwing external fuel tanks. The USN C-130T is similar, but has additional avionics improvements. In both models, aircraft are equipped with Allison T56-A-16 engines. The USMC versions are designated "KC-130R" or "KC-130T" when equipped with underwing refueling pods and pylons and are fully night vision system compatible. The RC-130 is a reconnaissance version. A single example is used by the Islamic Republic of Iran Air Force, the aircraft having originally been sold to the former Imperial Iranian Air Force. The "Lockheed L-100 (L-382)" is a civilian variant, equivalent to a C-130E model without military equipment. The L-100 also has two stretched versions. In the 1970s, Lockheed proposed a C-130 variant with turbofan engines rather than turboprops, but the U.S. Air Force preferred the takeoff performance of the existing aircraft. In the 1980s, the C-130 was intended to be replaced by the Advanced Medium STOL Transport project. The project was canceled and the C-130 has remained in production. 
Building on lessons learned, Lockheed Martin modified a commercial variant of the C-130 into a High Technology Test Bed (HTTB). This test aircraft set numerous short takeoff and landing performance records and significantly expanded the database for future derivatives of the C-130. Modifications made to the HTTB included extended chord ailerons, a long chord rudder, fast-acting double-slotted trailing edge flaps, a high-camber wing leading edge extension, a larger dorsal fin and dorsal fins, the addition of three spoiler panels to each wing upper surface, a long-stroke main and nose landing gear system, and changes to the flight controls and a change from direct mechanical linkages assisted by hydraulic boost, to fully powered controls, in which the mechanical linkages from the flight station controls operated only the hydraulic control valves of the appropriate boost unit. The HTTB first flew on 19 June 1984, with civil registration of N130X. After demonstrating many new technologies, some of which were applied to the C-130J, the HTTB was lost in a fatal accident on 3 February 1993, at Dobbins Air Reserve Base, in Marietta, Georgia. The crash was attributed to disengagement of the rudder fly-by-wire flight control system, resulting in a total loss of rudder control capability while conducting ground minimum control speed tests (Vmcg). The disengagement was a result of the inadequate design of the rudder's integrated actuator package by its manufacturer; the operator's insufficient system safety review failed to consider the consequences of the inadequate design to all operating regimes. A factor which contributed to the accident was the flight crew's lack of engineering flight test training. In the 1990s, the improved C-130J Super Hercules was developed by Lockheed (later Lockheed Martin). This model is the newest version and the only model in production. Externally similar to the classic Hercules in general appearance, the J model has new turboprop engines, six-bladed propellers, digital avionics, and other new systems. In 2000, Boeing was awarded a contract to develop an Avionics Modernization Program kit for the C-130. The program was beset with delays and cost overruns until project restructuring in 2007. On 2 September 2009, Bloomberg news reported that the planned Avionics Modernization Program (AMP) upgrade to the older C-130s would be dropped to provide more funds for the F-35, CV-22 and airborne tanker replacement programs. However, in June 2010, Department of Defense approved funding for the initial production of the AMP upgrade kits. Under the terms of this agreement, the USAF has cleared Boeing to begin low-rate initial production (LRIP) for the C-130 AMP. A total of 198 aircraft are expected to feature the AMP upgrade. The current cost per aircraft is although Boeing expects that this price will drop to US$7 million for the 69th aircraft. In the 2000s, Lockheed Martin and the U.S. Air Force began outfitting and retrofitting C-130s with the eight-blade UTC Aerospace Systems NP2000 propellers. An engine enhancement program saving fuel and providing lower temperatures in the T56 engine has been approved, and the US Air Force expects to save $2 billion and extend the fleet life. In October 2010, the Air Force released a capabilities request for information (CRFI) for the development of a new airlifter to replace the C-130. The new aircraft is to carry a 190 percent greater payload and assume the mission of mounted vertical maneuver (MVM). 
The greater payload and mission would enable it to carry medium-weight armored vehicles and drop them off at locations without long runways. Various options are being considered, including new or upgraded fixed-wing designs, rotorcraft, tiltrotors, or even an airship. Development could start in 2014, and become operational by 2024. The C-130 fleet of around 450 planes would be replaced by only 250 aircraft. The Air Force had attempted to replace the C-130 in the 1970s through the Advanced Medium STOL Transport project, which resulted in the C-17 Globemaster III that instead replaced the C-141 Starlifter. The Air Force Research Laboratory funded Lockheed and Boeing demonstrators for the Speed Agile concept, which had the goal of making a STOL aircraft that can take off and land at speeds as low as on airfields less than 2,000 ft (610 m) long and cruise at Mach 0.8-plus. Boeing's design used upper-surface blowing from embedded engines on the inboard wing and blown flaps for circulation control on the outboard wing. Lockheed's design also used blown flaps outboard, but inboard used patented reversing ejector nozzles. Boeing's design completed over 2,000 hours of windtunnel tests in late 2009. It was a 5 percent-scale model of a narrowbody design with a payload. When the AFRL increased the payload requirement to , they tested a 5 percent-scale model of a widebody design with a take-off gross weight and an "A400M-size" wide cargo box. It would be powered by four IAE V2533 turbofans. In August 2011, the AFRL released pictures of the Lockheed Speed Agile concept demonstrator. A 23% scale model went through wind tunnel tests to demonstrate its hybrid powered lift, which combines a low drag airframe with simple mechanical assembly to reduce weight and better aerodynamics. The model had four engines, including two Williams FJ44 turbofans. On 26 March 2013, Boeing was granted a patent for its swept-wing powered lift aircraft. As of January 2014, Air Mobility Command, Air Force Materiel Command and the Air Force Research Lab are in the early stages of defining requirements for the C-X next generation airlifter program to replace both the C-130 and C-17. An aircraft would be produced from the early 2030s to the 2040s. If requirements are decided for operating in contested airspace, Air Force procurement of C-130s would end by the end of the decade to not have them serviceable by the 2030s and operated when they cannot perform in that environment. Development of the airlifter depends heavily on the Army's "tactical and operational maneuver" plans. Two different cargo planes could still be created to separately perform tactical and strategic missions, but which course to pursue is to be decided before C-17s need to be retired. Brazil is replacing its C-130s with 28 new Embraer KC-390s. Portugal is doing the same. The first batch of C-130A production aircraft were delivered beginning in 1956 to the 463d Troop Carrier Wing at Ardmore AFB, Oklahoma and the 314th Troop Carrier Wing at Sewart AFB, Tennessee. Six additional squadrons were assigned to the 322d Air Division in Europe and the 315th Air Division in the Far East. Additional aircraft were modified for electronics intelligence work and assigned to Rhein-Main Air Base, Germany while modified RC-130As were assigned to the Military Air Transport Service (MATS) photo-mapping division. The C-130A entered service with the U.S. Air Force in December 1956. In 1958, a U.S. 
reconnaissance C-130A-II of the 7406th Support Squadron was shot down over Armenia by four Soviet MiG-17s along the Turkish-Armenian border during a routine mission. Australia became the first non-American force to operate the C-130A Hercules with 12 examples being delivered from late 1958. The Royal Canadian Air Force became another early user with the delivery of four B-models (Canadian designation C-130 Mk I) in October / November 1960. In 1963, a Hercules achieved and still holds the record for the largest and heaviest aircraft to land on an aircraft carrier. During October and November that year, a USMC KC-130F (BuNo "149798"), loaned to the U.S. Naval Air Test Center, made 29 touch-and-go landings, 21 unarrested full-stop landings and 21 unassisted take-offs on at a number of different weights. The pilot, Lieutenant (later Rear Admiral) James H. Flatley III, USN, was awarded the Distinguished Flying Cross for his role in this test series. The tests were highly successful, but the idea was considered too risky for routine carrier onboard delivery (COD) operations. Instead, the Grumman C-2 Greyhound was developed as a dedicated COD aircraft. The Hercules used in the test, most recently in service with Marine Aerial Refueler Squadron 352 (VMGR-352) until 2005, is now part of the collection of the National Museum of Naval Aviation at NAS Pensacola, Florida. In 1964, C-130 crews from the 6315th Operations Group at Naha Air Base, Okinawa commenced forward air control (FAC; "Flare") missions over the Ho Chi Minh Trail in Laos supporting USAF strike aircraft. In April 1965 the mission was expanded to North Vietnam, where C-130 crews led formations of Martin B-57 Canberra bombers on night reconnaissance/strike missions against communist supply routes leading to South Vietnam. In early 1966 Project Blind Bat/Lamplighter was established at Ubon Royal Thai Air Force Base, Thailand. After the move to Ubon, the mission became a four-engine FAC mission with the C-130 crew searching for targets then calling in strike aircraft. Another little-known C-130 mission flown by Naha-based crews was Operation Commando Scarf, which involved the delivery of chemicals onto sections of the Ho Chi Minh Trail in Laos that were designed to produce mud and landslides in hopes of making the truck routes impassable. In November 1964, on the other side of the globe, C-130Es from the 464th Troop Carrier Wing, on loan to the 322d Air Division in France, took part in Operation Dragon Rouge, one of the most dramatic missions in history, in the former Belgian Congo. After communist Simba rebels took white residents of the city of Stanleyville hostage, the U.S. and Belgium developed a joint rescue mission that used the C-130s to drop, air-land and air-lift a force of Belgian paratroopers to rescue the hostages. Two missions were flown, one over Stanleyville and another over Paulis during Thanksgiving week. The headline-making mission resulted in the first award of the prestigious MacKay Trophy to C-130 crews. In the Indo-Pakistani War of 1965, the No. 6 Transport Squadron of the Pakistan Air Force modified its C-130Bs for use as bombers to carry up to 20,000 lb (9,072 kg) of bombs on pallets. These improvised bombers were used to hit Indian targets such as bridges, heavy artillery positions, tank formations, and troop concentrations. Some C-130s flew with anti-aircraft guns fitted on their ramp and apparently shot down some 17 aircraft and damaged 16 others. 
In October 1968, C-130Bs from the 463rd Tactical Airlift Wing dropped a pair of M-121 10,000 lb (4,500 kg) bombs that had been developed for the massive Convair B-36 Peacemaker bomber but had never been used. The U.S. Army and U.S. Air Force resurrected the huge weapons as a means of clearing landing zones for helicopters, and in early 1969 the 463rd commenced Commando Vault missions. Although the stated purpose of Commando Vault was to clear LZs, the bombs were also used on enemy base camps and other targets. During the late 1960s, the U.S. was eager to get information on Chinese nuclear capabilities. After the failure of the Black Cat Squadron to plant operating sensor pods near the Lop Nur Nuclear Weapons Test Base using a Lockheed U-2, the CIA developed a plan, named "Heavy Tea", to deploy two battery-powered sensor pallets near the base. To deploy the pallets, a Black Bat Squadron crew was trained in the U.S. to fly the C-130 Hercules. The crew of 12, led by Col Sun Pei Zhen, took off from Takhli Royal Thai Air Force Base in an unmarked U.S. Air Force C-130E on 17 May 1969. Flying for six and a half hours at low altitude in the dark, they arrived over the target and the sensor pallets were dropped by parachute near Anxi in Gansu province. After another six and a half hours of low altitude flight, they arrived back at Takhli. The sensors worked and uploaded data to a U.S. intelligence satellite for six months before their batteries failed. The Chinese conducted two nuclear tests, on 22 September 1969 and 29 September 1969, during the operating life of the sensor pallets. Another mission to the area was planned as Operation Golden Whip, but was called off in 1970. It is most likely that the aircraft used on this mission was either C-130E serial number 64-0506 or 64-0507 (cn 382-3990 and 382-3991). These two aircraft were delivered to Air America in 1964. After being returned to the U.S. Air Force sometime between 1966 and 1970, they were assigned the serial numbers of C-130s that had been destroyed in accidents. 64-0506 is now flying as 62-1843, a C-130E that crashed in Vietnam on 20 December 1965, and 64-0507 is now flying as 63-7785, a C-130E that had crashed in Vietnam on 17 June 1966. The A-model continued in service through the Vietnam War, where the aircraft assigned to the four squadrons at Naha AB, Okinawa and one at Tachikawa Air Base, Japan performed yeoman's service, including operating highly classified special operations missions such as the BLIND BAT FAC/Flare mission and FACT SHEET leaflet mission over Laos and North Vietnam. The A-model was also provided to the Republic of Vietnam Air Force as part of the Vietnamization program at the end of the war, and equipped three squadrons based at Tan Son Nhut Air Base. The last operator in the world is the Honduran Air Force, which is still flying one of five A-model Hercules (FAH "558", c/n 3042) as of October 2009. As the Vietnam War wound down, the 463rd Troop Carrier/Tactical Airlift Wing B-models and A-models of the 374th Tactical Airlift Wing were transferred back to the United States, where most were assigned to Air Force Reserve and Air National Guard units. Another prominent role for the B model was with the United States Marine Corps, where Hercules aircraft, initially designated GV-1s, replaced C-119s. After Air Force C-130Ds proved the type's usefulness in Antarctica, the U.S. Navy purchased a number of B-models equipped with skis that were designated as LC-130s. 
C-130B-II electronic reconnaissance aircraft were operated under the SUN VALLEY program name primarily from Yokota Air Base, Japan. All reverted to standard C-130B cargo aircraft after their replacement in the reconnaissance role by other aircraft. The C-130 was also used in the 1976 Entebbe raid, in which Israeli commando forces carried out a surprise assault to rescue 103 passengers of an airliner hijacked by Palestinian and German terrorists at Entebbe Airport, Uganda. The rescue force—200 soldiers, jeeps, and a black Mercedes-Benz (intended to resemble Ugandan dictator Idi Amin's vehicle of state)—was flown over almost entirely at an altitude of less than from Israel to Entebbe by four Israeli Air Force (IAF) Hercules aircraft without mid-air refueling (on the way back, the aircraft refueled in Nairobi, Kenya). During the Falklands War of 1982, Argentine Air Force C-130s undertook dangerous re-supply night flights as blockade runners to the Argentine garrison on the Falkland Islands. They also performed daylight maritime survey flights. One was shot down by a Royal Navy Sea Harrier using AIM-9 Sidewinders and cannon. The crew of seven were killed. Argentina also operated two KC-130 tankers during the war, and these refuelled both the Douglas A-4 Skyhawks and Navy Dassault-Breguet Super Étendards; some C-130s were modified to operate as bombers with bomb-racks under their wings. The British also used RAF C-130s to support their logistical operations. During the Gulf War of 1991 (Operation Desert Storm), the C-130 Hercules was used operationally by the U.S. Air Force, U.S. Navy and U.S. Marine Corps, along with the air forces of Australia, New Zealand, Saudi Arabia, South Korea and the UK. The MC-130 Combat Talon variant also made the first attacks using the largest conventional bombs in the world, the BLU-82 "Daisy Cutter" and GBU-43/B "Massive Ordnance Air Blast" (MOAB) bomb. Daisy Cutters were used primarily to clear landing zones and to eliminate minefields. The weight and size of the weapons make it impossible or impractical to load them on conventional bombers. The GBU-43/B MOAB is a successor to the BLU-82 and can perform the same function, as well as perform strike functions against hardened targets in a low air threat environment. Since 1992, two successive C-130 aircraft named "Fat Albert" have served as the support aircraft for the U.S. Navy Blue Angels flight demonstration team. "Fat Albert I" was a TC-130G ("151891"), while "Fat Albert II" is a C-130T ("164763"). Although "Fat Albert" supports a Navy squadron, it is operated by the U.S. Marine Corps (USMC) and its crew consists solely of USMC personnel. At some air shows featuring the team, "Fat Albert" takes part, performing flyovers. Until 2009, it also demonstrated its rocket-assisted takeoff (RATO) capabilities; these ended due to dwindling supplies of rockets. The AC-130 also holds the record for the longest sustained flight by a C-130. From 22 to 24 October 1997, two AC-130U gunships flew 36 hours nonstop from Hurlburt Field, Florida to Taegu (Daegu), South Korea, being refuelled seven times by KC-135 tanker aircraft. This record flight beat the previous record longest flight by over 10 hours, and the two gunships took on of fuel. The gunship has been used in every major U.S. combat operation since Vietnam, except for Operation El Dorado Canyon, the 1986 attack on Libya. 
During the invasion of Afghanistan in 2001 and the ongoing support of the International Security Assistance Force (Operation Enduring Freedom), the C-130 Hercules has been used operationally by Australia, Belgium, Canada, Denmark, France, Italy, the Netherlands, New Zealand, Norway, Portugal, Romania, South Korea, Spain, the UK and the United States. During the 2003 invasion of Iraq (Operation Iraqi Freedom), the C-130 Hercules was used operationally by Australia, the UK and the United States. After the initial invasion, C-130 operators as part of the Multinational force in Iraq used their C-130s to support their forces in Iraq. Since 2004, the Pakistan Air Force has employed C-130s in the War in North-West Pakistan. Some variants had forward looking infrared (FLIR Systems Star Safire III EO/IR) sensor balls to enable close tracking of militants. In 2017, France and Germany announced that they are to build up a joint air transport squadron at Evreux Air Base, France, comprising ten C-130J aircraft. Six of these will be operated by Germany. Initial operational capability is expected for 2021, while full operational capability is scheduled for 2024. For almost two decades, the 910th Airlift Wing's 757th Airlift Squadron and the U.S. Coast Guard have participated in oil spill cleanup exercises to ensure the U.S. military has a capable response in the event of a national emergency. The 910th Airlift Wing's 757th AS, which operates DOD's only fixed-wing aerial spray system certified by the EPA to disperse pesticides on DOD property, spread oil dispersants onto the "Deepwater Horizon" oil spill along the Gulf Coast in 2010. During the 5-week mission, the YARS aircrews flew 92 sorties and sprayed approximately 30,000 acres with nearly 149,000 gallons of oil dispersant to break up the oil. The Deepwater Horizon mission was the first time the US used the oil dispersing capability of the 910th AW—its only large area, fixed-wing aerial spray program—in an actual spill of national significance. The Air Force Reserve Command announced that the 910th Airlift Wing had been selected as a recipient of the Air Force Outstanding Unit Award for its outstanding achievement from 28 April 2010 through 4 June 2010. C-130s temporarily based at Kelly Field conducted mosquito control aerial spray applications over areas of eastern Texas devastated by Hurricane Harvey. This special mission treated more than 2.3 million acres at the direction of the Federal Emergency Management Agency (FEMA) and the Texas Department of State Health Services (DSHS) to assist in recovery efforts by helping contain the significant increase in pest insects caused by large amounts of standing, stagnant water. The 910th Airlift Wing operates the Department of Defense's only aerial spray capability to control pest insect populations, eliminate undesired and invasive vegetation and disperse oil spills in large bodies of water. The aerial spray flight is now also able to operate at night with night-vision goggles (NVGs), which increases the flight's best-case spray capacity from approximately 60 thousand acres per day to approximately 190 thousand acres per day. Spray missions are normally conducted at dusk and nighttime hours when pest insects are most active, the U.S. Air Force Reserve reports. In the early 1970s Congress created the Modular Airborne FireFighting System (MAFFS), which is a joint operation between the U.S. Forest Service, which supplies the systems, and the Department of Defense, which supplies the C-130 aircraft. 
The roll-on/roll-off systems allow existing aircraft to be temporarily converted into 3,000-gallon airtankers for fighting wildfires when demand exceeds the supply of privately contracted and publicly available airtankers. In the late 1980s, 22 retired USAF C-130As were removed from storage and transferred to the U.S. Forest Service, which then transferred them to six private companies to be converted into air tankers. One of these C-130s crashed in June 2002 while operating the Retardant Aerial Delivery System (RADS) near Walker, California. The crash was attributed to wing separation caused by fatigue stress cracking and contributed to the grounding of the entire fleet of large airtankers. After an extensive review, the US Forest Service and the Bureau of Land Management declined to renew the leases on nine C-130As over concerns about the age of the aircraft, which had been in service since the 1950s, and their ability to handle the forces generated by aerial firefighting. More recently, an updated Retardant Aerial Delivery System known as RADS XL was developed by Coulson Aviation USA. That system consists of a C-130H/Q retrofitted with an in-floor discharge system, combined with a removable 3,500- or 4,000-gallon water tank. The combined system is FAA certified. On January 22, 2020, Coulson's Tanker 134, an EC-130Q registered N134CG, crashed during aerial firefighting operations in New South Wales, Australia, killing all three crew members. The aircraft had taken off from RAAF Base Richmond and was supporting firefighting operations during Australia's unprecedented 2019–20 fire season. The C-130 Hercules has had a low accident rate in general. The Royal Air Force recorded an accident rate of about one aircraft loss per 250,000 flying hours over the last 40 years, placing it behind the Vickers VC10 and the Lockheed TriStar, which recorded no flying losses. USAF C-130A/B/E models had an overall attrition rate of 5% as of 1989, compared with 1–2% for commercial airliners in the U.S. (according to the NTSB), 10% for B-52 bombers, and 20% for fighters (F-4, F-111), trainers (T-37, T-38), and helicopters (H-3). A total of 70 aircraft were lost by the U.S. Air Force and the U.S. Marine Corps during combat operations in the Vietnam War in Southeast Asia. By the nature of the Hercules' worldwide service, the pattern of losses provides an interesting barometer of the global hot spots of the past 50 years.
https://en.wikipedia.org/wiki?curid=7697
Commodore 1570 The Commodore 1570 is a 5¼" floppy disk drive for the Commodore 128 home/personal computer. It is a single-sided, 170 kB version of the Commodore 1571, released as a stopgap measure when Commodore International was unable to provide sufficient quantities of 1571s due to a shortage of double-sided drive mechanisms (which were supplied by an outside manufacturer). Like the 1571, it can read and write both GCR and MFM disk formats. The 1570 utilizes a 1571 logic board in a cream-colored original-1541-like case with a drive mechanism similar to the 1541's, except that it is equipped with track-zero detection. Like the 1571, its built-in DOS provides a data burst mode for transferring data to the C128 computer at a faster speed than a 1541 can. Its ROM also contains some DOS bug fixes that did not appear in the 1571 until much later. The 1570 can read and write all single-sided CP/M-format disks that the 1571 can access. Although the 1570 is compatible with the Commodore 64, the C64 is not capable of taking advantage of the drive's higher-speed operation, and when used with the C64 it is little more than a pricier 1541. Also, many early buyers of the C128 chose to temporarily make do with a 1541 drive, perhaps owned as part of a previous C64 setup, until the 1571 became more widely available. The drive uses a MOS 6502 CPU, a WD1770 or WD1772 floppy disk controller, two MOS Technology 6522 I/O controllers, and one MOS Technology 6526.
https://en.wikipedia.org/wiki?curid=7699
Commodore 1571 The Commodore 1571 is Commodore's high-end 5¼" floppy disk drive. With its double-sided drive mechanism, it can use double-sided, double-density (DS/DD) floppy disks natively. This is in contrast to its predecessors, the 1541 and 1570, which can use such disks only if the user manually flips them over to access the second side. Because flipping the disk also reverses the direction of rotation, the two methods are not interchangeable: disks whose back side was created in a 1541 by flipping them over have to be flipped in the 1571 too, and the back side of disks written in a 1571 using the native support for two-sided operation cannot be read in a 1541. The 1571 was released to match the Commodore 128, both design-wise and feature-wise. It was announced in the summer of 1985, at the same time as the C128, and became available in quantity later that year. The later C128D had a 1571-compatible drive integrated in the system unit. A double-sided disk on the 1571 has a capacity of 340 kB (70 tracks, 1,360 disk blocks of 256 bytes each); because 8 kB are reserved for system use (the directory and block-availability information) and 2 bytes of each block serve as pointers to the next logical block, 1,328 × 254 = 337,312 bytes, or about 329 kB, are available for user data. (However, with a program organizing disk storage on its own, all space could be used, e.g. for data disks.) The 1571 was designed to accommodate the C128's "burst" mode for 2× faster disk access; however, the drive cannot use this mode when connected to older Commodore machines. Burst mode replaced the slow bit-banging serial routines of the 1541 with a true serial shift register implemented in hardware, thus dramatically increasing the drive speed. Although this had originally been planned when Commodore first switched from the parallel IEEE-488 interface to a custom serial interface (CBM-488), hardware bugs in the VIC-20's 6522 VIA shift register prevented it from working properly. When connected to a C128, the 1571 defaults to double-sided mode, which allows the drive to read its own 340 kB disks as well as single-sided 170 kB 1541 disks. If the C128 is switched into C64 mode by typing GO 64 from BASIC, the 1571 stays in double-sided mode. If C64 mode is activated by holding down the C= key on power-up, the drive automatically switches to single-sided mode, in which case it is unable to read 340 kB disks (this is also the default if a 1571 is used with a C64, Plus/4, VIC-20, or PET). A manual command can also be issued from BASIC to switch the 1571 between single- and double-sided mode. There is also an undocumented command that allows the user to control either of the read/write heads independently, making it possible to format both sides of a diskette separately from each other; however, the resulting disk cannot be read in a 1541, as it would spin in the reverse direction when flipped upside down. In the same vein, "flippy" disks created with a 1541 cannot be read on a 1571 with this feature; they must be inserted upside down. The 1571 is not 100% low-level compatible with the 1541; however, this is not a problem except in some software that uses advanced copy protection, such as the RapidLok system found on Microprose and Accolade games. The 1571 was noticeably quieter than its predecessor and tended to run cooler as well, even though, like the 1541, it had an internal power supply (later Commodore drives, like the 1541-II and the 3½" 1581, came with external power supplies). 
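As an illustration of the capacity figures above, the usable space follows directly from the block layout described; the short Python sketch below simply recomputes it (the block counts and the 2-byte pointer overhead come from the text above, and the function name is only illustrative):

def usable_1571_bytes(total_blocks=1360, block_size=256, reserved_kb=8, pointer_bytes=2):
    # 8 kB reserved for the directory and block-availability map = 32 blocks of 256 bytes
    reserved_blocks = reserved_kb * 1024 // block_size
    # each remaining block loses 2 bytes to the pointer chaining it to the next logical block
    payload_per_block = block_size - pointer_bytes
    return (total_blocks - reserved_blocks) * payload_per_block

print(usable_1571_bytes())  # 337312 bytes, i.e. about 329 kB of the 340 kB raw capacity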
The 1541-II/1581 power supply makes mention of a 1571-II, hinting that Commodore may have intended to release a version of the 1571 with an external power supply. However, no 1571-IIs are known to exist. The embedded OS in the 1571 was an improvement over that of the 1541. Early 1571s had a bug in the ROM-based disk operating system that caused relative files to become corrupted if they occupied both sides of the disk. A version 2 ROM was released; though it cured the initial bug, it introduced some minor quirks of its own, particularly with the 1541 emulation. Curiously, it was also identified as V3.0. As with the 1541, Commodore initially could not meet demand for the 1571, and that lack of availability and the drive's relatively high price (about US$300) presented an opportunity for cloners. Two 1571 clones appeared, one from Oceanic and one from Blue Chip, but legal action from Commodore quickly drove them from the market. Commodore announced at the 1985 Consumer Electronics Show a dual-drive version of the 1571, to be called the Commodore 1572, but quickly canceled it, reportedly due to technical difficulties with the 1572 DOS. It would have had four times as much RAM as the 1571 (8 kB) and twice as much ROM (64 kB). The 1572 would have allowed for fast disk backups of non-copy-protected media, much like the old 4040, 8050, and 8250 dual drives. The 1571 built into the European plastic-case C128D computer is electronically identical to the stand-alone version, but the 1571 integrated into the later metal-case C128D (often called the C128 DCR, for "D Cost-Reduced") differs considerably from the stand-alone 1571. It includes a newer DOS, version 3.1; replaces the MOS Technology CIA interface chip (of which only a few features were used by the 1571 DOS) with a much simplified chip called the 5710; and has some compatibility issues with the stand-alone drive. Because this internal 1571 does not have an unused 8-bit input/output port on any chip, unlike most other Commodore drives, it is not possible to install a parallel cable in this drive, such as that used by SpeedDOS, DolphinDOS and some other fast third-party Commodore DOS replacements. The drive detects the motor speed and generates an internal data-sampling clock signal that matches the motor speed. The 1571 uses a saddle canceler when reading the data stream: a correction signal is generated when the raw data pattern on the disk consists of two consecutive zeros. With the GCR recording format a problem occurs in the read signal waveform, where the worst-case pattern 1001 may cause a saddle condition in which a false data bit may occur. The original 1541 drives use a one-shot to correct the condition; the 1571 uses a gate array to correct this digitally. The drive uses a MOS 6502 CPU, a WD1770 or WD1772 floppy controller, two MOS Technology 6522 I/O controllers and one MOS Technology 6526. Unlike the 1541, which was limited to GCR formatting, the 1571 can read both GCR and MFM disk formats. The version of CP/M included with the C128 supported several such MFM formats, and the 1571 can read any of the many CP/M disk formats. If the CP/M BIOS is modified, it is possible to read any soft-sector 40-track MFM format. Single-density (FM) formats are not supported because the density selector pin on the MFM controller chip in the drive is disabled (wired to ground). A 1571 cannot boot from MFM disks; the user must boot CP/M from a GCR disk and then switch to MFM disks. With additional software, it was possible to read and write to MS-DOS-formatted floppies as well. 
Numerous commercial and public-domain programs for this purpose became available, the best-known being SOGWAP's "Big Blue Reader". Although the C128 could not run any DOS-based software, this capability allowed data files to be exchanged with PC users. Reading some other machines' disks was possible as well with special software, but formats that used FM rather than MFM encoding could not be handled by the 1571 hardware without modifying the drive circuitry, as the control line that determines whether FM or MFM encoding is used by the disk controller chip was permanently wired to ground (MFM mode) rather than being under software control. In the 1541 format, while 40 tracks are physically possible for a drive like the 154x/157x, only 35 are used. Commodore chose not to use the upper five tracks by default (or at least not to use more than 35) due to the poor quality of some of the drive mechanisms, which did not always work reliably on those tracks. For compatibility and ease of implementation, the 1571's double-sided format, one logical disk side with 70 tracks, was created by putting together the lower 35 physical tracks on each of the two physical sides of the disk rather than using all 40 tracks per side, even though there were no longer quality problems with the mechanisms of the 1571 drives.
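To make the track layout above concrete, the following Python sketch maps the 70 logical tracks of the double-sided format onto a physical side and track in the way just described; the side ordering and the helper name are illustrative assumptions rather than details taken from the text:

def physical_location(logical_track):
    # Logical tracks 1-35 sit on the first side and 36-70 on the second side,
    # each side using only the lower 35 of the 40 physically possible tracks.
    if not 1 <= logical_track <= 70:
        raise ValueError("the double-sided format has logical tracks 1-70")
    side = 0 if logical_track <= 35 else 1
    track_on_side = logical_track if side == 0 else logical_track - 35
    return side, track_on_side

print(physical_location(1))   # (0, 1)
print(physical_location(36))  # (1, 1)
print(physical_location(70))  # (1, 35)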
https://en.wikipedia.org/wiki?curid=7700
Cocaine Cocaine, also known as coke, is a strong stimulant most frequently used as a recreational drug. It is commonly snorted, inhaled as smoke, or dissolved and injected into a vein. Mental effects may include loss of contact with reality, an intense feeling of happiness, or agitation. Physical symptoms may include a fast heart rate, sweating, and large pupils. High doses can result in very high blood pressure or body temperature. Effects begin within seconds to minutes of use and last between five and ninety minutes. Cocaine has a small number of accepted medical uses, such as numbing and decreasing bleeding during nasal surgery. Cocaine is addictive due to its effect on the reward pathway in the brain. After a short period of use, there is a high risk that dependence will occur. Its use also increases the risk of stroke, myocardial infarction, lung problems in those who smoke it, blood infections, and sudden cardiac death. Cocaine sold on the street is commonly mixed with local anesthetics, cornstarch, quinine, or sugar, which can result in additional toxicity. Following repeated doses, a person may have a decreased ability to feel pleasure and be very physically tired. Cocaine acts by inhibiting the reuptake of serotonin, norepinephrine, and dopamine. This results in greater concentrations of these three neurotransmitters in the brain. It can easily cross the blood–brain barrier and may lead to the breakdown of the barrier. In 2013, 419 kilograms were produced legally. It is estimated that the illegal market for cocaine is US$100 to US$500 billion each year. With further processing, crack cocaine can be produced from cocaine. Cocaine is the second most frequently used illegal drug globally, after cannabis. Between 14 and 21 million people use the drug each year. Use is highest in North America, followed by Europe and South America. Between one and three percent of people in the developed world have used cocaine at some point in their life. In 2013, cocaine use directly resulted in 4,300 deaths, up from 2,400 in 1990. It is named after the "coca" plant from which it is isolated. The plant's leaves have been used by Peruvians since ancient times. Cocaine was first isolated from the leaves in 1860. Since 1961, the international Single Convention on Narcotic Drugs has required countries to make recreational use of cocaine a crime. Topical cocaine can be used as a local numbing agent to help with painful procedures in the mouth or nose. Cocaine is now predominantly used for nasal and lacrimal duct surgery. The major disadvantages of this use are cocaine's potential for cardiovascular toxicity, glaucoma, and pupil dilation. Medicinal use of cocaine has decreased as other synthetic local anesthetics such as benzocaine, proparacaine, lidocaine, and tetracaine are now used more often. If vasoconstriction is desired for a procedure (as it reduces bleeding), the anesthetic is combined with a vasoconstrictor such as phenylephrine or epinephrine. Some otolaryngology (ENT) specialists occasionally use cocaine within the practice when performing procedures such as nasal cauterization. In this scenario, dissolved cocaine is soaked into a ball of cotton wool, which is placed in the nostril for the 10–15 minutes immediately before the procedure, thus performing the dual role of numbing the area to be cauterized and providing vasoconstriction. Even when used this way, some of the used cocaine may be absorbed through the oral or nasal mucosa and give systemic effects. 
An alternative method of administration for ENT surgery is cocaine mixed with adrenaline and sodium bicarbonate, known as Moffett's solution. Cocaine hydrochloride (Goprelto), an ester local anesthetic, was approved for medical use in the United States in December 2017, and is indicated for the introduction of local anesthesia of the mucous membranes for diagnostic procedures and surgeries on or through the nasal cavities of adults. Cocaine hydrochloride (Numbrino) was approved for medical use in the United States in January 2020. The most common adverse reactions in people treated with Goprelto are headache and epistaxis. The most common adverse reactions in people treated with Numbrino are hypertension, tachycardia, and sinus tachycardia. Cocaine is a powerful nervous system stimulant. Its effects can last from 15 minutes to an hour. The duration of cocaine's effects depends on the amount taken and the route of administration. Cocaine can take the form of a fine white powder that is bitter to the taste. When inhaled or injected, it causes a numbing effect. Crack cocaine is a smokeable form of cocaine made into small "rocks" by processing cocaine with sodium bicarbonate (baking soda) and water. Crack cocaine is referred to as "crack" because of the crackling sounds it makes when heated. Cocaine use leads to increases in alertness, feelings of well-being and euphoria, increased energy and motor activity, and increased feelings of competence and sexuality. Coca leaves are typically mixed with an alkaline substance (such as lime) and chewed into a wad that is retained in the mouth between gum and cheek (much the same way as chewing tobacco is chewed) and sucked of its juices. The juices are absorbed slowly by the mucous membrane of the inner cheek and by the gastrointestinal tract when swallowed. Alternatively, coca leaves can be infused in liquid and consumed like tea. Ingesting coca leaves generally is an inefficient means of administering cocaine. Because cocaine is hydrolyzed and rendered inactive in the acidic stomach, it is not readily absorbed when ingested alone. Only when mixed with a highly alkaline substance (such as lime) can it be absorbed into the bloodstream through the stomach. The efficiency of absorption of orally administered cocaine is limited by two additional factors. First, the drug is partly catabolized by the liver. Second, capillaries in the mouth and esophagus constrict after contact with the drug, reducing the surface area over which the drug can be absorbed. Nevertheless, cocaine metabolites can be detected in the urine of subjects who have sipped even one cup of coca leaf infusion. Orally administered cocaine takes approximately 30 minutes to enter the bloodstream. Typically, only a third of an oral dose is absorbed, although absorption has been shown to reach 60% in controlled settings. Given the slow rate of absorption, maximum physiological and psychotropic effects are attained approximately 60 minutes after cocaine is administered by ingestion. While the onset of these effects is slow, the effects are sustained for approximately 60 minutes after their peak is attained. Contrary to popular belief, both ingestion and insufflation result in approximately the same proportion of the drug being absorbed: 30 to 60%. Compared to ingestion, the faster absorption of insufflated cocaine results in quicker attainment of maximum drug effects. 
Snorting cocaine produces maximum physiological effects within 40 minutes and maximum psychotropic effects within 20 minutes; however, a more realistic activation period is closer to 5 to 10 minutes. Physiological and psychotropic effects from nasally insufflated cocaine are sustained for approximately 40–60 minutes after the peak effects are attained. Coca tea, an infusion of coca leaves, is also a traditional method of consumption. The tea has often been recommended for travelers in the Andes to prevent altitude sickness. However, its actual effectiveness has never been systematically studied. In 1986 an article in the "Journal of the American Medical Association" revealed that U.S. health food stores were selling dried coca leaves to be prepared as an infusion as "Health Inca Tea." While the packaging claimed it had been "decocainized," no such process had actually taken place. The article stated that drinking two cups of the tea per day gave a mild stimulation, increased heart rate, and mood elevation, and the tea was essentially harmless. Despite this, the DEA seized several shipments in Hawaii, Chicago, Georgia, and several locations on the East Coast of the United States, and the product was removed from the shelves. Nasal insufflation (known colloquially as "snorting", "sniffing", or "blowing") is a common method of ingestion of recreational powdered cocaine. The drug coats and is absorbed through the mucous membranes lining the nasal passages. When snorted through the nose, cocaine's desired euphoric effects are delayed by about five minutes. This occurs because cocaine's absorption is slowed by its constricting effect on the blood vessels of the nose. Insufflation of cocaine also leads to the longest duration of its effects (60–90 minutes). When insufflating cocaine, absorption through the nasal membranes is approximately 30–60%, with higher doses leading to increased absorption efficiency. Any material not directly absorbed through the mucous membranes is collected in mucus and swallowed (this "drip" is considered pleasant by some and unpleasant by others). In a study of cocaine users, the average time taken to reach peak subjective effects was 14.6 minutes. Any damage to the inside of the nose is because cocaine highly constricts blood vessels – and therefore blood and oxygen/nutrient flow – to that area. Nosebleeds after cocaine insufflation are due to irritation and damage of mucus membranes by foreign particles and adulterants and not the cocaine itself; as a vasoconstrictor, cocaine acts to reduce bleeding. Rolled up banknotes, hollowed-out pens, cut straws, pointed ends of keys, specialized spoons, long fingernails, and (clean) tampon applicators are often used to insufflate cocaine. Such devices are often called "tooters" by users. The cocaine typically is poured onto a flat, hard surface (such as a mirror, CD case or book) and divided into "bumps," "lines" or "rails," and then insufflated. The amount of cocaine in a line varies widely from person to person and occasion to occasion (the purity of the cocaine is also a factor), but one line is generally considered to be a single dose and is typically 35 mg (a "bump") to 100 mg (a "rail"). As tolerance builds rapidly in the short term (hours), many lines are often snorted to produce greater effects. A 2001 study reported that the sharing of straws used to "snort" cocaine can spread blood diseases such as hepatitis C. 
Drug injection by turning the drug into a solution provides the highest blood levels of drug in the shortest amount of time. Subjective effects not commonly shared with other methods of administration include a ringing in the ears moments after injection (usually with doses in excess of 120 milligrams) lasting two to five minutes, including tinnitus and audio distortion. This is colloquially referred to as a "bell ringer". In a study of cocaine users, the average time taken to reach peak subjective effects was 3.1 minutes. The euphoria passes quickly. Aside from the toxic effects of cocaine, there is also danger of circulatory emboli from the insoluble substances that may be used to cut the drug. As with all injected illicit substances, there is a risk of the user contracting blood-borne infections if sterile injecting equipment is not available or used. Additionally, because cocaine is a vasoconstrictor, and usage often entails multiple injections within several hours or less, subsequent injections are progressively more difficult to administer, which in turn may lead to more injection attempts and more consequences from improperly performed injection. An injected mixture of cocaine and heroin, known as "speedball", is a particularly dangerous combination, as the opposing effects of the drugs actually complement each other, but may also mask the symptoms of an overdose. It has been responsible for numerous deaths, including those of celebrities such as comedians/actors John Belushi and Chris Farley, Mitch Hedberg, River Phoenix, grunge singer Layne Staley and actor Philip Seymour Hoffman. Experimentally, cocaine injections can be delivered to animals such as fruit flies to study the mechanisms of cocaine addiction. Inhalation by smoking cocaine is one of the several ways the drug is consumed. The onset of cocaine's desired euphoric effects is fastest when it is inhaled, beginning after 3–5 seconds. In contrast, inhalation of cocaine leads to the shortest duration of its effects (5–15 minutes). The two main ways cocaine is smoked are freebasing and by using cocaine which has been converted to smokable "crack cocaine". Cocaine is smoked by inhaling the vapor produced when solid cocaine is heated to the point that it sublimates. In a 2000 Brookhaven National Laboratory medical department study, based on self-reports of 32 abusers who participated in the study, "peak high" was found at a mean of 1.4 ± 0.5 minutes. Pyrolysis products of cocaine that occur only when it is heated/smoked have been shown to change the effect profile; for example, anhydroecgonine methyl ester, when co-administered with cocaine, increases dopamine in the CPu and NAc brain regions and has M1 and M3 receptor affinity. Smoking freebase or crack cocaine is most often accomplished using a pipe made from a small glass tube, often taken from "love roses", small glass tubes with a paper rose that are promoted as romantic gifts. These are sometimes called "stems", "horns", "blasters" and "straight shooters". A small piece of clean heavy copper or occasionally stainless steel scouring pad – often called a "brillo" (actual Brillo Pads contain soap, and are not used) or "chore" (named for Chore Boy brand copper scouring pads) – serves as a reduction base and flow modulator in which the "rock" can be melted and boiled to vapor. Crack smokers also sometimes smoke through a soda can with small holes on the side or bottom. Crack is smoked by placing it at the end of the pipe; a flame held close to it produces vapor, which is then inhaled by the smoker. 
The effects, felt almost immediately after smoking, are very intense and do not last long – usually 2 to 10 minutes. When smoked, cocaine is sometimes combined with other drugs, such as cannabis, often rolled into a joint or blunt. Powdered cocaine is also sometimes smoked, though heat destroys much of the chemical; smokers often sprinkle it on cannabis. The language referring to paraphernalia and practices of smoking cocaine varies, as do the packaging methods in street-level sale. Another way users consume cocaine is by making it into a suppository which they then insert into the anus or vagina. The drug is then absorbed by the membranes of these body parts. Little research has been focused on the suppository (anal or vaginal insertion) method of administration, also known as "plugging". This method of administration is commonly administered using an oral syringe. Cocaine can be dissolved in water and withdrawn into an oral syringe which may then be lubricated and inserted into the anus or vagina before the plunger is pushed. Anecdotal evidence of its effects is infrequently discussed, possibly due to social taboos in many cultures. The rectum and the vaginal canal are where the majority of the drug would be taken up, through the membranes lining their walls. With excessive or prolonged use, the drug can cause itching, fast heart rate, hallucinations, and paranoid delusions or sensations of insects crawling on the skin. Overdoses may cause abnormally high body temperature and a marked elevation of blood pressure, which can be life-threatening, abnormal heart rhythms, and death. Anxiety, paranoia, and restlessness can also occur, especially during the comedown. With excessive dosage, tremors, convulsions and increased body temperature are observed. Severe cardiac adverse events, particularly sudden cardiac death, become a serious risk at high doses due to cocaine's blocking effect on cardiac sodium channels. Chronic cocaine intake causes strong imbalances of transmitter levels in order to compensate for extremes. Thus, receptors disappear from the cell surface or reappear on it, resulting more or less in an "off" or "working mode" respectively, or they change their susceptibility for binding partners (ligands) – mechanisms called downregulation and upregulation. However, studies suggest cocaine abusers do not show normal age-related loss of striatal dopamine transporter (DAT) sites, suggesting cocaine has neuroprotective properties for dopamine neurons. Possible side effects include insatiable hunger, aches, insomnia/oversleeping, lethargy, and persistent runny nose. Depression with suicidal ideation may develop in very heavy users. Finally, a loss of vesicular monoamine transporters and neurofilament proteins, along with other morphological changes, appears to indicate long-term damage to dopamine neurons. All these effects contribute to a rise in tolerance, thus requiring a larger dosage to achieve the same effect. The lack of normal amounts of serotonin and dopamine in the brain is the cause of the dysphoria and depression felt after the initial high. Physical withdrawal is not dangerous. Physiological changes caused by cocaine withdrawal include vivid and unpleasant dreams, insomnia or hypersomnia, increased appetite and psychomotor retardation or agitation. 
Physical side effects from chronic smoking of cocaine include coughing up blood, bronchospasm, itching, fever, diffuse alveolar infiltrates without effusions, pulmonary and systemic eosinophilia, chest pain, lung trauma, sore throat, asthma, hoarse voice, dyspnea (shortness of breath), and an aching, flu-like syndrome. Cocaine constricts blood vessels, dilates pupils, and increases body temperature, heart rate, and blood pressure. It can also cause headaches and gastrointestinal complications such as abdominal pain and nausea. A common but untrue belief is that the smoking of cocaine chemically breaks down tooth enamel and causes tooth decay. However, cocaine does often cause involuntary tooth grinding, known as bruxism, which can deteriorate tooth enamel and lead to gingivitis. Additionally, stimulants like cocaine, methamphetamine, and even caffeine cause dehydration and dry mouth. Since saliva is an important mechanism in maintaining one's oral pH level, chronic stimulant abusers who do not hydrate sufficiently may experience demineralization of their teeth due to the pH of the tooth surface dropping too low (below 5.5). Cocaine use also promotes the formation of blood clots. This increase in blood clot formation is attributed to cocaine-associated increases in the activity of plasminogen activator inhibitor, and an increase in the number, activation, and aggregation of platelets. Chronic intranasal usage can degrade the cartilage separating the nostrils (the septum nasi), leading eventually to its complete disappearance. Due to the absorption of the cocaine from cocaine hydrochloride, the remaining hydrochloride forms a dilute hydrochloric acid. Cocaine may also greatly increase the risk of developing rare autoimmune or connective tissue diseases such as lupus, Goodpasture syndrome, vasculitis, glomerulonephritis, Stevens–Johnson syndrome, and other diseases. It can also cause a wide array of kidney diseases and kidney failure. Cocaine use leads to an increased risk of hemorrhagic and ischemic strokes. Cocaine use also increases the risk of having a heart attack. Cocaine addiction occurs through ΔFosB overexpression in the nucleus accumbens, which results in altered transcriptional regulation in neurons within the nucleus accumbens. ΔFosB levels have been found to increase upon the use of cocaine. Each subsequent dose of cocaine continues to increase ΔFosB levels with no ceiling of tolerance. Elevated levels of ΔFosB lead to increases in brain-derived neurotrophic factor (BDNF) levels, which in turn increases the number of dendritic branches and spines present on neurons involved with the nucleus accumbens and prefrontal cortex areas of the brain. This change can be identified rather quickly, and may be sustained weeks after the last dose of the drug. Transgenic mice exhibiting inducible expression of ΔFosB primarily in the nucleus accumbens and dorsal striatum exhibit sensitized behavioural responses to cocaine. They self-administer cocaine at lower doses than controls, but have a greater likelihood of relapse when the drug is withheld. ΔFosB increases the expression of AMPA receptor subunit GluR2 and also decreases expression of dynorphin, thereby enhancing sensitivity to reward. Cocaine dependence is a form of psychological dependence that develops from regular cocaine use and produces a withdrawal state with emotional-motivational deficits upon cessation of cocaine use. Cocaine is known to have a number of deleterious effects during pregnancy. 
Pregnant people who use cocaine have an elevated risk of placental abruption, a condition where the placenta detaches from the uterus and causes bleeding. Due to its vasoconstrictive and hypertensive effects, they are also at risk for hemorrhagic stroke and myocardial infarction. Cocaine is also teratogenic, meaning that it can cause birth defects and fetal malformations. In-utero exposure to cocaine is associated with behavioral abnormalities, cognitive impairment, cardiovascular malformations, intrauterine growth restriction, preterm birth, urinary tract malformations, and cleft lip and palate. The pharmacodynamics of cocaine involve the complex relationships of neurotransmitters (inhibiting monoamine uptake in rats with ratios of about: serotonin:dopamine = 2:3, serotonin:norepinephrine = 2:5). The most extensively studied effect of cocaine on the central nervous system is the blockade of the dopamine transporter protein. Dopamine transmitter released during neural signaling is normally recycled via the transporter; i.e., the transporter binds the transmitter and pumps it out of the synaptic cleft back into the presynaptic neuron, where it is taken up into storage vesicles. Cocaine binds tightly at the dopamine transporter, forming a complex that blocks the transporter's function. The dopamine transporter can no longer perform its reuptake function, and thus dopamine accumulates in the synaptic cleft. The increased concentration of dopamine in the synapse activates post-synaptic dopamine receptors, which makes the drug rewarding and promotes the compulsive use of cocaine. Cocaine affects certain serotonin (5-HT) receptors; in particular, it has been shown to antagonize the 5-HT3 receptor, which is a ligand-gated ion channel. The overabundance of 5-HT3 receptors in cocaine-conditioned rats displays this trait; however, the exact role of 5-HT3 in this process is unclear. The 5-HT2 receptor subtypes (particularly 5-HT2AR, 5-HT2BR and 5-HT2CR) are involved in the locomotor-activating effects of cocaine. Cocaine has been demonstrated to bind so as to directly stabilize the DAT transporter in the open, outward-facing conformation. Further, cocaine binds in such a way as to inhibit a hydrogen bond innate to DAT. Cocaine's binding properties are such that it attaches in a way that prevents this hydrogen bond from forming, owing to the tightly locked orientation of the cocaine molecule. Research studies have suggested that habituation to the substance depends less on the affinity for the transporter than on the conformation and binding properties of where and how the molecule binds on the transporter. Sigma receptors are affected by cocaine, as cocaine functions as a sigma ligand agonist. Other specific receptors on which it has been demonstrated to act are the NMDA receptor and the D1 dopamine receptor. Cocaine also blocks sodium channels, thereby interfering with the propagation of action potentials; thus, like lignocaine and novocaine, it acts as a local anesthetic. It also acts at binding sites on the sodium-dependent dopamine and serotonin transporters through mechanisms separate from its inhibition of reuptake at those transporters; together with its local anesthetic action, this places it in a class of functionality different from its own derived phenyltropane analogues, which have that property removed. In addition, cocaine also binds to the kappa-opioid receptor. 
Cocaine also causes vasoconstriction, thus reducing bleeding during minor surgical procedures. The locomotor-enhancing properties of cocaine may be attributable to its enhancement of dopaminergic transmission from the substantia nigra. Recent research points to an important role of circadian mechanisms and clock genes in behavioral actions of cocaine. Cocaine can often cause reduced food intake; many chronic users lose their appetite and can experience severe malnutrition and significant weight loss. Cocaine's effects are further shown to be potentiated for the user when it is used in conjunction with new surroundings and stimuli, and otherwise novel environs. Cocaine has a short elimination half-life of 0.7–1.5 hours and is extensively metabolized by cholinesterase enzymes (primarily in the liver and plasma), with only about 1% excreted unchanged in the urine. The metabolism is dominated by hydrolytic ester cleavage, so the eliminated metabolites consist mostly of benzoylecgonine (BE), the major metabolite, and other significant metabolites in lesser amounts such as ecgonine methyl ester (EME) and ecgonine. Further minor metabolites of cocaine include norcocaine, p-hydroxycocaine, m-hydroxycocaine, p-hydroxybenzoylecgonine (pOHBE), and m-hydroxybenzoylecgonine. If consumed with alcohol, cocaine combines with alcohol in the liver to form cocaethylene. Studies have suggested that cocaethylene is both more euphoric and more cardiotoxic than cocaine by itself. Depending on liver and kidney function, cocaine metabolites are detectable in urine. Benzoylecgonine can be detected in urine within four hours after cocaine intake and remains detectable in concentrations greater than 150 ng/mL typically for up to eight days after cocaine is used. Detection of cocaine metabolites in hair is possible in regular users until the sections of hair grown during use are cut or fall out. Cocaine in its purest form is a white, pearly product. Cocaine appearing in powder form is a salt, typically cocaine hydrochloride. Street cocaine is often adulterated or "cut" with talc, lactose, sucrose, glucose, mannitol, inositol, caffeine, procaine, phencyclidine, phenytoin, lignocaine, strychnine, amphetamine, or heroin. The color of "crack" cocaine depends upon several factors, including the origin of the cocaine used, the method of preparation – with ammonia or baking soda – and the presence of impurities. It will generally range from white to a yellowish cream to a light brown. Its texture will also depend on the adulterants, origin and processing of the powdered cocaine, and the method of converting the base. It ranges from a crumbly texture, sometimes extremely oily, to a hard, almost crystalline nature. Cocaine – a tropane alkaloid – is a weakly alkaline compound, and can therefore combine with acidic compounds to form salts. The hydrochloride (HCl) salt of cocaine is by far the most commonly encountered, although the sulfate (SO₄²⁻) and the nitrate (NO₃⁻) salts are occasionally seen. Different salts dissolve to a greater or lesser extent in various solvents – the hydrochloride salt is polar in character and is quite soluble in water. As the name implies, "freebase" is the base form of cocaine, as opposed to the salt form. It is practically insoluble in water, whereas the hydrochloride salt is water-soluble. 
Smoking freebase cocaine has the additional effect of releasing methylecgonidine into the user's system due to the pyrolysis of the substance (a side effect which insufflating or injecting powder cocaine does not create). Some research suggests that smoking freebase cocaine can be even more cardiotoxic than other routes of administration because of methylecgonidine's effects on lung tissue and liver tissue. Pure cocaine is prepared by neutralizing its compounding salt with an alkaline solution, which will precipitate to non-polar basic cocaine. It is further refined through aqueous-solvent liquid–liquid extraction. Smoking or vaporizing cocaine and inhaling it into the lungs produces an almost immediate "high" that can be very powerful (and addicting) quite rapidly – this initial crescendo of stimulation is known as a "rush". While the stimulating effects may last for hours, the euphoric sensation is very brief, prompting the user to smoke more immediately. Powder cocaine (cocaine hydrochloride) must be heated to a high temperature (about 197 °C), and considerable decomposition/burning occurs at these high temperatures. This effectively destroys some of the cocaine and yields a sharp, acrid, and foul-tasting smoke. Cocaine base/crack can be smoked because it vaporizes with little or no decomposition at temperatures below the boiling point of water. Crack is a lower-purity form of free-base cocaine that is usually produced by neutralization of cocaine hydrochloride with a solution of baking soda (sodium bicarbonate, NaHCO3) and water, producing a very hard/brittle, off-white-to-brown colored, amorphous material that contains sodium carbonate, entrapped water, and other by-products as the main impurities. The origin of the name "crack" comes from the "crackling" sound (and hence the onomatopoeic moniker "crack") that is produced when the cocaine and its impurities (i.e. water, sodium bicarbonate) are heated past the point of vaporization. Coca herbal infusion (also referred to as coca tea) is used in coca-leaf producing countries much as any herbal medicinal infusion would be elsewhere in the world. The free and legal commercialization of dried coca leaves under the form of filtration bags to be used as "coca tea" has been actively promoted by the governments of Peru and Bolivia for many years as a drink having medicinal powers. In Peru, the National Coca Company, a state-run corporation, sells cocaine-infused teas and other medicinal products and also exports leaves to the U.S. for medicinal use. Visitors to the city of Cuzco in Peru, and La Paz in Bolivia, are greeted with the offering of coca leaf infusions (prepared in teapots with whole coca leaves), purportedly to help the newly arrived traveler overcome the malaise of high altitude sickness. The effects of drinking coca tea are a mild stimulation and mood lift. It does not produce any significant numbing of the mouth, nor does it give a rush like snorting cocaine. In order to prevent the demonization of this product, its promoters publicize the unproven concept that much of the effect of ingesting coca leaf infusion comes from the secondary alkaloids, which are claimed to be not only quantitatively but also qualitatively different from pure cocaine. It has been promoted as an adjuvant for the treatment of cocaine dependence. In one controversial study, coca leaf infusion was used—in addition to counseling—to treat 23 addicted coca-paste smokers in Lima, Peru. 
Relapses fell from an average of four times per month before treatment with coca tea to one during the treatment. The duration of abstinence increased from an average of 32 days prior to treatment to 217 days during treatment. These results suggest that the administration of coca leaf infusion plus counseling would be an effective method for preventing relapse during treatment for cocaine addiction. Importantly, these results also suggest strongly that the primary pharmacologically active metabolite in coca leaf infusions is actually cocaine and not the secondary alkaloids. The cocaine metabolite benzoylecgonine can be detected in the urine of people a few hours after drinking one cup of coca leaf infusion. The first synthesis and elucidation of the cocaine molecule was by Richard Willstätter in 1898. Willstätter's synthesis derived cocaine from tropinone. Since then, Robert Robinson and Edward Leete have made significant contributions to the mechanism of the synthesis. The additional carbon atoms required for the synthesis of cocaine are derived from acetyl-CoA, by addition of two acetyl-CoA units to the "N"-methyl-Δ1-pyrrolinium cation. The first addition is a Mannich-like reaction with the enolate anion from acetyl-CoA acting as a nucleophile towards the pyrrolinium cation. The second addition occurs through a Claisen condensation. This produces a racemic mixture of the 2-substituted pyrrolidine, with the retention of the thioester from the Claisen condensation. In the formation of tropinone from racemic ethyl [2,3-13C2]-4-(N-methyl-2-pyrrolidinyl)-3-oxobutanoate there is no preference for either stereoisomer. In the biosynthesis of cocaine, however, only the (S)-enantiomer can cyclize to form the tropane ring system of cocaine. The stereoselectivity of this reaction was further investigated through study of prochiral methylene hydrogen discrimination. This is due to the extra chiral center at C-2. This process occurs through an oxidation, which regenerates the pyrrolinium cation and formation of an enolate anion, and an intramolecular Mannich reaction. The tropane ring system undergoes hydrolysis, SAM-dependent methylation, and reduction via NADPH for the formation of methylecgonine. The benzoyl moiety required for the formation of the cocaine diester is synthesized from phenylalanine via cinnamic acid. Benzoyl-CoA then combines the two units to form cocaine. The biosynthesis begins with L-glutamine, which is converted to L-ornithine in plants. The major contribution of L-ornithine and L-arginine as precursors to the tropane ring was confirmed by Edward Leete. Ornithine then undergoes a pyridoxal phosphate-dependent decarboxylation to form putrescine. In animals, however, the urea cycle derives putrescine from ornithine. L-ornithine is converted to L-arginine, which is then decarboxylated via PLP to form agmatine. Hydrolysis of the imine yields "N"-carbamoylputrescine, followed by hydrolysis of the urea to form putrescine. The separate pathways of converting ornithine to putrescine in plants and animals have converged. A SAM-dependent "N"-methylation of putrescine gives the "N"-methylputrescine product, which then undergoes oxidative deamination by the action of diamine oxidase to yield the aminoaldehyde. Schiff base formation confirms the biosynthesis of the "N"-methyl-Δ1-pyrrolinium cation. The biosynthesis of the tropane alkaloid, however, is still uncertain. Hemscheidt proposes that Robinson's acetonedicarboxylate emerges as a potential intermediate for this reaction. 
Condensation of "N"-methylpyrrolinium and acetonedicarboxylate would generate the oxobutyrate. Decarboxylation leads to tropane alkaloid formation. The reduction of tropinone is mediated by NADPH-dependent reductase enzymes, which have been characterized in multiple plant species. These plant species all contain two types of the reductase enzymes, tropinone reductase I and tropinone reductase II. TRI produces tropine and TRII produces pseudotropine. Due to the differing kinetic and pH/activity characteristics of the enzymes and the 25-fold higher activity of TRI over TRII, the majority of the tropinone reduction proceeds via TRI to form tropine. Cocaine and its major metabolites may be quantified in blood, plasma, or urine to monitor for abuse, confirm a diagnosis of poisoning, or assist in the forensic investigation of a traffic or other criminal violation or a sudden death. Most commercial cocaine immunoassay screening tests cross-react appreciably with the major cocaine metabolites, but chromatographic techniques can easily distinguish and separately measure each of these substances. When interpreting the results of a test, it is important to consider the cocaine usage history of the individual, since a chronic user can develop tolerance to doses that would incapacitate a cocaine-naive individual, and the chronic user often has high baseline values of the metabolites in their system. Cautious interpretation of testing results may allow a distinction between passive or active usage, and between smoking versus other routes of administration. In 2011, researchers at John Jay College of Criminal Justice reported that dietary zinc supplements can mask the presence of cocaine and other drugs in urine. Similar claims have been made in web forums on that topic. Cocaine may be detected by law enforcement using the Scott reagent. The test can easily generate false positives for common substances and must be confirmed with a laboratory test. Approximate cocaine purity can be determined using 1 mL of 2% cupric sulfate pentahydrate in dilute HCl, 1 mL of 2% potassium thiocyanate and 2 mL of chloroform. The shade of brown shown by the chloroform is proportional to the cocaine content. This test is not cross-sensitive to heroin, methamphetamine, benzocaine, procaine and a number of other drugs, but other chemicals could cause false positives. According to a 2016 United Nations report, England and Wales are the countries with the highest rate of cocaine usage (2.4% of adults in the previous year). Other countries where the usage rate meets or exceeds 1.5% are Spain and Scotland (2.2%), the United States (2.1%), Australia (2.1%), Uruguay (1.8%), Brazil (1.75%), Chile (1.73%), the Netherlands (1.5%) and Ireland (1.5%). Cocaine is the second most popular illegal recreational drug in Europe (behind cannabis). Since the mid-1990s, overall cocaine usage in Europe has been on the rise, but usage rates and attitudes tend to vary between countries. European countries with the highest usage rates are the United Kingdom, Spain, Italy, and the Republic of Ireland. Approximately 17 million Europeans (5.1%) have used cocaine at least once, and 3.5 million (1.1%) in the last year. About 1.9% (2.3 million) of young adults (15–34 years old) have used cocaine in the last year (latest data available as of 2018). Usage is particularly prevalent among this demographic: 4% to 7% of males have used cocaine in the last year in Spain, Denmark, the Republic of Ireland, Italy, and the United Kingdom. 
The ratio of male to female users is approximately 3.8:1, but this statistic varies from 1:1 to 13:1 depending on country. In 2014 London had the highest amount of cocaine in its sewage out of 50 European cities. Cocaine is the second most popular illegal recreational drug in the United States (behind cannabis), and the U.S. is the world's largest consumer of cocaine. Cocaine is commonly used in middle- to upper-class communities and is known as a "rich man's drug". It is also popular amongst college students as a party drug. A nationwide study in the United States reported that around 48 percent of people who graduated from high school in 1979 had used cocaine recreationally at some point in their lifetime, compared to approximately 20 percent of students who graduated between 1980 and 1995. Its users span different ages, races, and professions. In the 1970s and 1980s, the drug became particularly popular in the disco culture, as cocaine usage was very common and popular in many discos such as Studio 54. For over a thousand years South American indigenous peoples have chewed the leaves of "Erythroxylon coca", a plant that contains vital nutrients as well as numerous alkaloids, including cocaine. The coca leaf was, and still is, chewed almost universally by some indigenous communities. The remains of coca leaves have been found with ancient Peruvian mummies, and pottery from the time period depicts humans with bulged cheeks, indicating the presence of something on which they are chewing. There is also evidence that these cultures used a mixture of coca leaves and saliva as an anesthetic for the performance of trepanation. When the Spanish arrived in South America, most at first ignored aboriginal claims that the leaf gave them strength and energy, and declared the practice of chewing it the work of the Devil. But after discovering that these claims were true, they legalized and taxed the leaf, taking 10% off the value of each crop. In 1569, Spanish botanist Nicolás Monardes described the indigenous peoples' practice of chewing a mixture of tobacco and coca leaves to induce "great contentment", and in 1609 Padre Blas Valera also wrote about the practice. Although the stimulant and hunger-suppressant properties of coca had been known for many centuries, the isolation of the cocaine alkaloid was not achieved until 1855. Various European scientists had attempted to isolate cocaine, but none had been successful for two reasons: the knowledge of chemistry required was insufficient at the time, and contemporary conditions of sea-shipping from South America could degrade the cocaine in the plant samples available to European chemists. The cocaine alkaloid was first isolated by the German chemist Friedrich Gaedcke in 1855. Gaedcke named the alkaloid "erythroxyline" and published a description in the journal "Archiv der Pharmazie." In 1856, Friedrich Wöhler asked Dr. Carl Scherzer, a scientist aboard the "Novara" (an Austrian frigate sent by Emperor Franz Joseph to circle the globe), to bring him a large amount of coca leaves from South America. In 1859, the ship finished its travels and Wöhler received a trunk full of coca. Wöhler passed on the leaves to Albert Niemann, a PhD student at the University of Göttingen in Germany, who then developed an improved purification process. 
Niemann described every step he took to isolate cocaine in his dissertation titled "Über eine neue organische Base in den Cocablättern" ("On a New Organic Base in the Coca Leaves"), which was published in 1860—it earned him his PhD and is now in the British Library. He wrote of the alkaloid's "colourless transparent prisms" and said that "Its solutions have an alkaline reaction, a bitter taste, promote the flow of saliva and leave a peculiar numbness, followed by a sense of cold when applied to the tongue." Niemann named the alkaloid "cocaine" from "coca" (from Quechua "cuca") plus the suffix "-ine". Because of its use as a local anesthetic, the suffix "-caine" was later extracted and used to form the names of synthetic local anesthetics. The first synthesis and elucidation of the structure of the cocaine molecule was by Richard Willstätter in 1898. It was the first biomimetic synthesis of an organic structure recorded in academic chemical literature. The synthesis started from tropinone, a related natural product, and took five steps. With the discovery of this new alkaloid, Western medicine was quick to exploit the possible uses of this plant. In 1879, Vassili von Anrep, of the University of Würzburg, devised an experiment to demonstrate the analgesic properties of the newly discovered alkaloid. He prepared two separate jars, one containing a cocaine-salt solution and the other containing merely salt water. He then submerged a frog's legs into the two jars, one leg in the treatment and one in the control solution, and proceeded to stimulate the legs in several different ways. The leg that had been immersed in the cocaine solution reacted very differently from the leg that had been immersed in salt water. Karl Koller (a close associate of Sigmund Freud, who would write about cocaine later) experimented with cocaine for ophthalmic usage. In an infamous experiment in 1884, he experimented upon himself by applying a cocaine solution to his own eye and then pricking it with pins. His findings were presented to the Heidelberg Ophthalmological Society. Also in 1884, Jellinek demonstrated the effects of cocaine as a respiratory system anesthetic. In 1885, William Halsted demonstrated nerve-block anesthesia, and James Leonard Corning demonstrated peridural anesthesia. 1898 saw Heinrich Quincke use cocaine for spinal anesthesia. In 1859, an Italian doctor, Paolo Mantegazza, returned from Peru, where he had witnessed first-hand the use of coca by the local indigenous peoples. He proceeded to experiment on himself and, upon his return to Milan, he wrote a paper in which he described the effects. In this paper he declared coca and cocaine (at the time they were assumed to be the same) to be useful medicinally, in the treatment of "a furred tongue in the morning, flatulence, and whitening of the teeth." A chemist named Angelo Mariani, who read Mantegazza's paper, became immediately intrigued with coca and its economic potential. In 1863, Mariani started marketing a wine called Vin Mariani, which had been treated with coca leaves, to become cocawine. The ethanol in wine acted as a solvent and extracted the cocaine from the coca leaves, altering the drink's effect. It contained 6 mg cocaine per ounce of wine, but Vin Mariani that was to be exported contained 7.2 mg per ounce, to compete with the higher cocaine content of similar drinks in the United States. 
A "pinch of coca leaves" was included in John Styth Pemberton's original 1886 recipe for Coca-Cola, though the company began using decocainized leaves in 1906 when the Pure Food and Drug Act was passed. In 1879 cocaine began to be used to treat morphine addiction. Cocaine was introduced into clinical use as a local anesthetic in Germany in 1884, about the same time as Sigmund Freud published his work "Über Coca", in which he wrote that cocaine causes: In 1885 the U.S. manufacturer Parke-Davis sold cocaine in various forms, including cigarettes, powder, and even a cocaine mixture that could be injected directly into the user's veins with the included needle. The company promised that its cocaine products would "supply the place of food, make the coward brave, the silent eloquent and render the sufferer insensitive to pain." By the late Victorian era, cocaine use had appeared as a vice in literature. For example, it was injected by Arthur Conan Doyle's fictional Sherlock Holmes, generally to offset the boredom he felt when he was not working on a case. In early 20th-century Memphis, Tennessee, cocaine was sold in neighborhood drugstores on Beale Street, costing five or ten cents for a small boxful. Stevedores along the Mississippi River used the drug as a stimulant, and white employers encouraged its use by black laborers. In 1909, Ernest Shackleton took "Forced March" brand cocaine tablets to Antarctica, as did Captain Scott a year later on his ill-fated journey to the South Pole. During the mid-1940s, amidst World War II, cocaine was considered for inclusion as an ingredient of a future generation of 'pep pills' for the German military, code named D-IX. In modern popular culture, references to cocaine are common. The drug has a glamorous image associated with the wealthy, famous and powerful, and is said to make users "feel rich and beautiful". In addition the pace of modern society − such as in finance − gives many the incentive to make use of the drug. In many countries, cocaine is a popular recreational drug. In the United States, the development of "crack" cocaine introduced the substance to a generally poorer inner-city market. Use of the powder form has stayed relatively constant, experiencing a new height of use during the late 1990s and early 2000s in the U.S., and has become much more popular in the last few years in the UK. Cocaine use is prevalent across all socioeconomic strata, including age, demographics, economic, social, political, religious, and livelihood. The estimated U.S. cocaine market exceeded US$70 billion in street value for the year 2005, exceeding revenues by corporations such as Starbucks. There is a tremendous demand for cocaine in the U.S. market, particularly among those who are making incomes affording luxury spending, such as single adults and professionals with discretionary income. Cocaine's status as a club drug shows its immense popularity among the "party crowd". In 1995 the World Health Organization (WHO) and the United Nations Interregional Crime and Justice Research Institute (UNICRI) announced in a press release the publication of the results of the largest global study on cocaine use ever undertaken. However, a decision by an American representative in the World Health Assembly banned the publication of the study, because it seemed to make a case for the positive uses of cocaine. 
An excerpt of the report strongly conflicted with accepted paradigms, for example "that occasional cocaine use does not typically lead to severe or even minor physical or social problems." In the sixth meeting of the B committee, the US representative threatened that "If World Health Organization activities relating to drugs failed to reinforce proven drug control approaches, funds for the relevant programs should be curtailed". This led to the decision to discontinue publication. A part of the study was recovered and published in 2010, including profiles of cocaine use in 20 countries, but the full results remain unavailable. In October 2010 it was reported that the use of cocaine in Australia had doubled since monitoring began in 2003. A problem with illegal cocaine use, especially in the higher volumes used to combat fatigue (rather than increase euphoria) by long-term users, is the risk of ill effects or damage caused by the compounds used in adulteration. Cutting or "stepping on" the drug is commonplace, using compounds which simulate ingestion effects, such as Novocain (procaine), which produces temporary anesthesia (many users believe a strong numbing effect is the sign of strong or pure cocaine), and ephedrine or similar stimulants, which produce an increased heart rate. The normal adulterants for profit are inactive sugars, usually mannitol, creatine, or glucose; introducing active adulterants gives the illusion of purity and 'stretches' the product so that a dealer can sell more than would otherwise be possible. The sugar adulterants allow the dealer to sell the product for a higher price because of the illusion of purity, and to sell more of it at that higher price, enabling dealers to significantly increase revenue with little additional cost for the adulterants. A 2007 study by the European Monitoring Centre for Drugs and Drug Addiction showed that the purity of street-purchased cocaine was often under 5% and on average under 50%. The production, distribution, and sale of cocaine products are restricted (and illegal in most contexts) in most countries, as regulated by the Single Convention on Narcotic Drugs and the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. In the United States, the manufacture, importation, possession, and distribution of cocaine are additionally regulated by the 1970 Controlled Substances Act. Some countries, such as Peru and Bolivia, permit the cultivation of coca leaf for traditional consumption by the local indigenous population, but nevertheless prohibit the production, sale, and consumption of cocaine. The amount of coca a farmer may legally grow each year is governed by laws such as the Bolivian Cato accord. In addition, some parts of Europe, the United States, and Australia allow processed cocaine for medicinal uses only. Cocaine is a Schedule 8 prohibited substance in Australia under the Poisons Standard (July 2016). A Schedule 8 substance is a controlled drug – a substance which should be available for use but requires restriction of manufacture, supply, distribution, possession and use to reduce abuse, misuse and physical or psychological dependence. In Western Australia, under the Misuse of Drugs Act 1981, 4.0 g of cocaine is the amount of prohibited drug that determines the court of trial, 2.0 g is the amount required for the presumption of intention to sell or supply, and 28.0 g is the amount required for purposes of drug trafficking. 
The US federal government instituted a national labeling requirement for cocaine and cocaine-containing products through the Pure Food and Drug Act of 1906. The next important federal regulation was the Harrison Narcotics Tax Act of 1914. While this act is often seen as the start of prohibition, the act itself was not actually a prohibition on cocaine, but instead set up a regulatory and licensing regime. The Harrison Act did not recognize addiction as a treatable condition, and therefore the therapeutic provision of cocaine, heroin, or morphine to such individuals was outlawed, leading a 1915 editorial in the journal "American Medicine" to remark that the addict "is denied the medical care he urgently needs, open, above-board sources from which he formerly obtained his drug supply are closed to him, and he is driven to the underworld where he can get his drug, but of course, surreptitiously and in violation of the law." The Harrison Act left manufacturers of cocaine untouched so long as they met certain purity and labeling standards. Although cocaine was typically illegal to sell and legal outlets were rarer, the quantities of legal cocaine produced declined very little. Legal cocaine quantities did not decrease until the Jones–Miller Act of 1922 put serious restrictions on cocaine manufacturers. In 2004, according to the United Nations, 589 tonnes of cocaine were seized globally by law enforcement authorities. Colombia seized 188 t, the United States 166 t, Europe 79 t, Peru 14 t, Bolivia 9 t, and the rest of the world 133 t. Because of the drug's potential for addiction and overdose, cocaine is generally treated as a "hard drug", with severe penalties for possession and trafficking. Demand remains high, and consequently, black market cocaine is quite expensive. Unprocessed cocaine, such as coca leaves, is occasionally purchased and sold, but this is exceedingly rare, as it is much easier and more profitable to conceal and smuggle the drug in powdered form. The scale of the market is immense: 770 tonnes at $100 per gram retail amounts to up to $77 billion. Colombia is, as of 2019, the world's largest cocaine producer, with production more than tripling since 2013. Three-quarters of the world's annual yield of cocaine has been produced in Colombia, both from cocaine base imported from Peru (primarily the Huallaga Valley) and Bolivia, and from locally grown coca. There was a 28% increase over the amount of potentially harvestable coca plants grown in Colombia in 1998. This, combined with crop reductions in Bolivia and Peru, made Colombia the nation with the largest area of coca under cultivation after the mid-1990s. Coca grown for traditional purposes by indigenous communities, a use which is still present and is permitted by Colombian laws, makes up only a small fraction of total coca production, most of which is used for the illegal drug trade. An interview with a coca farmer published in 2003 described a mode of production by acid-base extraction that has changed little since 1905. Roughly of leaves were harvested per hectare, six times per year. The leaves were dried for half a day, then chopped into small pieces with a string trimmer and sprinkled with a small amount of powdered cement (replacing sodium carbonate from former times). Several hundred pounds of this mixture were soaked in of gasoline for a day, then the gasoline was removed and the leaves were pressed for remaining liquid, after which they could be discarded. 
Then battery acid (weak sulfuric acid) was used, one bucket per of leaves, to create a phase separation in which the cocaine free base in the gasoline was acidified and extracted into a few buckets of "murky-looking smelly liquid". Once powdered caustic soda was added to this, the cocaine precipitated and could be removed by filtration through a cloth. The resulting material, when dried, was termed "pasta" and sold by the farmer. The 3750 pound yearly harvest of leaves from a hectare produced of "pasta", approximately 40–60% cocaine. Repeated recrystallization from solvents, producing "pasta lavada" and eventually crystalline cocaine, was performed at specialized laboratories after the sale. Attempts to eradicate coca fields through the use of defoliants have devastated part of the farming economy in some coca growing regions of Colombia, and strains appear to have been developed that are more resistant or immune to their use. Whether these strains are natural mutations or the product of human tampering is unclear. These strains have also been shown to be more potent than those previously grown, increasing profits for the drug cartels responsible for the exporting of cocaine. Although production fell temporarily, coca crops rebounded in numerous smaller fields in Colombia, rather than the larger plantations. The cultivation of coca has become an attractive economic decision for many growers due to the combination of several factors, including the lack of other employment alternatives, the lower profitability of alternative crops in official crop substitution programs, the eradication-related damage to non-drug farms, the spread of new strains of the coca plant, and persistent worldwide demand. The latest estimate provided by U.S. authorities of the annual production of cocaine in Colombia is 290 metric tons. As of the end of 2011, seizures of Colombian cocaine carried out in different countries had totaled 351.8 metric tons, i.e. 121.3% of Colombia's annual production according to the U.S. Department of State's estimates. Synthetic cocaine would be highly desirable to the illegal drug industry as it would eliminate the high visibility and low reliability of offshore sources and international smuggling, replacing them with clandestine domestic laboratories, as are common for illicit methamphetamine. However, natural cocaine remains the lowest cost and highest quality supply of cocaine. Actual full synthesis of cocaine is rarely done. Formation of inactive stereoisomers (cocaine has 4 chiral centres – 1R, 2R, 3S, and 5S, 2 of them dependent, hence a total potential of 8 possible stereoisomers) plus synthetic by-products limits the yield and purity. Names like "synthetic cocaine" and "new cocaine" have been misapplied to phencyclidine (PCP) and various designer drugs. Organized criminal gangs operating on a large scale dominate the cocaine trade. Most cocaine is grown and processed in South America, particularly in Colombia, Bolivia, and Peru, and smuggled into the United States and Europe, the United States being the world's largest consumer, where it is sold at huge markups: usually $80–120 for 1 gram and $250–300 for 3.5 grams (an eighth of an ounce, or an "eight ball") in the US. , cocaine shipments from South America transported through Mexico or Central America were generally moved over land or by air to staging sites in northern Mexico. The cocaine is then broken down into smaller loads for smuggling across the U.S.–Mexico border. 
The primary cocaine importation points in the United States have been in Arizona, southern California, southern Florida, and Texas. Typically, land vehicles are driven across the U.S.–Mexico border. Sixty-five percent of cocaine enters the United States through Mexico, and the vast majority of the rest enters through Florida. , the Sinaloa Cartel is the most active drug cartel involved in smuggling illicit drugs like cocaine into the United States and trafficking them throughout the United States. Cocaine traffickers from Colombia and Mexico have established a labyrinth of smuggling routes throughout the Caribbean, the Bahama Island chain, and South Florida. They often hire traffickers from Mexico or the Dominican Republic to transport the drug using a variety of smuggling techniques to U.S. markets. These include airdrops of in the Bahama Islands or off the coast of Puerto Rico, mid-ocean boat-to-boat transfers of , and the commercial shipment of tonnes of cocaine through the port of Miami. Another route of cocaine traffic goes through Chile, which is primarily used for cocaine produced in Bolivia since the nearest seaports lie in northern Chile. The arid Bolivia–Chile border is easily crossed by 4×4 vehicles that then head to the seaports of Iquique and Antofagasta. While the price of cocaine is higher in Chile than in Peru and Bolivia, the final destination is usually Europe, especially Spain where drug dealing networks exist among South American immigrants. Cocaine is also carried in small, concealed, kilogram quantities across the border by couriers known as "mules" (or "mulas"), who cross a border either legally, for example, through a port or airport, or illegally elsewhere. The drugs may be strapped to the waist or legs or hidden in bags, or hidden in the body. If the mule gets through without being caught, the gangs will reap most of the profits. If he or she is caught, however, gangs will sever all links and the mule will usually stand trial for trafficking alone. Bulk cargo ships are also used to smuggle cocaine to staging sites in the western Caribbean–Gulf of Mexico area. These vessels are typically 150–250-foot (50–80 m) coastal freighters that carry an average cocaine load of approximately 2.5 tonnes. Commercial fishing vessels are also used for smuggling operations. In areas with a high volume of recreational traffic, smugglers use the same types of vessels, such as go-fast boats, as those used by the local populations. Sophisticated drug subs are the latest tool drug runners are using to bring cocaine north from Colombia, it was reported on 20 March 2008. Although the vessels were once viewed as a quirky sideshow in the drug war, they are becoming faster, more seaworthy, and capable of carrying bigger loads of drugs than earlier models, according to those charged with catching them. Cocaine is readily available in all major countries' metropolitan areas. According to the "Summer 1998 Pulse Check", published by the U.S. Office of National Drug Control Policy, cocaine use had stabilized across the country, with a few increases reported in San Diego, Bridgeport, Miami, and Boston. In the West, cocaine usage was lower, which was thought to be due to a switch to methamphetamine among some users; methamphetamine is cheaper, three and a half times more powerful, and lasts 12–24 times longer with each dose. Nevertheless, the number of cocaine users remain high, with a large concentration among urban youth. 
In addition to the amounts previously mentioned, cocaine can be sold in "bill sizes": for example, $10 might purchase a "dime bag", a very small amount (0.1–0.15 g) of cocaine. Twenty dollars might purchase 0.15–0.3 g. However, in lower Texas, it is sold more cheaply because it is easier to obtain: a dime for $10 is 0.4 g, a 20 is 0.8–1.0 g, and an 8-ball (3.5 g) is sold for $60 to $80, depending on the quality and dealer. These amounts and prices are very popular among young people because they are inexpensive and easily concealed on one's body. Quality and price can vary dramatically depending on supply and demand, and on geographic region. In 2008, the European Monitoring Centre for Drugs and Drug Addiction reported that the typical retail price of cocaine varied between €50 and €75 per gram in most European countries, although Cyprus, Romania, Sweden and Turkey reported much higher values. World annual cocaine consumption, as of 2000, stood at around 600 tonnes, with the United States consuming around 300 t, 50% of the total, Europe about 150 t, 25% of the total, and the rest of the world the remaining 150 t or 25%. It is estimated that 1.5 million people in the United States used cocaine in 2010, down from 2.4 million in 2006. Conversely, cocaine use appears to be increasing in Europe, with the highest prevalences in Spain, the United Kingdom, Italy, and Ireland. The 2010 UN World Drug Report concluded that "it appears that the North American cocaine market has declined in value from US$47 billion in 1998 to US$38 billion in 2008. Between 2006 and 2008, the value of the market remained basically stable". In 2005, researchers proposed the use of cocaine in conjunction with phenylephrine administered in the form of an eye drop as a diagnostic test for Parkinson's disease.
https://en.wikipedia.org/wiki?curid=7701
Cartesian coordinate system A Cartesian coordinate system is a coordinate system that specifies each point uniquely in a plane by a set of numerical coordinates, which are the signed distances to the point from two fixed perpendicular oriented lines, measured in the same unit of length. Each reference line is called a "coordinate axis" or just "axis" (plural "axes") of the system, and the point where they meet is its "origin", at ordered pair (0, 0). The coordinates can also be defined as the positions of the perpendicular projections of the point onto the two axes, expressed as signed distances from the origin. One can use the same principle to specify the position of any point in three-dimensional space by three Cartesian coordinates, its signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines). In general, "n" Cartesian coordinates (an element of real "n"-space) specify the point in an "n"-dimensional Euclidean space for any dimension "n". These coordinates are equal, up to sign, to distances from the point to "n" mutually perpendicular hyperplanes. The invention of Cartesian coordinates in the 17th century by René Descartes (Latinized name: "Cartesius") revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2, centered at the origin of the plane, may be described as the set of all points whose coordinates "x" and "y" satisfy the equation $x^2 + y^2 = 4$. Cartesian coordinates are the foundation of analytic geometry, and provide enlightening geometric interpretations for many other branches of mathematics, such as linear algebra, complex analysis, differential geometry, multivariate calculus, group theory and more. A familiar example is the concept of the graph of a function. Cartesian coordinates are also essential tools for most applied disciplines that deal with geometry, including astronomy, physics, engineering and many more. They are the most common coordinate system used in computer graphics, computer-aided geometric design and other geometry-related data processing. The adjective "Cartesian" refers to the French mathematician and philosopher René Descartes, who published this idea in 1637. It was independently discovered by Pierre de Fermat, who also worked in three dimensions, although Fermat did not publish the discovery. The French cleric Nicole Oresme used constructions similar to Cartesian coordinates well before the time of Descartes and Fermat. Both Descartes and Fermat used a single axis in their treatments and had a variable length measured in reference to this axis. The concept of using a pair of axes was introduced later, after Descartes' "La Géométrie" was translated into Latin in 1649 by Frans van Schooten and his students. These commentators introduced several concepts while trying to clarify the ideas contained in Descartes' work. The development of the Cartesian coordinate system would play a fundamental role in the development of the calculus by Isaac Newton and Gottfried Wilhelm Leibniz. The two-coordinate description of the plane was later generalized into the concept of vector spaces. 
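As a minimal illustration of how a Cartesian equation describes a shape, the following Python sketch (the function name on_circle and the use of math.isclose are illustrative choices, not part of the article) tests whether given points satisfy the circle equation x^2 + y^2 = 4:

import math

def on_circle(x, y, radius=2.0):
    # A point (x, y) lies on the circle of the given radius centered at
    # the origin exactly when x**2 + y**2 equals radius**2.
    return math.isclose(x * x + y * y, radius * radius)

print(on_circle(2.0, 0.0))                      # True
print(on_circle(math.sqrt(2), math.sqrt(2)))    # True: 2 + 2 = 4
print(on_circle(1.0, 1.0))                      # False: 1 + 1 = 2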
Many other coordinate systems have been developed since Descartes, such as the polar coordinates for the plane, and the spherical and cylindrical coordinates for three-dimensional space. Choosing a Cartesian coordinate system for a one-dimensional space—that is, for a straight line—involves choosing a point "O" of the line (the origin), a unit of length, and an orientation for the line. An orientation chooses which of the two half-lines determined by "O" is positive and which is negative; we then say that the line "is oriented" (or "points") from the negative half towards the positive half. Then each point "P" of the line can be specified by its distance from "O", taken with a + or − sign depending on which half-line contains "P". A line with a chosen Cartesian system is called a number line. Every real number has a unique location on the line. Conversely, every point on the line can be interpreted as a number in an ordered continuum such as the real numbers. A Cartesian coordinate system in two dimensions (also called a rectangular coordinate system or an orthogonal coordinate system) is defined by an ordered pair of perpendicular lines (axes), a single unit of length for both axes, and an orientation for each axis. The point where the axes meet is taken as the origin for both, thus turning each axis into a number line. For any point "P", a line is drawn through "P" perpendicular to each axis, and the position where it meets the axis is interpreted as a number. The two numbers, in that chosen order, are the "Cartesian coordinates" of "P". The reverse construction allows one to determine the point "P" given its coordinates. The first and second coordinates are called the "abscissa" and the "ordinate" of "P", respectively; and the point where the axes meet is called the "origin" of the coordinate system. The coordinates are usually written as two numbers in parentheses, in that order, separated by a comma. Thus the origin has coordinates (0, 0), and the points on the positive half-axes, one unit away from the origin, have coordinates (1, 0) and (0, 1). In mathematics, physics, and engineering, the first axis is usually defined or depicted as horizontal and oriented to the right, and the second axis is vertical and oriented upwards. (However, in some computer graphics contexts, the ordinate axis may be oriented downwards.) The origin is often labeled "O", and the two coordinates are often denoted by the letters "X" and "Y", or "x" and "y". The axes may then be referred to as the "X"-axis and "Y"-axis. The choices of letters come from the original convention, which is to use the latter part of the alphabet to indicate unknown values. The first part of the alphabet was used to designate known values. A Euclidean plane with a chosen Cartesian coordinate system is called a "Cartesian plane". In a Cartesian plane one can define canonical representatives of certain geometric figures, such as the unit circle (with radius equal to the length unit, and center at the origin), the unit square (whose diagonal has endpoints at (0, 0) and (1, 1)), the unit hyperbola, and so on. The two axes divide the plane into four right angles, called "quadrants". The quadrants may be named or numbered in various ways, but the quadrant where all coordinates are positive is usually called the "first quadrant". If the coordinates of a point are (x, y), then its distances from the "X"-axis and from the "Y"-axis are |"y"| and |"x"|, respectively; where |...| denotes the absolute value of a number. 
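As a small hedged sketch of this last fact in Python (the helper name axis_distances is an invented example, not a standard function), the distances of a point (x, y) from the two axes are simply the absolute values of its coordinates:

def axis_distances(point):
    # For a point (x, y), the distance to the X-axis is |y| and the
    # distance to the Y-axis is |x|.
    x, y = point
    return abs(y), abs(x)

print(axis_distances((3.0, -10.5)))   # (10.5, 3.0)
print(axis_distances((-2.0, 0.0)))    # (0.0, 2.0): the point lies on the X-axis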
A Cartesian coordinate system for a three-dimensional space consists of an ordered triplet of lines (the "axes") that go through a common point (the "origin"), and are pair-wise perpendicular; an orientation for each axis; and a single unit of length for all three axes. As in the two-dimensional case, each axis becomes a number line. For any point "P" of space, one considers a hyperplane through "P" perpendicular to each coordinate axis, and interprets the point where that hyperplane cuts the axis as a number. The Cartesian coordinates of "P" are those three numbers, in the chosen order. The reverse construction determines the point "P" given its three coordinates. Alternatively, each coordinate of a point "P" can be taken as the distance from "P" to the hyperplane defined by the other two axes, with the sign determined by the orientation of the corresponding axis. Each pair of axes defines a "coordinate hyperplane". These hyperplanes divide space into eight trihedra, called "octants". The octants are: (+x, +y, +z), (−x, +y, +z), (+x, +y, −z), (−x, +y, −z), (+x, −y, +z), (−x, −y, +z), (+x, −y, −z), and (−x, −y, −z). The coordinates are usually written as three numbers (or algebraic formulas) surrounded by parentheses and separated by commas. Thus, the origin has coordinates (0, 0, 0), and the unit points on the three axes are (1, 0, 0), (0, 1, 0), and (0, 0, 1). There are no standard names for the coordinates in the three axes (however, the terms "abscissa", "ordinate" and "applicate" are sometimes used). The coordinates are often denoted by the letters "X", "Y", and "Z", or "x", "y", and "z". The axes may then be referred to as the "X"-axis, "Y"-axis, and "Z"-axis, respectively. Then the coordinate hyperplanes can be referred to as the "XY"-plane, "YZ"-plane, and "XZ"-plane. In mathematics, physics, and engineering contexts, the first two axes are often defined or depicted as horizontal, with the third axis pointing up. In that case the third coordinate may be called "height" or "altitude". The orientation is usually chosen so that the 90 degree angle from the first axis to the second axis looks counter-clockwise when seen from the point (0, 0, 1); a convention that is commonly called "the right hand rule". Since Cartesian coordinates are unique and non-ambiguous, the points of a Cartesian plane can be identified with pairs of real numbers; that is, with the Cartesian product $\mathbb{R}^2 = \mathbb{R} \times \mathbb{R}$, where $\mathbb{R}$ is the set of all real numbers. In the same way, the points in any Euclidean space of dimension "n" can be identified with the tuples (lists) of "n" real numbers, that is, with the Cartesian product $\mathbb{R}^n$. The concept of Cartesian coordinates generalizes to allow axes that are not perpendicular to each other, and/or different units along each axis. In that case, each coordinate is obtained by projecting the point onto one axis along a direction that is parallel to the other axis (or, in general, to the hyperplane defined by all the other axes). In such an oblique coordinate system the computations of distances and angles must be modified from that in standard Cartesian systems, and many standard formulas (such as the Pythagorean formula for the distance) do not hold (see affine plane). The Cartesian coordinates of a point are usually written in parentheses and separated by commas. The origin is often labelled with the capital letter "O". In analytic geometry, unknown or generic coordinates are often denoted by the letters ("x", "y") in the plane, and ("x", "y", "z") in three-dimensional space. 
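As a rough Python sketch of the octant convention just described (the function name octant_signs is an illustrative helper, not a standard library routine), the octant of a 3D point can be read off from the signs of its coordinates:

def octant_signs(point):
    # Label each coordinate of a 3D point with its sign; a zero coordinate
    # means the point lies on a coordinate hyperplane, between octants.
    return tuple('+' if c > 0 else '-' if c < 0 else '0' for c in point)

print(octant_signs((1.0, -2.5, 3.0)))    # ('+', '-', '+')
print(octant_signs((-4.0, -1.0, -7.0)))  # ('-', '-', '-')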
The custom of denoting coordinates by such letters comes from a convention of algebra, which uses letters near the end of the alphabet for unknown values (such as the coordinates of points in many geometric problems), and letters near the beginning for given quantities. These conventional names are often used in other domains, such as physics and engineering, although other letters may be used. For example, in a graph showing how a pressure varies with time, the graph coordinates may be denoted "p" and "t". Each axis is usually named after the coordinate which is measured along it; so one says the "x-axis", the "y-axis", the "t-axis", etc. Another common convention for coordinate naming is to use subscripts, as ("x"1, "x"2, ..., "x""n") for the "n" coordinates in an "n"-dimensional space, especially when "n" is greater than 3 or unspecified. Some authors prefer the numbering ("x"0, "x"1, ..., "x""n"−1). These notations are especially advantageous in computer programming: by storing the coordinates of a point as an array, instead of a record, the subscript can serve to index the coordinates. In mathematical illustrations of two-dimensional Cartesian systems, the first coordinate (traditionally called the abscissa) is measured along a horizontal axis, oriented from left to right. The second coordinate (the ordinate) is then measured along a vertical axis, usually oriented from bottom to top. Young children learning the Cartesian system commonly learn the order in which to read the values before cementing the "x"-, "y"-, and "z"-axis concepts, by starting with 2D mnemonics (e.g. 'Walk along the hall then up the stairs', akin to moving straight across the "x"-axis and then vertically up the "y"-axis). Computer graphics and image processing, however, often use a coordinate system with the "y"-axis oriented downwards on the computer display. This convention developed in the 1960s (or earlier) from the way that images were originally stored in display buffers. For three-dimensional systems, a convention is to portray the "xy"-plane horizontally, with the "z"-axis added to represent height (positive up). Furthermore, there is a convention to orient the "x"-axis toward the viewer, biased either to the right or left. If a diagram (3D projection or 2D perspective drawing) shows the "x"- and "y"-axis horizontally and vertically, respectively, then the "z"-axis should be shown pointing "out of the page" towards the viewer or camera. In such a 2D diagram of a 3D coordinate system, the "z"-axis would appear as a line or ray pointing down and to the left or down and to the right, depending on the presumed viewer or camera perspective. In any diagram or display, the orientation of the three axes, as a whole, is arbitrary. However, the orientation of the axes relative to each other should always comply with the right-hand rule, unless specifically stated otherwise. All laws of physics and math assume this right-handedness, which ensures consistency. For 3D diagrams, the names "abscissa" and "ordinate" are rarely used for "x" and "y", respectively. When they are, the "z"-coordinate is sometimes called the applicate. The words "abscissa", "ordinate" and "applicate" are sometimes used to refer to coordinate axes rather than the coordinate values. The axes of a two-dimensional Cartesian system divide the plane into four infinite regions, called quadrants, each bounded by two half-axes. These are often numbered from 1st to 4th and denoted by Roman numerals according to the signs of the two coordinates: I (+,+), II (−,+), III (−,−), and IV (+,−). 
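As a small hedged Python sketch of this quadrant numbering (the function name quadrant is an invented helper, not a standard routine):

def quadrant(x, y):
    # Quadrant numbering by coordinate signs: I (+,+), II (-,+),
    # III (-,-), IV (+,-). Points on an axis belong to no quadrant.
    if x > 0 and y > 0:
        return "I"
    if x < 0 and y > 0:
        return "II"
    if x < 0 and y < 0:
        return "III"
    if x > 0 and y < 0:
        return "IV"
    return None

print(quadrant(3.0, 2.0))     # I
print(quadrant(-1.0, -5.0))   # III
print(quadrant(0.0, 4.0))     # None: the point lies on the Y-axis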
When the axes are drawn according to the mathematical custom, the numbering goes counter-clockwise starting from the upper right ("north-east") quadrant. Similarly, a three-dimensional Cartesian system defines a division of space into eight regions or octants, according to the signs of the coordinates of the points. The convention used for naming a specific octant is to list its signs, e.g. (+ + +) or (− + −). The generalization of the quadrant and octant to an arbitrary number of dimensions is the orthant, and a similar naming system applies. The Euclidean distance between two points of the plane with Cartesian coordinates $(x_1, y_1)$ and $(x_2, y_2)$ is $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$. This is the Cartesian version of Pythagoras's theorem. In three-dimensional space, the distance between points $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ is $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}$, which can be obtained by two consecutive applications of Pythagoras' theorem. The Euclidean transformations or Euclidean motions are the (bijective) mappings of points of the Euclidean plane to themselves which preserve distances between points. There are four types of these mappings (also called isometries): translations, rotations, reflections and glide reflections. Translating a set of points of the plane, preserving the distances and directions between them, is equivalent to adding a fixed pair of numbers $(a, b)$ to the Cartesian coordinates of every point in the set. That is, if the original coordinates of a point are $(x, y)$, after the translation they will be $(x + a, y + b)$. To rotate a figure counterclockwise around the origin by some angle $\theta$ is equivalent to replacing every point with coordinates $(x, y)$ by the point with coordinates $(x', y')$, where $x' = x\cos\theta - y\sin\theta$ and $y' = x\sin\theta + y\cos\theta$. If $(x, y)$ are the Cartesian coordinates of a point, then $(-x, y)$ are the coordinates of its reflection across the second coordinate axis (the y-axis), as if that line were a mirror. Likewise, $(x, -y)$ are the coordinates of its reflection across the first coordinate axis (the x-axis). In more generality, reflection across a line through the origin making an angle $\theta$ with the x-axis is equivalent to replacing every point with coordinates $(x, y)$ by the point with coordinates $(x', y')$, where $x' = x\cos 2\theta + y\sin 2\theta$ and $y' = x\sin 2\theta - y\cos 2\theta$. A glide reflection is the composition of a reflection across a line followed by a translation in the direction of that line. It can be seen that the order of these operations does not matter (the translation can come first, followed by the reflection). These Euclidean transformations of the plane can all be described in a uniform way by using matrices. The result $(x', y')$ of applying a Euclidean transformation to a point $(x, y)$ is given by the formula $(x', y') = (x, y)A + b$, where "A" is a 2×2 orthogonal matrix and $b = (b_1, b_2)$ is an arbitrary ordered pair of numbers; that is, $x' = x a_{11} + y a_{21} + b_1$ and $y' = x a_{12} + y a_{22} + b_2$, where $A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$. To be "orthogonal", the matrix "A" must have orthogonal rows of Euclidean length one, that is, $a_{11}a_{21} + a_{12}a_{22} = 0$ and $a_{11}^2 + a_{12}^2 = a_{21}^2 + a_{22}^2 = 1$. This is equivalent to saying that "A" times its transpose must be the identity matrix. If these conditions do not hold, the formula describes a more general affine transformation of the plane provided that the determinant of "A" is not zero. The formula defines a translation if and only if "A" is the identity matrix. The transformation is a rotation around some point if and only if "A" is a rotation matrix, meaning that it is orthogonal with $a_{11} = a_{22}$ and $a_{21} = -a_{12}$. A reflection or glide reflection is obtained when "A" is orthogonal with determinant −1. Assuming that translation is not used, transformations can be combined by simply multiplying the associated transformation matrices. Another way to represent coordinate transformations in Cartesian coordinates is through affine transformations. 
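Before turning to affine transformations, here is a hedged Python sketch of the Euclidean motions just described (the function names translate, rotate and reflect are ours, not part of the article), including a check that complex-number multiplication, discussed later in the article, gives the same rotation:

import math

def translate(point, a, b):
    # (x, y) -> (x + a, y + b)
    x, y = point
    return (x + a, y + b)

def rotate(point, theta):
    # Counterclockwise rotation about the origin:
    # x' = x*cos(theta) - y*sin(theta), y' = x*sin(theta) + y*cos(theta)
    x, y = point
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

def reflect(point, theta):
    # Reflection across the line through the origin at angle theta:
    # x' = x*cos(2t) + y*sin(2t), y' = x*sin(2t) - y*cos(2t)
    x, y = point
    return (x * math.cos(2 * theta) + y * math.sin(2 * theta),
            x * math.sin(2 * theta) - y * math.cos(2 * theta))

p = (1.0, 0.0)
print(translate(p, 3.0, -2.0))   # (4.0, -2.0)
print(rotate(p, math.pi / 2))    # approximately (0.0, 1.0)
print(reflect(p, math.pi / 4))   # reflection across y = x: approximately (0.0, 1.0)

# The same rotation via complex multiplication: (x + y*i) times
# (cos(theta) + i*sin(theta)) rotates the point counterclockwise by theta.
rotator = complex(math.cos(math.pi / 2), math.sin(math.pi / 2))
print(complex(*p) * rotator)     # approximately 1j, i.e. the point (0, 1)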
In affine transformations an extra dimension is added and all points are given a value of 1 for this extra dimension. The advantage of doing this is that point translations can be specified in the final column of matrix "A". In this way, all of the Euclidean transformations become expressible as matrix multiplications acting on points. The affine transformation is given by $\begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{21} & b_1 \\ a_{12} & a_{22} & b_2 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}$, the column-vector form of the transformation $(x', y') = (x, y)A + b$ above, with the translation components in the final column. Using affine transformations, multiple different Euclidean transformations, including translation, can be combined by simply multiplying the corresponding matrices. An example of an affine transformation which is not a Euclidean motion is given by scaling. To make a figure larger or smaller is equivalent to multiplying the Cartesian coordinates of every point by the same positive number "m". If $(x, y)$ are the coordinates of a point on the original figure, the corresponding point on the scaled figure has coordinates $(mx, my)$. If "m" is greater than 1, the figure becomes larger; if "m" is between 0 and 1, it becomes smaller. A shearing transformation will push the top of a square sideways to form a parallelogram. Horizontal shearing with shear factor "s" is defined by $(x', y') = (x + ys, y)$. Shearing can also be applied vertically: $(x', y') = (x, xs + y)$. Fixing or choosing the "x"-axis determines the "y"-axis up to direction. Namely, the "y"-axis is necessarily the perpendicular to the "x"-axis through the point marked 0 on the "x"-axis. But there is a choice of which of the two half lines on the perpendicular to designate as positive and which as negative. Each of these two choices determines a different orientation (also called "handedness") of the Cartesian plane. The usual way of orienting the plane, with the positive "x"-axis pointing right and the positive "y"-axis pointing up (and the "x"-axis being the "first" and the "y"-axis the "second" axis), is considered the "positive" or "standard" orientation, also called the "right-handed" orientation. A commonly used mnemonic for defining the positive orientation is the "right-hand rule". Placing a somewhat closed right hand on the plane with the thumb pointing up, the fingers point from the "x"-axis to the "y"-axis, in a positively oriented coordinate system. The other way of orienting the plane is following the "left hand rule", placing the left hand on the plane with the thumb pointing up. When pointing the thumb away from the origin along an axis towards positive, the curvature of the fingers indicates a positive rotation along that axis. Regardless of the rule used to orient the plane, rotating the coordinate system will preserve the orientation. Switching any two axes will reverse the orientation, but switching both will leave the orientation unchanged. Once the "x"- and "y"-axes are specified, they determine the line along which the "z"-axis should lie, but there are two possible orientations for this line. The two possible coordinate systems which result are called 'right-handed' and 'left-handed'. The standard orientation, where the "xy"-plane is horizontal and the "z"-axis points up (and the "x"- and the "y"-axis form a positively oriented two-dimensional coordinate system in the "xy"-plane if observed from "above" the "xy"-plane) is called right-handed or positive. The name derives from the right-hand rule. If the index finger of the right hand is pointed forward, the middle finger bent inward at a right angle to it, and the thumb placed at a right angle to both, the three fingers indicate the relative orientation of the "x"-, "y"-, and "z"-axes in a "right-handed" system. 
The thumb indicates the "x"-axis, the index finger the "y"-axis and the middle finger the "z"-axis. Conversely, if the same is done with the left hand, a left-handed system results. Figure 7 depicts a left and a right-handed coordinate system. Because a three-dimensional object is represented on the two-dimensional screen, distortion and ambiguity result. The axis pointing downward (and to the right) is also meant to point "towards" the observer, whereas the "middle"-axis is meant to point "away" from the observer. The red circle is "parallel" to the horizontal "xy"-plane and indicates rotation from the "x"-axis to the "y"-axis (in both cases). Hence the red arrow passes "in front of" the "z"-axis. Figure 8 is another attempt at depicting a right-handed coordinate system. Again, there is an ambiguity caused by projecting the three-dimensional coordinate system into the plane. Many observers see Figure 8 as "flipping in and out" between a convex cube and a concave "corner". This corresponds to the two possible orientations of the space. Seeing the figure as convex gives a left-handed coordinate system. Thus the "correct" way to view Figure 8 is to imagine the "x"-axis as pointing "towards" the observer and thus seeing a concave corner. A point in space in a Cartesian coordinate system may also be represented by a position vector, which can be thought of as an arrow pointing from the origin of the coordinate system to the point. If the coordinates represent spatial positions (displacements), it is common to represent the vector from the origin to the point of interest as $\mathbf{r}$. In two dimensions, the vector from the origin to the point with Cartesian coordinates (x, y) can be written as $\mathbf{r} = x\,\mathbf{i} + y\,\mathbf{j}$, where $\mathbf{i}$ and $\mathbf{j}$ are unit vectors in the direction of the "x"-axis and "y"-axis respectively, generally referred to as the "standard basis" (in some application areas these may also be referred to as versors). Similarly, in three dimensions, the vector from the origin to the point with Cartesian coordinates $(x, y, z)$ can be written as $\mathbf{r} = x\,\mathbf{i} + y\,\mathbf{j} + z\,\mathbf{k}$, where $\mathbf{k}$ is the unit vector in the direction of the "z"-axis. There is no "natural" interpretation of multiplying vectors to obtain another vector that works in all dimensions; however, there is a way to use complex numbers to provide such a multiplication. In a two-dimensional Cartesian plane, identify the point with coordinates $(x, y)$ with the complex number $x + iy$. Here, "i" is the imaginary unit and is identified with the point with coordinates $(0, 1)$, so it is not the unit vector in the direction of the "x"-axis. Since the complex numbers can be multiplied giving another complex number, this identification provides a means to "multiply" vectors. In a three-dimensional Cartesian space a similar identification can be made with a subset of the quaternions. Cartesian coordinates are an abstraction that has a multitude of possible applications in the real world. However, three constructive steps are involved in superimposing coordinates on a problem application. 1) Units of distance must be decided, defining the spatial size represented by the numbers used as coordinates. 2) An origin must be assigned to a specific spatial location or landmark, and 3) the orientation of the axes must be defined using available directional cues for all but one axis. Consider as an example superimposing 3D Cartesian coordinates over all points on the Earth (i.e. geospatial 3D). What units make sense? 
Kilometers are a good choice, since the original definition of the kilometer was geospatial—10 000 km equaling the surface distance from the Equator to the North Pole. Where to place the origin? Based on symmetry, the gravitational center of the Earth suggests a natural landmark (which can be sensed via satellite orbits). Finally, how to orient the X-, Y- and Z-axes? The axis of Earth's spin provides a natural orientation strongly associated with "up vs. down", so positive Z can adopt the direction from geocenter to North Pole. A location on the Equator is needed to define the X-axis, and the prime meridian stands out as a reference orientation, so the X-axis takes the orientation from geocenter out to 0 degrees longitude, 0 degrees latitude. Note that with three dimensions, and two perpendicular axis orientations pinned down for X and Z, the Y-axis is determined by the first two choices. In order to obey the right-hand rule, the Y-axis must point out from the geocenter to 90 degrees longitude, 0 degrees latitude. So what are the geocentric coordinates of the Empire State Building in New York City? Starting from a longitude of −73.985656 degrees, a latitude of 40.748433 degrees, and an Earth radius of 40,000/2π km, and transforming from spherical to Cartesian coordinates, you can estimate the geocentric coordinates of the Empire State Building, ("x", "y", "z") = (1330.53 km, –4635.75 km, 4155.46 km). GPS navigation relies on such geocentric coordinates. In engineering projects, agreement on the definition of coordinates is a crucial foundation. One cannot assume that coordinates come predefined for a novel application, so knowledge of how to erect a coordinate system where there is none is essential to applying René Descartes' thinking. While spatial applications employ identical units along all axes, in business and scientific applications, each axis may have different units of measurement associated with it (such as kilograms, seconds, pounds, etc.). Although four- and higher-dimensional spaces are difficult to visualize, the algebra of Cartesian coordinates can be extended relatively easily to four or more variables, so that certain calculations involving many variables can be done. (This sort of algebraic extension is what is used to define the geometry of higher-dimensional spaces.) Conversely, it is often helpful to use the geometry of Cartesian coordinates in two or three dimensions to visualize algebraic relationships between two or three of many non-spatial variables. The graph of a function or relation is the set of all points satisfying that function or relation. For a function of one variable, "f", the set of all points ("x", "y"), where "y" = "f"("x"), is the graph of the function "f". For a function "g" of two variables, the set of all points ("x", "y", "z"), where "z" = "g"("x", "y"), is the graph of the function "g". A sketch of the graph of such a function or relation would consist of all the salient parts of the function or relation which would include its relative extrema, its concavity and points of inflection, any points of discontinuity and its end behavior. All of these terms are more fully defined in calculus. Such graphs are useful in calculus to understand the nature and behavior of a function or relation.
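The Empire State Building calculation above can be reproduced with a few lines of Python (the function name geocentric and the perfectly spherical Earth model are simplifying assumptions, not part of the original derivation):

import math

def geocentric(lat_deg, lon_deg, radius_km=40000.0 / (2.0 * math.pi)):
    # Spherical-to-Cartesian conversion on a spherical Earth model:
    # x points toward 0 deg longitude on the Equator, y toward 90 deg
    # longitude on the Equator, and z toward the North Pole.
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = radius_km * math.cos(lat) * math.cos(lon)
    y = radius_km * math.cos(lat) * math.sin(lon)
    z = radius_km * math.sin(lat)
    return x, y, z

# Empire State Building: latitude 40.748433 N, longitude -73.985656 E.
print(geocentric(40.748433, -73.985656))
# roughly (1330.5, -4635.7, 4155.5) kilometres, matching the figures above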
https://en.wikipedia.org/wiki?curid=7706