Acceptance testing
In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests.
In systems engineering it may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.
In software testing, acceptance testing is also known as user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT), acceptance-test-driven development (ATDD) or field (acceptance) testing. Acceptance criteria are the criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity.
A smoke test may be used as an acceptance test prior to introducing a build of software to the main testing process.
Testing is a set of activities conducted to facilitate discovery and/or evaluation of properties of one or more items under test. Each individual test, known as a test case, exercises a set of predefined test activities, developed to drive the execution of the test item to meet test objectives; including correct implementation, error identification, quality verification and other valued detail. The test environment is usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures and/or documentation intended for or used to perform the testing of software.
UAT and OAT test cases are ideally derived in collaboration with business customers, business analysts, testers, and developers. It is essential that these tests include both business logic tests and operational environment conditions. The business customers (product owners) are the primary stakeholders of these tests. As the test conditions successfully achieve their acceptance criteria, the stakeholders are reassured that development is progressing in the right direction.
The acceptance test suite may need to be performed multiple times, as all of the test cases may not be executed within a single test iteration.
The acceptance test suite is run using predefined acceptance test procedures to direct the testers which data to use, the step-by-step processes to follow and the expected result following execution. The actual results are retained for comparison with the expected results. If the actual results match the expected results for each test case, the test case is said to pass. If the quantity of non-passing test cases does not breach the project's predetermined threshold, the test suite is said to pass. If it does, the system may either be rejected or accepted on conditions previously agreed between the sponsor and the manufacturer.
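As an illustration only, the following Python sketch shows one way such a procedure might be automated: each test case carries predefined input data and an expected result, actual results are compared with the expected results, and the suite is accepted only if the number of failing cases stays within an agreed threshold. The system under test, the data and the threshold are invented for this example and are not drawn from any standard or tool.

```python
# Hypothetical sketch of an acceptance-test run: predefined data, expected
# results, comparison of actual vs. expected, and a pass threshold agreed
# between sponsor and supplier. All names and values are illustrative.

def apply_discount(order_total: float, customer_type: str) -> float:
    """System under test: a pricing rule agreed with the business sponsor."""
    return order_total * (0.9 if customer_type == "loyalty" else 1.0)

# Acceptance test cases: (description, input data, expected result)
test_cases = [
    ("loyalty customer gets 10% off", (200.0, "loyalty"), 180.0),
    ("regular customer pays full price", (200.0, "regular"), 200.0),
    ("zero-value order stays zero", (0.0, "loyalty"), 0.0),
]

MAX_FAILURES = 0  # predetermined threshold of tolerated failing cases

failures = 0
for description, (total, customer), expected in test_cases:
    actual = apply_discount(total, customer)   # execute the test procedure
    passed = abs(actual - expected) < 1e-9     # compare actual vs. expected
    print(f"{'PASS' if passed else 'FAIL'}: {description} (got {actual})")
    failures += 0 if passed else 1

# The suite passes only if failures do not breach the agreed threshold.
print("Suite result:", "ACCEPTED" if failures <= MAX_FAILURES else "REJECTED")
```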
The anticipated result of a successful test execution is confidence that the developed product meets both the functional and non-functional requirements. The purpose of conducting acceptance testing is that once it is completed, and provided the acceptance criteria are met, the sponsors are expected to sign off on the product development or enhancement as satisfying the defined requirements (previously agreed between the business and the product provider/developer).
User acceptance testing (UAT) consists of a process of verifying that a solution works for the user. It is not system testing (ensuring software does not crash and meets documented requirements), but rather ensures that the solution will work for the user (i.e., tests that the user accepts the solution); software vendors often refer to this as "Beta testing".
This testing should be undertaken by a subject-matter expert (SME), preferably the owner or client of the solution under test, and provide a summary of the findings for confirmation to proceed after trial or review. In software development, UAT as one of the final stages of a project often occurs before a client or customer accepts the new system. Users of the system perform tests in line with what would occur in real-life scenarios.
It is important that the materials given to the tester be similar to the materials that the end user will have. Testers should be given real-life scenarios such as the three most common or difficult tasks that the users they represent will undertake.
The UAT acts as a final verification of the required business functionality and proper functioning of the system, emulating real-world conditions on behalf of the paying client or a specific large customer. If the software works as required and without issues during normal use, one can reasonably extrapolate the same level of stability in production.
User tests, usually performed by clients or by end-users, do not normally focus on identifying simple cosmetic problems such as spelling errors, nor on showstopper defects, such as software crashes; testers and developers identify and fix these issues during earlier unit testing, integration testing, and system testing phases.
UAT should be executed against test scenarios. Test scenarios usually differ from System or Functional test cases in that they represent a "player" or "user" journey. The broad nature of the test scenario ensures that the focus is on the journey and not on technical or system-specific details, staying away from "click-by-click" test steps to allow for a variance in users' behaviour. Test scenarios can be broken down into logical "days", which are usually where the actor (player/customer/operator) or system (backoffice, front end) changes.
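As a purely illustrative sketch (not taken from any testing standard), such a journey-style scenario might be recorded as simple structured data, with each logical "day" marking a change of actor or system; the scenario content below is invented.

```python
# Hypothetical example of a UAT scenario expressed as a user journey broken
# into logical "days". Only the journey goals are prescribed; testers are
# free to vary the exact click-by-click steps.

scenario = {
    "title": "New customer places and receives a first order",
    "days": [
        {"day": 1, "actor": "customer",    "goal": "register an account and place an order"},
        {"day": 2, "actor": "back office", "goal": "approve payment and schedule shipment"},
        {"day": 3, "actor": "customer",    "goal": "track delivery and confirm receipt"},
    ],
}

for step in scenario["days"]:
    print(f"Day {step['day']} ({step['actor']}): {step['goal']}")
```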
In industry, a common UAT is a factory acceptance test (FAT). This test takes place before installation of the equipment. Most of the time testers not only check that the equipment meets the specification, but also that it is fully functional. A FAT usually includes a check of completeness, a verification against contractual requirements, a proof of functionality (either by simulation or a conventional function test) and a final inspection.
The results of these tests give clients confidence in how the system will perform in production. There may also be legal or contractual requirements for acceptance of the system.
Operational acceptance testing (OAT) is used to conduct operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment.
Acceptance testing is a term used in agile software development methodologies, particularly extreme programming, referring to the functional testing of a user story by the software development team during the implementation phase.
The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black-box system tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration or the development team will report zero progress.
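The sketch below is a hedged illustration of what story-level acceptance tests might look like when written as automated black-box checks that can later be re-run as regression tests; the user story, the service class and the test names are hypothetical and not taken from any particular project or methodology document.

```python
# Hypothetical acceptance tests for the invented story "a registered user can
# reset their password". The system is exercised only through its public
# interface (black-box), and the story is "done" only when all tests pass.

import unittest


class PasswordService:
    """Stand-in for the system under test, used only via its public API."""

    def __init__(self):
        self._users = {"alice@example.com": "old-secret"}

    def request_reset(self, email: str) -> bool:
        return email in self._users

    def reset_password(self, email: str, new_password: str) -> bool:
        if email not in self._users or len(new_password) < 8:
            return False
        self._users[email] = new_password
        return True


class PasswordResetAcceptanceTests(unittest.TestCase):
    def setUp(self):
        self.service = PasswordService()

    def test_known_user_can_request_a_reset(self):
        self.assertTrue(self.service.request_reset("alice@example.com"))

    def test_unknown_user_cannot_request_a_reset(self):
        self.assertFalse(self.service.request_reset("nobody@example.com"))

    def test_new_password_must_meet_minimum_length(self):
        self.assertFalse(self.service.reset_password("alice@example.com", "short"))


if __name__ == "__main__":
    unittest.main()  # re-run before each release as a regression suite
```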
Typical types of acceptance testing include the following
Acceptance testing conducted at the site at which the product is developed and performed by employees of the supplier organization, to determine whether a component or system satisfies the requirements, normally including hardware as well as software. | https://en.wikipedia.org/wiki?curid=3233 |
Ansbach
Ansbach (; ) is a city in the German state of Bavaria. It is the capital of the administrative region of Middle Franconia. Ansbach is southwest of Nuremberg and north of Munich, on the Fränkische Rezat (Rezat River), a tributary of the Main river. In 2004, its population was 40,723.
Developed in the 8th century as a Benedictine monastery, it became the seat of the Hohenzollern family in 1331. From 1460, the Margraves of Brandenburg-Ansbach lived here. The city has a castle known as the Markgrafenschloss, built between 1704 and 1738. It was not badly damaged during the World Wars and hence retains its original historical baroque character. Ansbach is now home to a US military base and to the Ansbach University of Applied Sciences.
The city has connections via autobahn A6 and highways B13 and B14. Ansbach station is on the Nürnberg–Crailsheim and Treuchtlingen–Würzburg railways and is the terminus of line S4 of the Nuremberg S-Bahn.
Ansbach was originally called Onoltesbach (about 790 AD), a term composed of three parts.
The individual word elements are "Onold" (the city founder's name), the Suffix "-es" (a possessive ending, like "-'s" in English) and the Old High German expression "pah" or "bach" (for brook). The name of the city has slightly changed throughout the centuries into Onoltespah (837 AD), Onoldesbach (1141 AD), Onoldsbach (1230 AD), Onelspach (1338 AD), Onsbach (1508 AD) and finally Ansbach (1732 AD).
It was also formerly known as Anspach.
According to folklore, towards the end of the 7th century a group of Franconian peasants and their families went up into the wilderness to found a new settlement. Their leader Onold led them to an area called the "Rezattal" (Rezat valley). This is where they founded the "Urhöfe" (meaning the first farms: Knollenhof, Voggenhof and Rabenhof). Gradually more settlers, such as the "Winden" tribe, came, and the farms grew into a small village. Many villages around Ansbach were founded by the "Winden" during that period (even today their settlements can easily be identified by their names, like "Meinhardszwinden", "Dautenwinden" or "Brodswinden" for example).
A Benedictine monastery was established there around 748 by the Frankish noble St Gumbertus. The adjoining village of Onoltesbach is first noticed as a proper town in 1221.
The counts of Öttingen ruled over Ansbach until the Hohenzollern burgrave of Nürnberg took over in 1331. The Hohenzollerns made Ansbach the seat of their dynasty until their acquisition of the Margraviate of Brandenburg in 1415. After the 1440 death of Frederick I, a cadet branch of the family established itself as the margraves of Ansbach. George the Pious introduced the Protestant Reformation to Ansbach in 1528, leading to the secularization of Gumbertus Abbey in 1563.
The Markgrafenschloß was built between 1704–1738. Its gardens continued to be a notable attraction into the 19th century. In 1791, the last margrave sold his realm to the Kingdom of Prussia. In 1796, the Duke of Zweibrücken, Maximilian Joseph — the future Bavarian king Max I Joseph — was exiled to Ansbach after Zweibrücken had been taken by the French. In Ansbach, Maximilian von Montgelas wrote an elaborate concept for the future political organization of Bavaria, which is known as the Ansbacher Mémoire. Napoleon forced Prussia to cede Ansbach and its principality to Bavaria in the Franco-Prussian treaty of alliance signed at Schönbrunn Palace on 15 December 1805 at the end of the Third Coalition. The act was confirmed by the 1815 Congress of Vienna; Prussia was compensated with the Bavarian duchy of Berg. Ansbach became the capital of the circle of Middle Franconia following the unification of Germany; at the time, it had a population of 12,635.
Jewish families were resident in Ansbach from at least the end of the 18th century. They set up a Jewish Cemetery in the Ruglaender Strasse, which was vandalised and razed under the Nazi regime in the Kristallnacht. It was repaired in 1946, but it was damaged several times more. A plaque on the wall of the cemetery commemorates these events. The Jewish Congregation built its synagogue at No 3 Rosenbadstrasse, but it too was damaged by the SA, though it was not burnt down for fear of damaging the neighbouring buildings. It serves today as a "Symbolic House of God". A plaque in the entrance serves as a memorial to the synagogue and to Jewish residents who were murdered during the Holocaust. In 1940, at least 500 patients were deported from the Heil- und Pflegeanstalt Ansbach ["Ansbach Medical and Nursing Clinic"] to the extermination facilities Sonnenstein and Hartheim which were disguised as psychiatric institutions, as part of the Action T4 euthanasia action. They were gassed there. At the clinic in Ansbach itself, around 50 intellectually disabled children were injected with the drug Luminal and killed that way. A plaque was erected in their memory in 1988 in the local hospital at No. 38 Feuchtwangerstrasse.
During World War II, a subcamp of Flossenbürg concentration camp was located here. Also during the Second World War the Luftwaffe and Wehrmacht had bases here. The nearby airbase was the home station for the Stab & I/KG53 (Staff & 1st Group of Kampfgeschwader 53) operating 38 Heinkel He 111 bombers. On 1 September 1939 this unit was one of the many that participated in the attack on Poland that started the war. All of its bridges were destroyed during the course of the war. During the Western Allied invasion of Germany in April 1945, the airfield was seized by the United States Third Army, and used by the USAAF 354th Fighter Group which flew P-47 Thunderbolts from the aerodrome (designated ALG R-82) from late April until the German capitulation on 7 May 1945. At the end of the war, 19-year-old student Robert Limpert tried to get the town to surrender to the US Forces without a fight. He was betrayed by Hitler Youth and was hung from the portal of the City Hall by the city's military commander, Col. ("Oberst") Ernst Meyer. Several memorials to his heroic deed have been erected over the years, despite opposition from some residents — in the Ludwigskirche, in the Gymnasium Carolinum and at No 6 Kronenstrasse. After the Second World War, Ansbach belonged to the American Zone. The American Military authorities established a displaced persons (DP) camp in what used to be a sanatorium in what is today the Strüth quarter.
Bachwoche Ansbach has been held in Ansbach since 1947. Since 1970, Ansbach has enlarged its municipal area by incorporating adjacent communities. Ansbach hosts several units of the U.S. armed forces, associated with German units under NATO. There are five separate U.S. installations: Shipton Kaserne, home to 412th Aviation Support Battalion, Katterbach Kaserne, formerly the home of the 1st Infantry Division's 4th Combat Aviation Brigade, which has been replaced by the 12th Combat Aviation Brigade as of 2006, as part of the 1st Infantry Division's return to Fort Riley, Kansas; Bismarck Kaserne, which functions as a satellite post to Katterbach, hosting their Post Theater, barracks, Von Steuben Community Center, Military Police, and other support agencies, Barton Barracks, home to the USAG Ansbach and Bleidorn Barracks, which has a library and housing, and Urlas, which hosts the Post Exchange as well as a housing area opened in 2010. Ansbach was also home to the headquarters of the 1st Armored Division (United States) from 1972 to the early 1990s.
On 24 July 2016 a bomb was detonated in a restaurant in the city, killing the bomber himself and injuring several other people. The perpetrator was reported to be a Syrian refugee whose asylum application had been rejected but who had been given exceptional leave to remain until the security situation in Syria returned to a safe condition. Witnesses reported he had tried to enter a nearby music festival but had been turned away, before detonating his device outside a nearby wine bar.
The climate in this area has mild differences between highs and lows, and there is adequate rainfall year-round. The Köppen climate classification subtype for this climate is "Cfb" (Marine West Coast Climate/Oceanic climate).
Around the time of the unification of Germany in 1871, the chief manufactures of Ansbach were woollen, cotton, and half-silk goods; earthenware; tobacco; cutlery; and playing cards. A considerable trade in grain, wool, and flax was also supported. By the onset of the First World War, it also produced machinery, toys, and embroidery.
Today there is a large density of plastics industry in the city and rural districts around Ansbach.
Ansbach lies on the Treuchtlingen-Würzburg railway.
Ansbach is twinned with:
In the novel "The Schirmer Inheritance" (1953) by Eric Ambler (1909–1998), Sergeant Franz Schirmer of the Ansbach Dragoons is wounded in the battle of Preussisch-Eylau in 1807. He returns to Ansbach to settle but changes his name as he has been posted as a deserter. The bulk of the novel concerns efforts by an American law firm to trace his descendants to claim an inheritance. | https://en.wikipedia.org/wiki?curid=3236 |
National Alliance (Italy)
National Alliance (, AN) was a conservative political party in Italy.
AN was the successor of the post-fascist Italian Social Movement (MSI), which had moderated its policies over its last decades and finally distanced itself from its former ideology during a convention in Fiuggi by dissolving into the new party in 1995.
Gianfranco Fini was the leader of AN from its foundation through 2008, after being elected President of the Chamber of Deputies. Fini was succeeded by Ignazio La Russa, who managed the merger of the party with Forza Italia (FI) into The People of Freedom (PdL) in 2009. A group of former AN members, led by La Russa, left FI in 2012 in order to launch the Brothers of Italy (FdI), while others remained in the PdL and were among the founding members of the new Forza Italia (FI) in 2013.
National Alliance, launched in 1994, was officially founded in January 1995, when the Italian Social Movement (MSI), the former neo-fascist party, merged with conservative elements of the former Christian Democracy, which had disbanded in 1994 after two years of scandals and various splits due to corruption at its highest levels, exposed by the "Mani pulite" investigation, and the Italian Liberal Party, disbanded in the same year. Former MSI members were however still the bulk of the new party and former MSI leader Gianfranco Fini was elected leader of the new party.
The AN logo followed a template very similar to that of the Democratic Party of the Left, with the logo of the direct predecessor party in a small circle, as a means of legally preventing others from using it. The name was suggested by an article in the Italian newspaper "Il Tempo" written in 1992 by Domenico Fisichella, a prominent conservative academic. Starting in the 1990s, the MSI gradually transformed into a mainstream right-wing party, culminating in its 1995 dissolution into AN.
The party was part of all three House of Freedoms coalition governments led by Silvio Berlusconi. Gianfranco Fini was nominated Deputy Prime Minister after the 2001 general election and was Foreign Minister from November 2004 to May 2006.
When Gianfranco Fini visited Israel in late November 2003 in his capacity as Italian Deputy Prime Minister, he labeled the racial laws issued by the fascist regime in 1938 as "infamous", as Giorgio Almirante, the historic leader of the MSI, had done before him. He also referred to the Italian Social Republic as belonging to the most shameful pages of the past, and considered fascism part of an era of "absolute evil", something which was hardly acceptable to the few remaining hardliners of the party. As a result, Alessandra Mussolini, the granddaughter of the former fascist dictator Benito Mussolini, who had been at odds with the party on a number of issues for a long time, and some hardliners left the party and formed Social Action.
For the 2006 general election, AN ran within the House of Freedoms, with new allies. The centre-right lost by 24,000 votes to the centre-left coalition The Union. Individually, AN received nearly 5 million votes, amounting to 12.3%. In July 2007 a splinter group led by Francesco Storace formed The Right, which was officially founded on 10 November. Seven MPs of AN, including Teodoro Buontempo and Daniela Santanchè, joined the new party.
In November 2007 Silvio Berlusconi announced that Forza Italia would soon merge into or be transformed into The People of Freedom (PdL) party.
After the sudden fall of the second Prodi government in January 2008, the breakup of The Union and the subsequent political crisis which led to a fresh general election, Berlusconi hinted that Forza Italia would probably contest its last election and that the new party would be officially founded only after that election. In an atmosphere of reconciliation with Gianfranco Fini, Berlusconi also stated that the new party could see the participation of other parties. Finally, on 8 February, Berlusconi and Fini agreed to form a joint list under the banner of the "People of Freedom", allied with the Northern League (LN). After the victory of the PdL in the 2008 general election, AN was merged into the PdL in early 2009.
National Alliance's political program emphasized:
Distinguishing itself from the MSI, the party distanced itself from Benito Mussolini and Fascism and made efforts to improve relations with Jewish groups. With most hardliners leaving the party, it sought to present itself as a respectable conservative party and to join forces with Forza Italia in the European People's Party and, eventually, in a united party of the centre-right.
Although the party approved of the market economy and held favourable views on liberalization and the privatization of state industries, AN was to the left of Forza Italia on economic issues and sometimes supported statist policies. This is one reason the party was strong in Rome and Lazio, where many civil servants live. Moreover, AN presented itself as a party promoting national cohesion, national identity and patriotism.
Regarding institutional reforms, the party was a long-time supporter of presidentialism and a plurality voting system, and also came to support federalism and to fully accept the alliance with the Northern League, although relations with that party were tense at times, especially over issues regarding national unity.
Gianfranco Fini, a moderniser who saw Nicolas Sarkozy and David Cameron as role models, set an ambitious political line for the party, combining the pillars of conservative ideology, such as security, family values and patriotism, with a progressive approach in other areas such as stem cell research and support for voting rights for legal aliens. Some of these positions were not shared by many members of the party, most of whom staunchly opposed stem cell research and artificial insemination.
National Alliance was a heterogeneous political party, and within it members were divided into different factions, some of them very organized:
In the party there was also a group named Ethic-Religious Council, whose board members included Gaetano Rebecchini (founder, ex-DC), Riccardo Pedrizzi (president), Franco Tofoni (vice president), Luigi Gagliardi (secretary-general), Alfredo Mantovano, Antonio Mazzocchi and Riccardo Migliori. This was not a faction but an official organism within the party and expressed the official position of the party on ethical and religious matters. Sometimes the group criticized Gianfranco Fini for his liberal views on abortion, artificial insemination and stem-cell research, which led some notable ex-DC members as Publio Fiori to leave the party. Some members of the Council, such as Pedrizzi and Mantovano were described as members of an unofficial Catholic Right faction.
The party had roughly 10–15% support across Italy, having its strongholds in Central and Southern Italy (Lazio 18.6%, Umbria 15.2%, Marche 14.3%, Abruzzo 14.3%, Apulia 13.2%, Sardinia 12.9%, Tuscany 12.6% and Campania 12.6% in the last general election), scoring badly in Lombardy (10.2%) and Sicily (10.9%), while competing in the North-East (Friuli-Venezia Giulia 15.5% and Veneto 11.3%).
The party had a good showing in the first general election in which it took part (13.5% in 1994) and reached 15.7% in 1996, when Fini tried for the first time to replace Silvio Berlusconi as leader of the centre-right. From that moment the party suffered an electoral decline, but remained the third force of Italian politics.
In the 2006 general election, the final election in which the party participated with its own list, AN won 12.3% of the vote, securing 71 seats in the Chamber of Deputies and 41 in the Senate. In the 2008 general election the party had 90 deputies and 48 senators elected.
The electoral results of National Alliance in general (Chamber of Deputies) and European Parliament elections since 1994 are shown in the chart below.
The electoral results of National Alliance in the 10 most populated regions of Italy are shown in the table below. | https://en.wikipedia.org/wiki?curid=3237 |
Arno
The Arno is a river in the Tuscany region of Italy. It is the most important river of central Italy after the Tiber.
The river originates on Monte Falterona in the Casentino area of the Apennines, and initially takes a southward curve. The river turns to the west near Arezzo passing through Florence, Empoli and Pisa, flowing into the Tyrrhenian Sea at Marina di Pisa.
With a length of about 241 kilometres, it is the largest river in the region. It has many tributaries, including the Sieve, the Bisenzio, the Era, the Elsa, the Pesa, and the Pescia. Its drainage basin drains the waters of the following subbasins:
It crosses Florence, where it passes below the Ponte Vecchio and the Santa Trinita bridge (built by Bartolomeo Ammannati but inspired by Michelangelo). The river flooded this city regularly in historical times, most recently in 1966, following torrential rainfall at Badia Agnano and in Florence in only 24 hours.
Before Pisa, the Arno is crossed by the Imperial Canal at La Botte. This water channel passes under the Arno through a tunnel, and serves to drain the former area of the Lago di Bientina, which was once the largest lake in Tuscany before its reclamation.
The flow rate of the Arno is irregular. It is sometimes described as having a torrent-like behaviour, because it can easily go from almost dry to near flood in a few days. At the point where the Arno leaves the Apennines, flow measurements can vary enormously between drought and flood conditions. New dams built upstream of Florence have greatly alleviated the problem in recent years.
The flood on November 4, 1966 collapsed the embankment in Florence, killing at least 40 people and damaging or destroying millions of works of art and rare books. New conservation techniques were inspired by the disaster, but even decades later hundreds of works still await restoration.
The name derives from Latin "Arnus" (Pliny, "Natural History" 3.50). The philologist Hans Krahe traced this toponym to a Paleo-European base "*Ar-n-", derived from the Proto-Indo-European root *"er-", "flow, move". | https://en.wikipedia.org/wiki?curid=3240 |
Aveiro, Portugal
Aveiro is a city and a municipality in Portugal. In 2011, the population was 78,450; it is the second most populous city in the Centro Region of Portugal (after Coimbra). Along with the neighbouring city of Ílhavo, Aveiro is part of an urban agglomeration of about 120,000 inhabitants, making it one of the most important population centres by density in the North Region, and the primary centre of the Intermunicipal Community of Aveiro and Baixo Vouga. Administratively, the president of the municipal government is José Ribau Esteves, elected by a coalition between the Social Democratic Party and the Democratic Social Centre, who governs the ten civil parishes.
The presence of human settlement in the territory of Aveiro extends to the period associated with the great dolmens of pre-history, which exist in most of the region. The Latinised toponym "Averius" derived from the Celtic word "aber" (river-mouth, etym. < Brythonic *aber < Proto-Celtic *adberos; compare Welsh Aberystwyth).
For a long period Aveiro was an important economic link in the production of salt and commercial shipping. It was a centre of salt exploitation by the Romans and a trade centre through the Middle Ages, documented since 26 January 959 in the testament of Countess Mumadona Dias to the "cenóbio" of Guimarães. In this testament, Mumadona Dias also recorded the ancient name for Aveiro, referring to the monastery's lands in "Alauario et Salinas", literally "a gathering place or preserve of birds and of great salt".
From the 11th century onwards, Aveiro became popular with Portuguese royalty.
Later, King João I, on the advice of his son Pedro, who was the donatary of Aveiro, requested the construction of fortification walls.
King D. Duarte conceded in 1435 the privilege of providing an annual duty-free fair, later referred to as the "Feira de Março" ("March Fair"), today still an annual tradition.
Princess St. Joana, daughter of Afonso V, entered the Convent of Jesus in Aveiro and lived there until her death on 12 May 1490. During her life, her presence brought attention to the town and favoured it with an elevated level of development for the time.
The first charter (foral) was conceded by Manuel I of Portugal on 4 August 1515, as indicated in the "Livro de Leituras Novas de Forais da Estremadura". Its geographic position along the Aveiro River had always helped it to subsist and grow, supported by salt market, fishing and maritime commercial development.
By the beginning of the 15th century, there already existed a great wall around the historical centre, attesting to the significance of the community and the growth of its population. This included the founding of many religious institutions and their supports, which assisted during the 17th and 18th century crises associated with silting of the waterway. In the winter of 1575, a terrible storm closed the entrance to its port, ending a thriving trade in metals and tiles, and creating a reef barrier at the Atlantic Ocean. The walls were subsequently demolished and used to create the docks around the new sand bar.
Between the 16th and 17th centuries, the river's instability at the mouth (between the Ria and open ocean) resulted in the closure of the canal, impeding the use of the port of Aveiro, and creating stagnation in the waters of the lagoon. This blow to the economy created a social and economic crisis, and resulted in the decrease in the population and emigration. It was at this time that the Church of the Miserícordia was constructed, during the Philippine Dynastic union.
In 1759, King José I elevated the town to the status of city, a few months after condemning the Duke of Aveiro (a title established in 1547 by João III), José Mascarenhas, to death. As a result, Aveiro became known as Nova Bragança; the name was later abandoned, and the city returned to being called Aveiro. In 1774, by request of King José, Pope Clement XIV instituted the Diocese of Aveiro.
In the 19th century, the Aveirense were active during the Liberal Wars, and the parliamentary deputy José Estêvão Coelho de Magalhães was instrumental in resolving the problem of access along the Ria. He also helped with the development of transport, especially the railway line between Lisbon and Porto. It was the opening of the artificial canals, completed in 1808, that allowed Aveiro to expand economically, marking the beginning of the town's growth.
The municipality was elevated to the status of town, centered on its principal church, consecrated to the Archangel Michael, today the location of the "Praça da República" (having been demolished in 1835).
Located on the shore of the Atlantic Ocean, Aveiro is an industrial city with an important seaport.
The seat of the municipality is the city of Aveiro, comprising the five urban parishes with about 73,003 inhabitants. The city of Aveiro is also the capital of the District of Aveiro, and the largest city in the Baixo Vouga intermunicipal community subregion.
Aveiro is known as "the Portuguese Venice", due to its system of canals and boats similar to the Italian city of Venice.
Aveiro has a warm-summer Mediterranean climate influenced by its proximity to the Atlantic Ocean. The maritime influence causes a narrow temperature range, with summer daytime temperatures considerably lower than in inland areas on the same parallel of the Iberian Peninsula. As is typical of Mediterranean climates, summers are dry and winters are wet. A coastal feature is that frosts are rare and never severe, and very hot days occur only a few times per year.
Administratively, the municipality is divided into 10 civil parishes ():
São Jacinto is located on an eponymous peninsula, between the Atlantic Ocean and Ria de Aveiro. Aveiro had 61,430 eligible voters in 2006.
Aveiro's sister cities are:
Aveiro was known for many years for its production of salt and for the moliço seaweed harvest, which was used as fertilizer before the development of chemicals for that purpose. The boats once used for harvesting now carry tourists on the canals. Salt production has also decreased dramatically with only a few salt ponds still remaining.
The region is now known for the preponderance of ceramics industries, a reflection of the region's advancements and of a long productive tradition dating from the late Roman and early medieval period (reflected in the ceramics kilns).
Software development is important too, both at the R&D centre for a large telecom company and at the University of Aveiro (UAVR), which is attended by 15,000 students on undergraduate and postgraduate programs. UAVR works with companies in national and European R&D projects.
The city of Aveiro has several shopping centres and malls (Pingo Doce Shopping Center, Fórum Aveiro, Glicínias Plaza (Jumbo – Auchan), Aveiro's Shopping Center (Continente & Mediamarkt), Aveiro's Retail Park and the Oita Shopping Center), as well as many traditional shops. The most central is Fórum Aveiro, with clothing stores, a restaurant area, a book shop and a cinema.
The town's unemployment rate in 2015 was 12.5%; the university is a major employer.
Tourism is also important for the economy. The old town centre, with its Art Nouveau and Romanesque architecture and "gondolas" (barcos moliceiros once used for collecting moliço seaweed) plying the Ria de Aveiro canals, is referred to as "The Venice of Portugal" in some tourist brochures.
Important tourist attractions are the Arte Nova (Art Nouveau) architectural designs and tiles of some buildings that were created in the early 20th century, the Art Nouveau museum, the Aveiro Museum (Museu de Aveiro, formerly the Mosteiro de Jesus convent with exhibits of King Afonso V's daughter, Santa Joana), the 15th century Aveiro Sé or São Domingos cathedral and the Church of Jesus (Igreja de Jesus) with its beautiful architecture. The nearby beaches, Costa Nova and Barra, attract many visitors in warm weather; they can be reached by bus from Aveiro. Other sites of interest to tourists include the Carmelite Church and the Misericórdia Church built in the 16th century.
The local economy is fed by a series of transport networks that cross the municipal boundaries.
Regional gateways include air service through the Aeródromo de Aveiro/São Jacinto (LPAV) and the Porto de Aveiro (Ílhavo/Aveiro).
Rail service includes service by Alfa Pendular (between Lisbon and Braga; Lisbon and Oporto; Faro and Oporto) and Intercity (between Lisbon and Oporto as well as Lisbon and Guimarães) trains; suburban links through the Urbanos do Porto and, also, the Linha do Vouga, a narrow gauge railway to Águeda and Sernada do Vouga.
The primary expressways and inter-regional thoroughfares include: A1 (between Porto and Lisbon); and the A25 (which links Viseu, Guarda and Vilar Formoso).
Intercity buses connect Aveiro with Porto and Lisbon several times a day.
"Moliceiros" provide access along the Ria for tourist visits, in addition to traditional fishing or recreational purposes, including regattas.
The architecture of Aveiro is influenced by two phases: the pre-Kingdom era, with a number of historical monuments; and the modernist movements resulting from the expansion of economy during the 19th-20th centuries.
The city's primary landmark is the 15th century Monastery of Jesus (), containing the tomb of King Afonso V's daughter, St. Joana (who died in 1490). The presence of this royal personage, beatified in 1693, proved to be of great benefit when she bequeathed her valuable estate to the convent. In the 17th and 18th centuries, the convent housed a school of embroidery, but was transformed into the "Museu de Santa Joana", or simply, the Museum of Aveiro, housing many of these handicrafts.
The abundance of 19th- and 20th-century architectural buildings reflects the economic boom of that period, including many Arte Nova (Art Nouveau) and Art Deco buildings, inspired by modernist trends and the nationalist tendencies of the Estado Novo regime. The best of these are on the university campus, where many of the nationalist architects were involved in construction projects. The Arte Nova buildings were built by wealthy families from Brazil; they included homes and shops, and traditional Portuguese decorations such as tiles were used. The style did not last for a long time, but its presence is very distinctive in Aveiro; it is one of only 20 cities in the world included in the Réseau Art Nouveau Network, a listing of European cities known for this architectural style.
There are several attractions in the city of Aveiro, including cathedrals, canals and the beaches, including the "Ílhavo ceramica de Vista Alegre" and the beaches of Barra, Costa Nova do Prado, and Gafanha da Nazaré.
Aveiro is known in Portugal for its traditional sweets, "Ovos Moles de Aveiro" (PGI), "trouxas de ovos", both made from eggs. "Raivas" are also typical biscuits of Aveiro.
The municipal holiday is 12 May, the day of Joanna, Princess of Portugal (1452-1490).
The University of Aveiro was created in 1973 and attracts thousands of students to the city. It is ranked as the 354th best university in the world in the "Times" World University Rankings, and the 2nd best in Portugal.
The University has about 430 professors (with PhD degrees), 11,000 undergraduate students, and 1,300 post-graduate students.
Sport Clube Beira-Mar is an association football club. Founded in 1922, it has a sports academy with various youth levels in sports including basketball and futsal. The club used to play at Estádio Municipal de Aveiro, designed by Portuguese architect Tomás Taveira for Euro 2004, where it held two group matches.
The other long-established club in the city, Os Galitos, was founded in 1904 and houses a wide variety of sports. Its rowers have represented Portugal in international tournaments including the Olympic Games. | https://en.wikipedia.org/wiki?curid=3244 |
Anthony the Great
Anthony or Antony the Great (Greek: "Antṓnios"; c. 12 January 251 – 17 January 356) was a Christian monk from Egypt, revered since his death as a saint. He is distinguished from other saints named Anthony, such as Anthony of Padua, by various epithets of his own. For his importance among the Desert Fathers and to all later Christian monasticism, he is also known as the Father of All Monks. His feast day is celebrated on 17 January among the Orthodox and Catholic churches and on Tobi 22 in the Coptic calendar.
The biography of Anthony's life by Athanasius of Alexandria helped to spread the concept of Christian monasticism, particularly in Western Europe via its Latin translations. He is often erroneously considered the first Christian monk, but as his biography and other sources make clear, there were many ascetics before him. Anthony was, however, among the first known to go into the wilderness (about 270), which seems to have contributed to his renown. Accounts of Anthony enduring supernatural temptation during his sojourn in the Eastern Desert of Egypt inspired the often-repeated subject of the temptation of St. Anthony in Western art and literature.
Anthony is appealed to against infectious diseases, particularly skin diseases. In the past, many such afflictions, including ergotism, erysipelas, and shingles, were referred to as "St. Anthony's fire".
Most of what is known about Anthony comes from the "Life of Anthony". Written in Greek around 360 by Athanasius of Alexandria, it depicts Anthony as an illiterate and holy man who through his existence in a primordial landscape has an absolute connection to the divine truth, which always is in harmony with that of Athanasius as the biographer.
A continuation of the genre of secular Greek biography, it became his most widely read work. Sometime before 374 it was translated into Latin by Evagrius of Antioch. The Latin translation helped the "Life" become one of the best known works of literature in the Christian world, a status it would hold through the Middle Ages.
Translated into several languages, it became something of a best seller in its day and played an important role in the spreading of the ascetic ideal in Eastern and Western Christianity. It later served as an inspiration to Christian monastics in both the East and the West, and helped to spread the concept of Christian monasticism, particularly in Western Europe via its Latin translations.
Many stories are also told about Anthony in various collections of sayings of the Desert Fathers. He is often erroneously considered the first Christian monk, but as his biography and other sources make clear, there were many ascetics before him. Anthony was, however, the first to go into the wilderness (about AD 270), a geographical move that seems to have contributed to his renown.
Anthony probably spoke only his native language, Coptic, but his sayings were spread in a Greek translation. He himself dictated letters in Coptic, seven of which are extant.
Anthony was born in Coma in Lower Egypt to wealthy landowner parents. When he was about 20 years old, his parents died and left him with the care of his unmarried sister. Shortly thereafter, he decided to follow the gospel exhortation in Matthew 19: 21, "If you want to be perfect, go, sell what you have and give to the poor, and you will have treasures in heaven." Anthony gave away some of his family's lands to his neighbors, sold the remaining property, and donated the funds to the poor. He then left to live an ascetic life, placing his sister with a group of Christian virgins.
For the next fifteen years, Anthony remained in the area, spending the first years as the disciple of another local hermit. There are various legends that he worked as a swineherd during this period.
Anthony is sometimes considered the first monk, and the first to initiate solitary desert monasticism, but there were others before him. There were already ascetic hermits (the "Therapeutae"), and loosely organized cenobitic communities were described by the Jewish philosopher Philo of Alexandria in the 1st century as long established in the harsh environment of Lake Mareotis and in other less accessible regions. Philo opined that "this class of persons may be met with in many places, for both Greece and barbarian countries want to enjoy whatever is perfectly good." Christian ascetics such as Thecla had likewise retreated to isolated locations at the outskirts of cities. Anthony is notable for having decided to surpass this tradition and head out into the desert proper. He left for the alkaline Nitrian Desert (later the location of the noted monasteries of Nitria, Kellia, and Scetis) on the edge of the Western Desert, west of Alexandria. He remained there for 13 years.
Anthony maintained a very strict ascetic diet. He ate only bread, salt and water and never meat or wine. He ate at most only once a day and sometimes fasted through two or four days.
According to Athanasius, the devil fought Anthony by afflicting him with boredom, laziness, and the phantoms of women, which he overcame by the power of prayer, providing a theme for Christian art. After that, he moved to one of the tombs near his native village. There it was that the "Life" records those strange conflicts with demons in the shape of wild beasts, who inflicted blows upon him, and sometimes left him nearly dead.
After fifteen years of this life, at the age of thirty-five, Anthony determined to withdraw from the habitations of men and retire in absolute solitude. He went into the desert to a mountain by the Nile called Pispir (now Der-el-Memun), opposite Arsinoë. There he lived strictly enclosed in an old abandoned Roman fort for some 20 years. Food was thrown to him over the wall. He was at times visited by pilgrims, whom he refused to see; but gradually a number of would-be disciples established themselves in caves and in huts around the mountain. Thus a colony of ascetics was formed, who begged Anthony to come forth and be their guide in the spiritual life. Eventually, he yielded to their importunities and, about the year 305, emerged from his retreat. To the surprise of all, he appeared to be not emaciated, but healthy in mind and body.
For five or six years he devoted himself to the instruction and organization of the great body of monks that had grown up around him; but then he once again withdrew into the inner desert that lay between the Nile and the Red Sea, near the shore of which he fixed his abode on a mountain where still stands the monastery that bears his name, Der Mar Antonios. Here he spent the last forty-five years of his life, in a seclusion, not so strict as Pispir, for he freely saw those who came to visit him, and he used to cross the desert to Pispir with considerable frequency. Amid the Diocletian Persecutions, around 311 Anthony went to Alexandria and was conspicuous visiting those who were imprisoned.
Anthony was not the first ascetic or hermit, but he may properly be called the "Father of Monasticism" in Christianity, as he organized his disciples into a community and later, following the spread of Athanasius's hagiography, was the inspiration for similar communities throughout Egypt and, elsewhere. Macarius the Great was a disciple of Anthony. Visitors traveled great distances to see the celebrated holy man. Anthony is said to have spoken to those of a spiritual disposition, leaving the task of addressing the more worldly visitors to Macarius. Macarius later founded a monastic community in the Scetic desert.
The fame of Anthony spread and reached Emperor Constantine, who wrote to him requesting his prayers. The brethren were pleased with the Emperor's letter, but Anthony was not overawed and wrote back exhorting the Emperor and his sons not to esteem this world but remember the next.
The stories of the meeting of Anthony and Paul of Thebes, the raven who brought them bread, Anthony being sent to fetch the cloak given him by "Athanasius the bishop" to bury Paul's body in, and Paul's death before he returned, are among the familiar legends of the "Life". However, belief in the existence of Paul seems to have existed quite independently of the "Life".
In 338, he left the desert temporarily to visit Alexandria to help refute the teachings of Arius.
When Anthony sensed his death approaching, he commanded his disciples to give his staff to Macarius of Egypt, and to give one sheepskin cloak to Athanasius of Alexandria and the other sheepskin cloak to Serapion of Thmuis, his disciple. Anthony was interred, according to his instructions, in a grave next to his cell.
Accounts of Anthony enduring supernatural temptation during his sojourn in the Eastern Desert of Egypt inspired the often-repeated subject of the temptation of St. Anthony in Western art and literature.
Anthony is said to have faced a series of supernatural temptations during his pilgrimage to the desert. The first to report on the temptation was his contemporary Athanasius of Alexandria. It is possible these events, like the paintings, are full of rich metaphor or in the case of the animals of the desert, perhaps a vision or dream. Emphasis on these stories, however, did not really begin until the Middle Ages when the psychology of the individual became of greater interest.
Some of the stories included in Anthony's biography are perpetuated now mostly in paintings, where they give an opportunity for artists to depict their more lurid or bizarre interpretations. Many artists, including Martin Schongauer, Hieronymus Bosch, Dorothea Tanning, Max Ernst, Leonora Carrington and Salvador Dalí, have depicted these incidents from the life of Anthony; in prose, the tale was retold and embellished by Gustave Flaubert in "The Temptation of Saint Anthony".
Anthony was on a journey in the desert to find Paul of Thebes, who according to his dream was a better hermit than he. Anthony had been under the impression that he was the first person to ever dwell in the desert; however, due to the dream, Anthony was called into the desert to find his "better", Paul. On his way there, he ran into two creatures in the forms of a centaur and a satyr. Although chroniclers sometimes postulated they might have been living beings, Western theology considers them to have been demons.
While traveling through the desert, Anthony first found the centaur, a "creature of mingled shape, half horse half man," whom he asked about directions. The creature tried to speak in an unintelligible language, but ultimately pointed with his hand the way desired, and then ran away and vanished from sight. It was interpreted as a demon trying to terrify him, or alternately a creature engendered by the desert.
Anthony next found the satyr, "a manikin with hooked snout, horned forehead, and extremities like goats' feet." This creature was peaceful and offered him fruits, and when Anthony asked who he was, the satyr replied, "I'm a mortal being and one of those inhabitants of the desert whom the Gentiles deluded by various forms of error worship under the names of Fauns, Satyrs, and Incubi. I am sent to represent my tribe. We pray you in our behalf to entreat the favor of your Lord and ours, who, we have learnt, came once to save the world, and 'whose sound has gone forth into all the earth.'" Upon hearing this, Anthony was overjoyed and rejoiced over the glory of Christ. He condemned the city of Alexandria for worshipping monsters instead of God while beasts like the satyr spoke about Christ.
Another time Anthony was travelling in the desert and found a plate of silver coins in his path.
Once, Anthony tried hiding in a cave to escape the demons that plagued him. There were so many little demons in the cave though that Anthony's servant had to carry him out because they had beaten him to death. When the hermits were gathered to Anthony's corpse to mourn his death, Anthony was revived. He demanded that his servants take him back to that cave where the demons had beaten him. When he got there he called out to the demons, and they came back as wild beasts to rip him to shreds. All of a sudden a bright light flashed, and the demons ran away. Anthony knew that the light must have come from God, and he asked God where he was before when the demons attacked him. God replied, "I was here but I would see and abide to see thy battle, and because thou hast mainly fought and well maintained thy battle, I shall make thy name to be spread through all the world."
Anthony had been secretly buried on the mountain-top where he had chosen to live. His remains were reportedly discovered in 361 and transferred to Alexandria. Some time later, they were taken from Alexandria to Constantinople, so that they might escape the destruction being perpetrated by invading Saracens. In the eleventh century, the Byzantine emperor gave them to the French Count Jocelin. Jocelin had them transferred to the town of La-Motte-Saint-Didier. There, Jocelin undertook to build a church to house the remains, but died before the church was even started. The building was finally erected in 1297 and became a centre of veneration and pilgrimage, known as Saint-Antoine-l'Abbaye.
Anthony is credited with assisting in a number of miraculous healings, primarily from ergotism, which became known as "St. Anthony's Fire". Two local noblemen credited him with assisting them in their recovery from the disease. They then founded the Hospital Brothers of St. Anthony in his honor, an order which specialized in nursing the victims of skin diseases.
Veneration of Anthony in the East is more restrained. There are comparatively few icons and paintings of him. He is, however, regarded as the "first master of the desert and the pinnacle of holy monks", and there are monastic communities of the Maronite, Chaldean, and Orthodox churches which state that they follow his monastic rule. During the Middle Ages, Anthony, along with Quirinus of Neuss, Cornelius and Hubertus, was venerated as one of the Four Holy Marshals ("Vier Marschälle Gottes") in the Rhineland.
Though Anthony himself did not organize or create a monastery, a community grew around him based on his example of living an ascetic and isolated life. Athanasius' biography helped propagate Anthony's ideals. Athanasius writes, "For monks, the life of Anthony is a sufficient example of asceticism."
Examples of purely Coptic literature are the works of Anthony and Pachomius, who only spoke Coptic, and the sermons and preachings of Shenouda the Archmandrite, who chose to only write in Coptic. The earliest original writings in Coptic language were the letters by Anthony. During the 3rd and 4th centuries many ecclesiastics and monks wrote in Coptic.
The main character in the Hervey Allen novel "Anthony Adverse", and the 1936 film of the same name, is an abandoned child who is placed in a foundling wheel on the saint's feast day, and given the name Anthony in his honor. | https://en.wikipedia.org/wiki?curid=3246 |
Amblypoda
Amblypoda is a taxonomic hypothesis uniting a group of extinct, herbivorous mammals. They were considered a suborder of the primitive ungulate mammals and have since been shown to represent a polyphyletic group.
The Amblypoda take their name from their short and stumpy feet, which were furnished with five toes each and supported massive pillar-like limbs. The brain cavity was extremely small and insignificant in comparison to the bodily mass, which was equal to that of the largest rhinoceroses. These animals were descendants of the small ancestral ungulates that retained all the primitive characteristics of the latter, accompanied by a huge increase in body size.
The Amblypoda were confined to the Paleocene and Eocene periods and occurred in North America, Asia (especially Mongolia) and Europe. The cheek teeth were short-crowned (brachyodont), with the tubercles more-or-less completely fused into transverse ridges, or cross-crests (lophodont type), and the total number of teeth was in one case the typical 44, but in another was fewer. The vertebrae of the neck unite on nearly flat surfaces, the humerus had lost the foramen, or perforation, at the lower end, and the third trochanter of the femur may also have been wanting. In the forelimb, the upper and lower series of carpal (wrist) bones scarcely alternated, but in the hind foot, the astragalus overlapped the cuboid, while the fibula, which was quite distinct from the tibia (as was the radius from the ulna in the forelimb), articulated with both astragalus and calcaneum.
The most generalized type was "Coryphodon", representing the family Coryphodontidae, from the lower Eocene of Europe and North America, in which there were 44 teeth and no horn-like excrescences on the long skull, while the femur had a third trochanter. The canines were somewhat elongated and were followed by a short gap in each jaw, and the cheek-teeth were adapted for succulent food. The length of the body reached about six feet in some cases.
In the middle Eocene formations of North America occurred the more specialized "Uintatherium" (or "Dinoceras"), typifying the family Uintatheriidae. Uintatheres were huge creatures with long narrow skulls, of which the elongated facial portion carried three pairs of bony horn-cores, probably covered with short horns in life, the hind-pair having been much the largest. The dental formula was i. 0/3, c. 1/1, p. 3/3·4, m. 3/3, the upper canines having been long sabre-like weapons, protected by a descending flange on each side of the lower front jaw.
In the basal Eocene of North America, the Amblypoda were represented by extremely primitive, five-toed, small ungulates such as "Periptychus" and "Pantolambda", each of these typifying a family. The full typical series of 44 teeth was developed in each, but whereas in the Periptychidae, the upper molars were bunodont and tritubercular, in the Pantolambdidae, they had assumed a selenodont structure. Creodont characters were displayed in the skeleton.
Few authorities recognize Amblypoda in modern classifications. The following mammals were once considered part of this group: | https://en.wikipedia.org/wiki?curid=3250 |
Amblygonite
Amblygonite () is a fluorophosphate mineral, (Li,Na)AlPO4(F,OH), composed of lithium, sodium, aluminium, phosphate, fluoride and hydroxide. The mineral occurs in pegmatite deposits and is easily mistaken for albite and other feldspars. Its density, cleavage and flame test for lithium are diagnostic. Amblygonite forms a series with "montebrasite", the low fluorine endmember. Geologic occurrence is in granite pegmatites, high-temperature tin veins, and greisens. Amblygonite occurs with spodumene, apatite, lepidolite, tourmaline, and other lithium-bearing minerals in pegmatite veins. It contains about 10% lithium, and has been utilized as a source of lithium. The chief commercial sources have historically been the deposits of California and France.
The mineral was first discovered in Saxony by August Breithaupt in 1817, and named by him from the Greek "amblus", blunt, and "gonia", angle, because of the obtuse angle between the cleavages. Later it was found at Montebras, Creuse, France, and at Hebron in Maine; and because of slight differences in optical character and chemical composition the names montebrasite and hebronite have been applied to the mineral from these localities. It has been discovered in considerable quantity at Pala in San Diego County, California; Cáceres, Spain; and the Black Hills of South Dakota. The largest documented single crystal of amblygonite measured 7.62 m × 2.44 m × 1.83 m and weighed ~102 tons.
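As a quick sanity check on these figures, the quoted dimensions and mass are mutually consistent with amblygonite's typical density. The short Python sketch below assumes a density of roughly 3.0–3.1 g/cm³ (a standard handbook value, not stated in the text):

```python
# Rough consistency check (illustrative, not from the source): does a
# 7.62 m x 2.44 m x 1.83 m amblygonite crystal plausibly weigh ~102 tons?
dims_m = (7.62, 2.44, 1.83)
volume_m3 = dims_m[0] * dims_m[1] * dims_m[2]        # about 34 m^3
for density_kg_m3 in (3000, 3100):                   # assumed density range for amblygonite
    mass_tonnes = volume_m3 * density_kg_m3 / 1000.0
    print(f"density {density_kg_m3} kg/m^3 -> {mass_tonnes:.0f} tonnes")
# Both estimates land near 102-105 tonnes, consistent with the reported mass.
```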
Transparent amblygonite has been faceted and used as a gemstone. As a gemstone set into jewelry it is vulnerable to breakage and abrasion from general wear, as its hardness and toughness are poor. The main sources for gem material are Brazil and the United States. Australia, France, Germany, Namibia, Norway, and Spain have also produced gem quality amblygonite. | https://en.wikipedia.org/wiki?curid=3251 |
Amygdalin
Amygdalin (from Ancient Greek "amygdálē", "almond") is a naturally occurring chemical compound best known for being falsely promoted as a cancer cure. It is found in many plants, but most notably in the seeds (kernels) of apricots, bitter almonds, apples, peaches, and plums.
Amygdalin is classified as a cyanogenic glycoside because each amygdalin molecule includes a nitrile group, which can be released as the toxic cyanide anion by the action of a beta-glucosidase. Eating amygdalin will cause it to release cyanide in the human body, and may lead to cyanide poisoning.
Since the early 1950s, both amygdalin and a chemical derivative named laetrile have been promoted as alternative cancer treatments, often under the misnomer vitamin B17 (neither amygdalin nor laetrile is a vitamin). Scientific study has found them to be clinically ineffective in treating cancer, as well as potentially toxic or lethal when taken by mouth due to cyanide poisoning. The promotion of laetrile to treat cancer has been described in the medical literature as a canonical example of quackery, and as "the slickest, most sophisticated, and certainly the most remunerative cancer quack promotion in medical history".
Amygdalin is a cyanogenic glycoside derived from the aromatic amino acid phenylalanine. Amygdalin and prunasin are common among plants of the family Rosaceae, particularly the genus "Prunus", Poaceae (grasses), Fabaceae (legumes), and in other food plants, including flaxseed and manioc. Within these plants, amygdalin and the enzymes necessary to hydrolyze it are stored in separate locations so that they will mix in response to tissue damage. This provides a natural defense system.
Amygdalin is contained in stone fruit kernels, such as almonds, apricot (14 g/kg), peach (6.8 g/kg), and plum (4–17.5 g/kg depending on variety), and also in the seeds of the apple (3 g/kg). Benzaldehyde released from amygdalin provides a bitter flavor. Because of a difference in a recessive gene called "Sweet kernel [Sk]", less amygdalin is present in nonbitter (sweet) almonds than in bitter almonds. In one study, bitter almond amygdalin concentrations ranged from 33–54 g/kg depending on variety; semibitter varieties averaged 1 g/kg and sweet varieties averaged 0.063 g/kg, with significant variability based on variety and growing region.
For one method of isolating amygdalin, the stones are removed from the fruit and cracked to obtain the kernels, which are dried in the sun or in ovens. The kernels are boiled in ethanol; on evaporation of the solution and the addition of diethyl ether, amygdalin is precipitated as minute white crystals. Natural amygdalin has the ("R")-configuration at the chiral phenyl center. Under mild basic conditions, this stereogenic center isomerizes; the ("S")-epimer is called neoamygdalin. Although the synthesized version of amygdalin is the ("R")-epimer, the stereogenic center attached to the nitrile and phenyl groups easily epimerizes if the manufacturer does not store the compound correctly.
Amygdalin is hydrolyzed by intestinal β-glucosidase (emulsin) and amygdalin beta-glucosidase (amygdalase) to give gentiobiose and L-mandelonitrile. Gentiobiose is further hydrolyzed to give glucose, whereas mandelonitrile (the cyanohydrin of benzaldehyde) decomposes to give benzaldehyde and hydrogen cyanide. Hydrogen cyanide in sufficient quantities (allowable daily intake: ~0.6 mg) causes cyanide poisoning which has a fatal oral dose range of 0.6–1.5 mg/kg of body weight.
Laetrile (patented 1961) is a simpler semisynthetic derivative of amygdalin. Laetrile is synthesized from amygdalin by hydrolysis. The usual preferred commercial source is from apricot kernels ("Prunus armeniaca"). The name is derived from the separate words "laevorotatory" and "mandelonitrile". Laevorotatory describes the stereochemistry of the molecule, while mandelonitrile refers to the portion of the molecule from which cyanide is released by decomposition.
A 500 mg laetrile tablet may contain between 2.5–25 mg of hydrogen cyanide.
Like amygdalin, laetrile is hydrolyzed in the duodenum (alkaline) and in the intestine (enzymatically) to D-glucuronic acid and L-mandelonitrile; the latter hydrolyzes to benzaldehyde and hydrogen cyanide, that in sufficient quantities causes cyanide poisoning.
Claims for laetrile were based on three different hypotheses: The first hypothesis proposed that cancerous cells contained copious beta-glucosidases, which release HCN from laetrile via hydrolysis. Normal cells were reportedly unaffected, because they contained low concentrations of beta-glucosidases and high concentrations of rhodanese, which converts HCN to the less toxic thiocyanate. Later, however, it was shown that both cancerous and normal cells contain only trace amounts of beta-glucosidases and similar amounts of rhodanese.
The second proposed that, after ingestion, amygdalin was hydrolyzed to mandelonitrile, transported intact to the liver and converted to a beta-glucuronide complex, which was then carried to the cancerous cells, hydrolyzed by beta-glucuronidases to release mandelonitrile and then HCN. Mandelonitrile, however, dissociates to benzaldehyde and hydrogen cyanide, and cannot be stabilized by glycosylation.
Finally, the third asserted that laetrile is a vitamin ("B-17") and further suggested that cancer is a result of "B-17 deficiency". It postulated that regular dietary administration of this form of laetrile would therefore prevent all incidence of cancer. There is no evidence supporting this conjecture in the form of a physiologic process, nutritional requirement, or identification of any deficiency syndrome. The term "vitamin B-17" is not recognized by the Committee on Nomenclature of the American Institute of Nutrition. Ernst T. Krebs branded laetrile as a vitamin in order to have it classified as a nutritional supplement rather than as a pharmaceutical.
Amygdalin was first isolated in 1830 from bitter almond seeds ("Prunus dulcis") by Pierre-Jean Robiquet and Antoine Boutron-Charlard. Liebig and Wöhler found three hydrolysis products of amygdalin: sugar, benzaldehyde, and prussic acid (hydrogen cyanide, HCN). Later research showed that sulfuric acid hydrolyzes it into D-glucose, benzaldehyde, and prussic acid; while hydrochloric acid gives mandelic acid, D-glucose, and ammonia.
In 1845 amygdalin was used as a cancer treatment in Russia, and in the 1920s in the United States, but it was considered too poisonous. In the 1950s, a purportedly non-toxic, synthetic form was patented for use as a meat preservative, and later marketed as laetrile for cancer treatment.
The U.S. Food and Drug Administration prohibited the interstate shipment of amygdalin and laetrile in 1977. Thereafter, 27 U.S. states legalized the use of amygdalin within those states.
In a 1977 controlled, blinded trial, laetrile showed no more activity than placebo.
Subsequently, laetrile was tested on 14 tumor systems without evidence of effectiveness. The Memorial Sloan–Kettering Cancer Center (MSKCC) concluded that "laetrile showed no beneficial effects." Mistakes in an earlier MSKCC press release were highlighted by a group of laetrile proponents led by Ralph Moss, a former public affairs official of MSKCC who had been fired following his appearance at a press conference accusing the hospital of covering up the benefits of laetrile. These mistakes were considered scientifically inconsequential, but Nicholas Wade in "Science" stated that "even the appearance of a departure from strict objectivity is unfortunate." The results from these studies were all published together.
A 2015 systematic review from the Cochrane Collaboration found:
The authors also recommended, on ethical and scientific grounds, that no further clinical research into laetrile or amygdalin be conducted.
Given the lack of evidence, laetrile has not been approved by the U.S. Food and Drug Administration or the European Commission.
The U.S. National Institutes of Health evaluated the evidence separately and concluded that clinical trials of amygdalin showed little or no effect against cancer. For example, a 1982 trial by the Mayo Clinic of 175 patients found that tumor size had increased in all but one patient. The authors reported that "the hazards of amygdalin therapy were evidenced in several patients by symptoms of cyanide toxicity or by blood cyanide levels approaching the lethal range."
The study concluded "Patients exposed to this agent should be instructed about the danger of cyanide poisoning, and their blood cyanide levels should be carefully monitored. Amygdalin (Laetrile) is a toxic drug that is not effective as a cancer treatment".
Additionally, "No controlled clinical trials (trials that compare groups of patients who receive the new treatment to groups who do not) of laetrile have been reported."
The side effects of laetrile treatment are the symptoms of cyanide poisoning. These symptoms include: nausea and vomiting, headache, dizziness, cherry red skin color, liver damage, abnormally low blood pressure, droopy upper eyelid, trouble walking due to damaged nerves, fever, mental confusion, coma, and death.
The European Food Safety Authority's Panel on Contaminants in the Food Chain has studied the potential toxicity of the amygdalin in apricot kernels. The Panel reported, "If consumers follow the recommendations of websites that promote consumption of apricot kernels, their exposure to cyanide will greatly exceed" the dose expected to be toxic. The Panel also reported that acute cyanide toxicity had occurred in adults who had consumed 20 or more kernels and that in children "five or more kernels appear to be toxic".
Advocates for laetrile assert that there is a conspiracy between the US Food and Drug Administration, the pharmaceutical industry and the medical community, including the American Medical Association and the American Cancer Society, to exploit the American people, and especially cancer patients.
Advocates of the use of laetrile have also changed the rationale for its use, first as a treatment of cancer, then as a vitamin, then as part of a "holistic" nutritional regimen, or as treatment for cancer pain, among others, none of which have any significant evidence supporting its use.
Despite the lack of evidence for its use, laetrile developed a significant following due to its wide promotion as a "pain-free" treatment of cancer as an alternative to surgery and chemotherapy that have significant side effects. The use of laetrile led to a number of deaths.
The FDA and AMA crackdown, begun in the 1970s, effectively escalated prices on the black market, played into the conspiracy narrative and enabled unscrupulous profiteers to foster multimillion-dollar smuggling empires.
Some American cancer patients have traveled to Mexico for treatment with the substance, for example at the Oasis of Hope Hospital in Tijuana. The actor Steve McQueen died in Mexico following surgery to remove a stomach tumor, having previously undergone extended treatment for pleural mesothelioma (a cancer associated with asbestos exposure) under the care of William D. Kelley, a de-licensed dentist and orthodontist who claimed to have devised a cancer treatment involving pancreatic enzymes, 50 daily vitamins and minerals, frequent body shampoos, enemas, and a specific diet as well as laetrile.
Laetrile advocates in the United States include Dean Burk, a former chief chemist of the National Cancer Institute cytochemistry laboratory, and national arm wrestling champion Jason Vale, who falsely claimed that his kidney and pancreatic cancers were cured by eating apricot seeds. Vale was convicted in 2004 for, among other things, fraudulently marketing laetrile as a cancer cure. The court also found that Vale had made at least $500,000 from his fraudulent sales of laetrile.
In the 1970s, court cases in several states challenged the FDA's authority to restrict access to what plaintiffs claimed were potentially lifesaving drugs. More than twenty states passed laws making the use of laetrile legal. After the unanimous Supreme Court ruling in "United States v. Rutherford", which upheld the federal ban on interstate shipment of the compound, usage fell off dramatically. The US Food and Drug Administration continues to seek jail sentences for vendors marketing laetrile for cancer treatment, calling it a "highly toxic product that has not shown any effect on treating cancer." | https://en.wikipedia.org/wiki?curid=3252 |
Apostles' Creed
The Apostles' Creed (Latin: "Symbolum Apostolorum" or "Symbolum Apostolicum"), sometimes titled the Apostolic Creed or the Symbol of the Apostles, is an early statement of Christian belief—a creed or "symbol". It is widely used by a number of Christian denominations for both liturgical and catechetical purposes, most visibly by liturgical Churches of Western tradition, including the Catholic Church, Lutheranism and Anglicanism. It is also used by Presbyterians, Moravians, Methodists and Congregationalists.
The Apostles' Creed is trinitarian in structure with sections affirming belief in God the Father, God the Son, and God the Holy Spirit. The Apostles' Creed was based on Christian theological understanding of the canonical gospels, the letters of the New Testament and to a lesser extent the Old Testament. Its basis appears to be the old Roman Creed known also as the Old Roman Symbol.
Because of the early origin of its original form, it does not address some Christological issues defined in the Nicene and other Christian creeds. It thus says nothing explicitly about the divinity of either Jesus or the Holy Spirit. Nor does it address many other theological questions which became objects of dispute centuries later.
The earliest known mention of the expression "Apostles' Creed" occurs in a letter of AD 390 from a synod in Milan and may have been associated with the belief, widely accepted in the 4th century, that, under the inspiration of the Holy Spirit, each of the Twelve Apostles contributed an article to the twelve articles of the creed.
The word "Symbolum", standing alone, appears around the middle of the third century in the correspondence of St. Cyprian and St. Firmilian, the latter in particular speaking of the Creed as the "Symbol of the Trinity", and recognizing it as an integral part of the rite of baptism.
The title "Symbolum Apostolicum" (Symbol or Creed of the Apostles) appears for the first time in a letter, probably written by Ambrose, from a Council in Milan to Pope Siricius in about AD 390 "Let them give credit to the Creed of the Apostles, which the Roman Church has always kept and preserved undefiled". But what existed at that time was not what is now known as the Apostles' Creed but a shorter statement of belief that, for instance, did not include the phrase "maker of heaven and earth", a phrase that may have been inserted only in the 7th century.
The account of the origin of this creed, the forerunner and principal source of the Apostles' Creed, as having been jointly created by the Apostles under the inspiration of the Holy Spirit, with each of the twelve contributing one of the articles, was already current at that time.
The earlier text evolved from simpler texts based on Matthew 28:19, part of the Great Commission, and it has been argued that this earlier text was already in written form by the late 2nd century (c. 180).
While the individual statements of belief that are included in the Apostles' Creed – even those not found in the Old Roman Symbol – are found in various writings by Irenaeus, Tertullian, Novatian, Marcellus, Rufinus, Ambrose, Augustine, Nicetas, and Eusebius Gallus, the earliest appearance of what we know as the Apostles' Creed was in the "De singulis libris canonicis scarapsus" (""Excerpt from Individual Canonical Books"") of St. Pirminius (Migne, "Patrologia Latina" 89, 1029 ff.), written between 710 and 714. Bettenson and Maunder state that it is first from "Dicta Abbatis Pirminii de singulis libris canonicis scarapsus" ("idem quod excarpsus", excerpt), c. 750. This longer Creed seems to have arisen in what is now France and Spain. Charlemagne imposed it throughout his dominions, and it was finally accepted in Rome, where the Old Roman Symbol or similar formulas had survived for centuries. It has been argued nonetheless that it dates from the second half of the 5th century, though no earlier.
As can be seen from various creeds all quoted in full below, although the original Greek and Latin creeds both specifically refer to “the resurrection of the "flesh"” (σαρκὸς ἀνάστασιν and "carnis resurrectionem"), the versions used by several churches, like the Catholic Church, the Church of England, Lutheran churches and Methodist churches, talk more generally of “the resurrection of the "body"”.
Some have suggested that the Apostles' Creed was spliced together with phrases from the New Testament. For instance, the phrase "descendit ad inferos" ("he descended into hell") echoes Ephesians 4:9, "κατέβη εἰς τὰ κατώτερα μέρη τῆς γῆς" ("he descended into the lower earthly regions"). It is of interest that this phrase first appeared in one of the two versions of Rufinus in AD 390 and then did not appear again in any version of the creed until AD 650.
This phrase and that on the communion of saints are articles found in the Apostles' Creed, but not in the Old Roman Symbol nor in the Nicene Creed.
The Greek text is "not normally used in Greek and Eastern Orthodox churches".
The International Consultation on English Texts (ICET), a first inter-church ecumenical group that undertook the writing of texts for use by English-speaking Christians in common, published "Prayers We Have in Common" (Fortress Press, 1970, 1971, 1975). Its version of the Apostles' Creed was adopted by several churches.
The English Language Liturgical Consultation (ELLC), a successor body to the International Consultation on English Texts (ICET), published in 1988 a revised translation of the Apostles' Creed. It avoided the word "his" in relation to God and spoke of Jesus Christ as "God's only Son" instead of "his only Son". In the fourth line, it replaced the personal pronoun "he" with the relative "who", and changed the punctuation, so as no longer to present the Creed as a series of separate statements. In the same line it removed the words "the power of". It explained its rationale for making these changes and for preserving other controverted expressions in the 1988 publication "Praying Together", with which it presented its new version:
The initial (1970) English official translation of the Roman Missal of the Catholic Church adopted the ICET version, as did catechetical texts such as the "Catechism of the Catholic Church".
In 2008 the Catholic Church published a new English translation of the texts of the Mass of the Roman Rite, use of which came into force at the end of 2011. It included the following translation of the Apostles' Creed:
In its discussion of the contents of the Creed, the "Catechism of the Catholic Church" presents it in the traditional division into twelve articles:
The same division into twelve articles is found also in Anabaptist catechesis:
Pelbartus Ladislaus of Temesvár gives a slightly different division, assigning one phrase to each apostle: Peter (No. 1), John (No. 2), James, son of Zebedee (No. 3), Andrew (No. 4), Philip (No. 5a: descendit ad infernos...), Thomas (No. 5b: ascendit ad caelos...), Bartholomew (No. 6), Matthew (No. 7), James, son of Alphaeus (No. 8), Simon (No. 9), Jude (No. 10), Matthias (No. 11–12).
In the Church of England there are currently two authorized forms of the creed: that of the "Book of Common Prayer" (1662) and that of "Common Worship" (2000).
Book of Common Prayer, 1662
I believe in God the Father Almighty,
Maker of heaven and earth:
And in Jesus Christ his only Son our Lord,
Who was conceived by the Holy Ghost,
Born of the Virgin Mary,
Suffered under Pontius Pilate,
Was crucified, dead, and buried:
He descended into hell;
The third day he rose again from the dead;
He ascended into heaven,
And sitteth on the right hand of God the Father Almighty;
From thence he shall come to judge the quick and the dead.
I believe in the Holy Ghost;
The holy Catholick Church;
The Communion of Saints;
The Forgiveness of sins;
The Resurrection of the body,
And the Life everlasting.
Amen.
Common Worship
The publication "Evangelical Lutheran Worship" published by Augsburg Fortress, is the primary worship resource for the Evangelical Lutheran Church in America, the largest Lutheran denomination in the United States, and the Evangelical Lutheran Church in Canada. It presents the official ELCA version, footnoting the phrase "he descended to the dead" to indicate an alternative reading: "or ‘he descended into hell,’ another translation of this text in widespread use".
The text is as follows:
The Church of Denmark still uses the phrase "We renounce the devil and all his doings and all his beings" as the beginning of this creed, before the line "We believe in God etc." This is mostly due to the influence of the Danish pastor Grundtvig.
The United Methodists commonly incorporate the Apostles' Creed into their worship services. The version which is most often used is located at No. 881 in the "United Methodist Hymnal", one of their most popular hymnals and one with a heritage to brothers John Wesley and Charles Wesley, founders of Methodism. It is notable for omitting the line "he descended into hell", but is otherwise very similar to the Book of Common Prayer version. The 1989 Hymnal has both the traditional version and the 1988 ecumenical version, which includes "he descended to the dead."
The Apostles' Creed as found in "The Methodist Hymnal" of 1939 also omits the line "he descended..." "The Methodist Hymnal" of 1966 has the same version of the creed, but with a note at the bottom of the page stating, "Traditional use of this creed includes these words: 'He descended into hell.'"
However, when the Methodist Episcopal Church was organized in the United States in 1784, John Wesley sent the new American Church a Sunday Service which included the phrase "he descended into hell" in the text of The Apostles' Creed. It is clear that Wesley intended American Methodists to use the phrase in the recitation of the Creed.
The "United Methodist Hymnal" of 1989 also contains (at #882) what it terms the "Ecumenical Version" of this creed which is the ecumenically accepted modern translation of the International Committee on English Texts (1975) as amended by the subsequent successor body, the English Language Liturgical Consultation (1987). This form of the Apostles' Creed can be found incorporated into the Eucharistic and Baptismal Liturgies in the Hymnal and in "The United Methodist Book of Worship", and hence it is growing in popularity and use. The word "catholic" is intentionally left lowercase in the sense that the word catholic applies to the universal and ecumenical Christian church.
The Apostles' Creed is used in its direct form or in interrogative forms by Western Christian communities in several of their liturgical rites, in particular those of baptism and the Eucharist.
The Apostles' Creed, whose present form is similar to the baptismal creed used in Rome in the third and fourth centuries, actually developed from questions addressed to those seeking baptism. The Catholic Church still today uses an interrogative form of it in the Rite of Baptism (for both children and adults). In the official English translation (ICEL, 1974) the minister of baptism asks:
To each question, the catechumen, or, in the case of an infant, the parents and sponsor(s) (godparent(s)) in his or her place, answers "I do." Then the celebrant says:
And all respond: Amen.
The Presbyterian Church of Aotearoa New Zealand uses the Apostles' Creed in its baptism rite in spite of the reservations of some of its members regarding the phrase "born of the virgin Mary".
The Episcopal Church in the United States of America uses the Apostles' Creed as part of a Baptismal Covenant for those who are to receive the Rite of Baptism. The Apostles' Creed is recited by candidates, sponsors and congregation, each section of the Creed being an answer to the celebrant's question, "Do you believe in God the Father (God the Son, God the Holy Spirit)?" It is also used in an interrogative form at the Easter Vigil in The Renewal of Baptismal Vows.
The Church of England likewise asks the candidates, sponsors and congregation to recite the Apostles' Creed in answer to similar interrogations, in which it avoids using the word "God" of the Son and the Holy Spirit, asking instead: "Do you believe and trust in his Son Jesus Christ?", and "Do you believe and trust in the Holy Spirit?" Moreover, "where there are strong pastoral reasons", it allows use of an alternative formula in which the interrogations, while speaking of "God the Son" and "God the Holy Spirit", are more elaborate but are not based on the Apostles' Creed, and the response in each case is: "I believe and trust in him." The "Book of Common Prayer" may also be used, which in its rite of baptism has the minister recite the Apostles' Creed in interrogative form, asking the godparents or, in the case "of such as are of Riper Years", the candidate: "Dost thou believe in God the Father ..." The response is: "All this I stedfastly believe."
Lutherans following the "Lutheran Service Book" (Lutheran Church–Missouri Synod and the Lutheran Church–Canada), like Catholics and Anglicans, use the Apostles' Creed during the Sacrament of Baptism:
Following each question, the candidate answers: "Yes, I believe". If the candidates are unable to answer for themselves, the sponsors are to answer the questions.
For ELCA Lutherans who use the Evangelical Lutheran Worship book, the Apostles' Creed appears during the Sacrament of Holy Baptism Rite on p. 229 of the hardcover pew edition.
Methodists use the Apostles' Creed as part of their baptismal rites in the form of an interrogatory addressed to the candidate(s) for baptism and the whole congregation as a way of professing the faith within the context of the Church's sacramental act. For infants, it is the professing of the faith by the parents, sponsors, and congregation on behalf of the candidate(s); for confirmands, it is the professing of the faith before and among the congregation. For the congregation, it is a reaffirmation of their professed faith.
Since the 2002 edition, the Apostles' Creed is included in the Roman Missal as an alternative, with the indication, "Instead of the Niceno-Constantinopolitan Creed, especially during Lent and Easter time, the baptismal Symbol of the Roman Church, known as the Apostles' Creed, may be used." Previously the Nicene Creed was the only profession of faith that the Missal gave for use at Mass, except in Masses for children; but in some countries use of the Apostles' Creed was already permitted.
The Apostles' Creed is used in Anglican services of Matins and Evening Prayer (Evensong). It is invoked after the recitation or singing of the Canticles, and is the only part of the services in which the congregation traditionally turns to face the altar, if they are seated transversely in the quire.
The Episcopal Church (United States) uses the Apostles' Creed in Morning Prayer and Evening Prayer.
Before the 1955 simplification of the rubrics of the Roman Breviary by Pope Pius XII, the Apostles' Creed was recited at the beginning of matins and prime, at the end of compline, and in some "preces" (a series of versicles and responses preceded by "Kyrie, eleison" ("Lord, have mercy") and the Our Father) of prime and compline on certain days during Advent and Lent.
Musical settings of the Symbolum Apostolorum as a motet are rare. The French composer Le Brung published one Latin setting in 1540, and the Spanish composer Fernando de las Infantas published two in 1578.
Martin Luther wrote the hymn "Wir glauben all an einen Gott" (translated into English as "We all believe in one God") in 1524 as a paraphrase of the Apostles' Creed.
In 1957, William P. Latham wrote "Credo (Metrical Version of the Apostle’s Creed)" in an SATB arrangement suitable for boys' and men's voices.
In 1979 John Michael Talbot, a Third Order Franciscan, composed and recorded "Creed" on his album, "The Lord's Supper".
In 1986 Graham Kendrick published the popular "We believe in God the Father", closely based on the Apostles' Creed.
The song "Creed" on Petra's 1990 album "Beyond Belief" is loosely based on the Apostles' Creed.
GIA Publications published a hymn text in 1991 directly based on the Apostles' Creed, called "I Believe in God Almighty." It has been sung to hymn tunes from Wales, the Netherlands, and Ireland.
Rich Mullins and Beaker also composed a musical setting titled "Creed", released on Mullins' 1993 album "A Liturgy, a Legacy, & a Ragamuffin Band". Notably, Mullins' version replaces "one holy catholic church" with "one holy church".
Integrity Music, under the Hosanna! Music series, produced a live worship acoustic album in 1993, "Be Magnified", which featured Randy Rothwell as worship leader and included an upbeat, enthusiastic hymn called "The Apostle's Creed", written by Randy Rothwell Burbank.
In 2014 Hillsong released a version of the Apostles' Creed under the title "This I Believe (The Creed)" on their album "No Other Name".
Keith & Kristyn Getty released an expression of the Apostles' Creed under the title "We Believe (Apostle's Creed)" on their 2016 album "Facing a Task Unfinished". | https://en.wikipedia.org/wiki?curid=3255 |
Amicable numbers
Amicable numbers are two different numbers so related that the sum of the proper divisors of each is equal to the other number.
The smallest pair of amicable numbers is (220, 284). They are amicable because the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110, of which the sum is 284; and the proper divisors of 284 are 1, 2, 4, 71 and 142, of which the sum is 220. (A proper divisor of a number is a positive factor of that number other than the number itself. For example, the proper divisors of 6 are 1, 2, and 3.)
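The defining check is easy to state programmatically. The following Python sketch (an illustrative aid, not part of the original article) sums proper divisors by trial division and confirms that 220 and 284 are amicable:

```python
def sum_proper_divisors(n: int) -> int:
    """Sum of the positive divisors of n, excluding n itself."""
    total = 1 if n > 1 else 0          # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:            # avoid double-counting a square-root divisor
                total += n // d
        d += 1
    return total

def is_amicable_pair(a: int, b: int) -> bool:
    return a != b and sum_proper_divisors(a) == b and sum_proper_divisors(b) == a

print(is_amicable_pair(220, 284))      # True
print(is_amicable_pair(6, 6))          # False: 6 is perfect, not amicable
```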
A pair of amicable numbers constitutes an aliquot sequence of period 2. It is unknown if there are infinitely many pairs of amicable numbers.
A related concept is that of a perfect number, which is a number that equals the sum of "its own" proper divisors, in other words a number which forms an aliquot sequence of period 1. Numbers that are members of an aliquot sequence with period greater than 2 are known as sociable numbers.
The first ten amicable pairs are: (220, 284), (1184, 1210), (2620, 2924), (5020, 5564), (6232, 6368), (10744, 10856), (12285, 14595), (17296, 18416), (63020, 76084), and (66928, 66992).
Amicable numbers were known to the Pythagoreans, who credited them with many mystical properties. A general formula by which some of these numbers could be derived was invented circa 850 by the Iraqi mathematician Thābit ibn Qurra (826–901). Other Arab mathematicians who studied amicable numbers are al-Majriti (died 1007), al-Baghdadi (980–1037), and al-Fārisī (1260–1320). The Iranian mathematician Muhammad Baqir Yazdi (16th century) discovered the pair (9363584, 9437056), though this has often been attributed to Descartes. Much of the work of Eastern mathematicians in this area has been forgotten.
Thābit ibn Qurra's formula was rediscovered by Fermat (1601–1665) and Descartes (1596–1650), to whom it is sometimes ascribed, and extended by Euler (1707–1783). It was extended further by Borho in 1972. Fermat and Descartes also rediscovered pairs of amicable numbers known to Arab mathematicians. Euler also discovered dozens of new pairs. The second smallest pair, (1184, 1210), was discovered in 1866 by a then teenage B. Nicolò I. Paganini (not to be confused with the composer and violinist), having been overlooked by earlier mathematicians.
By 1946 there were 390 known pairs, but the advent of computers has allowed the discovery of many thousands since then. Exhaustive searches have been carried out to find all pairs less than a given bound, this bound being extended from 10^8 in 1970, to 10^10 in 1986, 10^11 in 1993, 10^17 in 2015, and to 10^18 in 2016.
As of the latest published counts, there are over 1,225,063,681 known amicable pairs.
While these rules do generate some pairs of amicable numbers, many other pairs are known, so these rules are by no means comprehensive.
In particular, the two rules below produce only even amicable pairs, so they are of no interest for the open problem of finding amicable pairs coprime to 210 = 2·3·5·7, while over 1000 pairs coprime to 30 = 2·3·5 are known [García, Pedersen & te Riele (2003), Sándor & Crstici (2004)].
The Thābit ibn Qurra theorem is a method for discovering amicable numbers invented in the ninth century by the Arab mathematician Thābit ibn Qurra.
It states that if
p = 3 × 2^(n−1) − 1,
q = 3 × 2^n − 1,
r = 9 × 2^(2n−1) − 1,
where n > 1 is an integer and p, q, and r are prime numbers, then 2^n × p × q and 2^n × r are a pair of amicable numbers. This formula gives the pair (220, 284) for n = 2, (17296, 18416) for n = 4, and (9363584, 9437056) for n = 7, but no other such pairs are known. Numbers of the form 3 × 2^n − 1 are known as Thabit numbers. In order for Ibn Qurra's formula to produce an amicable pair, two consecutive Thabit numbers must be prime; this severely restricts the possible values of n.
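A direct implementation of the rule (an illustrative sketch, not from the original article) reproduces the three known pairs for small n:

```python
def is_prime(m: int) -> bool:
    """Trial-division primality test; adequate for the small n used here."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def thabit_pair(n: int):
    """Amicable pair from Thabit ibn Qurra's rule, or None if p, q, r are not all prime."""
    p = 3 * 2 ** (n - 1) - 1
    q = 3 * 2 ** n - 1
    r = 9 * 2 ** (2 * n - 1) - 1
    if is_prime(p) and is_prime(q) and is_prime(r):
        return 2 ** n * p * q, 2 ** n * r
    return None

for n in range(2, 13):
    pair = thabit_pair(n)
    if pair:
        print(n, pair)   # prints pairs only for n = 2, 4 and 7
```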
To establish the theorem, Thâbit ibn Qurra proved nine lemmas divided into two groups. The first three lemmas deal with the determination of the aliquot parts of a natural integer. The second group of lemmas deals more specifically with the formation of perfect, abundant and deficient numbers.
"Euler's rule" is a generalization of the Thâbit ibn Qurra theorem. It states that if
where are integers and , , and are prime numbers, then and are a pair of amicable numbers. Thābit ibn Qurra's theorem corresponds to the case . Euler's rule creates additional amicable pairs for with no others being known. Euler (1747 & 1750) overall found 58 new pairs to make all the by then existing pairs into 61.
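Since Euler's rule contains Thabit's rule as the special case m = n − 1, the earlier sketch generalizes directly. The following is again illustrative only; the known case (29, 40) involves numbers far too large for trial division and would need a proper primality test such as sympy.isprime:

```python
def euler_rule_pair(m: int, n: int):
    """Amicable pair from Euler's rule for integers 0 < m < n, or None."""
    def is_prime(k: int) -> bool:       # small trial-division test, fine for tiny cases
        if k < 2:
            return False
        d = 2
        while d * d <= k:
            if k % d == 0:
                return False
            d += 1
        return True

    f = 2 ** (n - m) + 1
    p = f * 2 ** m - 1
    q = f * 2 ** n - 1
    r = f * f * 2 ** (m + n) - 1
    if is_prime(p) and is_prime(q) and is_prime(r):
        return 2 ** n * p * q, 2 ** n * r
    return None

# Thabit's theorem is the special case m = n - 1: (m, n) = (3, 4) gives (17296, 18416).
print(euler_rule_pair(3, 4))
```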
Let (m, n) be a pair of amicable numbers with m < n, and write m = gM and n = gN where g is the greatest common divisor of m and n. If M and N are both coprime to g and square-free, then the pair (m, n) is said to be regular; otherwise it is called irregular or exotic. If (m, n) is regular and M and N have i and j prime factors respectively, then (m, n) is said to be of type (i, j).
For example, with (m, n) = (220, 284), the greatest common divisor is 4 and so M = 55 and N = 71. Therefore, (220, 284) is regular of type (2, 1).
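This classification is mechanical once the pair is factorized. A minimal sketch (illustrative only) that returns the type of a regular pair, or None for an irregular one:

```python
from math import gcd

def factorize(n: int) -> dict:
    """Prime factorization of n as a dict {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def amicable_pair_type(m: int, n: int):
    """Return (i, j) if the pair (m, n) is regular, or None if irregular/exotic."""
    g = gcd(m, n)
    M, N = m // g, n // g
    fM, fN = factorize(M), factorize(N)
    square_free = all(e == 1 for e in fM.values()) and all(e == 1 for e in fN.values())
    coprime_to_g = gcd(M, g) == 1 and gcd(N, g) == 1
    if square_free and coprime_to_g:
        return len(fM), len(fN)
    return None

print(amicable_pair_type(220, 284))   # (2, 1)
```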
An amicable pair (m, n) is twin if there are no integers between m and n belonging to any other amicable pair.
In every known case, the numbers of a pair are either both even or both odd. It is not known whether an even-odd pair of amicable numbers exists, but if it does, the even number must either be a square number or twice one, and the odd number must be a square number. However, amicable numbers where the two members have different smallest prime factors do exist: there are seven such pairs known. Also, every known pair shares at least one common prime factor. It is not known whether a pair of coprime amicable numbers exists, though if any does, the product of the two must be greater than 10^67. Also, a pair of coprime amicable numbers cannot be generated by Thabit's formula (above), nor by any similar formula.
In 1955, Paul Erdős showed that the density of amicable numbers, relative to the positive integers, was 0.
According to the sum of amicable pairs conjecture, as the number of amicable numbers approaches infinity, the percentage of the sums of the amicable pairs divisible by ten approaches 100%.
Amicable numbers (m, n) satisfy σ(m) = m + n and σ(n) = m + n, which can be written together as σ(m) = σ(n) = m + n, where σ denotes the sum-of-divisors function (so the sum of the proper divisors of m is σ(m) − m). This can be generalized to larger tuples, say (n_1, n_2, ..., n_k), where we require σ(n_1) = σ(n_2) = ... = σ(n_k) = n_1 + n_2 + ... + n_k.
For example, (1980, 2016, 2556) is an amicable triple, and (3270960, 3361680, 3461040, 3834000) is an amicable quadruple.
Amicable multisets are defined analogously and generalize this a bit further.
Sociable numbers are the numbers in cyclic lists of numbers (with a length greater than 2) where each number is the sum of the proper divisors of the preceding number. For example, 1264460, 1547860, 1727636, and 1305184 are sociable numbers of order 4.
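A small sketch (illustrative, not from the article) that iterates the aliquot map s(n), the sum of the proper divisors of n, and reports the cycle a starting value falls into, covering amicable pairs, perfect numbers, and sociable cycles uniformly:

```python
def s(n: int) -> int:
    """Aliquot map: sum of the proper divisors of n."""
    total, d = 1 if n > 1 else 0, 2
    while d * d <= n:
        if n % d == 0:
            total += d + (n // d if d != n // d else 0)
        d += 1
    return total

def aliquot_cycle(start: int, max_steps: int = 50):
    """Return the cycle reached from `start`, or None if none is found in max_steps."""
    seen, x = [], start
    for _ in range(max_steps):
        if x in seen:
            return seen[seen.index(x):]
        seen.append(x)
        x = s(x)
        if x == 0:                     # sequence terminated (reached 1, then 0)
            return None
    return None

print(aliquot_cycle(220))       # [220, 284]  -> amicable pair (cycle of length 2)
print(aliquot_cycle(6))         # [6]         -> perfect number (loop)
print(aliquot_cycle(1264460))   # the order-4 sociable cycle quoted above
```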
The aliquot sequence can be represented as a directed graph, G_{n,s}, for a given integer n, where s(k) denotes the sum of the proper divisors of k.
Cycles in G_{n,s} represent sociable numbers within the interval [1, n]. Two special cases are loops that represent perfect numbers and cycles of length two that represent amicable pairs. | https://en.wikipedia.org/wiki?curid=3259 |
Agar
Agar, or agar-agar, is a jelly-like substance obtained from red algae.
Agar is a mixture of two components: the linear polysaccharide agarose, and a heterogeneous mixture of smaller molecules called agaropectin. It forms the supporting structure in the cell walls of certain species of algae, and is released on boiling. These algae are known as agarophytes, and belong to the Rhodophyta (red algae) phylum.
Agar has been used as an ingredient in desserts throughout Asia, and also as a solid substrate to contain culture media for microbiological work. Agar can be used as a laxative, an appetite suppressant, a vegetarian substitute for gelatin, a thickener for soups, in fruit preserves, ice cream, and other desserts, as a clarifying agent in brewing, and for sizing paper and fabrics.
The gelling agent in agar is an unbranched polysaccharide obtained from the cell walls of some species of red algae, primarily from "tengusa" ("Gelidiaceae") and "ogonori" ("Gracilaria"). For commercial purposes, it is derived primarily from "ogonori". In chemical terms, agar is a polymer made up of subunits of the sugar galactose.
Agar may have been discovered in Japan in 1658 by Mino Tarōzaemon, an innkeeper in current Fushimi-ku, Kyoto who, according to legend, was said to have discarded surplus seaweed soup and noticed that it gelled later after a winter night's freezing. Over the following centuries, agar became a common gelling agent in several Southeast Asian cuisines.
Agar was first subjected to chemical analysis in 1859 by the French chemist Anselme Payen, who had obtained agar from the marine algae "Gelidium corneum".
Beginning in the late 19th century, agar began to be used heavily as a solid medium for growing various microbes. Agar was first described for use in microbiology in 1882 by the German microbiologist Walther Hesse, an assistant working in Robert Koch's laboratory, on the suggestion of his wife Fannie Hesse. Agar quickly supplanted gelatin as the base of microbiological media, due to its higher melting temperature, allowing microbes to be grown at higher temperatures without the media liquefying.
With its newfound use in microbiology, agar production quickly increased. This production centered on Japan, which produced most of the world's agar until World War II. However, with the outbreak of World War II, many nations were forced to establish domestic agar industries in order to continue microbiological research. Around the time of World War II, approximately 2,500 tons of agar were produced annually. By the mid-1970s, production worldwide had increased dramatically to approximately 10,000 tons each year. Since then, production of agar has fluctuated due to unstable and sometimes over-utilized seaweed populations.
The word "agar" comes from agar-agar, the Malay name for red algae ("Gigartina", "Gracilaria") from which the jelly is produced. It is also known as Kanten () (from the phrase "kan-zarashi tokoroten" () or “cold-exposed agar”), Japanese isinglass, China grass, Ceylon moss or Jaffna moss. "Gracilaria lichenoides" is specifically referred to as agal-agal or Ceylon agar.
Agar consists of a mixture of two polysaccharides: agarose and agaropectin, with agarose making up about 70% of the mixture. Agarose is a linear polymer, made up of repeating units of agarobiose, a disaccharide made up of D-galactose and 3,6-anhydro-L-galactopyranose. Agaropectin is a heterogeneous mixture of smaller molecules that occur in lesser amounts, and is made up of alternating units of D-galactose and L-galactose heavily modified with acidic side-groups, such as sulfate and pyruvate.
Agar exhibits hysteresis, melting at 85 °C (358 K, 185 °F) and solidifying from 32–40 °C (305–313 K, 90–104 °F). This property lends a suitable balance between easy melting and good gel stability at relatively high temperatures. Since many scientific applications require incubation at temperatures close to human body temperature (37 °C), agar is more appropriate than other solidifying agents that melt at this temperature, such as gelatin.
Agar-agar is a natural vegetable gelatin counterpart. It is white and semi-translucent when sold in packages as washed and dried strips or in powdered form. It can be used to make jellies, puddings, and custards. When making jelly, it is boiled in water until the solids dissolve. Sweetener, flavoring, coloring, fruits and/or vegetables are then added, and the liquid is poured into molds to be served as desserts and vegetable aspics or incorporated with other desserts such as a layer of jelly in a cake.
Agar-agar is approximately 80% fiber, so it can serve as an intestinal regulator. Its bulking quality has been behind fad diets in Asia, for example the "kanten" (the Japanese word for agar-agar) diet. Once ingested, "kanten" triples in size and absorbs water. This results in the consumers feeling fuller. This diet has recently received some press coverage in the United States as well. The diet has shown promise in obesity studies.
One use of agar in Japanese cuisine (Wagashi) is "anmitsu", a dessert made of small cubes of agar jelly and served in a bowl with various fruits or other ingredients. It is also the main ingredient in "mizu yōkan", another popular Japanese food. In Philippine cuisine, it is used to make the jelly bars in the various gulaman refreshments or desserts such as "sago gulaman", "buko pandan", "agar flan", "halo-halo", and the black and red "gulaman" used in various fruit salads. In Vietnamese cuisine, jellies made of flavored layers of agar agar, called "thạch", are a popular dessert, and are often made in ornate molds for special occasions. In Indian cuisine, agar agar is known as "China grass" and is used for making desserts. In Burmese cuisine, a sweet jelly known as "kyauk kyaw" (Burmese: ကျောက်ကျော) is made from agar.
Agar jelly is widely used in Taiwanese bubble tea. The bubble teahouses such as Gong Cha and Chatime can be seen in Australia, the United States, the United Kingdom, Middle East and many Asian countries.
In Russia, it is used in addition to or as a replacement for pectin in jams and marmalades, as a substitute to gelatin for its superior gelling properties, and as a strengthening ingredient in souffles and custards. Another use of agar-agar is in "ptich'ye moloko" (bird's milk), a rich jellified custard (or soft meringue) used as a cake filling or chocolate-glazed as individual sweets. Agar-agar may also be used as the gelling agent in gel clarification, a culinary technique used to clarify stocks, sauces, and other liquids. Mexico has traditional candies made out of agar gelatin, most of them in colorful, half-circle shapes that resemble a melon or watermelon fruit slice, and commonly covered with sugar. They are known in Spanish as "Dulce de Agar" (agar sweets).
Agar-agar is an allowed nonorganic/nonsynthetic additive used as a thickener, gelling agent, texturizer, moisturizer, emulsifier, flavor enhancer, and absorbent in certified organic foods.
An agar plate or Petri dish is used to provide a growth medium using a mix of agar and other nutrients in which microorganisms, including bacteria and fungi, can be cultured and observed under the microscope. Agar is indigestible for many organisms so that microbial growth does not affect the gel used and it remains stable. Agar is typically sold commercially as a powder that can be mixed with water and prepared similarly to gelatin before use as a growth medium. Other ingredients are added to the agar to meet the nutritional needs of the microbes. Many microbe-specific formulations are available, because some microbes prefer certain environmental conditions over others. Agar is often dispensed using a sterile media dispenser.
As a gel, an agar or agarose medium is porous and therefore can be used to measure microorganism motility and mobility. The gel's porosity is directly related to the concentration of agarose in the medium, so various levels of effective viscosity (from the cell's "point of view") can be selected, depending on the experimental objectives.
A common identification assay involves culturing a sample of the organism deep within a block of nutrient agar. Cells will attempt to grow within the gel structure. Motile species will be able to migrate, albeit slowly, throughout the gel and infiltration rates can then be visualized, whereas non-motile species will show growth only along the now-empty path introduced by the invasive initial sample deposition.
Another setup commonly used for measuring chemotaxis and chemokinesis utilizes the under-agarose cell migration assay, whereby a layer of agarose gel is placed between a cell population and a chemoattractant. As a concentration gradient develops from the diffusion of the chemoattractant into the gel, various cell populations requiring different stimulation levels to migrate can then be visualized over time using microphotography as they tunnel upward through the gel against gravity along the gradient.
Research grade agar is used extensively in plant biology as it is optionally supplemented with a nutrient and/or vitamin mixture that allows for seedling germination in Petri dishes under sterile conditions (given that the seeds are sterilized as well). Nutrient and/or vitamin supplementation for "Arabidopsis thaliana" is standard across most experimental conditions. Murashige & Skoog (MS) nutrient mix and Gamborg's B5 vitamin mix in general are used. A 1.0% agar/0.44% MS+vitamin dH2O solution is suitable for growth media at normal growth temperatures.
When using agar, within any growth medium, it is important to know that the solidification of the agar is pH-dependent. The optimal range for solidification is between 5.4 and 5.7. Usually, the application of KOH is needed to increase the pH to this range. A general guideline is about 600 µl 0.1M KOH per 250 ml GM; see the sketch below for scaling this to other batch sizes. This entire mixture can be sterilized using the liquid cycle of an autoclave.
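The guideline scales linearly with medium volume. The snippet below is a hypothetical helper for that arithmetic (only the 600 µl per 250 ml figure comes from the text; the function and its defaults are illustrative):

```python
def koh_volume_ul(growth_medium_ml: float,
                  guideline_ul: float = 600.0,
                  guideline_ml: float = 250.0) -> float:
    """Approximate volume of 0.1 M KOH (in microlitres) to bring a growth medium
    into the pH 5.4-5.7 range, scaling the rule-of-thumb linearly with volume."""
    return guideline_ul * growth_medium_ml / guideline_ml

print(koh_volume_ul(250))    # 600.0 ul, the stated guideline
print(koh_volume_ul(1000))   # 2400.0 ul for a 1 L batch
```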
This medium lends itself well to applying specific concentrations of phytohormones to induce particular growth patterns: one can easily prepare a solution containing the desired amount of hormone, add it to a known volume of GM, and autoclave it both to sterilize it and to evaporate off any solvent that may have been used to dissolve the often-polar hormones. This hormone/GM solution can then be spread across the surface of Petri dishes sown with germinated and/or etiolated seedlings.
Experiments with the moss "Physcomitrella patens", however, have shown that choice of the gelling agent – agar or Gelrite – does influence phytohormone sensitivity of the plant cell culture.
Agar is used:
Gelidium agar is used primarily for bacteriological plates. Gracilaria agar is used mainly in food applications.
In 2016, AMAM, a Japanese company, developed a prototype for an agar-based commercial packaging system called Agar Plasticity, intended as a replacement for oil-based plastic packaging. | https://en.wikipedia.org/wiki?curid=3262 |
Acid rain
Acid rain is rain or any other form of precipitation that is unusually acidic, meaning that it has elevated levels of hydrogen ions (low pH). It can have harmful effects on plants, aquatic animals, and infrastructure. Acid rain is caused by emissions of sulfur dioxide and nitrogen oxide, which react with the water molecules in the atmosphere to produce acids. Some governments have made efforts since the 1970s to reduce the release of sulfur dioxide and nitrogen oxide into the atmosphere, with positive results. Nitrogen oxides can also be produced naturally by lightning strikes, and sulfur dioxide is produced by volcanic eruptions. Acid rain has been shown to have adverse impacts on forests, freshwaters, and soils, killing insect and aquatic life-forms, causing paint to peel, corroding steel structures such as bridges, and weathering stone buildings and statues, as well as having impacts on human health.
"Acid rain" is a popular term referring to the deposition of a mixture from wet (rain, snow, sleet, fog, cloudwater, and dew) and dry (acidifying particles and gases) acidic components. Distilled water, once carbon dioxide is removed, has a neutral pH of 7. Liquids with a pH less than 7 are acidic, and those with a pH greater than 7 are alkaline. "Clean" or unpolluted rain has an acidic pH, but usually no lower than 5.7, because carbon dioxide and water in the air react together to form carbonic acid, a weak acid according to the following reaction:
Carbonic acid then can ionize in water, forming low concentrations of hydrogen carbonate (bicarbonate) and hydronium ions:
H2O (l) + H2CO3 (aq) ⇌ H3O+ (aq) + HCO3− (aq)
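A back-of-the-envelope calculation shows why this equilibrium puts unpolluted rain near pH 5.6–5.7. The Python sketch below uses approximate, assumed textbook constants (Henry's law constant for CO2, atmospheric CO2 partial pressure, and the first dissociation constant of carbonic acid); none of these values appear in the article itself:

```python
import math

# Assumed, approximate constants (textbook orders of magnitude, not from the article)
K_H   = 3.4e-2    # Henry's law constant for CO2 in water, mol/(L*atm)
p_CO2 = 4.0e-4    # partial pressure of CO2 in the atmosphere, atm
K_a1  = 4.5e-7    # first dissociation constant of carbonic acid (as dissolved CO2)

dissolved_co2 = K_H * p_CO2                 # mol/L of dissolved CO2 / H2CO3
h_plus = math.sqrt(K_a1 * dissolved_co2)    # weak-acid approximation for [H3O+]
print(round(-math.log10(h_plus), 2))        # ~5.6, close to the pH 5.7 quoted above
```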
Unpolluted rain can also contain other chemicals which affect its pH (acidity level). A common example is nitric acid produced by electrical discharges in the atmosphere such as lightning. Acid deposition as an environmental issue (discussed later in the article) would include additional acids other than carbonic acid (H2CO3).
The corrosive effect of polluted, acidic city air on limestone and marble was noted in the 17th century by John Evelyn, who remarked upon the poor condition of the Arundel marbles.
Since the Industrial Revolution, emissions of sulfur dioxide and nitrogen oxides into the atmosphere have increased. In 1852, Robert Angus Smith was the first to show the relationship between acid rain and atmospheric pollution in Manchester, England.
In the late 1960s, scientists began widely observing and studying the phenomenon. The term "acid rain" was coined in 1872 by Robert Angus Smith. Canadian Harold Harvey was among the first to research a "dead" lake. At first, the main focus in research lay on local effects of acid rain. Waldemar Christofer Brøgger was the first to acknowledge long-distance transportation of pollutants crossing borders from the United Kingdom to Norway. Public awareness of acid rain in the US increased in the 1970s after "The New York Times" published reports from the Hubbard Brook Experimental Forest in New Hampshire of the harmful environmental effects that result from it.
Occasional pH readings in rain and fog water of well below 2.4 have been reported in industrialized areas. Industrial acid rain is a substantial problem in China and Russia and areas downwind from them. These areas all burn sulfur-containing coal to generate heat and electricity.
The problem of acid rain has not only increased with population and industrial growth, but has become more widespread. The use of tall smokestacks to reduce local pollution has contributed to the spread of acid rain by releasing gases into regional atmospheric circulation. Often deposition occurs a considerable distance downwind of the emissions, with mountainous regions tending to receive the greatest deposition (because of their higher rainfall). An example of this effect is the low pH of rain which falls in Scandinavia.
The earliest report about acid rain in the United States was from the chemical evidence from Hubbard Brook Valley. In 1972, a group of scientists including Gene Likens discovered the rain that was deposited at White Mountains of New Hampshire was acidic. The pH of the sample was measured to be 4.03 at Hubbard Brook. The Hubbard Brook Ecosystem Study followed up with a series of research that analyzed the environmental effects of acid rain. Acid rain that mixed with stream water at Hubbard Brook was neutralized by the alumina from soils. The result of this research indicates the chemical reaction between acid rain and aluminum leads to an increasing rate of soil weathering. Experimental research was done to examine the effects of increased acidity in stream on ecological species. In 1980, a group of scientists modified the acidity of Norris Brook, New Hampshire, and observed the change in species' behaviors. There was a decrease in species diversity, an increase in community dominants, and a decrease in the food web complexity.
In 1980, the US Congress passed an Acid Deposition Act. This Act established an 18-year assessment and research program under the direction of the National Acidic Precipitation Assessment Program (NAPAP). NAPAP looked at the entire problem from a scientific perspective. It enlarged a network of monitoring sites to determine how acidic the precipitation actually was, and to determine long-term trends, and established a network for dry deposition. It focused on the effects of acid rain by funding research to identify and quantify the effects of acid precipitation on freshwater and terrestrial ecosystems, historical buildings, monuments, and building materials. It also funded extensive studies on atmospheric processes and potential control programs.
From the start, policy advocates from all sides attempted to influence NAPAP activities to support their particular policy advocacy efforts, or to disparage those of their opponents. For the US Government's scientific enterprise, a significant impact of NAPAP were lessons learned in the assessment process and in environmental research management to a relatively large group of scientists, program managers, and the public.
In 1981, the National Academy of Sciences was looking into research about the controversial issues regarding acid rain. President Ronald Reagan paid little attention to the issue until a personal visit to Canada, where he confirmed that the Canadian border region suffered from pollution drifting from smokestacks in the US Midwest. Reagan honored an agreement with Canadian Prime Minister Pierre Trudeau to enforce anti-pollution regulation. In 1982, US President Ronald Reagan commissioned William Nierenberg to serve on the National Science Board. Nierenberg selected scientists, including Gene Likens, to serve on a panel to draft a report on acid rain. In 1983, the panel of scientists produced a draft report which concluded that acid rain is a real problem and that solutions should be sought. The White House Office of Science and Technology Policy reviewed the draft report and forwarded Fred Singer's suggestions, which cast doubt on the cause of acid rain. The panelists rejected Singer's positions and submitted the report to Nierenberg in April. In May 1983, the House of Representatives voted against legislation that aimed to control sulfur emissions. There was debate about whether Nierenberg delayed the release of the report. Nierenberg himself denied suppressing the report and explained that it was withheld after the House's vote because it was not yet ready for publication.
In 1991, the US National Acid Precipitation Assessment Program (NAPAP) provided its first assessment of acid rain in the United States. It reported that 5% of New England Lakes were acidic, with sulfates being the most common problem. They noted that 2% of the lakes could no longer support Brook Trout, and 6% of the lakes were unsuitable for the survival of many species of minnow. Subsequent "Reports to Congress" have documented chemical changes in soil and freshwater ecosystems, nitrogen saturation, decreases in amounts of nutrients in soil, episodic acidification, regional haze, and damage to historical monuments.
Meanwhile, in 1990, the US Congress passed a series of amendments to the Clean Air Act. Title IV of these amendments established a cap-and-trade system designed to control emissions of sulfur dioxide and nitrogen oxides. Title IV called for a total reduction of about 10 million tons of SO2 emissions from power plants, close to a 50% reduction. It was implemented in two phases. Phase I began in 1995, and limited sulfur dioxide emissions from 110 of the largest power plants to a combined total of 8.7 million tons of sulfur dioxide. One power plant in New England (Merrimack) was in Phase I. Four other plants (Newington, Mount Tom, Brayton Point, and Salem Harbor) were added under other provisions of the program. Phase II began in 2000, and affects most of the power plants in the country.
During the 1990s, research continued. On March 10, 2005, the EPA issued the Clean Air Interstate Rule (CAIR). This rule provides states with a solution to the problem of power plant pollution that drifts from one state to another. CAIR will permanently cap emissions of SO2 and NOx in the eastern United States. When fully implemented, CAIR will reduce SO2 emissions in 28 eastern states and the District of Columbia by over 70% and NOx emissions by over 60% from 2003 levels.
Overall, the cap-and-trade program has been successful in achieving its goals. Since the 1990s, SO2 emissions have dropped 40%, and according to the Pacific Research Institute, acid rain levels have dropped 65% since 1976. Conventional regulation was used in the European Union, which saw a decrease of over 70% in SO2 emissions during the same time period.
In 2007, total SO2 emissions were 8.9 million tons, achieving the program's long-term goal ahead of the 2010 statutory deadline.
In 2007 the EPA estimated that by 2010, the overall costs of complying with the program for businesses and consumers would be $1 billion to $2 billion a year, only one fourth of what was originally predicted. Forbes says: "In 2010, by which time the cap and trade system had been augmented by the George W. Bush administration's Clean Air Interstate Rule, SO2 emissions had fallen to 5.1 million tons."
The term citizen science can be traced back as far as January 1989 and a campaign by the Audubon Society to measure acid rain. Scientist Muki Haklay cites in a policy report for the Wilson Center entitled 'Citizen Science and Policy: A European Perspective' a first use of the term 'citizen science' by R. Kerson in the magazine MIT Technology Review from January 1989. Quoting from the Wilson Center report: "The new form of engagement in science received the name "citizen science". The first recorded example of the use of the term is from 1989, describing how 225 volunteers across the US collected rain samples to assist the Audubon Society in an acid-rain awareness raising campaign. The volunteers collected samples, checked for acidity, and reported back to the organization. The information was then used to demonstrate the full extent of the phenomenon."
The most important gas which leads to acidification is sulfur dioxide. Emissions of nitrogen oxides which are oxidized to form nitric acid are of increasing importance due to stricter controls on emissions of sulfur compounds. 70 Tg(S) per year in the form of SO2 comes from fossil fuel combustion and industry, 2.8 Tg(S) from wildfires and 7–8 Tg(S) per year from volcanoes.
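As a rough calculation from the figures above (taking the midpoint of the volcanic range): 70 / (70 + 2.8 + 7.5) ≈ 0.87, so fossil fuel combustion and industry account for roughly 87% of the SO2-sulfur listed here.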
The principal natural phenomena that contribute acid-producing gases to the atmosphere are emissions from volcanoes. Thus, for example, fumaroles from the Laguna Caliente crater of Poás Volcano create extremely high amounts of acid rain and fog, with acidity as high as a pH of 2, clearing an area of any vegetation and frequently causing irritation to the eyes and lungs of inhabitants in nearby settlements. Acid-producing gases are also created by biological processes that occur on the land, in wetlands, and in the oceans. The major biological source of sulfur compounds is dimethyl sulfide.
Nitric acid in rainwater is an important source of fixed nitrogen for plant life, and is also produced by electrical activity in the atmosphere such as lightning.
Acidic deposits have been detected in glacial ice thousands of years old in remote parts of the globe.
Soils of coniferous forests are naturally very acidic due to the shedding of needles, and the results of this phenomenon should not be confused with acid rain.
The principal cause of acid rain is sulfur and nitrogen compounds from human sources, such as electricity generation, animal agriculture, factories, and motor vehicles. Electrical power generation using coal is among the greatest contributors to gaseous pollution responsible for acidic rain. The gases can be carried hundreds of kilometers in the atmosphere before they are converted to acids and deposited. In the past, factories had short funnels to let out smoke but this caused many problems locally; thus, factories now have taller smoke funnels. However, dispersal from these taller stacks causes pollutants to be carried farther, causing widespread ecological damage.
Combustion of fuels produces sulfur dioxide and nitrogen oxides. They are converted into sulfuric acid and nitric acid.
In the gas phase, sulfur dioxide is oxidized by reaction with the hydroxyl radical via an intermolecular reaction:
SO2 + ·OH → HOSO2·
which is followed by:
HOSO2· + O2 → HO2· + SO3
In the presence of water, sulfur trioxide (SO3) is converted rapidly to sulfuric acid:
SO3 (g) + H2O (l) → H2SO4 (aq)
Nitrogen dioxide reacts with ·OH to form nitric acid:
NO2 + ·OH → HNO3
When clouds are present, the loss rate of SO2 is faster than can be explained by gas phase chemistry alone. This is due to reactions in the liquid water droplets.
Sulfur dioxide dissolves in water and then, like carbon dioxide, hydrolyses in a series of equilibrium reactions:
SO2 (g) + H2O ⇌ SO2·H2O
SO2·H2O ⇌ H+ + HSO3−
HSO3− ⇌ H+ + SO32−
There are a large number of aqueous reactions that oxidize sulfur from S(IV) to S(VI), leading to the formation of sulfuric acid. The most important oxidation reactions are with ozone, hydrogen peroxide and oxygen (reactions with oxygen are catalyzed by iron and manganese in the cloud droplets).
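As one illustration of this aqueous-phase chemistry, the hydrogen peroxide pathway is often summarized in simplified overall form as:
HSO3− + H2O2 → SO42− + H+ + H2O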
Wet deposition of acids occurs when any form of precipitation (rain, snow, and so on) removes acids from the atmosphere and delivers them to the Earth's surface. This can result from the deposition of acids produced in the raindrops (see aqueous phase chemistry above) or from the precipitation removing the acids either in clouds or below clouds. Wet removal of both gases and aerosols is of importance for wet deposition.
Acid deposition also occurs via dry deposition in the absence of precipitation. This can be responsible for as much as 20 to 60% of total acid deposition. This occurs when particles and gases stick to the ground, plants or other surfaces.
Acid rain has been shown to have adverse impacts on forests, freshwaters and soils, killing insect and aquatic life-forms as well as causing damage to buildings and having impacts on human health.
Both the lower pH and higher aluminium concentrations in surface water that occur as a result of acid rain can cause damage to fish and other aquatic animals. At pH lower than 5 most fish eggs will not hatch and lower pH can kill adult fish. As lakes and rivers become more acidic biodiversity is reduced. Acid rain has eliminated insect life and some fish species, including the brook trout in some lakes, streams, and creeks in geographically sensitive areas, such as the Adirondack Mountains of the United States. However, the extent to which acid rain contributes directly or indirectly via runoff from the catchment to lake and river acidity (i.e., depending on characteristics of the surrounding watershed) is variable. The United States Environmental Protection Agency's (EPA) website states: "Of the lakes and streams surveyed, acid rain caused acidity in 75% of the acidic lakes and about 50% of the acidic streams". Lakes hosted by silicate basement rocks are more acidic than lakes within limestone or other basement rocks with a carbonate composition (i.e. marble) due to buffering effects by carbonate minerals, even with the same amount of acid rain.
Soil biology and chemistry can be seriously damaged by acid rain. Some microbes are unable to tolerate changes to low pH and are killed. The enzymes of these microbes are denatured (changed in shape so they no longer function) by the acid. The hydronium ions of acid rain also mobilize toxins, such as aluminium, and leach away essential nutrients and minerals such as magnesium.
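One simplified way to picture this mobilization is acid dissolving an aluminium-bearing soil mineral; taking aluminium hydroxide as an illustrative (assumed) mineral form:
Al(OH)3 + 3 H+ → Al3+ + 3 H2O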
Soil chemistry can be dramatically changed when base cations, such as calcium and magnesium, are leached by acid rain thereby affecting sensitive species, such as sugar maple (Acer saccharum).
Soil acidification
The impact of acidic water and soil acidification on plants ranges from minor to severe. Minor cases, involving plants that are less sensitive to acidic conditions or acid rain that is less potent, may not kill the plant outright; even so, acidic water lowers the plant's natural pH and dissolves and carries away essential minerals, so the plant can eventually die from a lack of minerals for nutrition. In severe cases the same removal of essential minerals occurs, but at a much faster rate. Likewise, acid rain that falls on soil and on plant leaves dries out the waxy leaf cuticle, which causes rapid water loss from the plant to the atmosphere and can result in the plant's death. To see whether a plant is being affected by soil acidification, one can closely observe its leaves: if the leaves are green and look healthy, the soil pH is normal and acceptable for plant life, but yellowing between the veins indicates the plant is suffering from acidification and is unhealthy. Moreover, a plant suffering from soil acidification cannot photosynthesize, because drying caused by acidic water destroys the chloroplast organelles. Without photosynthesis a plant cannot create nutrients for its own survival or oxygen for the survival of aerobic organisms, which affects most species on Earth.
Adverse effects may be indirectly related to acid rain, like the acid's effects on soil (see above) or high concentration of gaseous precursors to acid rain. High altitude forests are especially vulnerable as they are often surrounded by clouds and fog which are more acidic than rain.
Other plants can also be damaged by acid rain, but the effect on food crops is minimized by the application of lime and fertilizers to replace lost nutrients. In cultivated areas, limestone may also be added to increase the ability of the soil to keep the pH stable, but this tactic is largely unusable in the case of wilderness lands. When calcium is leached from the needles of red spruce, these trees become less cold tolerant and exhibit winter injury and even death.
Acid rain has a much less harmful effect on the oceans on a global scale, but it has an amplified impact in shallower coastal waters. Acid rain can cause the ocean's pH to fall, known as ocean acidification, making it more difficult for various coastal species to build the exoskeletons they need to survive. These coastal species are linked together as part of the ocean's food chain, and if they are lost as a food source, other marine life will die as well.
Coral's limestone skeleton is sensitive to a drop in pH, because calcium carbonate, the core component of the limestone, dissolves in acidic (low-pH) solutions.
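In simplified form, the underlying dissolution reaction is:
CaCO3 + 2 H+ → Ca2+ + CO2 + H2O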
In addition to acidification, excess nitrogen inputs from the atmosphere promote increased growth of phytoplankton and other marine plants which, in turn, may cause more frequent harmful algal blooms and eutrophication (the creation of oxygen-depleted “dead zones”) in some parts of the ocean.
Acid rain does not directly affect human health; the acid in the rainwater is too dilute to have direct adverse effects. However, the pollutants responsible for acid rain (sulfur dioxide and nitrogen oxides) and the fine particulates they form do have adverse effects: increased amounts of fine particulate matter in the air contribute to heart and lung problems, including asthma and bronchitis.
Acid rain can damage buildings, historic monuments, and statues, especially those made of rocks, such as limestone and marble, that contain large amounts of calcium carbonate. Acids in the rain react with the calcium compounds in the stones to create gypsum, which then flakes off.
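The conversion to gypsum can be sketched, in simplified overall form, as:
CaCO3 + H2SO4 + H2O → CaSO4·2H2O (gypsum) + CO2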
The effects of this are commonly seen on old gravestones, where acid rain can cause the inscriptions to become completely illegible. Acid rain also increases the corrosion rate of metals, in particular iron, steel, copper and bronze.
Places significantly impacted by acid rain around the globe include most of eastern Europe from Poland northward into Scandinavia, the eastern third of the United States, and southeastern Canada. Other affected areas include the southeastern coast of China and Taiwan.
Many coal-fired power stations use flue-gas desulfurization (FGD) to remove sulfur-containing gases from their stack gases. For a typical coal-fired power station, FGD will remove 95% or more of the SO2 in the flue gases. An example of FGD is the commonly used wet scrubber. A wet scrubber is basically a reaction tower equipped with a fan that extracts hot smoke stack gases from a power plant into the tower. Lime or limestone in slurry form is also injected into the tower to mix with the stack gases and combine with the sulfur dioxide present. The calcium carbonate of the limestone produces pH-neutral calcium sulfate that is physically removed from the scrubber. That is, the scrubber turns sulfur pollution into industrial sulfates.
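A simplified sketch of the limestone chemistry, assuming the common forced-oxidation variant that ends in gypsum, is:
CaCO3 + SO2 → CaSO3 + CO2
CaSO3 + ½ O2 + 2 H2O → CaSO4·2H2O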
In some areas the sulfates are sold to chemical companies as gypsum when the purity of calcium sulfate is high. In others, they are placed in landfill. The effects of acid rain can last for generations, as the effects of pH level change can stimulate the continued leaching of undesirable chemicals into otherwise pristine water sources, killing off vulnerable insect and fish species and blocking efforts to restore native life.
Fluidized bed combustion also reduces the amount of sulfur emitted by power production.
Vehicle emissions control reduces emissions of nitrogen oxides from motor vehicles.
International treaties on the long-range transport of atmospheric pollutants have been agreed, for example the 1985 Helsinki Protocol on the Reduction of Sulphur Emissions under the Convention on Long-Range Transboundary Air Pollution. Canada and the US signed the Air Quality Agreement in 1991. Most European countries and Canada have signed the treaties.
In this regulatory scheme, every current polluting facility is given or may purchase on an open market an emissions allowance for each unit of a designated pollutant it emits. Operators can then install pollution control equipment, and sell portions of their emissions allowances they no longer need for their own operations, thereby recovering some of the capital cost of their investment in such equipment. The intention is to give operators economic incentives to install pollution controls.
The first emissions trading market was established in the United States by enactment of the Clean Air Act Amendments of 1990. The overall goal of the Acid Rain Program established by the Act is to achieve significant environmental and public health benefits through reductions in emissions of sulfur dioxide (SO2) and nitrogen oxides (NOx), the primary causes of acid rain. To achieve this goal at the lowest cost to society, the program employs both regulatory and market based approaches for controlling air pollution. | https://en.wikipedia.org/wiki?curid=3263 |
Arlo Guthrie
Arlo Davy Guthrie (born July 10, 1947) is an American folk singer-songwriter. He is known for singing songs of protest against social injustice, and storytelling while performing songs, following the tradition of his father Woody Guthrie. Guthrie's best-known work is his debut piece, "Alice's Restaurant Massacree", a satirical talking blues song about 18 minutes in length that has since become a Thanksgiving anthem. His only top-40 hit was a cover of Steve Goodman's "City of New Orleans". His song "Massachusetts" was named the official folk song of the state in which he has lived most of his adult life. Guthrie has also made several acting appearances. He is the father of four children, who have also had careers as musicians.
Guthrie was born in the Coney Island neighborhood of Brooklyn, New York, the son of the folk singer and composer Woody Guthrie and dancer Marjorie Mazia Guthrie. His sister is the record producer Nora Guthrie. His mother was a professional dancer with the Martha Graham Company and founder of the Committee to Combat Huntington's disease (later Huntington's Disease Society of America), the illness from which Woody Guthrie died in 1967. Arlo's father was from a Protestant family and his mother was Jewish. His maternal grandmother was the renowned Yiddish poet Aliza Greenblatt.
Guthrie received religious training for his bar mitzvah from Rabbi Meir Kahane, who would go on to form the Jewish Defense League. "Rabbi Kahane was a really nice, patient teacher," Guthrie later recalled, "but shortly after he started giving me my lessons, he started going haywire. Maybe I was responsible." Guthrie converted to Catholicism in 1977, before embracing interfaith beliefs later in his life. "I firmly believe that different religious traditions can reside in one person, or one nation or even one world," Guthrie said in 2015.
Guthrie attended Woodward School in Clinton Hill, Brooklyn, from first through eighth grades and later graduated from the Stockbridge School, in Stockbridge, Massachusetts, in 1965. He spent the summer of 1965 in London, eventually meeting Karl Dallas, who connected Guthrie with London's folk rock scene and became a lifelong friend of his. He briefly attended Rocky Mountain College, in Billings, Montana. He received an honorary doctorate from Siena College in 1981 and from Westfield State College in 2008.
As a singer, songwriter and lifelong political activist, Guthrie carries on the legacy of his father. He was awarded the Peace Abbey Courage of Conscience award on September 26, 1992.
On Thanksgiving Day 1965, while in Stockbridge, Massachusetts, during a break from his brief stint in college, 18-year-old Guthrie was arrested for illegally dumping on private property what he described as "a half-ton of garbage" from the home of his friends, teachers Ray and Alice Brock, after he discovered the local landfill was closed for the holiday. Guthrie and his friend Richard Robbins appeared in court, pled guilty to the charges, were levied a nominal fine and picked up the garbage that weekend.
This littering charge would soon serve as the basis for Guthrie's most famous work, "Alice's Restaurant Massacree", a talking blues song that lasts 18 minutes and 34 seconds in its original recorded version. Guthrie has pointed out that this was also the exact length of one of the infamous gaps in Richard Nixon's Watergate tapes, and that Nixon owned a copy of the record. The Alice in the song is Alice Brock, who had been a librarian at Arlo's boarding school in the town before opening her restaurant. She later opened an art studio in Provincetown, Massachusetts.
The song lampoons the Vietnam War draft. However, Guthrie has stated in multiple interviews that the song is more an "anti-stupidity" song than an anti-war song, adding that it is based on a true incident. In the song, Guthrie is called up for a draft examination and rejected as unfit for military service as a result of a criminal record consisting solely of one conviction for the aforementioned littering. Alice and her restaurant are the subjects of the refrain, but are generally mentioned only incidentally in the story (early drafts of the song explained that the restaurant was a place to hide from the police). Though her presence is implied at certain points in the story, Alice herself is described explicitly in the tale only briefly when she bails Guthrie and a friend out of jail. On the DVD commentary for the 1969 movie, Guthrie stated that the events presented in the song all actually happened (others, such as the arresting officer, William Obanhein, disputed some of the song's details, but generally verified the truth of the overall story).
"Alice's Restaurant" was the song that earned Guthrie his first recording contract, after counterculture radio host Bob Fass began playing a tape recording of one of Guthrie's live performances of the song repeatedly one night in 1967. A performance at the Newport Folk Festival on July 17, 1967, was also very well received. Soon afterward, Guthrie recorded the song in front of a studio audience in New York City and released it as side one of the album, "Alice's Restaurant". By the end of the decade, Guthrie had gone from playing coffee houses and small venues to playing massive and prestigious venues such as Carnegie Hall and the Woodstock Festival.
For a short period after its release in October 1967, "Alice's Restaurant" was heavily played on U.S. college and counterculture radio stations. It became a symbol of the late 1960s, and for many it defined an attitude and lifestyle that were lived out across the country in the ensuing years. Its leisurely finger-picking acoustic guitar and rambling lyrics were widely memorized and played by irreverent youth. Many stations in the United States have a Thanksgiving Day tradition of playing "Alice's Restaurant".
A 1969 film, directed and co-written by Arthur Penn, was based on the true story told in the song, but with the addition of a large number of fictional scenes. This film, also called "Alice's Restaurant", featured Guthrie and several other figures in the song portraying themselves. The part of his father Woody Guthrie, who had died in 1967, was played by actor Joseph Boley; Alice, who made a cameo appearance as an extra, was also recast, with actress Pat Quinn in the title role (Alice Brock later disowned the film's portrayal of her).
Despite its popularity, the song "Alice's Restaurant Massacree" is not always featured on the setlist of any given Guthrie performance. Since putting it back into his setlist in 1984, he has performed the song every ten years, explaining in a 2014 interview that the Vietnam War had ended in the 1970s and that everyone attending his concerts had likely already heard the song anyway. So, after a brief period in the late 1960s and early 1970s when he replaced the monologue with a fictional one involving "multicolored rainbow roaches", he decided to perform it only on special occasions from that point forward.
The "Alice's Restaurant" song was one of a few very long songs to become popular just when albums began replacing hit singles as young people's main music listening. But in 1972 Guthrie had a highly successful single too, Steve Goodman's song "City of New Orleans", a wistful paean to long-distance passenger rail travel. Guthrie's first trip on that train was in December 2005 (when his family joined other musicians on a train trip across the country to raise money for musicians financially devastated by Hurricane Katrina and Hurricane Rita, in the South of the United States). He also had a minor hit with his song "Coming into Los Angeles", which was played at the 1969 Woodstock Festival, but did not get much radio airplay because of its plot (involving the smuggling of drugs from London by airplane), and success with a live version of "The Motorcycle Song" (one of the songs on the B-side of the "Alice's Restaurant" album). A cover of the folk song "Gypsy Davy" was a hit on the easy listening charts.
In the fall of 1975 during a benefit concert in Massachusetts, Arlo Guthrie performed with his band, Shenandoah, in public for the first time. They continued to tour and record throughout the 1970s until the early 1990s. Although the band received good reviews, it never gained the popularity that Guthrie did while playing solo. Shenandoah consisted of (after 1976) David Grover, Steve Ide, Carol Ide, Terry A La Berry and Dan Velika and is not to be confused with the country music group Shenandoah. The Ides, along with Terry a la Berry, reunited with Guthrie for a 2018 tour. Guthrie has performed a concert almost every Thanksgiving weekend since he became famous at Carnegie Hall, a tradition he announced would come to an end after the 2019 concert.
Guthrie's 1976 album "Amigo" received a five-star (highest) rating from "Rolling Stone" and may be his best-received work. However, that album, like Guthrie's earlier Warner Bros. Records albums, is rarely heard today, even though each contains strong folk and folk rock music accompanied by widely regarded musicians such as Ry Cooder.
A number of musicians from a variety of genres have joined Guthrie onstage, including Pete Seeger, David Bromberg, Cyril Neville, Emmylou Harris, Willie Nelson, Judy Collins, John Prine, Wesley Gray, Josh Ritter, and others.
Though Guthrie is best known for being a musician, singer, and composer, throughout the years he has also appeared as an actor in films and on television. The film "Alice's Restaurant" (1969) is his best known role, but he has had small parts in several films and even co-starred in a television drama, "Byrds of Paradise".
Guthrie has had minor roles in several movies and television series. Usually, he has appeared as himself, often performing music and/or being interviewed about the 1960s, folk music and various social causes. His television appearances have included a broad range of programs from "The Muppet Show" (1979) to "Politically Incorrect" (1998). A rare dramatic film part was in the 1992 movie "Roadside Prophets". Guthrie's memorable appearance at the 1969 Woodstock Festival was documented in the Michael Wadleigh film "Woodstock".
Guthrie also made a pilot for a TV variety show called "The Arlo Guthrie Show" in February 1987. The hour-long program included story telling and musical performances and was filmed in Austin, Texas. It was broadcast nationally on PBS. Special guests were Pete Seeger, Bonnie Raitt, David Bromberg and Jerry Jeff Walker.
In his earlier years, at least from the 1960s to the 1980s, Guthrie had taken what seemed a left-leaning approach to American politics, influenced by his father. In his often lengthy comments during concerts his expressed positions were consistently anti-war, anti-Nixon, pro-drugs and in favor of making nuclear power illegal. However, he apparently did not perceive himself as the major youth culture spokesperson he had been regarded as by the media, as evidenced by the lyrics in his 1979 song "Prologue": "I can remember all of your smiles during the demonstrations ... and together we sang our victory songs though we were worlds apart." A 1969 rewrite of "Alice's Restaurant" pokes fun at then-former President Lyndon Johnson and his staff.
In 1984, he was the featured celebrity in George McGovern's campaign for the Democratic presidential nomination in Guthrie's home state of Massachusetts, performing at rallies and receptions.
Guthrie identified as a registered Republican in 2008. He endorsed Texas Congressman Ron Paul for the 2008 Republican Party nomination, and said, "I love this guy. Dr. Paul is the only candidate I know of who would have signed the Constitution of the United States had he been there. I'm with him, because he seems to be the only candidate who actually believes it has as much relevance today as it did a couple of hundred years ago. I look forward to the day when we can work out the differences we have with the same revolutionary vision and enthusiasm that is our American legacy." He told "The New York Times Magazine" that he is a Republican because, "We had enough good Democrats. We needed a few more good Republicans. We needed a loyal opposition."
Commenting on the upcoming 2016 election, Guthrie identified himself as an independent, and said he was "equally suspicious of Democrats as I am of Republicans." He declined to endorse a candidate, noting that he personally liked Bernie Sanders despite disagreeing with Sanders' platform, and he praised Donald Trump for not relying on campaign donations, stating that he thought it "wonderful" that "he's [Trump] not in anyone's pocket," but did not believe that this necessarily means that Trump has the best interests of the country in mind.
In 2018, Guthrie contacted publication Urban Milwaukee to clarify his political stance. He stated "I am not a Republican," and expressed deep disagreement with the Trump administration's views and policies on immigration. Guthrie further clarified, "I left the party years ago and do not identify myself with either party these days. I strongly urge my fellow Americans to stop the current trend of guilt by association, and look beyond the party names and affiliations, and work for candidates whose policies are more closely aligned with their own, whatever they may be. ... I don't pretend to be right all the time, and sometimes I've gone so far as to change my mind from time to time."
Like his father, Woody Guthrie, he often sings songs of protest against social injustice. He collaborated with poet Adrian Mitchell to tell the story of Chilean folk singer and activist Víctor Jara in song. He regularly performed with folk musician Pete Seeger, one of his father's longtime partners. Ramblin' Jack Elliott, who had lived for two years in the Guthries' home before Arlo left for boarding school, had absorbed Woody's style perhaps better than anyone; Arlo has been said to have credited Elliott for passing it along to him.
In 1991, Guthrie bought the church that had served as Alice and Ray Brock's former home in Great Barrington, Massachusetts, and converted it to the Guthrie Center, an interfaith meeting place that serves people of all religions. The center provides weekly free lunches in the community and support for families living with HIV/AIDS, as well as other life-threatening illnesses. It also hosts a summertime concert series and Guthrie does six or seven fund raising shows there every year. There are several annual events such as the Walk-A-Thon to Cure Huntington's Disease and a "Thanksgiving Dinner That Can't Be Beat" for families, friends, doctors and scientists who live and work with Huntington's disease.
One of the title characters in the comic strip "Arlo and Janis" is named after Guthrie. Cartoonist Jimmy Johnson noted he was inspired by a friend who resembled Guthrie to name one of his characters Arlo.
Guthrie resides in the town of Washington, Massachusetts, where he and Jackie Hyde, his wife of 43 years, were long time residents. Jackie died on October 14, 2012, shortly after being diagnosed with liver cancer. He also has a home in Sebastian, Florida.
Guthrie's son Abe Guthrie and his daughters Annie, Sarah Lee Guthrie, and Cathy Guthrie are also musicians. Abe Guthrie was formerly in the folk-rock band Xavier and has toured with his father. Annie Guthrie writes songs, performs, and takes care of family touring details. Sarah Lee performs and records with her husband Johnny Irion. Cathy plays ukulele in Folk Uke, a group she formed with Amy Nelson, a daughter of Willie Nelson. | https://en.wikipedia.org/wiki?curid=3273 |
Antioxidant
Antioxidants are compounds that inhibit oxidation. Oxidation is a chemical reaction that can produce free radicals, thereby leading to chain reactions that may damage the cells of organisms. Antioxidants such as thiols or ascorbic acid (vitamin C) terminate these chain reactions. To balance the oxidative stress, plants and animals maintain complex systems of overlapping antioxidants, such as glutathione and enzymes (e.g., catalase and superoxide dismutase), produced internally, or the dietary antioxidants vitamin C and vitamin E.
The term "antioxidant" is mostly used for two entirely different groups of substances: industrial chemicals that are added to products to prevent oxidation, and naturally occurring compounds that are present in foods and tissue. The former, industrial antioxidants, have diverse uses: acting as preservatives in food and cosmetics, and being oxidation-inhibitors in fuels.
Antioxidant dietary supplements have not been shown to improve health in humans, or to be effective at preventing disease. Supplements of beta-carotene, vitamin A, and vitamin E have no positive effect on mortality rate or cancer risk. Additionally, supplementation with selenium or vitamin E does not reduce the risk of cardiovascular disease.
Although certain levels of antioxidant vitamins in the diet are required for good health, there is still considerable debate on whether antioxidant-rich foods or supplements have anti-disease activity. Moreover, if they are actually beneficial, it is unknown which antioxidants are health-promoting in the diet and in what amounts beyond typical dietary intake. Some authors dispute the hypothesis that antioxidant vitamins could prevent chronic diseases, and others declare that the hypothesis is unproven and misguided. Polyphenols, which have antioxidant properties in vitro, have unknown antioxidant activity in vivo due to extensive metabolism following digestion and little clinical evidence of efficacy.
Common pharmaceuticals (and supplements) with antioxidant properties may interfere with the efficacy of certain anticancer medication and radiation therapy.
Relatively strong reducing acids can have antinutrient effects by binding to dietary minerals such as iron and zinc in the gastrointestinal tract and preventing them from being absorbed. Examples are oxalic acid, tannins and phytic acid, which are high in plant-based diets. Calcium and iron deficiencies are not uncommon in diets in developing countries where less meat is eaten and there is high consumption of phytic acid from beans and unleavened whole grain bread.
High doses of some antioxidants may have harmful long-term effects. The "beta-Carotene and Retinol Efficacy Trial" (CARET) study of lung cancer patients found that smokers given supplements containing beta-carotene and vitamin A had increased rates of lung cancer. Subsequent studies confirmed these adverse effects. These harmful effects may also be seen in non-smokers, as one meta-analysis including data from approximately 230,000 patients showed that β-carotene, vitamin A or vitamin E supplementation is associated with increased mortality, but saw no significant effect from vitamin C. No health risk was seen when all the randomized controlled studies were examined together, but an increase in mortality was detected when only high-quality and low-bias risk trials were examined separately. As the majority of these low-bias trials dealt with either elderly people, or people with disease, these results may not apply to the general population. This meta-analysis was later repeated and extended by the same authors, confirming the previous results. These two publications are consistent with some previous meta-analyses that also suggested that vitamin E supplementation increased mortality, and that antioxidant supplements increased the risk of colon cancer. Beta-carotene may also increase lung cancer. Overall, the large number of clinical trials carried out on antioxidant supplements suggest that either these products have no effect on health, or that they cause a small increase in mortality in elderly or vulnerable populations.
A paradox in metabolism is that, while the vast majority of complex life on Earth requires oxygen for its existence, oxygen is a highly reactive molecule that damages living organisms by producing reactive oxygen species. Consequently, organisms contain a complex network of antioxidant metabolites and enzymes that work together to prevent oxidative damage to cellular components such as DNA, proteins and lipids. In general, antioxidant systems either prevent these reactive species from being formed, or remove them before they can damage vital components of the cell. However, reactive oxygen species also have useful cellular functions, such as redox signaling. Thus, the function of antioxidant systems is not to remove oxidants entirely, but instead to keep them at an optimum level.
The reactive oxygen species produced in cells include hydrogen peroxide (H2O2), hypochlorous acid (HClO), and free radicals such as the hydroxyl radical (·OH) and the superoxide anion (O2−). The hydroxyl radical is particularly unstable and will react rapidly and non-specifically with most biological molecules. This species is produced from hydrogen peroxide in metal-catalyzed redox reactions such as the Fenton reaction. These oxidants can damage cells by starting chemical chain reactions such as lipid peroxidation, or by oxidizing DNA or proteins. Damage to DNA can cause mutations and possibly cancer, if not reversed by DNA repair mechanisms, while damage to proteins causes enzyme inhibition, denaturation and protein degradation.
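The metal-catalyzed step referred to here, the Fenton reaction, is commonly written as:
Fe2+ + H2O2 → Fe3+ + ·OH + OH−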
The use of oxygen as part of the process for generating metabolic energy produces reactive oxygen species. In this process, the superoxide anion is produced as a by-product of several steps in the electron transport chain. Particularly important is the reduction of coenzyme Q in complex III, since a highly reactive free radical is formed as an intermediate (Q·−). This unstable intermediate can lead to electron "leakage", when electrons jump directly to oxygen and form the superoxide anion, instead of moving through the normal series of well-controlled reactions of the electron transport chain. Peroxide is also produced from the oxidation of reduced flavoproteins, such as complex I. However, although these enzymes can produce oxidants, the relative importance of the electron transfer chain to other processes that generate peroxide is unclear. In plants, algae, and cyanobacteria, reactive oxygen species are also produced during photosynthesis, particularly under conditions of high light intensity. This effect is partly offset by the involvement of carotenoids in photoinhibition, and in algae and cyanobacteria, by large amounts of iodide and selenium, which involves these antioxidants reacting with over-reduced forms of the photosynthetic reaction centres to prevent the production of reactive oxygen species.
Antioxidants are classified into two broad divisions, depending on whether they are soluble in water (hydrophilic) or in lipids (lipophilic). In general, water-soluble antioxidants react with oxidants in the cell cytosol and the blood plasma, while lipid-soluble antioxidants protect cell membranes from lipid peroxidation. These compounds may be synthesized in the body or obtained from the diet. The different antioxidants are present at a wide range of concentrations in body fluids and tissues, with some such as glutathione or ubiquinone mostly present within cells, while others such as uric acid are more evenly distributed (see table below). Some antioxidants are only found in a few organisms and these compounds can be important in pathogens and can be virulence factors.
The relative importance and interactions between these different antioxidants is a very complex question, with the various antioxidant compounds and antioxidant enzyme systems having synergistic and interdependent effects on one another. The action of one antioxidant may therefore depend on the proper function of other members of the antioxidant system. The amount of protection provided by any one antioxidant will also depend on its concentration, its reactivity towards the particular reactive oxygen species being considered, and the status of the antioxidants with which it interacts.
Some compounds contribute to antioxidant defense by chelating transition metals and preventing them from catalyzing the production of free radicals in the cell. Particularly important is the ability to sequester iron, which is the function of iron-binding proteins such as transferrin and ferritin. Selenium and zinc are commonly referred to as "antioxidant nutrients", but these chemical elements have no antioxidant action themselves and are instead required for the activity of some antioxidant enzymes, as is discussed below.
Uric acid is by far the highest concentration antioxidant in human blood. Uric acid (UA) is an antioxidant oxypurine produced from xanthine by the enzyme xanthine oxidase, and is an intermediate product of purine metabolism. In almost all land animals, urate oxidase further catalyzes the oxidation of uric acid to allantoin, but in humans and most higher primates, the urate oxidase gene is nonfunctional, so that UA is not further broken down. The evolutionary reasons for this loss of urate conversion to allantoin remain the topic of active speculation. The antioxidant effects of uric acid have led researchers to suggest this mutation was beneficial to early primates and humans. Studies of high altitude acclimatization support the hypothesis that urate acts as an antioxidant by mitigating the oxidative stress caused by high-altitude hypoxia.
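The enzymatic step mentioned above can be summarized, in simplified overall form, as:
xanthine + H2O + O2 → uric acid + H2O2 (catalyzed by xanthine oxidase)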
Uric acid has the highest concentration of any blood antioxidant and provides over half of the total antioxidant capacity of human serum. Uric acid's antioxidant activities are also complex, given that it does not react with some oxidants, such as superoxide, but does act against peroxynitrite, peroxides, and hypochlorous acid. Concerns over elevated UA's contribution to gout must be considered as one of many risk factors. By itself, UA-related risk of gout at high levels (415–530 μmol/L) is only 0.5% per year with an increase to 4.5% per year at UA supersaturation levels (535+ μmol/L). Many of these aforementioned studies determined UA's antioxidant actions within normal physiological levels, and some found antioxidant activity at levels as high as 285 μmol/L.
Ascorbic acid or vitamin C is a monosaccharide oxidation-reduction (redox) catalyst found in both animals and plants. As one of the enzymes needed to make ascorbic acid has been lost by mutation during primate evolution, humans must obtain it from their diet; it is therefore a dietary vitamin. Most other animals are able to produce this compound in their bodies and do not require it in their diets. Ascorbic acid is required for the conversion of the procollagen to collagen by oxidizing proline residues to hydroxyproline. In other cells, it is maintained in its reduced form by reaction with glutathione, which can be catalysed by protein disulfide isomerase and glutaredoxins. Ascorbic acid is a redox catalyst which can reduce, and thereby neutralize, reactive oxygen species such as hydrogen peroxide. In addition to its direct antioxidant effects, ascorbic acid is also a substrate for the redox enzyme ascorbate peroxidase, a function that is used in stress resistance in plants. Ascorbic acid is present at high levels in all parts of plants and can reach concentrations of 20 millimolar in chloroplasts.
Glutathione is a cysteine-containing peptide found in most forms of aerobic life. It is not required in the diet and is instead synthesized in cells from its constituent amino acids. Glutathione has antioxidant properties since the thiol group in its cysteine moiety is a reducing agent and can be reversibly oxidized and reduced. In cells, glutathione is maintained in the reduced form by the enzyme glutathione reductase and in turn reduces other metabolites and enzyme systems, such as ascorbate in the glutathione-ascorbate cycle, glutathione peroxidases and glutaredoxins, as well as reacting directly with oxidants. Due to its high concentration and its central role in maintaining the cell's redox state, glutathione is one of the most important cellular antioxidants. In some organisms glutathione is replaced by other thiols, such as by mycothiol in the Actinomycetes, bacillithiol in some Gram-positive bacteria, or by trypanothione in the Kinetoplastids.
Vitamin E is the collective name for a set of eight related tocopherols and tocotrienols, which are fat-soluble vitamins with antioxidant properties. Of these, α-tocopherol has been most studied as it has the highest bioavailability, with the body preferentially absorbing and metabolising this form.
It has been claimed that the α-tocopherol form is the most important lipid-soluble antioxidant, and that it protects membranes from oxidation by reacting with lipid radicals produced in the lipid peroxidation chain reaction. This removes the free radical intermediates and prevents the propagation reaction from continuing. This reaction produces oxidised α-tocopheroxyl radicals that can be recycled back to the active reduced form through reduction by other antioxidants, such as ascorbate, retinol or ubiquinol. This is in line with findings showing that α-tocopherol, but not water-soluble antioxidants, efficiently protects glutathione peroxidase 4 (GPX4)-deficient cells from cell death. GPx4 is the only known enzyme that efficiently reduces lipid-hydroperoxides within biological membranes.
However, the roles and importance of the various forms of vitamin E are presently unclear, and it has even been suggested that the most important function of α-tocopherol is as a signaling molecule, with this molecule having no significant role in antioxidant metabolism. The functions of the other forms of vitamin E are even less well understood, although γ-tocopherol is a nucleophile that may react with electrophilic mutagens, and tocotrienols may be important in protecting neurons from damage.
Antioxidants that are reducing agents can also act as pro-oxidants. For example, vitamin C has antioxidant activity when it reduces oxidizing substances such as hydrogen peroxide; however, it will also reduce metal ions that generate free radicals through the Fenton reaction.
The relative importance of the antioxidant and pro-oxidant activities of antioxidants is an area of current research, but vitamin C, which exerts its effects as a vitamin by oxidizing polypeptides, appears to have a mostly antioxidant action in the human body. However, less data is available for other dietary antioxidants, such as vitamin E, or the polyphenols. Likewise, the pathogenesis of diseases involving hyperuricemia likely involve uric acid's direct and indirect pro-oxidant properties.
That is, paradoxically, agents which are normally considered antioxidants can act as conditional pro-oxidants and actually increase oxidative stress. Besides ascorbate, medically important conditional pro-oxidants include uric acid and sulfhydryl amino acids such as homocysteine. Typically, this involves some transition-series metal such as copper or iron as catalyst. The potential role of the pro-oxidant role of uric acid in (e.g.) atherosclerosis and ischemic stroke is considered above. Another example is the postulated role of homocysteine in atherosclerosis.
As with the chemical antioxidants, cells are protected against oxidative stress by an interacting network of antioxidant enzymes. Here, the superoxide released by processes such as oxidative phosphorylation is first converted to hydrogen peroxide and then further reduced to give water. This detoxification pathway is the result of multiple enzymes, with superoxide dismutases catalysing the first step and then catalases and various peroxidases removing hydrogen peroxide. As with antioxidant metabolites, the contributions of these enzymes to antioxidant defenses can be hard to separate from one another, but the generation of transgenic mice lacking just one antioxidant enzyme can be informative.
Superoxide dismutases (SODs) are a class of closely related enzymes that catalyze the breakdown of the superoxide anion into oxygen and hydrogen peroxide. SOD enzymes are present in almost all aerobic cells and in extracellular fluids. Superoxide dismutase enzymes contain metal ion cofactors that, depending on the isozyme, can be copper, zinc, manganese or iron. In humans, the copper/zinc SOD is present in the cytosol, while manganese SOD is present in the mitochondrion. There also exists a third form of SOD in extracellular fluids, which contains copper and zinc in its active sites. The mitochondrial isozyme seems to be the most biologically important of these three, since mice lacking this enzyme die soon after birth. In contrast, the mice lacking copper/zinc SOD (Sod1) are viable but have numerous pathologies and a reduced lifespan (see article on superoxide), while mice without the extracellular SOD have minimal defects (sensitive to hyperoxia). In plants, SOD isozymes are present in the cytosol and mitochondria, with an iron SOD found in chloroplasts that is absent from vertebrates and yeast.
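The dismutation catalyzed by SOD can be written as:
2 O2− + 2 H+ → O2 + H2O2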
Catalases are enzymes that catalyse the conversion of hydrogen peroxide to water and oxygen, using either an iron or manganese cofactor. This protein is localized to peroxisomes in most eukaryotic cells. Catalase is an unusual enzyme since, although hydrogen peroxide is its only substrate, it follows a ping-pong mechanism. Here, its cofactor is oxidised by one molecule of hydrogen peroxide and then regenerated by transferring the bound oxygen to a second molecule of substrate. Despite its apparent importance in hydrogen peroxide removal, humans with genetic deficiency of catalase — "acatalasemia" — or mice genetically engineered to lack catalase completely, suffer few ill effects.
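Summing the two ping-pong half-steps, the overall reaction catalyzed is:
2 H2O2 → 2 H2O + O2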
Peroxiredoxins are peroxidases that catalyze the reduction of hydrogen peroxide, organic hydroperoxides, as well as peroxynitrite. They are divided into three classes: typical 2-cysteine peroxiredoxins; atypical 2-cysteine peroxiredoxins; and 1-cysteine peroxiredoxins. These enzymes share the same basic catalytic mechanism, in which a redox-active cysteine (the peroxidatic cysteine) in the active site is oxidized to a sulfenic acid by the peroxide substrate. Over-oxidation of this cysteine residue in peroxiredoxins inactivates these enzymes, but this can be reversed by the action of sulfiredoxin. Peroxiredoxins seem to be important in antioxidant metabolism, as mice lacking peroxiredoxin 1 or 2 have shortened lifespan and suffer from hemolytic anaemia, while plants use peroxiredoxins to remove hydrogen peroxide generated in chloroplasts.
The thioredoxin system contains the 12-kDa protein thioredoxin and its companion thioredoxin reductase. Proteins related to thioredoxin are present in all sequenced organisms. Plants, such as "Arabidopsis thaliana," have a particularly great diversity of isoforms. The active site of thioredoxin consists of two neighboring cysteines, as part of a highly conserved CXXC motif, that can cycle between an active dithiol form (reduced) and an oxidized disulfide form. In its active state, thioredoxin acts as an efficient reducing agent, scavenging reactive oxygen species and maintaining other proteins in their reduced state. After being oxidized, the active thioredoxin is regenerated by the action of thioredoxin reductase, using NADPH as an electron donor.
The glutathione system includes glutathione, glutathione reductase, glutathione peroxidases, and glutathione "S"-transferases. This system is found in animals, plants and microorganisms. Glutathione peroxidase is an enzyme containing four selenium-cofactors that catalyzes the breakdown of hydrogen peroxide and organic hydroperoxides. There are at least four different glutathione peroxidase isozymes in animals. Glutathione peroxidase 1 is the most abundant and is a very efficient scavenger of hydrogen peroxide, while glutathione peroxidase 4 is most active with lipid hydroperoxides. Surprisingly, glutathione peroxidase 1 is dispensable, as mice lacking this enzyme have normal lifespans, but they are hypersensitive to induced oxidative stress. In addition, the glutathione "S"-transferases show high activity with lipid peroxides. These enzymes are at particularly high levels in the liver and also serve in detoxification metabolism.
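For hydrogen peroxide, the reaction catalyzed by glutathione peroxidase can be written as:
2 GSH + H2O2 → GSSG + 2 H2O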
Oxidative stress is thought to contribute to the development of a wide range of diseases including Alzheimer's disease, Parkinson's disease, the pathologies caused by diabetes, rheumatoid arthritis, and neurodegeneration in motor neuron diseases. In many of these cases, it is unclear if oxidants trigger the disease, or if they are produced as a secondary consequence of the disease and from general tissue damage; One case in which this link is particularly well understood is the role of oxidative stress in cardiovascular disease. Here, low density lipoprotein (LDL) oxidation appears to trigger the process of atherogenesis, which results in atherosclerosis, and finally cardiovascular disease.
Oxidative damage in DNA can cause cancer. Several antioxidant enzymes such as superoxide dismutase, catalase, glutathione peroxidase, glutathione reductase, glutathione S-transferase etc. protect DNA from oxidative stress. It has been proposed that polymorphisms in these enzymes are associated with DNA damage and subsequently the individual's risk of cancer susceptibility.
A low calorie diet extends median and maximum lifespan in many animals. This effect may involve a reduction in oxidative stress. While there is some evidence to support the role of oxidative stress in aging in model organisms such as "Drosophila melanogaster" and "Caenorhabditis elegans", the evidence in mammals is less clear. Indeed, a 2009 review of experiments in mice concluded that almost all manipulations of antioxidant systems had no effect on aging.
Antioxidants are used as food additives to help guard against food deterioration. Exposure to oxygen and sunlight are the two main factors in the oxidation of food, so food is preserved by keeping in the dark and sealing it in containers or even coating it in wax, as with cucumbers. However, as oxygen is also important for plant respiration, storing plant materials in anaerobic conditions produces unpleasant flavors and unappealing colors. Consequently, packaging of fresh fruits and vegetables contains an ~8% oxygen atmosphere. Antioxidants are an especially important class of preservatives as, unlike bacterial or fungal spoilage, oxidation reactions still occur relatively rapidly in frozen or refrigerated food. These preservatives include natural antioxidants such as ascorbic acid (AA, E300) and tocopherols (E306), as well as synthetic antioxidants such as propyl gallate (PG, E310), tertiary butylhydroquinone (TBHQ), butylated hydroxyanisole (BHA, E320) and butylated hydroxytoluene (BHT, E321).
The most common molecules attacked by oxidation are unsaturated fats; oxidation causes them to turn rancid. Since oxidized lipids are often discolored and usually have unpleasant tastes such as metallic or sulfurous flavors, it is important to avoid oxidation in fat-rich foods. Thus, these foods are rarely preserved by drying; instead, they are preserved by smoking, salting or fermenting. Even less fatty foods such as fruits are sprayed with sulfurous antioxidants prior to air drying. Oxidation is often catalyzed by metals, which is why fats such as butter should never be wrapped in aluminium foil or kept in metal containers. Some fatty foods such as olive oil are partially protected from oxidation by their natural content of antioxidants, but remain sensitive to photooxidation. Antioxidant preservatives are also added to fat based cosmetics such as lipstick and moisturizers to prevent rancidity.
Antioxidants are frequently added to industrial products. A common use is as stabilizers in fuels and lubricants to prevent oxidation, and in gasolines to prevent the polymerization that leads to the formation of engine-fouling residues. In 2014, the worldwide market for natural and synthetic antioxidants was US$2.25 billion with a forecast of growth to $3.25 billion by 2020.
Antioxidant polymer stabilizers are widely used to prevent the degradation of polymers such as rubbers, plastics and adhesives that causes a loss of strength and flexibility in these materials. Polymers containing double bonds in their main chains, such as natural rubber and polybutadiene, are especially susceptible to oxidation and ozonolysis. They can be protected by antiozonants. Solid polymer products start to crack on exposed surfaces as the material degrades and the chains break. The mode of cracking varies between oxygen and ozone attack, the former causing a "crazy paving" effect, while ozone attack produces deeper cracks aligned at right angles to the tensile strain in the product. Oxidation and UV degradation are also frequently linked, mainly because UV radiation creates free radicals by bond breakage. The free radicals then react with oxygen to produce peroxy radicals which cause yet further damage, often in a chain reaction. Other polymers susceptible to oxidation include polypropylene and polyethylene. The former is more sensitive owing to the presence of secondary carbon atoms present in every repeat unit. Attack occurs at this point because the free radical formed is more stable than one formed on a primary carbon atom. Oxidation of polyethylene tends to occur at weak links in the chain, such as branch points in low-density polyethylene.
Antioxidant vitamins are found in vegetables, fruits, eggs, legumes and nuts. Vitamins A, C, and E can be destroyed by long-term storage or prolonged cooking. The effects of cooking and food processing are complex, as these processes can also increase the bioavailability of antioxidants, such as some carotenoids in vegetables. Processed food contains fewer antioxidant vitamins than fresh and uncooked foods, as preparation exposes food to heat and oxygen.
Other antioxidants are not obtained from the diet, but instead are made in the body. For example, ubiquinol (coenzyme Q) is poorly absorbed from the gut and is made through the mevalonate pathway. Another example is glutathione, which is made from amino acids. As any glutathione in the gut is broken down to free cysteine, glycine and glutamic acid before being absorbed, even large oral intake has little effect on the concentration of glutathione in the body. Although large amounts of sulfur-containing amino acids such as acetylcysteine can increase glutathione, no evidence exists that eating high levels of these glutathione precursors is beneficial for healthy adults.
Measurement of antioxidant content in food is not a straightforward process, as antioxidants collectively are a diverse group of compounds with different reactivities to various reactive oxygen species. In food science, the oxygen radical absorbance capacity (ORAC) was once an industry standard for estimating antioxidant strength of whole foods, juices and food additives, mainly from the presence of polyphenols. Earlier measurements and ratings by the United States Department of Agriculture were withdrawn in 2012 as biologically irrelevant to human health, referring to an absence of physiological evidence for polyphenols having antioxidant properties "in vivo". Consequently, the ORAC method, derived only from "in vitro" experiments, is no longer considered relevant to human diets or biology.
Alternative "in vitro" measurements of antioxidant content in foods – also based on the presence of polyphenols – include the Folin-Ciocalteu reagent, and the Trolox equivalent antioxidant capacity assay.
As part of their adaptation from marine life, terrestrial plants began producing non-marine antioxidants such as ascorbic acid (vitamin C), polyphenols and tocopherols. The evolution of angiosperm plants between 50 and 200 million years ago resulted in the development of many antioxidant pigments – particularly during the Jurassic period – as chemical defences against reactive oxygen species that are byproducts of photosynthesis. Originally, the term antioxidant specifically referred to a chemical that prevented the consumption of oxygen. In the late 19th and early 20th centuries, extensive study concentrated on the use of antioxidants in important industrial processes, such as the prevention of metal corrosion, the vulcanization of rubber, and the polymerization of fuels in the fouling of internal combustion engines.
Early research on the role of antioxidants in biology focused on their use in preventing the oxidation of unsaturated fats, which is the cause of rancidity. Antioxidant activity could be measured simply by placing the fat in a closed container with oxygen and measuring the rate of oxygen consumption. However, it was the identification of vitamins C and E as antioxidants that revolutionized the field and led to the realization of the importance of antioxidants in the biochemistry of living organisms. The possible mechanisms of action of antioxidants were first explored when it was recognized that a substance with anti-oxidative activity is likely to be one that is itself readily oxidized. Research into how vitamin E prevents the process of lipid peroxidation led to the identification of antioxidants as reducing agents that prevent oxidative reactions, often by scavenging reactive oxygen species before they can damage cells. | https://en.wikipedia.org/wiki?curid=3277 |
Brass
Brass is an alloy of copper and zinc, in proportions which can be varied to achieve varying mechanical and electrical properties. It is a substitutional alloy: atoms of the two constituents may replace each other within the same crystal structure.
Brass is similar to bronze, another alloy containing copper, with tin in place of zinc; both bronze and brass may include small proportions of a range of other elements including arsenic, lead, phosphorus, aluminum, manganese, and silicon. The distinction between the two alloys is largely historical, and modern practice in museums and archaeology increasingly avoids both terms for historical objects in favor of the more general "copper alloy".
Brass has long been a popular material for decoration for its bright gold-like appearance, e.g. for drawer pulls and doorknobs. It has also been widely used for all sorts of utensils because of properties such as low melting point, workability (both with hand tools and with modern turning and milling machines), durability, and electrical and thermal conductivity. It is still commonly used in applications where low friction and corrosion resistance are required, such as locks, hinges, gears, bearings, ammunition casings, zippers, plumbing, hose couplings, valves, and electrical plugs and sockets. It is used extensively for musical instruments such as horns and bells, and also as a substitute for copper in making costume jewelry, fashion jewelry and other imitation jewelry. The composition of brass, generally 66 percent copper and 34 percent zinc, makes it a favorable substitute for copper-based jewelry as it exhibits greater resistance to corrosion. Brass is often used in situations in which it is important that sparks not be struck, such as in fittings and tools used near flammable or explosive materials.
Brass has higher malleability than bronze or zinc. The relatively low melting point of brass (about 900 to 940 °C, depending on composition) and its flow characteristics make it a relatively easy material to cast. By varying the proportions of copper and zinc, the properties of the brass can be changed, allowing hard and soft brasses. The density of brass is approximately 8.4 to 8.73 g/cm3.
Today, almost 90% of all brass alloys are recycled. Because brass is not ferromagnetic, it can be separated from ferrous scrap by passing the scrap near a powerful magnet. Brass scrap is collected and transported to the foundry, where it is melted and recast into billets. Billets are heated and extruded into the desired form and size. The general softness of brass means that it can often be machined without the use of cutting fluid, though there are exceptions to this.
Aluminium makes brass stronger and more corrosion-resistant. Aluminium also causes a highly beneficial hard layer of aluminium oxide (Al2O3) to be formed on the surface that is thin, transparent and self-healing. Tin has a similar effect and finds its use especially in seawater applications (naval brasses). Combinations of iron, aluminium, silicon and manganese make brass wear- and tear-resistant. Notably, the addition of as little as 1% iron to a brass alloy will result in an alloy with a noticeable magnetic attraction.
Brass will corrode in the presence of moisture, chlorides, acetates, ammonia, and certain acids. This often happens when the copper reacts with sulfur to form a brown and eventually black surface layer of copper sulfide which, if regularly exposed to slightly acidic water such as urban rainwater, can then oxidize in air to form a patina of green-blue copper sulfate. Depending on how the sulfide/sulfate layer was formed, this layer may protect the underlying brass from further damage.
Although copper and zinc have a large difference in electrical potential, the resulting brass alloy does not experience internalized galvanic corrosion because of the absence of a corrosive environment within the mixture. However, if brass is placed in contact with a more noble metal such as silver or gold in such an environment, the brass will corrode galvanically; conversely, if brass is in contact with a less-noble metal such as zinc or iron, the less noble metal will corrode and the brass will be protected.
To enhance the machinability of brass, lead is often added in concentrations of around 2%. Since lead has a lower melting point than the other constituents of the brass, it tends to migrate towards the grain boundaries in the form of globules as it cools from casting. The pattern the globules form on the surface of the brass increases the available lead surface area which in turn affects the degree of leaching. In addition, cutting operations can smear the lead globules over the surface. These effects can lead to significant lead leaching from brasses of comparatively low lead content.
In October 1999 the California State Attorney General sued 13 key manufacturers and distributors over lead content. In laboratory tests, state researchers found the average brass key, new or old, exceeded the California Proposition 65 limits by an average factor of 19, assuming handling twice a day. In April 2001 manufacturers agreed to reduce lead content to 1.5%, or face a requirement to warn consumers about lead content. Keys plated with other metals are not affected by the settlement, and may continue to use brass alloys with higher percentage of lead content.
Also in California, lead-free materials must be used for "each component that comes into contact with the wetted surface of pipes and pipe fittings, plumbing fittings and fixtures." On January 1, 2010, the maximum amount of lead in "lead-free brass" in California was reduced from 4% to 0.25% lead.
Dezincification-resistant (DZR or DR) brasses, sometimes referred to as CR (corrosion resistant) brasses, are used where there is a large corrosion risk and where normal brasses do not meet the standards. Typical applications involve high water temperatures, the presence of chlorides, or deviating water quality (soft water). DZR-brass is excellent in water boiler systems. This brass alloy must be produced with great care, with special attention placed on a balanced composition and proper production temperatures and parameters to avoid long-term failures.
An example of DZR brass is the C352 brass, with about 30% zinc, 61-63% copper, 1.7-2.8% lead, and 0.02-0.15% arsenic. The lead and arsenic significantly suppress the zinc loss.
"Red brasses", family of alloys with high copper proportion and generally less than 15% zinc, are more resistant to zinc loss. One of the metals called "red brass" is 85% copper, 5% tin, 5% lead, and 5% zinc. Copper Alloy C23000, which is also known as "red brass", contains 84–86% copper, 0.05% each iron and lead, with the balance being zinc.
Another such material is gunmetal, from the family of red brasses. Gunmetal alloys contain roughly 88% copper, 8-10% tin, and 2-4% zinc. Lead can be added for ease of machining or for bearing alloys.
"Naval brass", for use in seawater, contains 40% zinc but also 1% tin. The tin addition suppresses zinc leaching.
NSF International requires brasses with more than 15% zinc, used in piping and plumbing fittings, to be dezincification-resistant.
The high malleability and workability, relatively good resistance to corrosion, and traditionally attributed acoustic properties of brass, have made it the usual metal of choice for construction of musical instruments whose acoustic resonators consist of long, relatively narrow tubing, often folded or coiled for compactness; silver and its alloys, and even gold, have been used for the same reasons, but brass is the most economical choice. Collectively known as brass instruments, these include the trombone, tuba, trumpet, cornet, baritone horn, euphonium, tenor horn, and French horn, and many other "horns", many in variously-sized families, such as the saxhorns.
Other wind instruments may be constructed of brass or other metals, and indeed most modern student-model flutes and piccolos are made of some variety of brass, usually a cupronickel alloy similar to nickel silver/German silver. Clarinets, especially low clarinets such as the contrabass and subcontrabass, are sometimes made of metal because of limited supplies of the dense, fine-grained tropical hardwoods traditionally preferred for smaller woodwinds. For the same reason, some low clarinets, bassoons and contrabassoons feature a hybrid construction, with long, straight sections of wood, and curved joints, neck, and/or bell of metal. The use of metal also avoids the risks of exposing wooden instruments to changes in temperature or humidity, which can cause sudden cracking. Even though the saxophones and sarrusophones are classified as woodwind instruments, they are normally made of brass for similar reasons, and because their wide, conical bores and thin-walled bodies are more easily and efficiently made by forming sheet metal than by machining wood.
The keywork of most modern woodwinds, including wooden-bodied instruments, is also usually made of an alloy such as nickel silver/German silver. Such alloys are stiffer and more durable than the brass used to construct the instrument bodies, but still workable with simple hand tools—a boon to quick repairs. The mouthpieces of both brass instruments and, less commonly, woodwind instruments are often made of brass among other metals as well.
Next to the brass instruments, the most notable use of brass in music is in various percussion instruments, most notably cymbals, gongs, and orchestral (tubular) bells (large "church" bells are normally made of bronze). Small handbells and "jingle bells" are also commonly made of brass.
The harmonica is a free reed aerophone, also often made from brass. In organ pipes of the reed family, brass strips (called tongues) are used as the reeds, which beat against the shallot (or beat "through" the shallot in the case of a "free" reed). Although not part of the brass section, snare drums are also sometimes made of brass. Some parts on electric guitars are also made from brass, especially inertia blocks on tremolo systems for its tonal properties, and for string nuts and saddles for both tonal properties and its low friction.
The bactericidal properties of brass have been observed for centuries, particularly in marine environments where it prevents biofouling. Depending upon the type and concentration of pathogens and the medium they are in, brass kills these microorganisms within a few minutes to hours of contact.
A large number of independent studies confirm this antimicrobial effect, even against antibiotic-resistant bacteria such as MRSA and VRSA. The mechanisms of antimicrobial action by copper and its alloys, including brass, are a subject of intense and ongoing investigation.
Brass is susceptible to stress corrosion cracking, especially from ammonia or substances containing or releasing ammonia. The problem is sometimes known as season cracking after it was first discovered in brass cartridges used for rifle ammunition during the 1920s in the British Indian Army. The problem was caused by high residual stresses from cold forming of the cases during manufacture, together with chemical attack from traces of ammonia in the atmosphere. The cartridges were stored in stables and the ammonia concentration rose during the hot summer months, thus initiating brittle cracks. The problem was resolved by annealing the cases, and storing the cartridges elsewhere.
Phases other than α, β and γ include ε, a hexagonal intermetallic CuZn3, and η, a solid solution of copper in zinc.
Although forms of brass have been in use since prehistory, its true nature as a copper-zinc alloy was not understood until the post-medieval period because the zinc vapor which reacted with copper to make brass was not recognised as a metal. The King James Bible makes many references to "brass" to translate "nechosheth" (bronze or copper) from Hebrew to archaic English. The Shakespearean English use of the word 'brass' can mean any bronze alloy, or copper, an even less precise definition than the modern one. The earliest brasses may have been natural alloys made by smelting zinc-rich copper ores. By the Roman period brass was being deliberately produced from metallic copper and zinc minerals using the cementation process, the product of which was calamine brass, and variations on this method continued until the mid-19th century. It was eventually replaced by speltering, the direct alloying of copper and zinc metal which was introduced to Europe in the 16th century.
Brass has sometimes historically been referred to as "yellow copper".
In West Asia and the Eastern Mediterranean early copper-zinc alloys are now known in small numbers from a number of 3rd millennium BC sites in the Aegean, Iraq, the United Arab Emirates, Kalmykia, Turkmenistan and Georgia and from 2nd millennium BC sites in West India, Uzbekistan, Iran, Syria, Iraq and Canaan. However, isolated examples of copper-zinc alloys are known in China from as early as the 5th millennium BC.
The compositions of these early "brass" objects are highly variable and most have zinc contents of between 5% and 15% wt which is lower than in brass produced by cementation. These may be "natural alloys" manufactured by smelting zinc rich copper ores in redox conditions. Many have similar tin contents to contemporary bronze artefacts and it is possible that some copper-zinc alloys were accidental and perhaps not even distinguished from copper. However the large number of copper-zinc alloys now known suggests that at least some were deliberately manufactured and many have zinc contents of more than 12% wt which would have resulted in a distinctive golden color.
By the 8th–7th century BC Assyrian cuneiform tablets mention the exploitation of the "copper of the mountains" and this may refer to "natural" brass. "Oreikhalkon" (mountain copper), the Ancient Greek translation of this term, was later adapted to the Latin "aurichalcum" meaning "golden copper" which became the standard term for brass. In the 4th century BC Plato knew "orichalkos" as rare and nearly as valuable as gold and Pliny describes how "aurichalcum" had come from Cypriot ore deposits which had been exhausted by the 1st century AD. X-ray fluorescence analysis of 39 orichalcum ingots recovered from a 2,600-year-old shipwreck off Sicily found them to be an alloy made with 75–80 percent copper, 15–20 percent zinc and small percentages of nickel, lead and iron.
During the later part of first millennium BC the use of brass spread across a wide geographical area from Britain and Spain in the west to Iran, and India in the east. This seems to have been encouraged by exports and influence from the Middle East and eastern Mediterranean where deliberate production of brass from metallic copper and zinc ores had been introduced. The 4th century BC writer Theopompus, quoted by Strabo, describes how heating earth from Andeira in Turkey produced "droplets of false silver", probably metallic zinc, which could be used to turn copper into oreichalkos. In the 1st century BC the Greek Dioscorides seems to have recognised a link between zinc minerals and brass describing how Cadmia (zinc oxide) was found on the walls of furnaces used to heat either zinc ore or copper and explaining that it can then be used to make brass.
By the first century BC brass was available in sufficient supply to use as coinage in Phrygia and Bithynia, and after the Augustan currency reform of 23 BC it was also used to make Roman "dupondii" and "sestertii". The uniform use of brass for coinage and military equipment across the Roman world may indicate a degree of state involvement in the industry, and brass even seems to have been deliberately boycotted by Jewish communities in Palestine because of its association with Roman authority.
Brass was produced by the cementation process where copper and zinc ore are heated together until zinc vapor is produced which reacts with the copper. There is good archaeological evidence for this process and crucibles used to produce brass by cementation have been found on Roman period sites including Xanten and Nidda in Germany, Lyon in France and at a number of sites in Britain. They vary in size from tiny acorn sized to large amphorae like vessels but all have elevated levels of zinc on the interior and are lidded. They show no signs of slag or metal prills suggesting that zinc minerals were heated to produce zinc vapor which reacted with metallic copper in a solid state reaction. The fabric of these crucibles is porous, probably designed to prevent a buildup of pressure, and many have small holes in the lids which may be designed to release pressure or to add additional zinc minerals near the end of the process. Dioscorides mentioned that zinc minerals were used for both the working and finishing of brass, perhaps suggesting secondary additions.
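As a schematic summary (an interpretation of the description above, not a formulation given in the source), the cementation chemistry can be sketched as the carbothermic reduction of the zinc mineral to zinc vapor, followed by absorption of that vapor into solid copper:

$$ZnCO_3 \;\xrightarrow{\,heat\,}\; ZnO + CO_2$$
$$ZnO + C \rightarrow Zn_{(g)} + CO$$
$$Zn_{(g)} + Cu_{(s)} \rightarrow \text{Cu--Zn alloy (brass)}$$

Because zinc boils at around 907 °C, below the melting point of copper, the alloying can proceed while the copper remains solid, which is consistent with the solid-state reaction and the lidded, porous crucibles described above.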
Brass made during the early Roman period seems to have varied between 20% and 28% wt zinc. The high content of zinc in coinage and brass objects declined after the first century AD and it has been suggested that this reflects zinc loss during recycling and thus an interruption in the production of new brass. However, it is now thought this was probably a deliberate change in composition; overall, the use of brass increased over this period, making up around 40% of all copper alloys used in the Roman world by the 4th century AD.
Little is known about the production of brass during the centuries immediately after the collapse of the Roman Empire. Disruption in the trade of tin for bronze from Western Europe may have contributed to the increasing popularity of brass in the east and by the 6th–7th centuries AD over 90% of copper alloy artefacts from Egypt were made of brass. However other alloys such as low tin bronze were also used and they vary depending on local cultural attitudes, the purpose of the metal and access to zinc, especially between the Islamic and Byzantine world. Conversely the use of true brass seems to have declined in Western Europe during this period in favour of gunmetals and other mixed alloys but by about 1000 brass artefacts are found in Scandinavian graves in Scotland, brass was being used in the manufacture of coins in Northumbria and there is archaeological and historical evidence for the production of calamine brass in Germany and The Low Countries, areas rich in calamine ore.
These places would remain important centres of brass making throughout the medieval period, especially Dinant. Brass objects are still collectively known as "dinanderie" in French. The baptismal font at St Bartholomew's Church, Liège in modern Belgium (before 1117) is an outstanding masterpiece of Romanesque brass casting, though also often described as bronze. The metal of the early 12th-century Gloucester Candlestick is unusual even by medieval standards in being a mixture of copper, zinc, tin, lead, nickel, iron, antimony and arsenic with an unusually large amount of silver, ranging from 22.5% in the base to 5.76% in the pan below the candle. The proportions of this mixture may suggest that the candlestick was made from a hoard of old coins, probably Late Roman. Latten is a term for decorative borders and similar objects cut from sheet metal, whether of brass or bronze. Aquamaniles were typically made in brass in both the European and Islamic worlds.
The cementation process continued to be used but literary sources from both Europe and the Islamic world seem to describe variants of a higher temperature liquid process which took place in open-topped crucibles. Islamic cementation seems to have used zinc oxide known as "tutiya" or tutty rather than zinc ores for brass-making, resulting in a metal with lower iron impurities. A number of Islamic writers and the 13th century Italian Marco Polo describe how this was obtained by sublimation from zinc ores and condensed onto clay or iron bars, archaeological examples of which have been identified at Kush in Iran. It could then be used for brass making or medicinal purposes. In 10th century Yemen al-Hamdani described how spreading al-iglimiya, probably zinc oxide, onto the surface of molten copper produced tutiya vapor which then reacted with the metal. The 13th century Iranian writer al-Kashani describes a more complex process whereby "tutiya" was mixed with raisins and gently roasted before being added to the surface of the molten metal. A temporary lid was added at this point presumably to minimise the escape of zinc vapor.
In Europe a similar liquid process in open-topped crucibles took place which was probably less efficient than the Roman process and the use of the term tutty by Albertus Magnus in the 13th century suggests influence from Islamic technology. The 12th century German monk Theophilus described how preheated crucibles were one sixth filled with powdered calamine and charcoal then topped up with copper and charcoal before being melted, stirred then filled again. The final product was cast, then again melted with calamine. It has been suggested that this second melting may have taken place at a lower temperature to allow more zinc to be absorbed. Albertus Magnus noted that the "power" of both calamine and tutty could evaporate and described how the addition of powdered glass could create a film to bind it to the metal.
German brass-making crucibles known from Dortmund, dating to the 10th century AD, and from Soest and Schwerte in Westphalia, dating to around the 13th century, confirm Theophilus' account: they are open-topped, although ceramic discs from Soest may have served as loose lids used to reduce zinc evaporation, and they have slag on the interior resulting from a liquid process.
Some of the most famous objects in African art are the lost wax castings of West Africa, mostly from what is now Nigeria, produced first by the Kingdom of Ife and then the Benin Empire. Though normally described as "bronzes", the Benin Bronzes, now mostly in the British Museum and other Western collections, and the large portrait heads such as the Bronze Head from Ife of "heavily leaded zinc-brass" and the Bronze Head of Queen Idia, both also British Museum, are better described as brass, though of variable compositions. Work in brass or bronze continued to be important in Benin art and other West African traditions such as Akan goldweights, where the metal was regarded as a more valuable material than in Europe.
The Renaissance saw important changes to both the theory and practice of brassmaking in Europe. By the 15th century there is evidence for the renewed use of lidded cementation crucibles at Zwickau in Germany. These large crucibles were capable of producing c.20 kg of brass. There are traces of slag and pieces of metal on the interior. Their irregular composition suggests that this was a lower temperature, not entirely liquid, process. The crucible lids had small holes which were blocked with clay plugs near the end of the process presumably to maximise zinc absorption in the final stages. Triangular crucibles were then used to melt the brass for casting.
16th-century technical writers such as Biringuccio, Ercker and Agricola described a variety of cementation brass making techniques and came closer to understanding the true nature of the process noting that copper became heavier as it changed to brass and that it became more golden as additional calamine was added. Zinc metal was also becoming more commonplace. By 1513 metallic zinc ingots from India and China were arriving in London and pellets of zinc condensed in furnace flues at the Rammelsberg in Germany were exploited for cementation brass making from around 1550.
Eventually it was discovered that metallic zinc could be alloyed with copper to make brass, a process known as speltering, and by 1657 the German chemist Johann Glauber had recognised that calamine was "nothing else but unmeltable zinc" and that zinc was a "half ripe metal." However some earlier high zinc, low iron brasses such as the 1530 Wightman brass memorial plaque from England may have been made by alloying copper with "zinc" and include traces of cadmium similar to those found in some zinc ingots from China.
However, the cementation process was not abandoned, and as late as the early 19th century there are descriptions of solid-state cementation in a domed furnace at around 900–950 °C and lasting up to 10 hours. The European brass industry continued to flourish into the post-medieval period, buoyed by innovations such as the 16th-century introduction of water-powered hammers for the production of battery wares. By 1559 the German city of Aachen alone was capable of producing 300,000 cwt of brass per year. After several false starts during the 16th and 17th centuries, the brass industry was also established in England, taking advantage of abundant supplies of cheap copper smelted in the new coal-fired reverberatory furnace. In 1723 Bristol brass maker Nehemiah Champion patented the use of granulated copper, produced by pouring molten metal into cold water. This increased the surface area of the copper, helping it react, and zinc contents of up to 33% wt were reported using this new technique.
In 1738 Nehemiah's son William Champion patented a technique for the first industrial scale distillation of metallic zinc, known as "distillation per descensum" or "the English process". This local zinc was used in speltering and allowed greater control over the zinc content of brass and the production of high-zinc copper alloys which would have been difficult or impossible to produce using cementation, for use in expensive objects such as scientific instruments, clocks, brass buttons and costume jewellery. However, Champion continued to use the cheaper calamine cementation method to produce lower-zinc brass, and the archaeological remains of bee-hive shaped cementation furnaces have been identified at his works at Warmley. By the mid-to-late 18th century, developments in cheaper zinc distillation such as Jean-Jacques Dony's horizontal furnaces in Belgium and the reduction of tariffs on zinc, as well as demand for corrosion-resistant high-zinc alloys, increased the popularity of speltering, and as a result cementation was largely abandoned by the mid-19th century. | https://en.wikipedia.org/wiki?curid=3292
Bonn
The Federal city of Bonn is a city on the banks of the Rhine in the German state of North Rhine-Westphalia, with a population of over 300,000. About 24 km south-southeast of Cologne, Bonn is in the southernmost part of the Rhine-Ruhr region, Germany's largest metropolitan area, with over 11 million inhabitants. It is famously known as the birthplace of Ludwig van Beethoven, who was born there in 1770 and spent his childhood and teenage years in the city.
Founded in the 1st century BC as a Roman settlement, Bonn is one of Germany's oldest cities. From 1597 to 1794, Bonn was the capital of the Electorate of Cologne, and residence of the Archbishops and Prince-electors of Cologne. From 1949 to 1990, Bonn was the capital of West Germany, and Germany's present constitution, the Basic Law, was declared in the city in 1949. The era when Bonn served as the capital of West Germany is referred to by historians as the Bonn Republic. From 1990 to 1999, Bonn served as the seat of government – but no longer capital – of reunited Germany.
Because of a political compromise following the reunification, the German federal government maintains a substantial presence in Bonn. Roughly a third of all ministerial jobs are located in Bonn, and the city is considered a second, unofficial, capital of the country. Bonn is the secondary seat of the President, the Chancellor, the Bundesrat and the primary seat of six federal government ministries and twenty federal authorities. The title of Federal City ("Bundesstadt") reflects its important political status within Germany.
The headquarters of Deutsche Post DHL and Deutsche Telekom, both DAX-listed corporations, are in Bonn. The city is home to the University of Bonn and a total of 20 United Nations institutions, the highest number in all of Germany. These institutions include the headquarters of the Secretariat of the UN Framework Convention on Climate Change (UNFCCC), the Secretariat of the UN Convention to Combat Desertification (UNCCD), and the UN Volunteers programme.
Situated in the southernmost part of the Rhine-Ruhr region, Germany's largest metropolitan area with over 11 million inhabitants, Bonn lies within the German state of North Rhine-Westphalia, close to the border with Rhineland-Palatinate. Spanning an area of about 141 square kilometres on both sides of the river Rhine, the city has almost three-quarters of its territory on the river's left bank.
To the south and to the west, Bonn borders the Eifel region, which encompasses the Rhineland Nature Park. To the north, Bonn borders the Cologne Lowland. Natural borders are constituted by the river Sieg to the north-east and by the Siebengebirge (also known as the Seven Hills) to the east. The city extends slightly farther in the north–south direction than in the west–east direction. The geographical centre of Bonn is the Bundeskanzlerplatz ("Chancellor Square") in Bonn-Gronau.
The German state of North Rhine-Westphalia is divided into five governmental districts (), and Bonn is part of the governmental district of Cologne (). Within this governmental district, the city of Bonn is an urban district in its own right. The urban district of Bonn is then again divided into four administrative municipal districts (). These are Bonn, Bonn-Bad Godesberg, Bonn-Beuel and Bonn-Hardtberg.
In 1969, the independent towns of Bad Godesberg and Beuel as well as several villages were incorporated into Bonn, resulting in a city more than twice as large as before.
Bonn has an oceanic climate ("Cfb"). In the south of the Cologne lowland in the Rhine valley, Bonn is in one of Germany's warmest regions.
The history of the city dates back to Roman times. In about 12 BC, the Roman army appears to have stationed a small unit in what is presently the historical centre of the city. Even earlier, the army had resettled members of a Germanic tribal group allied with Rome, the Ubii, in Bonn. The Latin name for that settlement, "Bonna", may stem from the original population of this and many other settlements in the area, the Eburoni. The Eburoni were members of a large tribal coalition effectively wiped out during the final phase of Caesar's War in Gaul. After several decades, the army gave up the small camp linked to the Ubii-settlement. During the 1st century AD, the army then chose a site to the north of the emerging town in what is now the section of Bonn-Castell to build a large military installation dubbed Castra Bonnensis, i.e., literally, "Fort Bonn". Initially built from wood, the fort was eventually rebuilt in stone. With additions, changes and new construction, the fort remained in use by the army into the waning days of the Western Roman Empire, possibly the mid-5th century. The structures themselves remained standing well into the Middle Ages, when they were called the Bonnburg. They were used by Frankish kings until they fell into disuse. Eventually, much of the building materials seem to have been re-used in the construction of Bonn's 13th-century city wall. The Sterntor ("star gate") in the city center is a reconstruction using the last remnants of the medieval city wall.
To date, Bonn's Roman fort remains the largest fort of its type known from the ancient world, i.e. a fort built to accommodate a full-strength Imperial Legion and its auxiliaries. The fort covered an area of approximately . Between its walls it contained a dense grid of streets and a multitude of buildings, ranging from spacious headquarters and large officers' quarters to barracks, stables and a military jail. Among the legions stationed in Bonn, the "1st", i.e. the Prima Legio Minervia, seems to have served here the longest. Units of the Bonn legion were deployed to theatres of war ranging from modern-day Algeria to what is now the Russian republic of Chechnya.
The chief Roman road linking the provincial capitals of Cologne and Mainz cut right through the fort where it joined the fort's main road (now, Römerstraße). Once past the South Gate, the Cologne–Mainz road continued along what are now streets named Belderberg, Adenauerallee et al. On both sides of the road, the local settlement, "Bonna", grew into a sizeable Roman town. Bonn is shown on the 4th century Peutinger Map.
In late antiquity, much of the town seems to have been destroyed by marauding invaders. The remaining civilian population then took refuge inside the fort along with the remnants of the troops stationed here. During the final decades of Imperial rule, the troops were supplied by Franci chieftains employed by the Roman administration. When the end came, these troops simply shifted their allegiances to the new barbarian rulers, the Kingdom of the Franks. From the fort, the Bonnburg, as well as from a new medieval settlement to the South centered around what later became the minster, grew the medieval city of Bonn. Local legends arose from this period that the name of the village came from Saint Boniface via Vulgar Latin "*Bonnifatia", but this proved to be a myth.
Between the 11th and 13th centuries, the Romanesque style Bonn Minster was built, and in 1597 Bonn became the seat of the Archdiocese of Cologne. The city gained more influence and grew considerably. The city was subject to a major bombardment during the Siege of Bonn in 1689. The elector Clemens August (ruled 1723–1761) ordered the construction of a series of Baroque buildings which still give the city its character. Another memorable ruler was Max Franz (ruled 1784–1794), who founded the university and the spa quarter of Bad Godesberg. In addition he was a patron of the young Ludwig van Beethoven, who was born in Bonn in 1770; the elector financed the composer's first journey to Vienna.
In 1794, the city was seized by French troops, becoming a part of the First French Empire. In 1815 following the Napoleonic Wars, Bonn became part of the Kingdom of Prussia. Administered within the Prussian Rhine Province, the city became part of the German Empire in 1871 during the Prussian-led unification of Germany. Bonn was of little relevance in these years.
During the Second World War, Bonn acquired military significance because of its strategic location on the Rhine, which formed a natural barrier to easy penetration into the German heartland from the west. The Allied ground advance into Germany reached Bonn on 7 March 1945, and the US 1st Infantry Division captured the city during the battle of 8–9 March 1945.
Following the Second World War, Bonn was in the British zone of occupation. Following the advocacy of West Germany's first chancellor, Konrad Adenauer, a former Cologne Mayor and a native of that area, Bonn became the "de facto" capital, officially designated the "temporary seat of the Federal institutions," of the newly formed Federal Republic of Germany in 1949. However, the Bundestag, seated in Bonn's Bundeshaus, affirmed Berlin's status as the German capital. Bonn was chosen as the provisional capital and seat of government despite the fact that Frankfurt already had most of the required facilities and using Bonn was estimated to be 95 million DM more expensive than using Frankfurt. Bonn was chosen because Adenauer and other prominent politicians intended to make Berlin the capital of the reunified Germany, and felt that locating the capital in a major city like Frankfurt or Hamburg would imply a permanent capital and weaken support in West Germany for reunification.
In 1949, the Parliamentary Council in Bonn drafted and adopted the current German constitution, the Basic Law for the Federal Republic of Germany. As the political centre of West Germany, Bonn saw six Chancellors and six Presidents of the Federal Republic of Germany. Bonn's time as the capital of West Germany is commonly referred to as the "Bonn Republic," in contrast to the "Berlin Republic" which followed reunification in 1990.
German reunification in 1990 made Berlin the nominal capital of Germany again. This decision, however, did not mandate that the republic's political institutions would also move. While some argued for the seat of government to move to Berlin, others advocated leaving it in Bonn – a situation roughly analogous to that of the Netherlands, where Amsterdam is the capital but The Hague is the seat of government. Berlin's previous history as united Germany's capital was strongly connected with the German Empire, the Weimar Republic and more ominously with Nazi Germany. It was felt that a new peacefully united Germany should not be governed from a city connected to such overtones of war. Additionally, Bonn was closer to Brussels, headquarters of the European Economic Community. Former chancellor and mayor of West Berlin Willy Brandt caused considerable offence to the Western Allies during the debate by stating that France would not have kept the seat of government at Vichy after Liberation.
The heated debate that resulted was settled by the "Bundestag" (Germany's parliament) only on 20 June 1991. By a vote of 338–320, the Bundestag voted to move the seat of government to Berlin. The vote broke largely along regional lines, with legislators from the south and west favouring Bonn and legislators from the north and east voting for Berlin. It also broke along generational lines as well; older legislators with memories of Berlin's past glory favoured Berlin, while younger legislators favoured Bonn. Ultimately, the votes of the eastern German legislators tipped the balance in favour of Berlin.
From 1990 to 1999, Bonn served as the seat of government of reunited Germany. In recognition of its former status as German capital, it holds the name of Federal City ("Bundesstadt"). Bonn currently shares the status of Germany's seat of government with Berlin, with the President, the Chancellor and many government ministries (such as Food & Agriculture and Defence) maintaining large presences in Bonn. Over 8,000 of the 18,000 federal officials remain in Bonn. A total of 19 United Nations (UN) institutions operate from Bonn today.
The city council of Bonn used to be based in the Rococo-style Altes Rathaus (old city hall), built in 1737 and adjacent to Bonn's central market square. However, due to the enlargement of Bonn in 1969 through the incorporation of Beuel and Bad Godesberg, it moved into the larger Stadthaus facilities further north, which were needed to accommodate the increased number of representatives. The mayor of Bonn still sits in the Altes Rathaus, which is also used for representative and official purposes.
As of the 2014–2020 election cycle, the Christian Democrats (CDU) hold a plurality of mandates in the city council (27 seats), followed by the Social Democrats (SPD) with 20 seats, the Greens (Bündnis '90/Die Grünen) with 16 seats, the Liberals (FDP) with 7 seats, the Left (Die Linke) with 5 seats, the local Bürgerbund Bonn with 4 seats, the Alternative for Germany (AfD) with 3 seats, and independent candidates with a total of 4 seats. There are currently 86 seats in the city council of Bonn.
The mayor is Ashok Sridharan (CDU), directly elected in 2015.
Four delegates represent the Federal city of Bonn in the Landtag of North Rhine-Westphalia. The last election took place in May 2012. The current delegates are Bernhard von Grünberg (SPD), Renate Hendricks (SPD), Joachim Stamp (FDP) and Rolf Beu (Bündnis 90/Die Grünen).
Bonn's constituency is called "Bonn" (096). In the 2017 German federal election, Ulrich Kelber (SPD) was elected a member of the German federal parliament, the Bundestag, by direct mandate. It is his fifth term. Katja Dörner, representing Bündnis 90/Die Grünen, Alexander Graf Lambsdorff for the FDP, and Claudia Lücking-Michel of the CDU were elected from regional lists.
Beethoven's birthplace is located in Bonngasse near the market place. Next to the market place is the Old City Hall, built in 1737 in Rococo style, under the rule of Clemens August of Bavaria. It is used for receptions of guests of the city, and as an office for the mayor. Nearby is the "Kurfürstliches Schloss", built as a residence for the prince-elector and now the main building of the University of Bonn.
The "Poppelsdorfer Allee" is an avenue flanked by Chestnut trees which had the first horsecar of the city. It connects the "Kurfürstliches Schloss" with the "Poppelsdorfer Schloss", a palace that was built as a resort for the prince-electors in the first half of the 18th century, and whose grounds are now a botanical garden (the Botanischer Garten Bonn). This axis is interrupted by a railway line and Bonn Hauptbahnhof, a building erected in 1883/84.
The Beethoven Monument stands on the Münsterplatz, which is flanked by the Bonn Minster, one of Germany's oldest churches.
The three highest structures in the city are the WDR radio mast in Bonn-Venusberg, the headquarters of Deutsche Post, called the "Post Tower", and the former building for the German members of parliament, "Langer Eugen", now the location of the UN Campus.
Like Bonn's four other major museums, the "Haus der Geschichte", or Museum of the History of the Federal Republic of Germany, is located on the so-called "Museumsmeile" ("Museum Mile"). The Haus der Geschichte is one of the foremost German museums of contemporary German history, with branches in Berlin and Leipzig. In its permanent exhibition, the Haus der Geschichte presents German history from 1945 until the present, also shedding light on Bonn's own role as former capital of West Germany. Numerous temporary exhibitions emphasize different features, such as Nazism or important personalities in German history.
The "Kunstmuseum Bonn" or Bonn Museum of Modern Art is an art museum founded in 1947. The Kunstmuseum exhibits both temporary exhibitions and its permanent collection. The latter is focused on Rhenish Expressionism and post-war German art. German artists on display include Georg Baselitz, Joseph Beuys, Hanne Darboven, Anselm Kiefer, Blinky Palermo and Wolf Vostell. The museum owns one of the largest collections of artwork by Expressionist painter August Macke. His work is also on display in the August-Macke-Haus, located in Macke's former home where he lived from 1911 to 1914.
The "Bundeskunsthalle" (full name: Kunst- und Ausstellungshalle der Bundesrepublik Deutschland or Art and Exhibition Hall of the Federal Republic of Germany), focuses on the crossroads of culture, arts, and science. To date, it attracted more than 17 million visitors. One of its main objectives is to show the cultural heritage outside of Germany or Europe. Next to its changing exhibitions, the Bundeskunsthalle regularly hosts concerts, discussion panels, congresses, and lectures.
The "Museum Koenig" is Bonn's natural history museum. Affiliated with the University of Bonn, it is also a zoological research institution housing the "Leibniz-Institut für Biodiversität der Tiere". Politically interesting, it is on the premises of the Museum Koenig where the Parlamentarischer Rat first met. The "Deutsches Museum Bonn", affiliated with one of the world's foremost science museums, the Deutsches Museum in Munich, is an interactive science museum focusing on post-war German scientists, engineers, and inventions. Other museums include the Beethoven House, birthplace of Ludwig van Beethoven, the Rheinisches Landesmuseum Bonn (Rhinish Regional Museum Bonn), the Bonn Women's Museum, the Rheinisches Malermuseum and the Arithmeum.
There are several parks, leisure and protected areas in and around Bonn. The "Rheinaue" is Bonn's most important leisure park, with a role comparable to that of Central Park in New York City. It lies on the banks of the Rhine and is the city's biggest park intra muros. The Rhine promenade and the "Alter Zoll" (Old Toll Station) are in the direct neighbourhood of the city centre and are popular amongst both residents and visitors. The "Arboretum Park Härle" is an arboretum with specimens dating back to 1870. The "Botanischer Garten" (Botanical Garden) is affiliated with the university, and it was here that a titan arum set a world record. The nature reserve of "Kottenforst" is a large area of protected woods on the hills west of the city centre and forms part of the Rhineland Nature Park.
In the very south of the city, on the border with Wachtberg and Rhineland-Palatinate, there is an extinct volcano, the Rodderberg, featuring a popular area for hikes. Also south of the city, there is the Siebengebirge which is part of the lower half of the Middle Rhine region. The nearby upper half of the Middle Rhine from Bingen to Koblenz is a UNESCO World Heritage Site with more than 40 castles and fortresses from the Middle Ages and important German vineyards.
Named after Konrad Adenauer, the first post-war Chancellor of West Germany, Cologne Bonn Airport is situated north-east of the city centre of Bonn. With around 10.3 million passengers passing through it in 2015, it is the seventh-largest passenger airport in Germany and the third-largest in terms of cargo operations. By traffic units, which combine cargo and passengers, the airport is in fifth position in Germany. As of March 2015, Cologne Bonn Airport had services to 115 passenger destinations in 35 countries. The airport is one of Germany's few 24-hour airports, and is a hub for Eurowings and cargo operators FedEx Express and UPS Airlines.
The federal motorway ("Autobahn") A59 connects the airport with the city. Long distance and regional trains to and from the airport stop at Cologne/Bonn Airport station. Other major airports within a one-hour drive by car are Frankfurt International Airport and Düsseldorf International Airport.
Bonn's central railway station, Bonn Hauptbahnhof, serves urban (S-Bahn, U-Bahn, sharing the same network with the neighbouring city of Cologne), regional (Regionalbahn), and long-distance destinations (ICE) such as Berlin, Hamburg, Munich, Zurich, Vienna, Brussels, Amsterdam and Paris. Daily, more than 67,000 people travel via Bonn Hauptbahnhof. In late 2016, around 80 long distance and more than 165 regional trains departed to or from Bonn every day. The other major railway station (Siegburg/Bonn) lies on the high-speed rail line between Cologne and Frankfurt.
The bus system of Bonn is composed of roughly 30 lines which operate on a regular basis. During peaks, buses usually run every 5 minutes; off-peak buses run every 20 minutes. Several lines offer night services, especially during the weekends. Bonn is part of the Verkehrsverbund Rhein-Sieg ("Rhine-Sieg Transport Association") which is the public transport association covering the area of the Cologne/Bonn Region.
Four Autobahns run through or are adjacent to Bonn: the A59 (right bank of the Rhine, connecting Bonn with Düsseldorf and Duisburg), the A555 (left bank of the Rhine, connecting Bonn with Cologne), the A562 (connecting the right with the left bank of the Rhine south of Bonn), and the A565 (connecting the A59 and the A555 with the A61 to the southwest). Three Bundesstraßen, which have a general speed limit in contrast to the Autobahn, connect Bonn to its immediate surroundings (Bundesstraßen B9, B42 and B56).
With Bonn being divided into two parts by the Rhine, three bridges are crucial for inner-city road traffic: the Konrad-Adenauer-Brücke (A562), the Friedrich-Ebert-Brücke (A565), and the Kennedybrücke (B56). In addition, regular ferries operate between Bonn-Mehlem and Königswinter, Bonn-Bad Godesberg and Niederdollendorf, and Graurheindorf and Mondorf.
Located in the northern sub-district of Graurheindorf, the inland harbour of Bonn is used for container traffic as well as overseas transport. Regular passenger transport occurs to Cologne and Düsseldorf.
The head offices of Deutsche Telekom, its subsidiary T-Mobile, Deutsche Post, Haribo, German Academic Exchange Service, and SolarWorld are in Bonn.
The Rheinische Friedrich Wilhelms Universität Bonn (University of Bonn) is one of the largest universities in Germany. It is also the location of the German research institute Deutsche Forschungsgemeinschaft (DFG) offices and of the German Academic Exchange Service ("Deutscher Akademischer Austauschdienst" – DAAD).
Bonn had a population of 327,913. About 70% of the population was entirely of German origin, while about 100,000 people, equating to roughly 30%, were at least partly of non-German origin. The city is one of the fastest-growing municipalities in Germany and the 18th most populous city in the country. Bonn's population is predicted to surpass the populations of Wuppertal and Bochum before the year 2030.
Bonn is home of the Telekom Baskets Bonn, the only basketball club in Germany that owns its arena, the Telekom Dome. The club is a regular participant at international competitions such as the Basketball Champions League.
The city also has a semi-professional football team, Bonner SC, which was formed in 1965 through the merger of "Bonner FV" and "Tura Bonn". The Bonn Gamecocks American football team play at the 12,000-capacity Stadion Pennenfeld.
The headquarters of the International Paralympic Committee has been located in Bonn since 1999.
Since 1983, the City of Bonn has maintained friendship relations with the City of Tel Aviv, Israel. Since 1988 Bonn, in former times the residence of the Prince-Electors of Cologne, and Potsdam, Germany, formerly the most important residential city of the Prussian rulers, have maintained a city-to-city partnership.
Central Bonn is surrounded by a number of traditional towns and villages which were independent up to several decades ago. As many of those communities had already established their own contacts and partnerships before the regional and local reorganisation in 1969, the Federal City of Bonn now has a dense network of city district partnerships with European partner towns.
The city district of Bonn is a partner of the English university city of Oxford, England, UK (since 1947), of Budafok, District XXII of Budapest, Hungary (since 1991) and of Opole, Poland (officially since 1997; contacts were established 1954).
The district of Bad Godesberg has established partnerships with Saint-Cloud in France, Frascati in Italy, Windsor and Maidenhead in England, UK and Kortrijk in Belgium; a friendship agreement has been signed with the town of Yalova, Turkey.
The district of Beuel on the right bank of the Rhine and the city district of Hardtberg foster partnerships with towns in France: Mirecourt and Villemomble.
Moreover, the city of Bonn has developed a concept of international co-operation and maintains sustainability oriented project partnerships in addition to traditional city twinning, among others with Minsk in Belarus, Ulaanbaatar in Mongolia, Bukhara in Uzbekistan, Chengdu in China and La Paz in Bolivia.
Bonn's town twinnings, district partnerships and project partnerships are those described above. | https://en.wikipedia.org/wiki?curid=3295
Ballroom dance
Ballroom dance is a set of partner dances, which are enjoyed both socially and competitively around the world. Because of its performance and entertainment aspects, ballroom dance is also widely enjoyed on stage, film, and television.
"Ballroom dance" may refer, at its widest definition, to almost any recreational dance with a partner. However, with the emergence of dance competition (now known as Dancesport), two principal schools have emerged and the term is used more narrowly to refer to the dances recognized by those schools.
Note that dances of the two schools that bear the same name may differ considerably in permitted patterns (figures), technique, and styling.
Exhibitions and social situations that feature ballroom dancing also may include additional partner dances such as Lindy Hop, Night Club Two Step, Night Club Swing, Bachata, Country Two Step, and regional (local or national) favorites that normally are not regarded as part of the ballroom family, and a number of historical dances also may be danced in ballrooms or salons. Additionally, some sources regard Sequence Dancing, in pairs or other formations, to be a style of ballroom dance.
The term 'ballroom dancing' is derived from the word "ball" which in turn originates from the Latin word "ballare" which means 'to dance' (a ball-room being a large room specially designed for such dances). In times past, ballroom dancing was social dancing for the privileged, leaving folk dancing for the lower classes. These boundaries have since become blurred. The definition of ballroom dance also depends on the era: balls have featured popular dances of the day such as the Minuet, Quadrille, Polonaise, Polka, Mazurka, and others, which are now considered to be historical dances.
The first authoritative knowledge of the earliest ballroom dances was recorded toward the end of the 16th century, when Jehan Tabourot, under the pen name "Thoinot-Arbeau", published in 1588 his "Orchésographie", a study of late 16th-century French renaissance social dance. Among the dances described were the solemn basse danse, the livelier branle, pavane, and the galliarde which Shakespeare called the "cinq pace" as it was made of five steps.
In 1650 the Minuet, originally a peasant dance of Poitou, was introduced into Paris, set to music by Jean-Baptiste Lully, and danced by King Louis XIV in public. The Minuet dominated the ballroom from that time until the close of the 18th century.
Toward the latter half of the 17th century, Louis XIV founded his 'Académie Royale de Musique et de Danse', where specific rules for the execution of every dance and the "five positions" of the feet were formulated for the first time by members of the Académie. Eventually, the first definite cleavage between ballet and ballroom came when professional dancers appeared in the ballets, and the ballets left the Court and went to the stage. Ballet technique such as the turned-out positions of the feet, however, lingered for over two centuries and past the end of the Victorian era.
The waltz with its modern hold took root in England in about 1812; in 1819 Carl Maria von Weber wrote "Invitation to the Dance", which marked the adoption of the waltz form into the sphere of absolute music. The dance was initially met with tremendous opposition due to the semblance of impropriety associated with the closed hold, though the stance gradually softened. In the 1840s several new dances made their appearance in the ballroom, including the polka, mazurka, and the Schottische. In the meantime a strong tendency emerged to drop all 'decorative' steps such as "entrechats" and "ronds de jambes" that had found a place in the Quadrilles and other dances.
Modern ballroom dance has its roots early in the 20th century, when several different things happened more or less at the same time. The first was a movement away from the sequence dances towards dances where the couples moved independently. This had been pre-figured by the waltz, which had already made this transition. The second was a wave of popular music, such as jazz. Since dance is to a large extent tied to music, this led to a burst of newly invented dances. There were many dance crazes in the period 1910–1930.
The third event was a concerted effort to transform some of the dance crazes into dances which could be taught to a wider dance public in the U.S. and Europe. Here Vernon and Irene Castle were important, and so was a generation of English dancers in the 1920s, including Josephine Bradley and Victor Silvester. These professionals analysed, codified, published, and taught a number of standard dances. It was essential, if popular dance was to flourish, for dancers to have some basic movements they could confidently perform with any partner they might meet. Here the huge Arthur Murray organisation in America, and the dance societies in England, such as the Imperial Society of Teachers of Dancing, were highly influential. Finally, much of this happened during and after a period of World War, and the effect of such a conflict in dissolving older social customs was considerable.
Later, in the 1930s, the on-screen dance pairing of Fred Astaire and Ginger Rogers influenced all forms of dance in the U.S. and elsewhere. Although both actors had separate careers, their filmed dance sequences together, which included portrayals of the Castles, have reached iconic status. Much of Astaire and Rogers' work portrayed social dancing, although the performances were highly choreographed (often by Astaire or Hermes Pan) and meticulously staged and rehearsed.
Competitions, sometimes referred to as dancesport, range from world championships, regulated by the World Dance Council (WDC), to less advanced dancers at various proficiency levels. Most competitions are divided into professional and amateur, though in the USA pro-am competitions typically accompany professional competitions. The International Olympic Committee now recognizes competitive ballroom dance. It has recognized another body, the World DanceSport Federation (WDSF), as the sole representative body for dancesport in the Olympic Games. However, it seems doubtful that dance will be included in the Olympic Games, especially in light of efforts to reduce the number of participating sports.
Ballroom dance competitions are regulated by each country in its own way. There are about 30 countries which compete regularly in international competitions. There are another 20 or so countries which have membership of the WDC and/or the WDSF, but whose dancers rarely appear in international competitions. In Britain there is the British Dance Council, which grants national and regional championship titles, such as the British Ballroom Championships, the British Sequence Championships and the United Kingdom Championships. In the United States, the member branches of the WDC (National Dance Council of America) and the WDSF (USA Dance) both grant national and regional championship titles.
Ballroom dancing competitions in the former USSR also included the Soviet Ballroom dances, or "Soviet Programme". Australian New Vogue is danced both competitively and socially. In competition, there are 15 recognized New Vogue dances, which are performed by the competitors in sequence. These dance forms are not recognized internationally, nor are the US variations such as American Smooth and Rhythm. Such variations in dance and competition methods are attempts to meet perceived needs in the local marketplace.
Internationally, the Blackpool Dance Festival, hosted annually at Blackpool, England, is considered the most prestigious event a dancesport competitor can attend.
Formation dance is another style of competitive dance recognized by the WDSF. In this style, multiple dancers (usually in couples and typically up to 16 dancers at one time) compete on the same team, moving in and out of various formations while dancing.
In competitive ballroom, dancers are judged by diverse criteria such as poise, the hold or frame, posture, musicality and expression, timing, body alignment and shape, floor craft, foot and leg action, and presentation. Judging in a performance-oriented sport is inevitably subjective in nature, and controversy and complaints by competitors over judging placements are not uncommon. The scorekeepers, called scrutineers, tally the total number of recalls accumulated by each couple through each round until the finals, when the Skating system is used to place each couple by ordinals, typically 1–6, though the number of couples in the final may vary. Sometimes, up to 8 couples may be present on the floor during the finals.
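As a rough illustration of how judges' ordinals become final placements, the sketch below implements only the basic majority rule at the heart of the Skating system; the couple names and marks are hypothetical, and the official tie-break rules are reduced to a simplified "larger majority first, then lower sum of marks" ordering.

```python
# Minimal sketch of the Skating system's majority rule (not the full official
# rules): a couple receives a final place once an absolute majority of judges
# have marked it at that place or better. Data below are hypothetical.

def skating_placements(ordinals):
    """ordinals maps couple -> list of judges' ordinals (1 = best).
    Returns a dict mapping couple -> final placement."""
    n_judges = len(next(iter(ordinals.values())))
    majority = n_judges // 2 + 1
    remaining = set(ordinals)
    final = {}
    next_place = 1
    target = 1
    while remaining:
        # Couples holding an absolute majority of marks at `target` or better.
        qualified = []
        for couple in remaining:
            marks = [m for m in ordinals[couple] if m <= target]
            if len(marks) >= majority:
                qualified.append((couple, len(marks), sum(marks)))
        # Simplified tie-break: bigger majority first, then lower sum of marks.
        qualified.sort(key=lambda item: (-item[1], item[2]))
        for couple, _, _ in qualified:
            final[couple] = next_place
            next_place += 1
            remaining.discard(couple)
        target += 1
    return final

if __name__ == "__main__":
    marks = {
        "Couple 11": [1, 1, 2, 1, 3],
        "Couple 12": [2, 3, 1, 2, 1],
        "Couple 13": [3, 2, 3, 3, 2],
    }
    print(skating_placements(marks))
    # -> {'Couple 11': 1, 'Couple 12': 2, 'Couple 13': 3}
```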
Competitors dance at different levels based on their ability and experience. The levels are split into two categories, syllabus and open. The syllabus levels are newcomer/pre-bronze, bronze, silver, and gold—with gold the highest syllabus level and newcomer the lowest. In these levels, moves are restricted to those written in a syllabus, and illegal moves can lead to disqualification. Each level, bronze, silver, and gold, has different moves on their syllabus, increasing in difficulty. There are three levels in the open category; novice, pre-champ, and champ in increasing order of skill. At those levels, dancers no longer have restrictions on their moves, so complex routines are more common.
Medal evaluations for amateurs enable dancers' individual abilities to be recognized according to conventional standards. In medal evaluations, which are run by bodies such as the Imperial Society of Teachers of Dancing (ISTD) and the United Kingdom Alliance (UKA), each dancer performs two or more dances in a certain genre in front of a judge. Genres such as Modern Ballroom or Latin are the most popular. Societies such as the ISTD and UKA also offer medal tests on other dance styles (such as Country & Western, Rock 'n Roll or Tap). In some North American examinations, levels include Newcomer, Bronze, Silver, Gold, Novice, Pre-championship, and Championship; each level may be further subdivided into either two or four separate sections.
There is a part of the ballroom world dedicated to college students. These chapters are typically clubs or teams that have an interest in ballroom dancing. Teams hold fundraisers, social events, and ballroom dance lessons. Ballroom dance teams' goals are to have fun and learn to dance well. There is a strong focus on finding a compatible dance partner and bonding with teammates. There is also a competitive side to collegiate ballroom: collegiate teams often hold competitions and invite other teams to participate. These competitions are often run with many of the same rules as regular amateur competitions as outlined above, but are usually organized entirely by collegiate teams. Examples include the MIT Open Ballroom Dance Competition, Purdue Ballroom Classic, Cardinal Classic, Berkeley Classic, and Harvard Invitational.
"Ballroom dance" refers most often to the ten dances of Standard and Latin, though the term is also often used interchangeably with the five International Ballroom dances. Sequence dancing, which is danced predominantly in the United Kingdom, and its development New Vogue in Australia and New Zealand, are also sometimes included as a type of Ballroom dancing.
In the United States and Canada, the American Style (American Smooth and American Rhythm) also exists. The dance technique used for both International and American styles is similar, but International Ballroom allows only closed dance positions, whereas American Smooth allows closed, open and separated dance movements. In addition, different sets of dance figures are usually taught for the two styles. International Latin and American Rhythm have different styling, and have different dance figures in their respective syllabi.
Other dances sometimes placed under the umbrella "ballroom dance" include nightclub dances such as Lindy Hop, West Coast swing, nightclub two step, hustle, salsa, and merengue. The categorization of dances as "ballroom dances" has always been fluid, with new dances or folk dances being added to or removed from the ballroom repertoire from time to time, so no list of subcategories or dances is any more than a description of current practices. There are other dances historically accepted as ballroom dances, and are revived via the vintage dance movement.
In Europe, Latin Swing dances include Argentine tango, mambo, Lindy Hop, swing boogie (sometimes also known as nostalgic boogie), and discofox. One example of this is the subcategory of cajun dances that originated in Acadiana, with branches reaching both coasts of the United States.
Standard/Smooth dances are normally danced to Western music (often from the mid-twentieth century), and couples dance counter-clockwise around a rectangular floor following the line of dance. In competitions, competitors are costumed as would be appropriate for a white tie affair, with full gowns for the ladies and bow tie and tail coats for the men; though in American Smooth it is now conventional for the men to abandon the tailsuit in favor of shorter tuxedos, vests, and other creative outfits.
Latin/Rhythm dances are commonly danced to contemporary Latin American music and (in case of jive) Western music. With the exception of a few traveling dances like samba and pasodoble, couples do not follow the line of dance but perform their routines more or less in one spot. In competitions, the women are often dressed in short-skirted Latin outfits while the men are outfitted in tight-fitting shirts and pants, the goal being to emphasize the dancers' leg action and body movements.
Waltz began as a country folk dance in Austria and Bavaria in the 17th century. In the early 19th century it was introduced in England. It was the first dance in which a man held a woman close to his body. When performing the dance, the upper body is kept to the left throughout all figures; the follow's body leaves the right side of the lead while the head is extended to follow the elbow. Figures with rotation have little rise. The rise begins slowly from the first count, peaks on the second count, and lowers slowly on the third. Sway is also used on the second step to make the step longer and to slow the momentum by bringing the feet together. Waltz is performed for both International Standard and American Smooth.
Viennese waltz originated in the Provence area of France in 1559 and is recognized as the oldest of all ballroom dances. It was introduced in England as the German waltz in 1812 and became popular throughout the 19th century through the music of Josef and Johann Strauss. It is often referred to as the classic "old-school" ballroom dance. Viennese Waltz music is quite fast. The body shapes slightly towards the inside of the turn, and shapes forward and up to lengthen the side opposite the direction of travel. The reverse turn is used to travel down the long side and is overturned, while the natural turn is used to travel the short side and is underturned to go around the corners. Viennese waltz is performed for both International Standard and American Smooth.
Tango originated in Buenos Aires in the late 19th century. Modern Argentine tango is danced in both open and closed embraces and focuses on the lead and follow moving in harmony with the tango's passionate, charged music. The technique is like walking to the music while keeping the feet grounded and allowing the ankles and knees to brush against one another during each step. Tango is a flat-footed dance and, unlike the other dances, has no rise and fall. Body weight is kept over the toes, and the connection between the dancers is held in the hips.
Ballroom tango, however, is a dance with a far more open frame, often utilising strong and staccato movements. Ballroom tango, rather than Argentine tango, is performed in international competition.
The foxtrot is an American dance, believed to be of African-American origin. It was named after the vaudeville performer Harry Fox, who in 1914 performed a rapid trotting step to ragtime music; the dance was therefore originally called the "Fox's trot". The foxtrot can be danced at slow, medium, or fast tempos depending on the speed of the jazz or big band music. The partners face one another and the frame rotates from one side to the other, changing direction after a measure. The dance is flat, with no rise and fall. Slow walking steps take two beats and quick steps take one beat. Foxtrot is performed for both International Standard and American Smooth.
The quickstep is an English dance invented in the 1920s as a combination of the faster tempo of foxtrot and the Charleston. It is a fast-moving dance, so men are allowed to close their feet and the couples move in short syncopated steps. Quickstep includes the walks, runs, chasses, and turns of the original foxtrot, with other fast figures such as locks, hops, runs, quick steps, jumps, and skips. Quickstep is performed as an International Standard dance.
The pasodoble originated from Spain and its dramatic bullfights. The dance is mostly performed only in competitions and rarely socially because of its many choreographic rules. The lead plays the role of the matador while the follow takes the role of the matador's cape, the bull, or even the matador. The chassez cape refers to the lead using the follow to turn them as if they are the cape, and the apel is when the lead stomps their foot to get the bull's attention. Pasodoble is performed as an International Latin dance.
The Spanish bolero was developed in the late 18th century out of the "seguidilla", and its popularization is attributed to court dancers such as Sebastián Cerezo. It became one of the most popular ballroom dances of the 19th century and saw many classical adaptations. However, by the 20th century it had become old-fashioned. A Cuban music genre of the same name, bolero, which became popular in the early 20th century, is unrelated to the Spanish dance.
Although Cuban bolero was born as a form of "trova", traditional singer/songwriter tradition from eastern Cuba, with no associated dance, it soon became a ballroom favorite in Cuba and all of Latin America. The dance most commonly represents the couple falling in love. Modern bolero is seen as a combination of many dances: like a slow salsa with contra-body movement of tango, patterns of rhumba, and rise and fall technique and personality of waltz and foxtrot. Bolero can be danced in a closed hold or singly and then coming back together. It is performed as an American Rhythm dance.
Samba is the national dance of Brazil. The rhythm of samba and its name originated from the language and culture of West African slaves. In 1905, samba became known to other countries during an exhibition in Paris. In the 1940s, samba was introduced in America through Carmen Miranda. The international version of Ballroom Samba has been based on an early version of Brazilian Samba called Maxixe, but has since developed away and differs strongly from Brazilian Ballroom Samba, which is called Samba de Gafieira. International Ballroom Samba is danced with a slight bounce which is created through the bending and straightening the knee. It is performed as an International Latin dance, although most of its modern development has occurred outside Latin America.
Rhumba came to the United States from Cuba in the 1920s and became a popular cabaret dance during Prohibition. Rhumba is a ballroom adaptation of son cubano and bolero (the Cuban genre) and, despite its name, it rarely includes elements of Cuban rumba. It includes Cuban motion through knee-straightening, figure-eight hip rotation, and swiveling foot action. An important characteristic of rhumba is the powerful and direct lead achieved through the ball of the foot. Rhumba is performed for both International Latin and American Rhythm.
Mambo was developed as an offshoot of danzón, the national dance of Cuba, in the late 1930s by Orestes López and his brother Cachao, of Arcaño y sus Maravillas. They conceived a new form of danzón influenced by son cubano, with a faster, improvised final section, which allowed dancers to more freely express themselves, given that danzón had traditionally a very rigid structure. In the 1940s, Dámaso Pérez Prado transformed the mambo from the charanga into the big band format, and took it to Mexico and the United States, where it became a "dance craze".
Cha Cha (sometimes wrongly called Cha Cha Cha based on a "street version" of the dance with shifted timing) was developed by Enrique Jorrín in the early 1950s as a slower alternative to Mambo; in fact, it was originally called Triple Mambo. The Cha Cha is a flirtatious dance with many hip rotations and partners synchronising their movements. The dance includes bending and straightening of the knee, giving it a touch of Cuban motion. Cha-cha is performed for both International Latin and American Rhythm.
Swing was developed in 1927 and originally named the Lindy Hop by "Shorty" George Snowden. There have been 40 different versions documented over the years; the most common is the East Coast swing, which is danced competitively only in the U.S. and Canada as part of the American style. The East Coast swing was established by Arthur Murray and others shortly after World War II. Swing music is very lively and upbeat and can be danced to jazz or big band music. Swing dancing is a style with lots of bounce and energy, and it includes many spins and underarm turns. East Coast swing is performed as an American Rhythm dance.
The jive is part of the swing dance group and is a very lively variation of the jitterbug. Jive originated from African American clubs in the early 1940s. During World War II, American soldiers introduced the jive in England where it was adapted to today's competitive jive. In jive, the man leads the dance while the woman encourages the man to ask them to dance. It is danced to big band music, and some technique is taken from salsa, swing and tango. Jive is performed as an International Latin dance.
According to the World Dance Council, the tempi for the international style dances are:
Waltz: 28 bars per minute, 3/4 time; also known as "Slow Waltz" or "English Waltz" depending on locality
Tango: 32 bars per minute, 2/4 time
Viennese Waltz: 60 bars per minute, 3/4 time. On the European continent, the Viennese waltz is known simply as "waltz", while the waltz is recognized as "English waltz" or "Slow Waltz".
Foxtrot: 28 bars per minute, 4/4 time
Quickstep: 50 bars per minute, 4/4 time
Cha-cha-cha: 30 bars per minute, 4/4 time
Samba: 48 bars per minute, 2/4 time
Rumba: 24 bars per minute, 4/4 time
Paso Doble: 56 bars per minute, 2/4 time
Jive: 42 bars per minute, 4/4 time
For the American style (American Smooth and American Rhythm), typical tempi are:
Waltz: 29–30 bars per minute (30–32 bars per minute for Bronze)
Tango: 60 bars per minute (30–32 bars per minute for Bronze)
Foxtrot: 30 bars per minute (32–34 bars per minute for Bronze)
Viennese Waltz: 53–54 bars per minute (54 bars per minute for Bronze)
Cha Cha: 30 bars per minute
Rumba: 30–32 bars per minute (32–36 bars per minute for Bronze)
East Coast Swing: 36 bars per minute (34–36 bars per minute for Bronze)
Bolero: 24 bars per minute (24–26 bars per minute for Bronze)
Mambo: 47 bars per minute (48–51 bars per minute for Bronze)
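The tempi above are quoted in bars (measures) per minute rather than beats per minute; a rough conversion simply multiplies by the number of beats in each bar. The sketch below is illustrative only, assuming the time signatures commonly used for these dances (published tempo ranges vary by organization).

```python
# Rough conversion from bars (measures) per minute to beats per minute.
# The beats-per-bar values are assumptions based on the usual time
# signatures for each dance; actual published tempi vary by organization.

def bars_to_bpm(bars_per_minute: int, beats_per_bar: int) -> int:
    """Beats per minute = bars per minute x beats in each bar."""
    return bars_per_minute * beats_per_bar

examples = {
    "Waltz (3/4)": (28, 3),      # 28 bars/min -> 84 bpm
    "Quickstep (4/4)": (50, 4),  # 50 bars/min -> 200 bpm
    "Samba (2/4)": (48, 2),      # 48 bars/min -> 96 bpm
}

for dance, (bars, beats) in examples.items():
    print(f"{dance}: {bars} bars/min = {bars_to_bpm(bars, beats)} bpm")
```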
| https://en.wikipedia.org/wiki?curid=3332 |
The Birth of a Nation
The Birth of a Nation (originally called The Clansman) is a 1915 American silent epic drama film directed by D. W. Griffith and starring Lillian Gish. The screenplay is adapted from the 1905 novel and play "The Clansman" by Thomas Dixon Jr. Griffith co-wrote the screenplay with Frank E. Woods and produced the film with Harry Aitken.
"The Birth of a Nation" is a landmark of film history. It was the first 12-reel film ever made and, at three hours, also the longest up to that point. Its plot, part fiction and part history, chronicling the assassination of Abraham Lincoln by John Wilkes Booth and the relationship of two families in the Civil War and Reconstruction eras over the course of several years—the pro-Union (Northern) Stonemans and the pro-Confederacy (Southern) Camerons—was by far the most complex of any movie made up to that date. It was originally shown in two parts separated by another movie innovation, an intermission, and it was the first to have a musical score for an orchestra. It pioneered close-ups, fade-outs, and a carefully staged battle sequence with hundreds of extras (another first) made to look like thousands. It came with a 13-page "Souvenir Program". It was the first American motion picture to be screened in the White House, viewed there by President Woodrow Wilson.
The film was controversial even before its release and has remained so ever since; it has been called "the most controversial film ever made in the United States". Lincoln is portrayed positively, unusual for a narrative that promotes the Lost Cause ideology. On the other hand, the film portrays African-Americans (many of whom are played by white actors in blackface) as unintelligent and sexually aggressive toward white women. The film presents the Ku Klux Klan (KKK) as a heroic force necessary to preserve American values and a white supremacist social order.
In response to the film's depictions of black people and Civil War history, African-Americans across the nation organized and participated in protests against "The Birth of a Nation." In places such as Boston, where thousands of white people viewed the film, black leaders tried to have it banned on the basis that it inflamed racial tensions and could incite violence. The NAACP spearheaded an unsuccessful campaign to ban the film. Griffith's indignation at efforts to censor or ban the film motivated him to produce "Intolerance" the following year.
In spite of its divisiveness, "The Birth of a Nation" was a huge commercial success and profoundly influenced both the film industry and American culture. The film has been acknowledged as an inspiration for the rebirth of the Ku Klux Klan, which took place only a few months after its release. In 1992, the Library of Congress deemed the film "culturally, historically, or aesthetically significant" and selected it for preservation in the National Film Registry.
The film consists of two parts of similar length. The first part closes with the assassination of Abraham Lincoln, after which there is an intermission. At the New York premiere, Dixon spoke on stage between the parts, reminding the audience that the dramatic version of "The Clansman" appeared in that venue nine years previously. "Mr. Dixon also observed that he would have allowed none but the son of a Confederate soldier to direct the film version of "The Clansman.""
The film follows two juxtaposed families. One is the Northern Stonemans: abolitionist U.S. Representative Austin Stoneman (based on the Reconstruction-era Representative Thaddeus Stevens of Pennsylvania), his daughter, and two sons. The other is the Southern Camerons: Dr. Cameron, his wife, their three sons and two daughters. Phil, the elder Stoneman son, falls in love with Margaret Cameron, during the brothers' visit to the Cameron estate in South Carolina, representing the Old South. Meanwhile, young Ben Cameron (modeled after Leroy McAfee) idolizes a picture of Elsie Stoneman. When the Civil War arrives, the young men of both families enlist in their respective armies. The younger Stoneman and two of the Cameron brothers are killed in combat. Meanwhile, the Cameron women are rescued by Confederate soldiers who rout a black militia after an attack on the Cameron home. Ben Cameron leads a heroic final charge at the Siege of Petersburg, earning the nickname of "the Little Colonel", but he is also wounded and captured. He is then taken to a Union military hospital in Washington, D.C.
During his stay at the hospital, he is told that he will be hanged. Also at the hospital, he meets Elsie Stoneman, whose picture he has been carrying; she is working there as a nurse. Elsie takes Cameron's mother, who had traveled to Washington to tend her son, to see Abraham Lincoln, and Mrs. Cameron persuades the President to pardon Ben. When Lincoln is assassinated at Ford's Theatre, his conciliatory postwar policy expires with him. In the wake of the president's death, Austin Stoneman and other Radical Republicans are determined to punish the South, employing harsh measures that Griffith depicts as having been typical of the Reconstruction Era.
Stoneman and his protégé Silas Lynch, a psychopathic mulatto (modeled after Alonzo J. Ransier and Richard Howell Gleaves), head to South Carolina to observe the implementation of Reconstruction policies firsthand. During the election, in which Lynch is elected lieutenant governor, blacks are observed stuffing the ballot boxes, while many whites are denied the vote. The newly elected, mostly black members of the South Carolina legislature are shown at their desks displaying extremely racist stereotypical behavior, such as one member taking off his shoe and putting his feet up on his desk, and others drinking liquor and feasting on fried chicken.
Meanwhile, inspired by observing white children pretending to be ghosts to scare black children, Ben fights back by forming the Ku Klux Klan. As a result, Elsie, out of loyalty to her father, breaks off her relationship with Ben. Later, Flora Cameron goes off alone into the woods to fetch water and is followed by Gus, a freedman and soldier who is now a captain. He confronts Flora and tells her that he desires to get married. Frightened, she flees into the forest, pursued by Gus. Trapped on a precipice, Flora warns Gus she will jump if he comes any closer. When he does, she leaps to her death. Having run through the forest looking for her, Ben has seen her jump; he holds her as she dies, then carries her body back to the Cameron home. In response, the Klan hunts down Gus, tries him, finds him guilty, and lynches him.
Lynch then orders a crackdown on the Klan after discovering Gus's murder. He also secures the passing of legislation allowing mixed-race marriages. Dr. Cameron is arrested for possessing Ben's Klan regalia, now considered a capital crime. He is rescued by Phil Stoneman and a few of his black servants. Together with Margaret Cameron, they flee. When their wagon breaks down, they make their way through the woods to a small hut that is home to two sympathetic former Union soldiers who agree to hide them. An intertitle states, "The former enemies of North and South are united again in common defense of their Aryan birthright."
Congressman Stoneman leaves to avoid being connected with Lt. Gov. Lynch's crackdown. Elsie, learning of Dr. Cameron's arrest, goes to Lynch to plead for his release. Lynch, who had been lusting after Elsie, tries to force her to marry him, which causes her to faint. Stoneman returns, causing Elsie to be placed in another room. At first Stoneman is happy when Lynch tells him he wants to marry a white woman, but he is then angered when Lynch tells him that it is Stoneman's daughter. Undercover Klansman spies go to get help when they discover Elsie's plight after she breaks a window and cries out for help. Elsie falls unconscious again and revives while gagged and being bound. The Klan gathered together, with Ben leading them, ride in to gain control of the town. When news about Elsie reaches Ben, he and others go to her rescue. Elsie frees her mouth and screams for help. Lynch is captured. Victorious, the Klansmen celebrate in the streets. Meanwhile, Lynch's militia surrounds and attacks the hut where the Camerons are hiding. The Klansmen, with Ben at their head, race in to save them just in time. The next election day, blacks find a line of mounted and armed Klansmen just outside their homes and are intimidated into not voting.
The film concludes with a double wedding as Margaret Cameron marries Phil Stoneman and Elsie Stoneman marries Ben Cameron. The masses are shown oppressed by a giant warlike figure who gradually fades away. The scene shifts to another group finding peace under the image of Jesus Christ. The penultimate title is: "Dare we dream of a golden day when the bestial War shall rule no more. But instead — the gentle Prince in the Hall of Brotherly Love in the City of Peace."
There was an uncompleted, now lost, 1911 version, titled "The Clansman". It used Kinemacolor and a new sound process; one reason for this version's failure is the unwillingness of theater owners to purchase the equipment to show it. The director was William F. Haddock, and the producer was George Brennan. Some scenes were filmed on the porches and lawns of Homewood Plantation, in Natchez, Mississippi. One and a half reels were completed.
Kinemacolor received a settlement from the producers of "Birth" when they proved that they had an earlier right to film the work.
The footage was shown to the trade in an attempt to arouse interest. Early movie critic Frank E. Woods attended; Griffith always credited Woods with bringing "The Clansman" to his attention.
After the failure of the Kinemacolor project, in which Dixon was willing to invest his own money, he began visiting other studios to see if they were interested. In late 1913, Dixon met the film producer Harry Aitken, who was interested in making a film out of "The Clansman"; through Aitken, Dixon met Griffith. Like Dixon, Griffith was a Southerner, a fact that Dixon points out; Griffith's father served as a colonel in the Confederate States Army and, like Dixon, viewed Reconstruction negatively. Griffith believed that a passage from "The Clansman" where Klansmen ride "to the rescue of persecuted white Southerners" could be adapted into a great cinematic sequence. Griffith first announced his intent to adapt Dixon's play to Gish and Walthall after filming "Home Sweet Home" in 1914.
"Birth of a Nation" "follows "The Clansman" [the play] nearly scene by scene". While some sources also credit "The Leopard's Spots" as source material, Russell Merritt attributes this to "the original 1915 playbills and program for "Birth" which, eager to flaunt the film's literary pedigree, cited both "The Clansman" and "The Leopard's Spots" as sources." According to Karen Crowe, "[t]here is not a single event, word, character, or circumstance taken from "The Leopard's Spots"... Any likenesses between the film and "The Leopard's Spots" occur because some similar scenes, circumstances, and characters appear in both books."
Griffith agreed to pay Thomas Dixon $10,000 for the rights to his play "The Clansman". Since he ran out of money and could afford only $2,500 of the original option, Griffith offered Dixon a 25 percent interest in the picture. Dixon reluctantly agreed, and the unprecedented success of the film made him rich. Dixon's proceeds were the largest sum any author had received [up to 2007] for a motion picture story and amounted to several million dollars. The American historian John Hope Franklin suggested that many aspects of the script for "The Birth of a Nation" appeared to reflect Dixon's concerns more than Griffith's, as Dixon had an obsession in his novels with describing in loving detail the lynchings of black men, which did not reflect Griffith's interests.
Griffith began filming on July 4, 1914, and finished by October 1914. Some filming took place in Big Bear Lake, California. D. W. Griffith took over the Hollywood studio of Kinemacolor. West Point engineers provided technical advice on the American Civil War battle scenes, providing Griffith with the artillery used in the film. Much of the filming was done on the Griffith Ranch in San Fernando Valley, with the Petersburg scenes being shot at what is today Forest Lawn Memorial Park and other scenes being shot in Whittier and Ojai Valley. The film's war scenes were influenced by Robert Underwood Johnson's book "Battles and Leaders of the Civil War", "Harper's Pictorial History of the Civil War", "The Soldier in Our Civil War", and Mathew Brady's photography.
Many of the African Americans in the film were portrayed by white actors in blackface. Griffith initially claimed this was deliberate, stating that "on careful weighing of every detail concerned, the decision was to have no black blood among the principals; it was only in the legislative scene that Negroes were used, and then only as 'extra people.'" However, black extras who had been housed in segregated quarters, including Griffith's acquaintance and frequent collaborator Madame Sul-Te-Wan, can be seen in many other shots of the film.
Griffith's budget started at US$40,000 but rose to over $100,000.
By the time he finished filming, Griffith shot approximately 150,000 feet of footage (or about 36 hours worth of film), which he edited down to 13,000 feet (just over 3 hours). The film was edited after early screenings in reaction to audience reception, and existing prints of the film are missing footage from the standard version of the film. Evidence exists that the film originally included scenes of white slave traders seizing blacks from West Africa and detaining them aboard a slave ship, Southern congressmen in the House of Representatives, Northerners reacting to the results of the 1860 presidential election, the passage of the Fourteenth Amendment, a Union League meeting, depictions of martial law in South Carolina, and a battle sequence. In addition, several scenes were cut at the insistence of New York Mayor John Purroy Mitchel due to their highly racist content before its release in New York City, including a female abolitionist activist recoiling from the body odor of a black boy, black men seizing white women on the streets of Piedmont, and deportations of blacks with the title "Lincoln's Solution." It was also long rumored, including by Griffith's biographer Seymour Stern, that the original film included a rape scene between Gus and Flora before her suicide, but in 1974 the cinematographer Karl Brown denied that such a scene had been filmed.
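As a rough check on these footage-to-running-time figures, the sketch below converts 35 mm reel lengths to hours. The 16 frames per foot figure is standard for 35 mm film, while the projection speed of 18 frames per second is an assumed, typical silent-era value (actual speeds varied).

```python
# Convert 35 mm footage length to approximate running time.
# 35 mm film carries 16 frames per foot; 18 frames per second is an assumed
# silent-era projection speed (films of the period varied in speed).

FRAMES_PER_FOOT = 16
FRAMES_PER_SECOND = 18  # assumption

def running_time_hours(feet: float) -> float:
    seconds = feet * FRAMES_PER_FOOT / FRAMES_PER_SECOND
    return seconds / 3600

print(f"{running_time_hours(150_000):.1f} hours")  # ~37 hours of raw footage
print(f"{running_time_hours(13_000):.1f} hours")   # ~3.2 hours, the released cut
```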
Although "The Birth of a Nation" is commonly regarded as a landmark for its dramatic and visual innovations, its use of music was arguably no less revolutionary. Though film was still silent at the time, it was common practice to distribute musical cue sheets, or less commonly, full scores (usually for organ or piano accompaniment) along with each print of a film.
For "The Birth of a Nation", composer Joseph Carl Breil created a three-hour-long musical score that combined all three types of music in use at the time: adaptations of existing works by classical composers, new arrangements of well-known melodies, and original composed music. Though it had been specifically composed for the film, Breil's score was not used for the Los Angeles première of the film at Clune's Auditorium; rather, a score compiled by Carli Elinor was performed in its stead, and this score was used exclusively in West Coast showings. Breil's score was not used until the film debuted in New York at the Liberty Theatre but it was the score featured in all showings save those on the West Coast.
Outside of original compositions, Breil adapted classical music for use in the film, including passages from "Der Freischütz" by Carl Maria von Weber, "Leichte Kavallerie" by Franz von Suppé, Symphony No. 6 by Ludwig van Beethoven, and "Ride of the Valkyries" by Richard Wagner, the latter used as a leitmotif during the ride of the KKK. Breil also arranged several traditional and popular tunes that would have been recognizable to audiences at the time, including many Southern melodies; among these songs were "Maryland, My Maryland", "Dixie", "Old Folks at Home", "The Star-Spangled Banner", "America the Beautiful", "The Battle Hymn of the Republic", "Auld Lang Syne", and "Where Did You Get That Hat?". DJ Spooky has called Breil's score, with its mix of Dixieland songs, classical music and "vernacular heartland music" "an early, pivotal accomplishment in remix culture." He has also cited Breil's use of music by Richard Wagner as influential on subsequent Hollywood films, including "Star Wars" (1977) and "Apocalypse Now" (1979).
In his original compositions for the film, Breil wrote numerous leitmotifs to accompany the appearance of specific characters. The principal love theme that was created for the romance between Elsie Stoneman and Ben Cameron was published as "The Perfect Song" and is regarded as the first marketed "theme song" from a film; it was later used as the theme song for the popular radio and television sitcom "Amos 'n' Andy".
The first public showing of the film, then called "The Clansman", was on January 1 and 2, 1915, at the Loring Opera House in Riverside, California. The second night, it was sold out and people were turned away. It was shown on February 8, 1915, to an audience of 3,000 persons at Clune's Auditorium in downtown Los Angeles.
The film's backers understood that the film needed a massive publicity campaign if they were to cover the immense cost of producing it. A major part of this campaign was the release of the film in a roadshow theatrical release. This allowed Griffith to charge premium prices for tickets, sell souvenirs, and build excitement around the film before giving it a wide release. For several months, Griffith's team traveled to various cities to show the film for one or two nights before moving on. This strategy was immensely successful.
The title was changed to "The Birth of a Nation" before the March 2 New York opening. However, Dixon copyrighted the title "The Birth of a Nation" in 1905, and it was used in the press as early as January 2, 1915, while it was still referred to as "The Clansman" in October.
"Birth of a Nation" was the first movie shown in the White House, in the East Room, on February 18, 1915. (An earlier movie, the Italian "Cabiria" (1914), was shown on the lawn.) It was attended by President Woodrow Wilson, members of his family, and members of his Cabinet. Both Dixon and Griffiths were present. As put by Dixon, not an impartial source, "it repeated the triumph of the first showing".
There is dispute about Wilson's attitude toward the movie. A newspaper reported that he "received many letters protesting against his alleged action in Indorsing the pictures ", including a letter from Massachusetts Congressman Thomas Chandler Thacher. The showing of the movie had caused "several near-riots". When Assistant Attorney General William H. Lewis and A. Walters, a bishop of the African Methodist Episcopal Zion Church, called at the White House "to add their protests", President Wilson's private secretary, Joseph Tumulty, showed them a letter he had written to Thacher on Wilson's behalf. According to the letter, Wilson had been "entirely unaware of the character of the play [movie] before it was presented and has at no time expressed his approbation of it. Its exhibition at the White House was a courtesy extended to an old acquaintance." Dixon, in his autobiography, quotes Wilson as saying, when Dixon proposed showing the movie at the White House, that "I am pleased to be able to do this little thing for you, because a long time ago you took a day out of your busy life to do something for me." What Dixon had done for Wilson was to suggest him for an honorary degree, which Wilson received, from Dixon's "alma mater", Wake Forest College.
Dixon had been a fellow graduate student in history with Wilson at Johns Hopkins University and, in 1913, dedicated his historical novel about Lincoln, "The Southerner", to "our first Southern-born president since Lincoln, my friend and collegemate Woodrow Wilson".
The evidence that Wilson knew "the character of the play" in advance of seeing it is circumstantial but very strong: "Given Dixon's career and the notoriety attached to the play "The Clansman", it is not unreasonable to assume that Wilson must have had some idea of at least the general tenor of the film." The movie was based on a best-selling novel and was preceded by a stage version (play) which was received with protests in several cities — in some cities it was prohibited — and received a great deal of news coverage. Wilson issued no protest when the "Evening Star", at that time Washington's "newspaper of record", reported in advance of the showing, in language suggesting a press release from Dixon and Griffith, that Dixon was "a schoolmate of President Wilson and is an intimate friend", and that Wilson's interest in it "is due to the great lesson of peace it teaches". Wilson, and only Wilson, is quoted by name in the movie for his observations on American history, and the title of Wilson's book ("History of the American People") is mentioned as well. The three title cards with quotations from Wilson's book read:
"Adventurers swarmed out of the North, as much the enemies of one race as of the other, to cozen, beguile and use the negroes... [Ellipsis in the original.] In the villages the negroes were the office holders, men who knew none of the uses of authority, except its insolences."
"...The policy of the congressional leaders wrought…a veritable overthrow of civilization in the South...in their determination to 'put the white South under the heel of the black South.'" [Ellipses and underscore in the original.]
"The white men were roused by a mere instinct of self-preservation...until at last there had sprung into existence a great Ku Klux Klan, a veritable empire of the South, to protect the southern country." [Ellipsis in the original.]
In the same book, Wilson has harsh words about the abyss between the original goals of the Klan and what it evolved into. Dixon has been accused of misquoting Wilson.
In 1937 a popular magazine reported that Wilson said of the film, "It is like writing history with lightning. And my only regret is that it is all so terribly true." Wilson over the years had several times used the metaphor of illuminating history as if by lightning and he may well have said it at the time. The accuracy of his saying it was "terribly true" is disputed by historians; there is no contemporary documentation of the remark. Vachel Lindsay, a popular poet of the time, is known to have referred to the film as "art by lightning flash."
The next day, February 19, 1915, Griffith and Dixon held a showing of the film in the Raleigh Hotel ballroom, which they had hired for the occasion. Early that morning, Dixon called on a North Carolina friend, the white-supremacist Josephus Daniels, Secretary of the Navy. Daniels set up a meeting that morning for Dixon with Edward Douglass White, Chief Justice of the Supreme Court. Initially Justice White was not interested in seeing the film, but when Dixon told him it was the "true story" of Reconstruction and the Klan's role in "saving the South", White, recalling his youth in Louisiana, jumped to attention and said: "I was a member of the Klan, sir". With White agreeing to see the film, the rest of the Supreme Court followed. In addition to the entire Supreme Court, in the audience were "many members of Congress and members of the diplomatic corps", the Secretary of the Navy, 38 members of the Senate, and about 50 members of the House of Representatives. The audience of 600 "cheered and applauded throughout."
In Griffith's words, the showings to the president and the entire Supreme Court conferred an "honor" upon "Birth of a Nation". Dixon and Griffith used this commercially.
The following day, Griffith and Dixon transported the film to New York City for review by the National Board of Censorship. They presented the film as "endorsed" by the President and the cream of Washington society. The Board approved the film by 15 to 8.
A warrant to close the theater in which the movie was to open was dismissed after a long-distance call to the White House confirmed that the film had been shown there.
Justice White was very angry when advertising for the film stated that he approved it, and he threatened to denounce it publicly.
Dixon clearly was rattled and upset by criticism by African Americans that the movie encouraged hatred against them, and he wanted the endorsement of as many powerful men as possible to offset such criticism. Dixon always vehemently denied having anti-black prejudices—despite the way his books promoted white supremacy—and stated: "My books are hard reading for a Negro, and yet the Negroes, in denouncing them, are unwittingly denouncing one of their greatest friends".
In a letter sent on May 1, 1915, to Joseph P. Tumulty, Wilson's secretary, Dixon wrote: "The real purpose of my film was to revolutionize Northern sentiments by a presentation of history that would transform every man in the audience into a good Democrat...Every man who comes out of the theater is a Southern partisan for life!" In a letter to President Wilson sent on September 5, 1915, Dixon boasted: "This play is transforming the entire population of the North and the West into sympathetic Southern voters. There will never be an issue of your segregation policy". Dixon was alluding to the fact that Wilson, upon becoming president in 1913, had allowed cabinet members to impose segregation on federal workplaces in Washington, D.C. by reducing the number of black employees through demotion or dismissal.
One famous part of the film was added by Griffith only for the second run and is missing from most online versions, which were presumably taken from first-run prints.
These are the second and third of three opening title cards which defend the film. The added titles read:
A PLEA FOR THE ART OF THE MOTION PICTURE:
We do not fear censorship, for we have no wish to offend with improprieties or obscenities, but we do demand, as a right, the liberty to show the dark side of wrong, that we may illuminate the bright side of virtue – the same liberty that is conceded to the art of the written word – that art to which we owe the Bible and the works of Shakespeare
and
If in this work we have conveyed to the mind the ravages of war to the end that war may be held in abhorrence, this effort will not have been in vain.
Various film historians have expressed a range of views about these titles. To Nicholas Andrew Miller, this shows that "Griffith's greatest achievement in "The Birth of a Nation" was that he brought the cinema's capacity for spectacle... under the rein of an outdated, but comfortably literary form of historical narrative. Griffith's models... are not the pioneers of film spectacle... but the giants of literary narrative". On the other hand, S. Kittrell Rushing complains about Griffith's "didactic" title-cards, while Stanley Corkin complains that Griffith "masks his idea of fact in the rhetoric of high art and free expression" and creates film which "erodes the very ideal" of liberty which he asserts.
"The New York Times" gave it a quite brief review, calling it "melodramatic" and "inflammatory", adding that: "A great deal might be said concerning the spirit revealed in Mr. Dixon's review of the unhappy chapter of Reconstruction and concerning the sorry service rendered by its plucking at old wounds."
The box office gross of "The Birth of a Nation" is not known and has been the subject of exaggeration. When the film opened, the tickets were sold at premium prices. The film played at the Liberty Theater at Times Square in New York City for 44 weeks with tickets priced at $2.20. By the end of 1917, Epoch reported to its shareholders cumulative receipts of $4.8 million, and Griffith's own records put Epoch's worldwide earnings from the film at $5.2 million as of 1919, although the distributor's share of the revenue at this time was much lower than the exhibition gross. In the biggest cities, Epoch negotiated with individual theater owners for a percentage of the box office; elsewhere, the producer sold all rights in a particular state to a single distributor (an arrangement known as "state's rights" distribution). The film historian Richard Schickel says that under the state's rights contracts, Epoch typically received about 10% of the box office gross—which theater owners often underreported—and concludes that ""Birth" certainly generated more than $60 million in box-office business in its first run".
The film held the mantle of the highest-grossing film until it was overtaken by "Gone with the Wind" (1939), another film about the Civil War and Reconstruction era. By 1940 "Time" magazine estimated the film's cumulative gross rental (the distributor's earnings) at approximately $15 million. For years "Variety" had the gross rental listed as $50 million, but in 1977 repudiated the claim and revised its estimate down to $5 million. It is not known for sure how much the film has earned in total, but producer Harry Aitken put its estimated earnings at $15–18 million in a letter to a prospective investor in a proposed sound version. It is likely the film earned over $20 million for its backers and generated $50–100 million in box office receipts. In a 2015 "Time" article, Richard Corliss estimated the film had earned the equivalent of $1.8 billion adjusted for inflation, a milestone that at the time had only been surpassed by "Titanic" (1997) and "Avatar" (2009) in nominal earnings.
Like Dixon's novels and play, "Birth of a Nation" received considerable criticism, both before and after its premiere. Dixon, who believed it entirely truthful, attributed this to "Sectionalists", i.e. non-Southerners who in Dixon's opinion were hostile to the truth about the South. It was to counter these "sinister forces" and the "dangerous...menace" that Dixon and Griffith sought "the backing" of President Wilson and the Supreme Court.
The National Association for the Advancement of Colored People (NAACP) protested at premieres of the film in numerous cities. According to the historian David Copeland, "by the time of the movie's March 3 [1915] premiere in New York City, its subject matter had embroiled the film in charges of racism, protests, and calls for censorship, which began after the Los Angeles branch of the NAACP requested the city's film board ban the movie. Since film boards were composed almost entirely of whites, few review boards initially banned Griffith's picture". The NAACP also conducted a public education campaign, publishing articles protesting the film's fabrications and inaccuracies, organizing petitions against it, and conducting education on the facts of the war and Reconstruction. Because of the lack of success in NAACP's actions to ban the film, on April 17, 1915, NAACP secretary Mary Childs Nerney wrote to NAACP Executive Committee member George Packard: "I am utterly disgusted with the situation in regard to "The Birth of a Nation" ... kindly remember that we have put six weeks of constant effort of this thing and have gotten nowhere."
Jane Addams, an American social worker and social reformer, and the founder of Hull House, voiced her reaction to the film in an interview published by the "New York Post" on March 13, 1915, just ten days after the film was released. She stated that "One of the most unfortunate things about this film is that it appeals to race prejudice upon the basis of conditions of half a century ago, which have nothing to do with the facts we have to consider to-day. Even then it does not tell the whole truth. It is claimed that the play is historical: but history is easy to misuse." In New York, Rabbi Stephen Samuel Wise told the press after seeing "The Birth of a Nation" that the film was "an indescribable foul and loathsome libel on a race of human beings". In Boston, Booker T. Washington wrote a newspaper column asking readers to boycott the film, while the civil rights activist William Monroe Trotter organized demonstrations against the film, which he predicted was going to worsen race relations. On Saturday, April 10, and again on April 17, Trotter and a group of other blacks tried to buy tickets for the show's premiere at the Tremont Theater and were refused. They stormed the box office in protest, 260 police on standby rushed in, and a general melee ensued. Trotter and ten others were arrested. The following day a huge demonstration was staged at Faneuil Hall. In Washington D.C, the Reverend Francis James Grimké published a pamphlet entitled "Fighting a Vicious Film" that challenged the historical accuracy of "The Birth of a Nation" on a scene-by-scene basis.
When the film was released, riots also broke out in Philadelphia and other major cities in the United States. The film's inflammatory nature was a catalyst for gangs of whites to attack blacks. On April 24, 1916, the "Chicago American" reported that a white man murdered a black teenager in Lafayette, Indiana, after seeing the film, although there has been some controversy as to whether the murderer had actually seen "The Birth of a Nation". The mayor of Cedar Rapids, Iowa was the first of twelve mayors to ban the film in 1915 out of concern that it would promote race prejudice, after meeting with a delegation of black citizens. The NAACP set up a precedent-setting national boycott of the film, likely seen as the most successful effort. Additionally, they organized a mass demonstration when the film was screened in Boston, and it was banned in three states and several cities.
Both Griffith and Dixon in letters to the press dismissed African-American protests against "The Birth of a Nation". In a letter to "The New York Globe", Griffith wrote that his film was "an influence against the intermarriage of blacks and whites". Dixon likewise called the NAACP "the Negro Intermarriage Society" and said it was against "The Birth of a Nation" "for one reason only—because it opposes the marriage of blacks to whites". Griffith—indignant at the film's negative critical reception—wrote letters to newspapers and published a pamphlet in which he accused his critics of censoring unpopular opinions.
When Sherwin Lewis of "The New York Globe" wrote a piece that expressed criticism of the film's distorted portrayal of history and said that it was not worthy of constitutional protection because its purpose was to make a few "dirty dollars", Griffith responded that "the public should not be afraid to accept the truth, even though it might not like it". He also added that the man who wrote the editorial was "damaging my reputation as a producer" and "a liar and a coward".
"The Birth of a Nation" was very popular, despite the film's controversy; it was unlike anything that American audiences had ever seen before. The "Los Angeles Times" called it "the greatest picture ever made and the greatest drama ever filmed". Mary Pickford said: ""Birth of a Nation" was the first picture that really made people take the motion picture industry seriously". It became a national cultural phenomenon: merchandisers made Ku-Klux hats and kitchen aprons, and ushers dressed in white Klan robes for openings. In New York there were Klan-themed balls and, in Chicago that Halloween, thousands of college students dressed in robes for a massive Klan-themed party. The producers had 15 "detectives" at the Liberty Theater in New York City "to prevent disorder on the part of those who resent the 'reconstruction period' episodes depicted."
The Reverend Charles Henry Parkhurst defended the film against the charge of racism by saying that it "was exactly true to history" by depicting freedmen as they were and, therefore, it was a "compliment to the black man" by showing how far black people had "advanced" since Reconstruction. Critic Dolly Dalrymple wrote that, "when I saw it, it was far from silent ... incessant murmurs of approval, roars of laughter, gasps of anxiety, and outbursts of applause greeted every new picture on the screen". One man viewing the film was so moved by the scene where Flora Cameron flees Gus to avoid being raped that he took out his handgun and began firing at the screen in an effort to help her. Katharine DuPre Lumpkin recalled watching the film as an 18-year-old in 1915 in her 1947 autobiography "The Making of a Southerner": "Here was the black figure—and the fear of the white girl—though the scene blanked out just in time. Here were the sinister men the South scorned and the noble men the South revered. And through it all the Klan rode. All around me people sighed and shivered, and now and then shouted or wept, in their intensity."
D. W. Griffith made a film in 1916, called "Intolerance", partly in response to the criticism that "The Birth of a Nation" received. Griffith made clear within numerous interviews that the film's title and main themes were chosen in response to those who he felt had been intolerant to "The Birth of a Nation". A sequel called "The Fall of a Nation" was released in 1916, depicting the invasion of the United States by a German-led confederation of European monarchies and criticizing pacifism in the context of the First World War. It was the first sequel in film history. The film was directed by Thomas Dixon Jr., who adapted it from his novel of the same name. Despite its success in the foreign market, the film was not a success among American audiences, and is now a lost film.
In 1918, an American silent drama film directed by John W. Noble called "The Birth of a Race" was released as a direct response to "The Birth of a Nation". The film was an ambitious project by producer Emmett Jay Scott to challenge Griffith's film and tell another side of the story, but was ultimately unsuccessful. In 1920, African-American filmmaker Oscar Micheaux released "Within Our Gates", a response to "The Birth of a Nation". "Within Our Gates" depicts the hardships faced by African Americans during the era of Jim Crow laws. Griffith's film was remixed in 2004 as "Rebirth of a Nation" by DJ Spooky. Quentin Tarantino has said that he made his film "Django Unchained" (2012) to counter the falsehoods of "The Birth of a Nation".
In November 1915, William Joseph Simmons revived the Klan in Atlanta, Georgia, holding a cross burning at Stone Mountain. The historian John Hope Franklin observed that, had it not been for "The Birth of a Nation", the Klan might not have been reborn.
Franklin wrote in 1979 that "The influence of "Birth of a Nation" on the current view of Reconstruction has been greater than any other single force", but that "It is not at all difficult to find inaccuracies and distortions" in the movie.
Released in 1915, "The Birth of a Nation" has been credited as groundbreaking among its contemporaries for its innovative application of the medium of film. According to the film historian Kevin Brownlow, the film was "astounding in its time" and initiated "so many advances in film-making technique that it was rendered obsolete within a few years". The content of the work, however, has received widespread criticism for its blatant racism. Film critic Roger Ebert wrote:
Certainly "The Birth of a Nation" (1915) presents a challenge for modern audiences. Unaccustomed to silent films and uninterested in film history, they find it quaint and not to their taste. Those evolved enough to understand what they are looking at find the early and wartime scenes brilliant, but cringe during the postwar and Reconstruction scenes, which are racist in the ham-handed way of an old minstrel show or a vile comic pamphlet.
Despite its controversial story, the film has been praised by film critics, with Ebert mentioning its use as a historical tool: ""The Birth of a Nation" is not a bad film because it argues for evil. Like Riefenstahl's "Triumph of the Will", it is a great film that argues for evil. To understand how it does so is to learn a great deal about film, and even something about evil."
According to a 2002 article in the "Los Angeles Times", the film facilitated the refounding of the Ku Klux Klan in 1915. History.com similarly states that "There is no doubt that "Birth of a Nation" played no small part in winning wide public acceptance" for the KKK, and that throughout the film "African Americans are portrayed as brutish, lazy, morally degenerate, and dangerous." David Duke used the film to recruit Klansmen in the 1970s.
In 2013, the American critic Richard Brody wrote that "The Birth of a Nation" was:
...a seminal commercial spectacle but also a decisively original work of art—in effect, the founding work of cinematic realism, albeit a work that was developed to pass lies off as reality. It's tempting to think of the film's influence as evidence of the inherent corruption of realism as a cinematic mode—but it's even more revealing to acknowledge the disjunction between its beauty, on the one hand, and, on the other, its injustice and falsehood. The movie's fabricated events shouldn't lead any viewer to deny the historical facts of slavery and Reconstruction. But they also shouldn't lead to a denial of the peculiar, disturbingly exalted beauty of "Birth of a Nation", even in its depiction of immoral actions and its realization of blatant propaganda. The worst thing about "The Birth of a Nation" is how good it is. The merits of its grand and enduring aesthetic make it impossible to ignore and, despite its disgusting content, also make it hard not to love. And it's that very conflict that renders the film all the more despicable, the experience of the film more of a torment—together with the acknowledgment that Griffith, whose short films for Biograph were already among the treasures of world cinema, yoked his mighty talent to the cause of hatred (which, still worse, he sincerely depicted as virtuous).
Brody also argued that Griffith unintentionally undercut his own thesis in the film, citing the scene before the Civil War in which the Cameron family offers up lavish hospitality to the Stoneman family, who travel past mile after mile of slaves working the cotton fields of South Carolina to reach the Cameron home; a modern audience, he maintained, can see that the wealth of the Camerons comes from the slaves forced to do back-breaking work picking the cotton. Likewise, Brody argued that the scene in which people in South Carolina celebrate the Confederate victory at the Battle of Bull Run by dancing around the "eerie flare of a bonfire" implies "a dance of death", foreshadowing the destruction of Sherman's March that was to come. In the same way, Brody wrote that the scene in which the Klan dumps Gus's body at the doorstep of Lynch is meant to have the audience cheering, but modern audiences find it "obscene and horrifying". Finally, Brody argued that the end of the film, where the Klan prevents defenseless African-Americans from exercising their right to vote by pointing guns at them, today seems "unjust and cruel".
In an article for "The Atlantic", film critic Ty Burr deemed "The Birth of a Nation" the most influential film in history while criticizing its portrayal of black men as savage. Richard Corliss of "Time" wrote that Griffith "established in the hundreds of one- and two-reelers he directed a cinematic textbook, a fully formed visual language, for the generations that followed. More than anyone else—more than all others combined—he invented the film art. He brought it to fruition in "The Birth of a Nation"." Corliss praised the film's "brilliant storytelling technique" and noted that ""The Birth of a Nation" is nearly as antiwar as it is antiblack. The Civil War scenes, which consume only 30 minutes of the extravaganza, emphasize not the national glory but the human cost of combat. ... Griffith may have been a racist politically, but his refusal to find uplift in the South's war against the Union—and, implicitly, in any war at all—reveals him as a cinematic humanist."
In 1992, the U.S. Library of Congress deemed the film "culturally, historically, or aesthetically significant" and selected it for preservation in the National Film Registry. The American Film Institute recognized the film by ranking it #44 within the AFI's 100 Years...100 Movies list in 1998.
The film remains controversial due to its interpretation of American history. University of Houston historian Steven Mintz summarizes its message as follows: "Reconstruction was an unmitigated disaster, blacks could never be integrated into white society as equals, and the violent actions of the Ku Klux Klan were justified to reestablish honest government". The South is portrayed as a victim. The first overt mention of the war is the scene in which Abraham Lincoln signs the call for the first 75,000 volunteers. However, the first act of aggression in the Civil War, when Confederate troops fired on Fort Sumter in 1861, is not mentioned in the film. The film suggested that the Ku Klux Klan restored order to the postwar South, which was depicted as endangered by abolitionists, freedmen, and carpetbagging Republican politicians from the North. This reflects the Dunning School of historiography, which was current in academe at the time. The film is slightly less extreme than the books upon which it is based, in which Dixon misrepresented Reconstruction as a nightmarish time when black men ran amok, storming into weddings to rape white women with impunity.
The film portrayed President Abraham Lincoln as a friend of the South and refers to him as "the Great Heart". The two romances depicted in the film, Phil Stoneman with Margaret Cameron and Ben Cameron with Elsie Stoneman, reflect Griffith's retelling of history. The couples are used as a metaphor, representing the film's broader message of the need for the reconciliation of the North and South to defend white supremacy. Among both couples, there is an attraction that forms before the war, stemming from the friendship between their families. With the war, however, both families are split apart, and their losses culminate in the end of the war with the defense of white supremacy. One of the intertitles clearly sums up the message of unity: "The former enemies of North and South are united again in defense of their Aryan birthright."
The film further reinforced the popular belief held by whites, especially in the South, of Reconstruction as a disaster. In his 1929 book "The Tragic Era: The Revolution After Lincoln", the respected historian Claude Bowers treated "The Birth of a Nation" as a factually accurate account of Reconstruction. In "The Tragic Era", Bowers presented every black politician in the South as corrupt, portrayed Republican Representative Thaddeus Stevens as a vicious "race traitor" intent upon making blacks the equal of whites, and praised the Klan for "saving civilization" in the South. Bowers wrote about black empowerment that the worst sort of "scum" from the North like Stevens "inflamed the Negro's egoism and soon the lustful assaults began. Rape was the foul daughter of Reconstruction!"
The American historian John Hope Franklin wrote that not only did Claude Bowers treat "The Birth of a Nation" as accurate history, but his version of history seemed to be drawn from "The Birth of a Nation". Historian E. Merton Coulter likewise treated "The Birth of a Nation" as historically correct and painted a vivid picture of "black beasts" running amok, encouraged by alcohol-sodden, corrupt and vengeful black Republican politicians. Franklin wrote that, as recently as the 1970s, the popular journalist Alistair Cooke in his books and TV shows was still essentially following the version of history set out by "The Birth of a Nation", noting that Cooke had much sympathy for the suffering of whites in Reconstruction while having almost nothing to say about the suffering of blacks or about how blacks were stripped of almost all their rights after 1877.
Veteran film reviewer Roger Ebert wrote:
... stung by criticisms that the second half of his masterpiece was racist in its glorification of the Ku Klux Klan and its brutal images of blacks, Griffith tried to make amends in "Intolerance" (1916), which criticized prejudice. And in "Broken Blossoms" he told perhaps the first interracial love story in the movies—even though, to be sure, it's an idealized love with no touching.
Despite some similarities between the Congressman Stoneman character and Rep. Thaddeus Stevens of Pennsylvania, Rep. Stevens did not have the family members described and did not move to South Carolina during Reconstruction. He died in Washington, D.C. in 1868. However, Stevens' biracial housekeeper, Lydia Hamilton Smith, was considered his common-law wife, and was generously provided for in his will.
In the film, Abraham Lincoln is portrayed in a positive light due to his belief in conciliatory postwar policies toward Southern whites. The president's views are opposite those of Austin Stoneman, a character presented in a negative light, who acts as an antagonist. The assassination of Lincoln marks the transition from war to Reconstruction, each of which occupies one of the film's two "acts". By including the assassination, the film also establishes for the audience that its plot has a historical basis. Franklin wrote that the film's depiction of Reconstruction as a hellish time, when black freedmen ran amok, raping and killing whites with impunity until the Klan stepped in, is not supported by the facts. Franklin wrote that most freed slaves continued to work for their former masters during Reconstruction for want of a better alternative and, though relations between freedmen and their former masters were not friendly, very few freedmen sought revenge against the people who had enslaved them. The character of Silas Lynch has no basis in fact, and during Reconstruction no black or mulatto man served as the lieutenant governor of South Carolina.
The depictions of mass Klan paramilitary actions do not seem to have historical equivalents, although there were incidents in 1871 where Klan groups traveled from other areas in fairly large numbers to aid localities in disarming local companies of the all-black portion of the state militia under various justifications, prior to the eventual intervention of Federal troops; otherwise, the organized Klan continued its activities as small groups of "night riders".
The civil rights movement and other social movements created a new generation of historians, such as scholar Eric Foner, who led a reassessment of Reconstruction. Building on W. E. B. DuBois' work but also adding new sources, they focused on achievements of the African-American and white Republican coalitions, such as establishment of universal public education and charitable institutions in the South and extension of suffrage to black men. In response, the Southern-dominated Democratic Party and its affiliated white militias had used extensive terrorism, intimidation and outright assassinations to suppress African-American leaders and voting in the 1870s and to regain power.
In his review of "The Birth of a Nation" in "1001 Movies You Must See Before You Die", Jonathan Kline writes that "with countless artistic innovations, Griffith essentially created contemporary film language ... virtually every film is beholden to ["The Birth of a Nation"] in one way, shape or form. Griffith introduced the use of dramatic close-ups, tracking shots, and other expressive camera movements; parallel action sequences, crosscutting, and other editing techniques". He added that "the fact that "The Birth of a Nation" remains respected and studied to this day, despite its subject matter, reveals its lasting importance."
Griffith pioneered such camera techniques as close-ups, fade-outs, and a carefully staged battle sequence with hundreds of extras made to look like thousands. "The Birth of a Nation" also contained many new artistic techniques, such as color tinting for dramatic purposes, building up the plot to an exciting climax, dramatizing history alongside fiction, and featuring its own musical score written for an orchestra.
For many years, "The Birth of a Nation" was poorly represented in home media and restorations. This stemmed from several factors, one of which was the fact that Griffith and others had frequently reworked the film, leaving no definitive version. According to the silent film website "Brenton Film", many home media releases of the film consisted of "poor quality DVDs with different edits, scores, running speeds and usually in "definitely unoriginal" black and white".
One of the earliest high-quality home versions was film preservationist David Shepard's 1992 transfer of a 16mm print for VHS and LaserDisc release via Image Entertainment. A short documentary, "The Making of The Birth of a Nation", newly produced and narrated by Shepard, was also included. Both were released on DVD by Image in 1998 and the United Kingdom's Eureka Entertainment in 2000.
In the UK, Photoplay Productions restored the Museum of Modern Art's 35mm print that was the source of Shepard's 16 mm print, though they also augmented it with extra material from the British Film Institute. It was also given a full orchestral recording of the original Breil score. Though broadcast on Channel 4 television and theatrically screened many times, Photoplay's 1993 version was never released on home video.
Shepard's transfer and documentary were reissued in the US by Kino Video in 2002, this time in a 2-DVD set with added extras on the second disc. These included several Civil War shorts also directed by D. W. Griffith. In 2011, Kino prepared an HD transfer of a 35 mm negative from the Paul Killiam Collection. They added some material from the Library of Congress and gave it a new compilation score. This version was released on Blu-ray by Kino in the US, Eureka in the UK (as part of their "Masters of Cinema" collection) and Divisa Home Video in Spain.
In 2015, the year of the film's centenary, Photoplay Productions' Patrick Stanbury, in conjunction with the British Film Institute, carried out the first full restoration. It mostly used new 4K scans of the LoC's original camera negative, along with other early generation material. It, too, was given the original Breil score and featured the film's original tinting for the first time since its 1915 release. The restoration was released on a 2-Blu-ray set by the BFI, alongside a host of extras, including many other newly restored Civil War-related films from the period.
I've reclaimed this title and re-purposed it as a tool to challenge racism and white supremacy in America, to inspire a riotous disposition toward any and all injustice in this country (and abroad) and to promote the kind of honest confrontation that will galvanize our society toward healing and sustained systemic change. | https://en.wikipedia.org/wiki?curid=3333 |
Baltic Sea
The Baltic Sea is a mediterranean sea of the Atlantic Ocean, enclosed by Denmark, Estonia, Finland, Latvia, Lithuania, Sweden, northeast Germany, Poland, Russia and the North and Central European Plain.
The sea stretches from 53°N to 66°N latitude and from 10°E to 30°E longitude. A marginal sea of the Atlantic, with limited water exchange between the two water bodies, the Baltic Sea drains through the Danish Straits into the Kattegat by way of the Øresund, Great Belt and Little Belt. It includes the Gulf of Bothnia, the Bay of Bothnia, the Gulf of Finland, the Gulf of Riga and the Bay of Gdańsk.
The Baltic Proper is bordered on its northern edge, at the latitude 60°N, by the Åland islands and the Gulf of Bothnia, on its northeastern edge by the Gulf of Finland, on its eastern edge by the Gulf of Riga, and in the west by the Swedish part of the southern Scandinavian Peninsula.
The Baltic Sea is connected by artificial waterways to the White Sea via the White Sea–Baltic Canal and to the German Bight of the North Sea via the Kiel Canal.
Administration
The Helsinki Convention on the Protection of the Marine Environment of the Baltic Sea Area includes the Baltic Sea and the Kattegat, without calling Kattegat a part of the Baltic Sea, "For the purposes of this Convention the 'Baltic Sea Area' shall be the Baltic Sea and the Entrance to the Baltic Sea, bounded by the parallel of the Skaw in the Skagerrak at 57°44.43'N."
Traffic history
Historically, the Kingdom of Denmark collected Sound Dues from ships at the border between the ocean and the land-locked Baltic Sea, in tandem: in the Øresund at Kronborg castle near Helsingør; in the Great Belt at Nyborg; and in the Little Belt at its narrowest part, and later at Fredericia after that stronghold was built. The narrowest part of Little Belt is the "Middelfart Sund" near Middelfart.
Oceanography
Geographers widely agree that the preferred physical border of the Baltic is a line drawn through the southern Danish islands, Drogden-Sill and Langeland. The Drogden Sill is situated north of Køge Bugt and connects Dragør in the south of Copenhagen to Malmö; it is used by the Øresund Bridge, including the "Drogden Tunnel". By this definition, the Danish Straits are part of the entrance, but the Bay of Mecklenburg and the Bay of Kiel are parts of the Baltic Sea.
Another usual border is the line between Falsterbo, Sweden and Stevns Klint, Denmark, as this is the southern border of Øresund. It's also the border between the shallow southern Øresund (with a typical depth of 5–10 meters only) and notably deeper water.
Hydrography and biology
Drogden Sill (depth of ) sets a limit to the Øresund, and Darss Sill (depth of ) a limit to the Belt Sea. The shallow sills are obstacles to the flow of heavy salt water from the Kattegat into the basins around Bornholm and Gotland.
The Kattegat and the southwestern Baltic Sea are well oxygenated and have a rich biology. The remainder of the Sea is brackish, poor in oxygen and in species. Thus, statistically, the more of the entrance that is included in its definition, the healthier the Baltic appears; conversely, the more narrowly it is defined, the more endangered its biology appears.
Tacitus called it "Mare Suebicum" after the Germanic people of the Suebi, and Ptolemy "Sarmatian Ocean" after the Sarmatians, but the first to name it the "Baltic Sea" ("Mare Balticum") was the eleventh-century German chronicler Adam of Bremen. The origin of the latter name is speculative and it was adopted into Slavic and Finnic languages spoken around the sea, very likely due to the role of Medieval Latin in cartography. It might be connected to the Germanic word "belt", a name used for two of the Danish straits, the Belts, while others claim it to be directly derived from the source of the Germanic word, Latin "balteus" "belt". Adam of Bremen himself compared the sea with a belt, stating that it is so named because it stretches through the land as a belt ("Balticus, eo quod in modum baltei longo tractu per Scithicas regiones tendatur usque in Greciam").
He might also have been influenced by the name of a legendary island mentioned in the "Natural History" of Pliny the Elder. Pliny mentions an island named Baltia (or Balcia) with reference to accounts of Pytheas and Xenophon. It is possible that Pliny refers to an island named Basilia ("the royal") in "On the Ocean" by Pytheas. "Baltia" also might be derived from "belt" and mean "near belt of sea, strait".
Meanwhile, others have suggested that the name of the island originates from the Proto-Indo-European root "*bhel" meaning "white, fair". This root and its basic meaning were retained in Lithuanian (as "baltas"), Latvian (as "balts") and Slavic (as "bely"). On this basis, a related hypothesis holds that the name originated from this Indo-European root via a Baltic language such as Lithuanian. Another explanation is that, while derived from the aforementioned root, the name of the sea is related to names for various forms of water and related substances in several European languages, that might have been originally associated with colors found in swamps (compare Proto-Slavic "*bolto" "swamp"). Yet another explanation is that the name originally meant "enclosed sea, bay" as opposed to open sea. Some Swedish historians believe the name derives from the god Baldr of Nordic mythology.
In the Middle Ages the sea was known by a variety of names. The name Baltic Sea became dominant only after 1600. Usage of "Baltic" and similar terms to denote the region east of the sea started only in 19th century.
The Baltic Sea was known in ancient Latin language sources as "Mare Suebicum" or even "Mare Germanicum". Older native names in languages that used to be spoken on the shores of the sea or near it usually indicate the geographical location of the sea (in Germanic languages), or its size in relation to smaller gulfs (in Old Latvian), or tribes associated with it (in Old Russian the sea was known as the Varangian Sea). In modern languages it is known by the equivalents of "East Sea", "West Sea", or "Baltic Sea" in different languages:
At the time of the Roman Empire, the Baltic Sea was known as the "Mare Suebicum" or "Mare Sarmaticum". Tacitus in his AD 98 "Agricola" and "Germania" described the Mare Suebicum, named for the Suebi tribe, during the spring months, as a brackish sea where the ice broke apart and chunks floated about. The Suebi eventually migrated southwest to temporarily reside in the Rhineland area of modern Germany, where their name survives in the historic region known as Swabia. Jordanes called it the "Germanic Sea" in his work, the "Getica".
In the early Middle Ages, Norse (Scandinavian) merchants built a trade empire all around the Baltic. Later, the Norse fought for control of the Baltic against Wendish tribes dwelling on the southern shore. The Norse also used the rivers of Russia for trade routes, finding their way eventually to the Black Sea and southern Russia. This Norse-dominated period is referred to as the Viking Age.
Since the Viking Age, the Scandinavians have referred to the Baltic Sea as "Austmarr" ("Eastern Lake"). "Eastern Sea" appears in the "Heimskringla", and "Eystra salt" appears in "Sörla þáttr". Saxo Grammaticus recorded in "Gesta Danorum" an older name, "Gandvik", "-vik" being Old Norse for "bay", which implies that the Vikings correctly regarded it as an inlet of the sea. Another form of the name, "Grandvik", attested in at least one English translation of "Gesta Danorum", is likely to be a misspelling.
In addition to fish the sea also provides amber, especially from its southern shores within today's borders of Poland, Russia and Lithuania. First mentions of amber deposits on the South coast of the Baltic Sea date back to the 12th century. The bordering countries have also traditionally exported lumber, wood tar, flax, hemp and furs by ship across the Baltic. Sweden had from early medieval times exported iron and silver mined there, while Poland had and still has extensive salt mines. Thus the Baltic Sea has long been crossed by much merchant shipping.
The lands on the Baltic's eastern shore were among the last in Europe to be converted to Christianity. This finally happened during the Northern Crusades: Finland in the twelfth century by Swedes, and what are now Estonia and Latvia in the early thirteenth century by Danes and Germans (Livonian Brothers of the Sword). The Teutonic Order gained control over parts of the southern and eastern shore of the Baltic Sea, where they set up their monastic state. Lithuania was the last European state to convert to Christianity.
In the period between the 8th and 14th centuries, there was much piracy in the Baltic from the coasts of Pomerania and Prussia, and the Victual Brothers held Gotland.
Starting in the 11th century, the southern and eastern shores of the Baltic were settled by migrants mainly from Germany, a movement called the "Ostsiedlung" ("east settling"). Other settlers were from the Netherlands, Denmark, and Scotland. The Polabian Slavs were gradually assimilated by the Germans. Denmark gradually gained control over most of the Baltic coast, until she lost much of her possessions after being defeated in the 1227 Battle of Bornhöved.
In the 13th to 16th centuries, the strongest economic force in Northern Europe was the Hanseatic League, a federation of merchant cities around the Baltic Sea and the North Sea. In the sixteenth and early seventeenth centuries, Poland, Denmark, and Sweden fought wars for "Dominium maris baltici" ("Lordship over the Baltic Sea"). Eventually, it was Sweden that virtually encompassed the Baltic Sea. In Sweden the sea was then referred to as "Mare Nostrum Balticum" ("Our Baltic Sea"). The goal of Swedish warfare during the 17th century was to make the Baltic Sea an all-Swedish sea ("Ett Svenskt innanhav"), something that was accomplished except the part between Riga in Latvia and Stettin in Pomerania. However, the Dutch dominated Baltic trade in the seventeenth century.
In the eighteenth century, Russia and Prussia became the leading powers over the sea. Sweden's defeat in the Great Northern War brought Russia to the eastern coast. Russia became and remained a dominating power in the Baltic. Russia's Peter the Great saw the strategic importance of the Baltic and decided to found his new capital, Saint Petersburg, at the mouth of the Neva river at the east end of the Gulf of Finland. There was much trading not just within the Baltic region but also with the North Sea region, especially eastern England and the Netherlands: their fleets needed the Baltic timber, tar, flax and hemp.
During the Crimean War, a joint British and French fleet attacked the Russian fortresses in the Baltic. They bombarded Sveaborg, which guards Helsinki; and Kronstadt, which guards Saint Petersburg; and they destroyed Bomarsund in the Åland Islands. After the unification of Germany in 1871, the whole southern coast became German. World War I was partly fought in the Baltic Sea. After 1920 Poland was connected to the Baltic Sea by the Polish Corridor and enlarged the port of Gdynia in rivalry with the port of the Free City of Danzig.
During World War II, Germany reclaimed all of the southern and much of the eastern shore by occupying Poland and the Baltic states. In 1945, the Baltic Sea became a mass grave for retreating soldiers and refugees on torpedoed troop transports. The sinking of the "Wilhelm Gustloff" remains the worst maritime disaster in history, killing (very roughly) 9,000 people. In 2005, a Russian group of scientists found over five thousand airplane wrecks, sunken warships, and other material, mainly from World War II, on the bottom of the sea.
Since the end of World War II, various nations, including the Soviet Union, the United Kingdom and the United States, have disposed of chemical weapons in the Baltic Sea, raising concerns of environmental contamination. Today, fishermen occasionally find some of these materials: the most recent available report from the Helsinki Commission notes that four small-scale catches of chemical munitions representing approximately of material were reported in 2005. This is a reduction from the 25 incidents representing of material in 2003. To date, the U.S. Government has refused to disclose the exact coordinates of the wreck sites. Deteriorating bottles leak mustard gas and other substances, thus slowly poisoning a substantial part of the Baltic Sea.
After 1945, the German population was expelled from all areas east of the Oder-Neisse line, making room for displaced Poles and Russians. Poland gained most of the southern shore. The Soviet Union gained another access to the Baltic with the Kaliningrad Oblast. The Baltic states on the eastern shore were annexed by the Soviet Union. The Baltic then separated opposing military blocs: NATO and the Warsaw Pact. Had war broken out, the Polish navy was prepared to invade the Danish isles. Neutral Sweden developed incident weapons to defend its territorial waters after the Swedish submarine incidents. This border status restricted trade and travel. It ended only after the collapse of the Communist regimes in Central and Eastern Europe in the late 1980s.
Since May 2004, with the accession of the Baltic states and Poland, the Baltic Sea has been almost entirely surrounded by countries of the European Union (EU). The remaining non-EU shore areas are Russian: the Saint Petersburg area and the Kaliningrad Oblast exclave.
Winter storms begin arriving in the region during October. These have caused numerous shipwrecks, and contributed to the extreme difficulties of rescuing passengers of the ferry "M/S Estonia" en route from Tallinn, Estonia, to Stockholm, Sweden, in September 1994, which claimed the lives of 852 people. Older, wood-based shipwrecks such as the "Vasa" tend to remain well-preserved, as the Baltic's cold and brackish water does not suit the shipworm.
Storm surge floodings are generally taken to occur when the water level is more than one metre above normal. In Warnemünde about 110 floods occurred from 1950 to 2000, an average of just over two per year.
Historic flood events were the All Saints' Flood of 1304 and other floods in the years 1320, 1449, 1625, 1694, 1784 and 1825. Little is known of their extent. From 1872, there exist regular and reliable records of water levels in the Baltic Sea. The highest was the flood of 1872 when the water was an average of above sea level at Warnemünde and a maximum of above sea level in Warnemünde. In the last very heavy floods the average water levels reached above sea level in 1904, in 1913, in January 1954, on 2–4 November 1995 and on 21 February 2002.
An arm of the North Atlantic Ocean, the Baltic Sea is enclosed by Sweden and Denmark to the west, Finland to the northeast, the Baltic countries to the southeast, and the North European Plain to the southwest.
It is about long, an average of wide, and an average of deep. The maximum depth is which is on the Swedish side of the center. The surface area is about and the volume is about . The periphery amounts to about of coastline.
The Baltic Sea is one of the largest brackish inland seas by area, and occupies a basin (a "zungenbecken") formed by glacial erosion during the last few ice ages.
Physical characteristics of the Baltic Sea, its main sub-regions, and the transition zone to the Skagerrak/North Sea area
The International Hydrographic Organization defines the limits of the Baltic Sea as follows:
The northern part of the Baltic Sea is known as the Gulf of Bothnia, of which the northernmost part is the Bay of Bothnia or Bothnian Bay. The more rounded southern basin of the gulf is called Bothnian Sea and immediately to the south of it lies the Sea of Åland. The Gulf of Finland connects the Baltic Sea with Saint Petersburg. The Gulf of Riga lies between the Latvian capital city of Riga and the Estonian island of Saaremaa.
The Northern Baltic Sea lies between the Stockholm area, southwestern Finland and Estonia. The Western and Eastern Gotland basins form the major parts of the Central Baltic Sea or Baltic proper. The Bornholm Basin is the area east of Bornholm, and the shallower Arkona Basin extends from Bornholm to the Danish isles of Falster and Zealand.
In the south, the Bay of Gdańsk lies east of the Hel Peninsula on the Polish coast and west of the Sambia Peninsula in Kaliningrad Oblast. The Bay of Pomerania lies north of the islands of Usedom and Wolin, east of Rügen. Between Falster and the German coast lie the Bay of Mecklenburg and Bay of Lübeck. The westernmost part of the Baltic Sea is the Bay of Kiel. The three Danish straits, the Great Belt, the Little Belt and The Sound ("Öresund"/"Øresund"), connect the Baltic Sea with the Kattegat and Skagerrak strait in the North Sea.
The water temperature of the Baltic Sea varies significantly depending on exact location, season and depth. At the Bornholm Basin, which is located directly east of the island of the same name, the surface temperature typically falls to during the peak of the winter and rises to during the peak of the summer, with an annual average of around . A similar pattern can be seen in the Gotland Basin, which is located between the island of Gotland and Latvia. In the deep of these basins the temperature variations are smaller. At the bottom of the Bornholm Basin, deeper than , the temperature typically is , and at the bottom of the Gotland Basin, at depths greater than , the temperature typically is .
On the long-term average, the Baltic Sea is ice-covered at the annual maximum for about 45% of its surface area. The ice-covered area during such a typical winter includes the Gulf of Bothnia, the Gulf of Finland, the Gulf of Riga, the archipelago west of Estonia, the Stockholm archipelago, and the Archipelago Sea southwest of Finland. The remainder of the Baltic does not freeze during a normal winter, except sheltered bays and shallow lagoons such as the Curonian Lagoon. The ice reaches its maximum extent in February or March; typical ice thickness in the northernmost areas in the Bothnian Bay, the northern basin of the Gulf of Bothnia, is about for landfast sea ice. The thickness decreases farther south.
Freezing begins in the northern extremities of the Gulf of Bothnia typically in the middle of November, reaching the open waters of the Bothnian Bay in early January. The Bothnian Sea, the basin south of Kvarken, freezes on average in late February. The Gulf of Finland and the Gulf of Riga freeze typically in late January. In 2011, the Gulf of Finland was completely frozen on 15 February.
The ice extent depends on whether the winter is mild, moderate, or severe. In severe winters ice can form around southern Sweden and even in the Danish straits. According to the 18th-century natural historian William Derham, during the severe winters of 1703 and 1708, the ice cover reached as far as the Danish straits; this description meant that the whole of the Baltic Sea was covered with ice. Frequently, parts of the Gulf of Bothnia and Gulf of Finland are frozen, in addition to coastal fringes in more southerly locations such as the Gulf of Riga.
Since 1720, the Baltic Sea has frozen over entirely 20 times, most recently in early 1987, which was the most severe winter in Scandinavia since 1720. The ice then covered . During the winter of 2010–11, which was quite severe compared to those of the last decades, the maximum ice cover was , which was reached on 25 February 2011. The ice then extended from the north down to the northern tip of Gotland, with small ice-free areas on either side, and the east coast of the Baltic Sea was covered by an ice sheet about wide all the way to Gdańsk. This was brought about by a stagnant high-pressure area that lingered over central and northern Scandinavia from around 10 to 24 February. After this, strong southern winds pushed the ice further into the north, and much of the waters north of Gotland were again free of ice, which had then packed against the shores of southern Finland. The effects of the afore-mentioned high-pressure area did not reach the southern parts of the Baltic Sea, and thus the entire sea did not freeze over. However, floating ice was additionally observed near Świnoujście harbour in January 2010.
In recent years before 2011, the Bothnian Bay and the Bothnian Sea were frozen with solid ice near the Baltic coast and dense floating ice far from it. In 2008, almost no ice formed except for a short period in March.
During winter, fast ice, which is attached to the shoreline, develops first, rendering ports unusable without the services of icebreakers. Level ice, ice sludge, pancake ice, and rafter ice form in the more open regions. The gleaming expanse of ice is similar to the Arctic, with wind-driven pack ice and ridges up to . Offshore of the landfast ice, the ice remains very dynamic all year, and it is relatively easily moved around by winds and therefore forms pack ice, made up of large piles and ridges pushed against the landfast ice and shores.
In spring, the Gulf of Finland and the Gulf of Bothnia normally thaw in late April, with some ice ridges persisting until May in the eastern extremities of the Gulf of Finland. In the northernmost reaches of the Bothnian Bay, ice usually stays until late May; by early June it is practically always gone. However, in the famine year of 1867 remnants of ice were observed as late as 17 July near Uddskär. Even as far south as Øresund, remnants of ice have been observed in May on several occasions; near Taarbaek on 15 May 1942 and near Copenhagen on 11 May 1771. Drift ice was also observed on 11 May 1799.
The ice cover is the main habitat for two large mammals, the grey seal ("Halichoerus grypus") and the Baltic ringed seal ("Pusa hispida botnica"), both of which feed underneath the ice and breed on its surface. Of these two seals, only the Baltic ringed seal suffers when there is not adequate ice in the Baltic Sea, as it feeds its young only while on ice. The grey seal is adapted to reproducing also with no ice in the sea. The sea ice also harbours several species of algae that live in the bottom and inside unfrozen brine pockets in the ice.
The Baltic Sea flows out through the Danish straits; however, the flow is complex. A surface layer of brackish water discharges per year into the North Sea. Due to the difference in salinity, by the salinity permeation principle, a sub-surface layer of more saline water moving in the opposite direction brings in per year. It mixes very slowly with the upper waters, resulting in a salinity gradient from top to bottom, with most of the salt water remaining below deep. The general circulation is anti-clockwise: northwards along its eastern boundary, and south along the western one.
The difference between the outflow and the inflow comes entirely from fresh water. More than 250 streams drain a basin of about , contributing a volume of per year to the Baltic. They include the major rivers of north Europe, such as the Oder, the Vistula, the Neman, the Daugava and the Neva. Additional fresh water comes from the difference of precipitation less evaporation, which is positive.
An important source of salty water is the infrequent inflow of North Sea water into the Baltic. Such inflows, which are important to the Baltic ecosystem because of the oxygen they transport into the Baltic deeps, used to happen regularly until the 1980s. In recent decades they have become less frequent. The latest four occurred in 1983, 1993, 2003 and 2014, suggesting a new inter-inflow period of about ten years.
The water level is generally far more dependent on the regional wind situation than on tidal effects. However, tidal currents occur in narrow passages in the western parts of the Baltic Sea.
The significant wave height is generally much lower than that of the North Sea. Quite violent, sudden storms sweep the surface ten or more times a year, due to large transient temperature differences and a long reach of wind. Seasonal winds also cause small changes in sea level, of the order of .
The Baltic Sea is the world's largest inland brackish sea. Only two other brackish waters are larger on some measurements: The Black Sea is larger in both surface area and water volume, but most of it is located outside the continental shelf (only a small percentage is inland). The Caspian Sea is larger in water volume, but—despite its name—it is a lake rather than a sea.
The Baltic Sea's salinity is much lower than that of ocean water (which averages 3.5%), as a result of abundant freshwater runoff from the surrounding land (rivers, streams and the like), combined with the shallowness of the sea itself; runoff contributes roughly one-fortieth of its total volume per year, as the volume of the basin is about and yearly runoff is about .
The open surface waters of the Baltic Sea "proper" generally have a salinity of 0.3 to 0.9%, which is borderline freshwater. The flow of fresh water into the sea from approximately two hundred rivers and the introduction of salt from the southwest builds up a gradient of salinity in the Baltic Sea. The highest surface salinities, generally 0.7–0.9%, are in the southwestern-most part of the Baltic, in the Arkona and Bornholm basins (the former located roughly between southeast Zealand and Bornholm, and the latter directly east of Bornholm). Salinity gradually falls further east and north, reaching its lowest values in the Bothnian Bay, at around 0.3%. Drinking the surface water of the Baltic as a means of survival would actually hydrate the body rather than dehydrate it, unlike ocean water.
As salt water is denser than fresh water, the bottom of the Baltic Sea is saltier than the surface. This creates a vertical stratification of the water column, a halocline, that represents a barrier to the exchange of oxygen and nutrients, and fosters completely separate maritime environments. The difference between the bottom and surface salinities varies depending on location. Overall it follows the same southwest-to-east-and-north pattern as the surface. At the bottom of the Arkona Basin (equalling depths greater than ) and Bornholm Basin (depths greater than ) it is typically 1.4–1.8%. Further east and north the salinity at the bottom is consistently lower, being the lowest in Bothnian Bay (depths greater than ) where it is slightly below 0.4%, or only marginally higher than the surface in the same region.
In contrast, the salinity of the Danish straits, which connect the Baltic Sea and Kattegat, tends to be significantly higher, but with major variations from year to year. For example, the surface and bottom salinity in the Great Belt is typically around 2.0% and 2.8% respectively, which is only somewhat below that of the Kattegat. The water surplus caused by the continuous inflow of rivers and streams to the Baltic Sea means that there generally is a flow of brackish water out though the Danish straits to the Kattegat (and eventually the Atlantic). Significant flows in the opposite direction, salt water from the Kattegat through the Danish straits to the Baltic Sea, are less regular. From 1880 to 1980 inflows occurred on average six to seven times per decade. Since 1980 it has been much less frequent, although a very large inflow occurred in 2014.
The rating of mean discharges differs from the ranking of hydrological lengths (from the most distant source to the sea) and the rating of the nominal lengths. Göta älv, a tributary of the Kattegat, is not listed, as due to the northward upper low-salinity-flow in the sea, its water hardly reaches the Baltic proper:
Countries that border the sea:
Countries with lands in the outer drainage basin:
The Baltic sea drainage basin is roughly four times the surface area of the sea itself. About 48% of the region is forested, with Sweden and Finland containing the majority of the forest, especially around the Gulfs of Bothnia and Finland.
About 20% of the land is used for agriculture and pasture, mainly in Poland and around the edge of the Baltic Proper, in Germany, Denmark and Sweden. About 17% of the basin is unused open land with another 8% of wetlands. Most of the latter are in the Gulfs of Bothnia and Finland.
The rest of the land is heavily populated. About 85 million people live in the Baltic drainage basin, 15 million within of the coast and 29 million within of the coast. Around 22 million live in population centers of over 250,000. 90% of these are concentrated in the band around the coast. Of the nations containing all or part of the basin, Poland includes 45% of the 85 million, Russia 12%, Sweden 10% and the others less than 6% each.
The biggest coastal cities (by population):
Other important ports:
The Baltic Sea somewhat resembles a riverbed, with two tributaries, the Gulf of Finland and Gulf of Bothnia. Geological surveys show that before the Pleistocene, instead of the Baltic Sea, there was a wide plain around a great river that paleontologists call the Eridanos. Several Pleistocene glacial episodes scooped out the river bed into the sea basin. By the time of the last, or Eemian Stage (MIS 5e), the Eemian Sea was in place. Instead of a true sea, the Baltic can even today also be understood as the common estuary of all rivers flowing into it.
From that time the waters underwent a geologic history summarized under the names listed below. Many of the stages are named after marine animals (e.g. the Littorina mollusk) that are clear markers of changing water temperatures and salinity.
The factors that determined the sea's characteristics were the submergence or emergence of the region due to the weight of ice and subsequent isostatic readjustment, and the connecting channels it found to the North Sea-Atlantic, either through the straits of Denmark or at what are now the large lakes of Sweden, and the White Sea-Arctic Sea.
The land is still emerging isostatically from its depressed state, which was caused by the weight of ice during the last glaciation. The phenomenon is known as post-glacial rebound. Consequently, the surface area and the depth of the sea are diminishing. The uplift is about eight millimetres per year on the Finnish coast of the northernmost Gulf of Bothnia. In the area, the former seabed is only gently sloping, leading to large areas of land being reclaimed in what are, geologically speaking, relatively short periods (decades and centuries).
The "Baltic Sea anomaly" refers to interpretations of an indistinct sonar image taken by Swedish salvage divers on the floor of the northern Baltic Sea in June 2011. The treasure hunters suggested the image showed an object with unusual features of seemingly extraordinary origin. Speculation published in tabloid newspapers claimed that the object was a sunken UFO. A consensus of experts and scientists say that the image most likely shows a natural geological formation.
The fauna of the Baltic Sea is a mixture of marine and freshwater species. Among marine fishes are Atlantic cod, Atlantic herring, European hake, European plaice, European flounder, shorthorn sculpin and turbot, and examples of freshwater species include European perch, northern pike, whitefish and common roach. Freshwater species may occur at outflows of rivers or streams in all coastal sections of the Baltic Sea. Otherwise marine species dominate in most sections of the Baltic, at least as far north as Gävle, where less than one-tenth are freshwater species. Further north the pattern is inverted. In the Bothnian Bay, roughly two-thirds of the species are freshwater. In the far north of this bay, saltwater species are almost entirely absent. For example, the common starfish and shore crab, two species that are very widespread along European coasts, are both unable to cope with the significantly lower salinity. Their range limit is west of Bornholm, meaning that they are absent from the vast majority of the Baltic Sea. Some marine species, like the Atlantic cod and European flounder, can survive at relatively low salinities, but need higher salinities to breed, which therefore occurs in deeper parts of the Baltic Sea.
There is a decrease in species richness from the Danish belts to the Gulf of Bothnia. The decreasing salinity along this path causes restrictions in both physiology and habitats. At more than 600 species of invertebrates, fish, aquatic mammals, aquatic birds and macrophytes, the Arkona Basin (roughly between southeast Zealand and Bornholm) is far richer than other more eastern and northern basins in the Baltic Sea, which all have fewer than 400 species from these groups, with the exception of the Gulf of Finland, which has more than 750 species. However, even the most diverse sections of the Baltic Sea have far fewer species than the almost fully saline Kattegat, which is home to more than 1600 species from these groups. The lack of tides has affected the marine species as compared with the Atlantic.
Since the Baltic Sea is so young there are only two or three known endemic species: the brown alga "Fucus radicans" and the flounder "Platichthys solemdali". Both appear to have evolved in the Baltic basin and were only recognized as species in 2005 and 2018 respectively, having formerly been confused with more widespread relatives. The tiny Copenhagen cockle ("Parvicardium hauniense"), a rare mussel, is sometimes considered endemic, but has now been recorded in the Mediterranean. However, some consider non-Baltic records to be misidentifications of juvenile lagoon cockles ("Cerastoderma glaucum"). Several widespread marine species have distinctive subpopulations in the Baltic Sea adapted to the low salinity, such as the Baltic Sea forms of the Atlantic herring and lumpsucker, which are smaller than the widespread forms in the North Atlantic.
A peculiar feature of the fauna is that it contains a number of glacial relict species, isolated populations of arctic species which have remained in the Baltic Sea since the last glaciation, such as the large isopod "Saduria entomon", the Baltic subspecies of ringed seal, and the fourhorn sculpin. Some of these relicts are derived from glacial lakes, such as "Monoporeia affinis", which is a main element in the benthic fauna of the low-salinity Bothnian Bay.
Cetaceans in Baltic Sea have been monitored by the ASCOBANS. Critically endangered populations of Atlantic white-sided dolphins and harbor porpoises inhabit the sea where white-colored porpoises have been recorded, and occasionally oceanic and out-of-range species such as minke whales, bottlenose dolphins, beluga whales, orcas, and beaked whales visit the waters. In recent years, very small, but with increasing rates, fin whales | https://en.wikipedia.org/wiki?curid=3335 |
Brackish water
Brackish water is water having more salinity than freshwater, but not as much as seawater. It may result from the mixing of seawater with fresh water, as in estuaries, or it may occur in brackish fossil aquifers. The word comes from the Middle Dutch root "brak". Certain human activities can produce brackish water, in particular civil engineering projects such as dikes and the flooding of coastal marshland to produce brackish water pools for freshwater prawn farming. Brackish water is also the primary waste product of the salinity gradient power process. Because brackish water is hostile to the growth of most terrestrial plant species, without appropriate management it is damaging to the environment (see article on shrimp farms).
Technically, brackish water contains between 0.5 and 30 grams of salt per litre—more often expressed as 0.5 to 30 parts per thousand (‰), which is a specific gravity of between 1.0004 and 1.0226. Thus, "brackish" covers a range of salinity regimes and is not considered a precisely defined condition. It is characteristic of many brackish surface waters that their salinity can vary considerably over space or time.
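The thresholds quoted above lend themselves to a simple classification rule. The following is a minimal illustrative sketch in Python, not part of the original article: it assumes only the 0.5 and 30 grams-per-litre boundaries given here (grams per litre and parts per thousand are treated as interchangeable for this purpose), and the function name and example values are hypothetical.

```python
def classify_water(salinity_g_per_l: float) -> str:
    """Classify a water sample by dissolved salt content in grams per litre.

    Thresholds follow the definition above: below 0.5 g/L is treated as
    fresh water, 0.5-30 g/L as brackish, and above 30 g/L as saline
    (open-ocean seawater averages roughly 35 g/L).
    """
    if salinity_g_per_l < 0.5:
        return "fresh"
    elif salinity_g_per_l <= 30.0:
        return "brackish"
    else:
        return "saline"


# Example values (illustrative only):
print(classify_water(0.1))   # a typical river -> "fresh"
print(classify_water(7.0))   # Baltic proper surface water -> "brackish"
print(classify_water(35.0))  # open-ocean seawater -> "saline"
```

Because "brackish" spans such a wide range, a rule like this only marks the outer boundaries; as the text notes, the salinity of any given brackish water body can vary considerably over space and time.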
Brackish water conditions commonly occur when fresh water meets seawater. In fact, the most extensive brackish water habitats worldwide are estuaries, where a river meets the sea.
The River Thames flowing through London is a classic river estuary. The town of Teddington a few miles west of London marks the boundary between the tidal and non-tidal parts of the Thames, although it is still considered a freshwater river about as far east as Battersea insofar as the average salinity is very low and the fish fauna consists predominantly of freshwater species such as roach, dace, carp, perch, and pike. The Thames Estuary becomes brackish between Battersea and Gravesend, and the diversity of freshwater fish species present is smaller, primarily roach and dace; euryhaline marine species such as flounder, European seabass, mullet, and smelt become much more common. Further east, the salinity increases and the freshwater fish species are completely replaced by euryhaline marine ones, until the river reaches Gravesend, at which point conditions become fully marine and the fish fauna resembles that of the adjacent North Sea and includes both euryhaline and stenohaline marine species. A similar pattern of replacement can be observed with the aquatic plants and invertebrates living in the river.
This type of ecological succession from a freshwater to marine ecosystem is typical of river estuaries. River estuaries form important staging points during the migration of anadromous and catadromous fish species, such as salmon, shad and eels, giving them time to form social groups and to adjust to the changes in salinity. Salmon are anadromous, meaning they live in the sea but ascend rivers to spawn; eels are catadromous, living in rivers and streams, but returning to the sea to breed. Besides the species that migrate through estuaries, there are many other fish that use them as "nursery grounds" for spawning or as places young fish can feed and grow before moving elsewhere. Herring and plaice are two commercially important species that use the Thames Estuary for this purpose.
Estuaries are also commonly used as fishing grounds, and as places for fish farming or ranching. For example, Atlantic salmon farms are often located in estuaries, although this has caused controversy, because in doing so, fish farmers expose migrating wild fish to large numbers of external parasites such as sea lice that escape from the pens the farmed fish are kept in.
Another important brackish water habitat is the mangrove swamp or mangal. Many, though not all, mangrove swamps fringe estuaries and lagoons where the salinity changes with each tide. Among the most specialised residents of mangrove forests are mudskippers, fish that forage for food on land, and archer fish, perch-like fish that "spit" at insects and other small animals living in the trees, knocking them into the water where they can be eaten. Like estuaries, mangrove swamps are extremely important breeding grounds for many fish, with species such as snappers, halfbeaks, and tarpon spawning or maturing among them. Besides fish, numerous other animals use mangroves, including such species as the saltwater crocodile, American crocodile, proboscis monkey, diamondback terrapin, and the crab-eating frog, "Fejervarya cancrivora" (formerly "Rana cancrivora"). Mangroves are important nesting sites for numerous bird groups, such as herons, storks, spoonbills, ibises, kingfishers, shorebirds and seabirds.
Although often plagued with mosquitoes and other insects that make them unpleasant for humans, mangrove swamps are very important buffer zones between land and sea, and are a natural defense against hurricane and tsunami damage in particular.
The Sundarbans and Bhitarkanika Mangroves are two of the largest mangrove forests in the world, both on the coast of the Bay of Bengal.
Some seas and lakes are brackish. The Baltic Sea is a brackish sea adjoining the North Sea. Originally the confluence of two major river systems prior to the Pleistocene, it has since been flooded by the North Sea but still receives so much freshwater from the adjacent lands that the water is brackish. Because the salt water coming in from the sea is denser than freshwater, the water in the Baltic is stratified, with salt water at the bottom and freshwater at the top. Limited mixing occurs because of the lack of tides and storms, with the result that the fish fauna at the surface is freshwater in composition while that lower down is more marine. Cod are an example of a species only found in deep water in the Baltic, while pike are confined to the less saline surface waters.
The Caspian Sea is the world's largest lake and contains brackish water with a salinity about one-third that of normal seawater. The Caspian is famous for its peculiar animal fauna, including one of the few non-marine seals (the Caspian seal) and the great sturgeons, a major source of caviar.
Hudson Bay is a brackish marginal sea of the Arctic Ocean. It remains brackish due to its limited connections to the open ocean, the very high levels of freshwater runoff it receives from the large Hudson Bay drainage basin, and its low rate of evaporation, as it is completely covered in ice for over half the year.
In the Black Sea, the surface water is brackish, with an average salinity of about 17–18 parts per thousand, compared to 30 to 40 for the oceans. The deep, anoxic water of the Black Sea originates from the warm, salty water of the Mediterranean.
Lake Texoma, a reservoir on the border between the U.S. states of Texas and Oklahoma, is a rare example of a brackish lake that is neither part of an endorheic basin nor a direct arm of the ocean, though its salinity is considerably lower than that of the other bodies of water mentioned here. The reservoir was created by the damming of the Red River of the South, which (along with several of its tributaries) receives large amounts of salt from natural seepage from buried deposits in the upstream region. The salinity is high enough that striped bass, a fish normally found only in salt water, has self-sustaining populations in the lake.
A brackish marsh may occur where a freshwater flow enters a salt marsh.
Brackish seas
Brackish water lakes
Lochs (Scottish)
Coastal lagoons, marshes, and deltas
Estuaries | https://en.wikipedia.org/wiki?curid=3336 |
BearShare
BearShare was a peer-to-peer file sharing application for Microsoft Windows, originally created by Free Peers, Inc. and later a rebranded version of iMesh by MusicLab, LLC, tightly integrated with their music subscription service.
The principal operators of Free Peers, Inc. were Vincent Falco and Louis Tatta. BearShare was launched on December 4, 2000 as a Gnutella-based peer-to-peer file sharing application with innovative features that eventually grew to include IRC, a free library of software and media called BearShare Featured Artists, and online help pages and a support forum integrated as dedicated web browser windows in the application, as well as a media player and a library window to organize the user's media collection.
Following the June 27, 2005 United States Supreme Court decision on the "MGM Studios, Inc. v. Grokster, Ltd." case, the BearShare Community support forums were abruptly closed during negotiations to settle an impending lawsuit with the RIAA. The webmaster and forum administrator immediately created a new site called Technutopia, and the same support staff continued to assist users of the gnutella versions from there. A few months later the unused Community window was removed from BearShare 5.1.
On May 4, 2006, Free Peers agreed to transfer all their BearShare-related assets to MusicLab, LLC (an iMesh subsidiary) and use the $30 million raised from that sale to settle with the RIAA.
On August 17, 2006, MusicLab released a reskinned and updated version of iMesh named BearShareV6, which connected to its proprietary iMesh network instead of gnutella. BearShareV6 and its successors offer paid music downloads in the PlaysForSure DRM-controlled WMA format as well as free content in various formats, chiefly MP3. Like BearShare, they also include a media player and embedded online and social networking features, but with a Web 2.0 style, somewhat similar to MySpace or Facebook. Free content provided by users is automatically verified using acoustic fingerprinting as non-infringing before it can be shared. Video files more than 50 MB in size and 15 minutes in length cannot be shared, ensuring television shows and feature-length movies cannot be distributed over the network. Only a limited set of music and video file types can be shared, thus excluding everything else like executable files, documents and compressed archives.
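The sharing restrictions above amount to a simple eligibility check. The sketch below is hypothetical: the real BearShareV6 client logic is proprietary, and the function name, the allowed-extension set, and the exact way the 50 MB and 15 minute limits combine are assumptions made only to illustrate the rules quoted in the text.

```python
# Hypothetical sketch of the sharing restrictions described above; not the
# actual BearShare client code. The extension set and the OR-combination of
# the video limits are assumptions for illustration.

ALLOWED_EXTENSIONS = {".mp3", ".wma", ".wmv", ".avi"}  # illustrative music/video set
MAX_VIDEO_BYTES = 50 * 1024 * 1024                     # 50 MB
MAX_VIDEO_SECONDS = 15 * 60                            # 15 minutes

def can_share(extension: str, size_bytes: int, duration_seconds: int, is_video: bool) -> bool:
    """Return True if a file would be eligible for sharing under these limits."""
    if extension.lower() not in ALLOWED_EXTENSIONS:
        return False  # executables, documents and archives are excluded
    if is_video and (size_bytes > MAX_VIDEO_BYTES or duration_seconds > MAX_VIDEO_SECONDS):
        return False  # keeps television shows and feature-length movies off the network
    return True

if __name__ == "__main__":
    print(can_share(".mp3", 6_000_000, 240, is_video=False))    # True
    print(can_share(".avi", 700_000_000, 5400, is_video=True))  # False
```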
In August 2006, MusicLab released a variant of the original BearShare gnutella servant, called BearFlix, which was altered to limit sharing, searches and downloads to images and videos. Shared videos were limited in size and duration, similar to the limits in BearShareV6. The first release was version 1.2.1. Its version numbers appear to start from 1.1.2.1 in the user interface, but it presents itself on the gnutella network as versions 6.1.2.1 to 6.2.2.530. This version has since been discontinued by MusicLab and is no longer available on their websites; however, it remains in wide use.
On October 27, 2008, responding to uncertainty around the future of PlaysForSure, MusicLab added iPod support in BearShareV7.
As of June 12, 2016, BearShare is no longer available to download. The official page with a message announcing its discontinuation remained active until March 2017.
Three variants of the original BearShare gnutella servant were distributed by Free Peers: Free, Lite and Pro. The Free version had higher performance limits than the Lite version but contained some adware. The Pro version had higher limits than both the Free and Lite versions but cost US$24. Version numbers in this series ranged from 1.0 to 5.2.5.9. Though lacking MusicLab's support, a wide spread of BearShare versions from 4.7 to 5.2.5.6 remains the second most popular servant on gnutella, alongside LimeWire.
Old-school fans of the gnutella versions tend to favour the last of the beta versions, 5.1.0 beta25, because it has no adware, is hard-coded for performance levels roughly between the Pro and regular (ad-supported) versions, and has the unique ability to switch between leaf and ultrapeer mode on demand, a feature deemed necessary for effective testing. No other gnutella servant has enjoyed this capability.
The most recent MusicLab version, V10, was available by free download from their support website and "Pro" features could be unlocked with a six or twelve-month subscription. Access to premium content required a $9.95 monthly subscription. Customers in Canada and the U.S.A. could opt for a $14.95 monthly "BearShare ToGo" subscription which allowed downloads of premium music to portable music players. | https://en.wikipedia.org/wiki?curid=3340 |
Blues
Blues is a music genre and musical form that originated in the Deep South of the United States around the 1870s among African-Americans, from roots in African musical traditions, African-American work songs, and spirituals. Blues incorporated spirituals, work songs, field hollers, shouts, chants, and rhymed simple narrative ballads. The blues form, ubiquitous in jazz, rhythm and blues and rock and roll, is characterized by the call-and-response pattern, the blues scale and specific chord progressions, of which the twelve-bar blues is the most common. Blue notes (or "worried notes"), usually thirds, fifths or sevenths flattened in pitch, are also an essential part of the sound. Blues shuffles or walking bass reinforce the trance-like rhythm and form a repetitive effect known as the groove.
Blues as a genre is also characterized by its lyrics, bass lines, and instrumentation. Early traditional blues verses consisted of a single line repeated four times. It was only in the first decades of the 20th century that the most common current structure became standard: the AAB pattern, consisting of a line sung over the four first bars, its repetition over the next four, and then a longer concluding line over the last bars. Early blues frequently took the form of a loose narrative, often relating the racial discrimination and other challenges experienced by African-Americans.
Many elements, such as the call-and-response format and the use of blue notes, can be traced back to the music of Africa. The origins of the blues are also closely related to the religious music of the Afro-American community, the spirituals. The first appearance of the blues is often dated to after the ending of slavery and, later, the development of juke joints. It is associated with the newly acquired freedom of the former slaves. Chroniclers began to report about blues music at the dawn of the 20th century. The first publication of blues sheet music was in 1908. Blues has since evolved from unaccompanied vocal music and oral traditions of slaves into a wide variety of styles and subgenres. Blues subgenres include country blues, such as Delta blues and Piedmont blues, as well as urban blues styles such as Chicago blues and West Coast blues. World War II marked the transition from acoustic to electric blues and the progressive opening of blues music to a wider audience, especially white listeners. In the 1960s and 1970s, a hybrid form called blues rock developed, which blended blues styles with rock music.
The term "Blues" may have come from "blue devils", meaning melancholy and sadness; an early use of the term in this sense is in George Colman's one-act farce "Blue Devils" (1798). The phrase "blue devils" may also have been derived from Britain in the 1600s, when the term referred to the "intense visual hallucinations that can accompany severe alcohol withdrawal". As time went on, the phrase lost the reference to devils, and "it came to mean a state of agitation or depression." By the 1800s in the United States, the term "blues" was associated with drinking alcohol, a meaning which survives in the phrase "blue law", which prohibits the sale of alcohol on Sunday. Though the use of the phrase in African-American music may be older, it has been attested to in print since 1912, when Hart Wand's "Dallas Blues" became the first copyrighted blues composition.
In lyrics the phrase is often used to describe a depressed mood. It is in this sense of a sad state of mind that one of the earliest recorded references to "the blues" was written by Charlotte Forten, then aged 25, in her diary on December 14, 1862. She was a free-born black from Pennsylvania who was working as a schoolteacher in South Carolina, instructing both slaves and freedmen, and wrote that she "came home with the blues" because she felt lonesome and pitied herself. She overcame her depression and later noted a number of songs, such as "Poor Rosy", that were popular among the slaves. Although she admitted being unable to describe the manner of singing she heard, Forten wrote that the songs "can't be sung without a full heart and a troubled spirit", conditions that have inspired countless blues songs.
The lyrics of early traditional blues verses probably often consisted of a single line repeated four times. It was only in the first decades of the 20th century that the most common current structure became standard: the so-called "AAB" pattern, consisting of a line sung over the four first bars, its repetition over the next four, and then a longer concluding line over the last bars. Two of the first published blues songs, "Dallas Blues" (1912) and "Saint Louis Blues" (1914), were 12-bar blues with the AAB lyric structure. W.C. Handy wrote that he adopted this convention to avoid the monotony of lines repeated three times. The lines are often sung following a pattern closer to rhythmic talk than to a melody.
Early blues frequently took the form of a loose narrative. An African-American singer would voice his or her "personal woes in a world of harsh reality: a lost love, the cruelty of police officers, oppression at the hands of white folk, [and] hard times". This melancholy has led to the suggestion of an Igbo origin for blues because of the reputation the Igbo had throughout plantations in the Americas for their melancholic music and outlook on life when they were enslaved.
The lyrics often relate troubles experienced within African American society. For instance Blind Lemon Jefferson's "Rising High Water Blues" (1927) tells of the Great Mississippi Flood of 1927:
Although the blues gained an association with misery and oppression, the lyrics could also be humorous and raunchy:
Hokum blues celebrated both comedic lyrical content and a boisterous, farcical performance style. Tampa Red's classic "Tight Like That" (1928) is a sly wordplay with the double meaning of being "tight" with someone coupled with a more salacious physical familiarity. Blues songs with sexually explicit lyrics were known as dirty blues. The lyrical content became slightly simpler in postwar blues, which tended to focus on relationship woes or sexual worries. Lyrical themes that frequently appeared in prewar blues, such as economic depression, farming, devils, gambling, magic, floods and drought, were less common in postwar blues.
The writer Ed Morales claimed that Yoruba mythology played a part in early blues, citing Robert Johnson's "Cross Road Blues" as a "thinly veiled reference to Eleggua, the orisha in charge of the crossroads". However, the Christian influence was far more obvious. The repertoires of many seminal blues artists, such as Charley Patton and Skip James, included religious songs or spirituals. Reverend Gary Davis and Blind Willie Johnson are examples of artists often categorized as blues musicians for their music, although their lyrics clearly belong to spirituals.
The blues form is a cyclic musical form in which a repeating progression of chords mirrors the call and response scheme commonly found in African and African-American music. During the first decades of the 20th century blues music was not clearly defined in terms of a particular chord progression. With the popularity of early performers, such as Bessie Smith, use of the twelve-bar blues spread across the music industry during the 1920s and 30s. Other chord progressions, such as 8-bar forms, are still considered blues; examples include "How Long Blues", "Trouble in Mind", and Big Bill Broonzy's "Key to the Highway". There are also 16-bar blues, such as Ray Charles's instrumental "Sweet 16 Bars" and Herbie Hancock's "Watermelon Man". Idiosyncratic numbers of bars are occasionally used, such as the 9-bar progression in "Sitting on Top of the World", by Walter Vinson.
The basic 12-bar lyric framework of a blues composition is reflected by a standard harmonic progression of 12 bars in a 4/4 time signature. The blues chords associated with a twelve-bar blues are typically a set of three different chords played over a 12-bar scheme. They are labeled by Roman numerals referring to the degrees of the progression. For instance, for a blues in the key of C, C is the tonic chord (I) and F is the subdominant (IV).
The last chord is the dominant (V) turnaround, marking the transition to the beginning of the next progression. The lyrics generally end on the last beat of the tenth bar or the first beat of the 11th bar, and the final two bars are given to the instrumentalist as a break; the harmony of this two-bar break, the turnaround, can be extremely complex, sometimes consisting of single notes that defy analysis in terms of chords.
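To make the three-chord scheme concrete, the sketch below maps the I, IV and V degrees to chord roots in a chosen key and lays them out over twelve bars. The bar-by-bar arrangement used here is one common convention rather than something stated above, and the function and variable names are illustrative.

```python
# Minimal sketch of the harmonic skeleton described above: three chords
# (I, IV, V) over twelve bars. The bar-by-bar layout is one common
# convention, not the only one; note spellings assume simple major triads.

COMMON_12_BAR_PATTERN = ["I", "I", "I", "I",
                         "IV", "IV", "I", "I",
                         "V", "IV", "I", "V"]

MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitones above the tonic
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def degree_to_chord_root(key, degree):
    """Map a Roman-numeral degree (I, IV, V) to its chord root in the given key."""
    index = {"I": 0, "IV": 3, "V": 4}[degree]
    tonic = NOTE_NAMES.index(key)
    return NOTE_NAMES[(tonic + MAJOR_SCALE_STEPS[index]) % 12]

def twelve_bar_blues(key="C"):
    return [degree_to_chord_root(key, d) for d in COMMON_12_BAR_PATTERN]

if __name__ == "__main__":
    # ['C', 'C', 'C', 'C', 'F', 'F', 'C', 'C', 'G', 'F', 'C', 'G']
    print(twelve_bar_blues("C"))
```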
Much of the time, some or all of these chords are played in the harmonic seventh (7th) form. The use of the harmonic seventh interval is characteristic of blues and is popularly called the "blues seven". Blues seven chords add to the harmonic chord a note with a frequency in a 7:4 ratio to the fundamental note. At a 7:4 ratio, it is not close to any interval on the conventional Western diatonic scale. For convenience or by necessity it is often approximated by a minor seventh interval or a dominant seventh chord.
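The 7:4 ratio can be checked with a short calculation: converting it to cents shows how it compares with the equal-tempered minor seventh that is usually used to approximate it. The helper below is only a worked illustration of that arithmetic.

```python
# Worked check of the interval described above: the "blues seven" sits at a
# 7:4 frequency ratio, and the nearest equal-tempered interval (the minor
# seventh, 10 semitones) approximates it.

import math

def ratio_to_cents(ratio: float) -> float:
    """Convert a frequency ratio to cents (1 semitone = 100 cents)."""
    return 1200 * math.log2(ratio)

harmonic_seventh = ratio_to_cents(7 / 4)  # about 968.8 cents
minor_seventh_equal_tempered = 10 * 100   # 1000 cents exactly

print(f"7:4 ratio:          {harmonic_seventh:.1f} cents")
print(f"minor seventh (ET): {minor_seventh_equal_tempered} cents")
print(f"difference:         {minor_seventh_equal_tempered - harmonic_seventh:.1f} cents")
```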
In melody, blues is distinguished by the use of the flattened third, fifth and seventh of the associated major scale.
Blues shuffles or walking bass reinforce the trance-like rhythm and call-and-response, and they form a repetitive effect called a groove. Characteristic of the blues since its Afro-American origins, the shuffles played a central role in swing music. The simplest shuffles, which were the clearest signature of the R&B wave that started in the mid-1940s, were a three-note riff on the bass strings of the guitar. When this riff was played over the bass and the drums, the groove "feel" was created. Shuffle rhythm is often vocalized as ""dow", da "dow", da "dow", da" or ""dump", da "dump", da "dump", da": it consists of uneven, or "swung", eighth notes. On a guitar this may be played as a simple steady bass or it may add to that stepwise quarter note motion from the fifth to the sixth of the chord and back.
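As a small illustration of the uneven, "swung" eighth notes described above, the sketch below divides each beat in a roughly 2:1 ratio. That specific ratio is an assumption made for illustration; shuffle feel in practice varies from player to player and is not fixed by the text.

```python
# Illustrative sketch of swung eighth notes: each beat split long/short in a
# roughly 2:1 (triplet) ratio. The ratio is an assumption for illustration.

def swung_eighths(tempo_bpm: float, ratio: float = 2 / 3):
    """Return (long, short) eighth-note durations in seconds for one beat."""
    beat = 60.0 / tempo_bpm
    return beat * ratio, beat * (1 - ratio)

if __name__ == "__main__":
    long_note, short_note = swung_eighths(120)  # a medium shuffle tempo
    print(f"long: {long_note:.3f} s, short: {short_note:.3f} s")
```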
The first publication of blues sheet music may have been "I Got the Blues", published by New Orleans musician Antonio Maggio in 1908 and described as "the earliest published composition known to link the condition of having the blues to the musical form that would become popularly known as 'the blues.'" Hart Wand's "Dallas Blues" was published in 1912; W.C. Handy's "The Memphis Blues" followed in the same year. The first recording by an African American singer was Mamie Smith's 1920 rendition of Perry Bradford's "Crazy Blues". But the origins of the blues were some decades earlier, probably around 1890. This music is poorly documented, partly because of racial discrimination in U.S. society, including academic circles, and partly because of the low rate of literacy among rural African Americans at the time.
Reports of blues music in southern Texas and the Deep South were written at the dawn of the 20th century. Charles Peabody mentioned the appearance of blues music at Clarksdale, Mississippi, and Gate Thomas reported similar songs in southern Texas around 1901–1902. These observations coincide more or less with the recollections of Jelly Roll Morton, who said he first heard blues music in New Orleans in 1902; Ma Rainey, who remembered first hearing the blues in the same year in Missouri; and W.C. Handy, who first heard the blues in Tutwiler, Mississippi, in 1903. The first extensive research in the field was performed by Howard W. Odum, who published an anthology of folk songs from Lafayette County, Mississippi, and Newton County, Georgia, between 1905 and 1908. The first noncommercial recordings of blues music, termed "proto-blues" by Paul Oliver, were made by Odum for research purposes at the very beginning of the 20th century. They are now lost.
Other recordings that are still available were made in 1924 by Lawrence Gellert. Later, several recordings were made by Robert W. Gordon, who became head of the Archive of American Folk Songs of the Library of Congress. Gordon's successor at the library was John Lomax. In the 1930s, Lomax and his son Alan made a large number of non-commercial blues recordings that testify to the huge variety of proto-blues styles, such as field hollers and ring shouts. A record of blues music as it existed before 1920 can also be found in the recordings of artists such as Lead Belly and Henry Thomas. All these sources show the existence of many different structures distinct from twelve-, eight-, or sixteen-bar.
The social and economic reasons for the appearance of the blues are not fully known. The first appearance of the blues is usually dated after the Emancipation Proclamation of 1863, between 1870 and 1900, a period that coincides with post-emancipation and, later, the establishment of juke joints as places where blacks went to listen to music, dance, or gamble after a hard day's work. This period corresponds to the transition from slavery to sharecropping, small-scale agricultural production, and the expansion of railroads in the southern United States. Several scholars characterize the development of blues music in the early 1900s as a move from group performance to individualized performance. They argue that the development of the blues is associated with the newly acquired freedom of the enslaved people.
According to Lawrence Levine, "there was a direct relationship between the national ideological emphasis upon the individual, the popularity of Booker T. Washington's teachings, and the rise of the blues." Levine stated that "psychologically, socially, and economically, African-Americans were being acculturated in a way that would have been impossible during slavery, and it is hardly surprising that their secular music reflected this as much as their religious music did."
There are few characteristics common to all blues music, because the genre took its shape from the idiosyncrasies of individual performers. However, there are some characteristics that were present long before the creation of the modern blues. Call-and-response shouts were an early form of blues-like music; they were a "functional expression ... style without accompaniment or harmony and unbounded by the formality of any particular musical structure". A form of this pre-blues was heard in slave ring shouts and field hollers, expanded into "simple solo songs laden with emotional content".
Blues has evolved from the unaccompanied vocal music and oral traditions of slaves imported from West Africa and rural blacks into a wide variety of styles and subgenres, with regional variations across the United States. Although blues (as it is now known) can be seen as a musical style based on both European harmonic structure and the African call-and-response tradition that transformed into an interplay of voice and guitar, the blues form itself bears no resemblance to the melodic styles of the West African griots. Additionally, there are theories that the four-beats-per-measure structure of the blues might have its origins in the Native American tradition of pow wow drumming.
No specific African musical form can be identified as the single direct ancestor of the blues. However the call-and-response format can be traced back to the music of Africa. That blue notes predate their use in blues and have an African origin is attested to by "A Negro Love Song", by the English composer Samuel Coleridge-Taylor, from his "African Suite for Piano", written in 1898, which contains blue third and seventh notes.
The Diddley bow (a homemade one-stringed instrument found in parts of the American South in the early twentieth century) and the banjo are African-derived instruments that may have helped in the transfer of African performance techniques into the early blues instrumental vocabulary. The banjo seems to be directly imported from West African music. It is similar to the musical instrument that griots and other Africans such as the Igbo played (called halam or akonting by African peoples such as the Wolof, Fula and Mandinka). However, in the 1920s, when country blues began to be recorded, the use of the banjo in blues music was quite marginal and limited to individuals such as Papa Charlie Jackson and later Gus Cannon.
Blues music also adopted elements from the "Ethiopian airs", minstrel shows and Negro spirituals, including instrumental and harmonic accompaniment. The style also was closely related to ragtime, which developed at about the same time, though the blues better preserved "the original melodic patterns of African music".
The musical forms and styles that are now considered the blues as well as modern country music arose in the same regions of the southern United States during the 19th century. Recorded blues and country music can be found as far back as the 1920s, when the record industry created the marketing categories "race music" and "hillbilly music" to sell music by blacks for blacks and by whites for whites, respectively. At the time, there was no clear musical division between "blues" and "country", except for the ethnicity of the performer, and even that was sometimes documented incorrectly by record companies.
Though musicologists can now attempt to define the blues narrowly in terms of certain chord structures and lyric forms thought to have originated in West Africa, audiences originally heard the music in a far more general way: it was simply the music of the rural south, notably the Mississippi Delta. Black and white musicians shared the same repertoire and thought of themselves as "songsters" rather than blues musicians. The notion of blues as a separate genre arose during the black migration from the countryside to urban areas in the 1920s and the simultaneous development of the recording industry. "Blues" became a code word for a record designed to sell to black listeners.
The origins of the blues are closely related to the religious music of the Afro-American community, the spirituals. The origins of spirituals go back much further than the blues, usually dating back to the middle of the 18th century, when the slaves were Christianized and began to sing and play Christian hymns, in particular those of Isaac Watts, which were very popular. Before the blues gained its formal definition in terms of chord progressions, it was defined as the secular counterpart of spirituals. It was the low-down music played by rural blacks.
Depending on the religious community a musician belonged to, it was more or less considered a sin to play this low-down music: blues was the devil's music. Musicians were therefore segregated into two categories: gospel singers and blues singers, guitar preachers and songsters. However, when rural black music began to be recorded in the 1920s, both categories of musicians used similar techniques: call-and-response patterns, blue notes, and slide guitars. Gospel music was nevertheless using musical forms that were compatible with Christian hymns and therefore less marked by the blues form than its secular counterpart.
The American sheet music publishing industry produced a great deal of ragtime music. By 1912, the sheet music industry had published three popular blues-like compositions, precipitating the Tin Pan Alley adoption of blues elements: "Baby Seals' Blues", by "Baby" Franklin Seals (arranged by Artie Matthews); "Dallas Blues", by Hart Wand; and "The Memphis Blues", by W.C. Handy.
Handy was a formally trained musician, composer and arranger who helped to popularize the blues by transcribing and orchestrating blues in an almost symphonic style, with bands and singers. He became a popular and prolific composer, and billed himself as the "Father of the Blues"; however, his compositions can be described as a fusion of blues with ragtime and jazz, a merger facilitated using the Cuban habanera rhythm that had long been a part of ragtime; Handy's signature work was the "Saint Louis Blues".
In the 1920s, the blues became a major element of African American and American popular music, reaching white audiences via Handy's arrangements and the classic female blues performers. The blues evolved from informal performances in bars to entertainment in theaters. Blues performances were organized by the Theater Owners Booking Association in nightclubs such as the Cotton Club and juke joints such as the bars along Beale Street in Memphis. Several record companies, such as the American Record Corporation, Okeh Records, and Paramount Records, began to record African-American music.
As the recording industry grew, country blues performers like Bo Carter, Jimmie Rodgers (the country singer), Blind Lemon Jefferson, Lonnie Johnson, Tampa Red and Blind Blake became more popular in the African American community. Kentucky-born Sylvester Weaver was in 1923 the first to record the slide guitar style, in which a guitar is fretted with a knife blade or the sawed-off neck of a bottle. The slide guitar became an important part of the Delta blues. The first blues recordings from the 1920s are categorized as either traditional, rural country blues or a more polished city or urban blues.
Country blues performers often improvised, either without accompaniment or with only a banjo or guitar. Regional styles of country blues varied widely in the early 20th century. The (Mississippi) Delta blues was a rootsy sparse style with passionate vocals accompanied by slide guitar. The little-recorded Robert Johnson combined elements of urban and rural blues. In addition to Robert Johnson, influential performers of this style included his predecessors Charley Patton and Son House. Singers such as Blind Willie McTell and Blind Boy Fuller performed in the southeastern "delicate and lyrical" Piedmont blues tradition, which used an elaborate ragtime-based fingerpicking guitar technique. Georgia also had an early slide tradition, with Curley Weaver, Tampa Red, "Barbecue Bob" Hicks and James "Kokomo" Arnold as representatives of this style.
The lively Memphis blues style, which developed in the 1920s and 1930s near Memphis, Tennessee, was influenced by jug bands such as the Memphis Jug Band or the Gus Cannon's Jug Stompers. Performers such as Frank Stokes, Sleepy John Estes, Robert Wilkins, Joe McCoy, Casey Bill Weldon and Memphis Minnie used a variety of unusual instruments such as washboard, fiddle, kazoo or mandolin. Memphis Minnie was famous for her virtuoso guitar style. Pianist Memphis Slim began his career in Memphis, but his distinct style was smoother and had some swing elements. Many blues musicians based in Memphis moved to Chicago in the late 1930s or early 1940s and became part of the urban blues movement.
City or urban blues styles were more codified and elaborate, as a performer was no longer within their local, immediate community, and had to adapt to a larger, more varied audience's aesthetic. Classic female urban and vaudeville blues singers were popular in the 1920s, among them "the big three"—Gertrude "Ma" Rainey, Bessie Smith, and Lucille Bogan. Mamie Smith, more a vaudeville performer than a blues artist, was the first African American to record a blues song in 1920; her second record, "Crazy Blues", sold 75,000 copies in its first month. Ma Rainey, the "Mother of Blues", and Bessie Smith each "[sang] around center tones, perhaps in order to project her voice more easily to the back of a room". Smith would "sing a song in an unusual key, and her artistry in bending and stretching notes with her beautiful, powerful contralto to accommodate her own interpretation was unsurpassed".
In 1920 the vaudeville singer Lucille Hegamin became the second black woman to record blues when she recorded "The Jazz Me Blues", and Victoria Spivey, sometimes called Queen Victoria or Za Zu Girl, had a recording career that began in 1926 and spanned forty years. These recordings were typically labeled "race records" to distinguish them from records sold to white audiences. Nonetheless, the recordings of some of the classic female blues singers were purchased by white buyers as well. These blueswomen's contributions to the genre included "increased improvisation on melodic lines, unusual phrasing which altered the emphasis and impact of the lyrics, and vocal dramatics using shouts, groans, moans, and wails. The blues women thus effected changes in other types of popular singing that had spin-offs in jazz, Broadway musicals, torch songs of the 1930s and 1940s, gospel, rhythm and blues, and eventually rock and roll."
Urban male performers included popular black musicians of the era, such as Tampa Red, Big Bill Broonzy and Leroy Carr. An important label of this era was the Chicago-based Bluebird Records. Before World War II, Tampa Red was sometimes referred to as "the Guitar Wizard". Carr accompanied himself on the piano with Scrapper Blackwell on guitar, a format that continued well into the 1950s with artists such as Charles Brown and even Nat "King" Cole.
Boogie-woogie was another important style of 1930s and early 1940s urban blues. While the style is often associated with solo piano, boogie-woogie was also used to accompany singers and, as a solo part, in bands and small combos. The boogie-woogie style was characterized by a regular bass figure, an ostinato or riff, and shifts of level in the left hand, with each chord elaborated by trills and decorations in the right hand. Boogie-woogie was pioneered by the Chicago-based Jimmy Yancey and the Boogie-Woogie Trio (Albert Ammons, Pete Johnson and Meade Lux Lewis). Chicago boogie-woogie performers included Clarence "Pine Top" Smith and Earl Hines, who "linked the propulsive left-hand rhythms of the ragtime pianists with melodic figures similar to those of Armstrong's trumpet in the right hand". The smooth Louisiana style of Professor Longhair and, more recently, Dr. John blends classic rhythm and blues with blues styles.
Another development in this period was big band blues. The "territory bands" operating out of Kansas City, the Bennie Moten orchestra, Jay McShann, and the Count Basie Orchestra were also concentrating on the blues, with 12-bar blues instrumentals such as Basie's "One O'Clock Jump" and "Jumpin' at the Woodside" and boisterous "blues shouting" by Jimmy Rushing on songs such as "Going to Chicago" and "Sent for You Yesterday". A well-known big band blues tune is Glenn Miller's "In the Mood". In the 1940s, the jump blues style developed. Jump blues grew up from the boogie woogie wave and was strongly influenced by big band music. It uses saxophone or other brass instruments and the guitar in the rhythm section to create a jazzy, up-tempo sound with declamatory vocals. Jump blues tunes by Louis Jordan and Big Joe Turner, based in Kansas City, Missouri, influenced the development of later styles such as rock and roll and rhythm and blues. Dallas-born T-Bone Walker, who is often associated with the California blues style, performed a successful transition from the early urban blues à la Lonnie Johnson and Leroy Carr to the jump blues style and dominated the blues-jazz scene at Los Angeles during the 1940s.
The transition from country blues to urban blues that began in the 1920s was driven by the successive waves of economic crisis and booms which led many rural blacks to move to urban areas, in a movement known as the Great Migration. The long boom following World War II induced another massive migration of the African-American population, the Second Great Migration, which was accompanied by a significant increase of the real income of the urban blacks. The new migrants constituted a new market for the music industry. The term "race record", initially used by the music industry for African-American music, was replaced by the term "rhythm and blues". This rapidly evolving market was mirrored by "Billboard" magazine's Rhythm and Blues chart. This marketing strategy reinforced trends in urban blues music such as the use of electric instruments and amplification and the generalization of the blues beat, the blues shuffle, which became ubiquitous in R&B. This commercial stream had important consequences for blues music, which, together with jazz and gospel music, became a component of R&B.
After World War II, new styles of electric blues became popular in cities such as Chicago, Memphis, Detroit and St. Louis. Electric blues used electric guitars, double bass (gradually replaced by bass guitar), drums, and harmonica (or "blues harp") played through a microphone and a PA system or an overdriven guitar amplifier. Chicago became a center for electric blues from 1948 on, when Muddy Waters recorded his first success, "I Can't Be Satisfied". Chicago blues is influenced to a large extent by Delta blues, because many performers had migrated from the Mississippi region.
Howlin' Wolf, Muddy Waters, Willie Dixon and Jimmy Reed were all born in Mississippi and moved to Chicago during the Great Migration. Their style is characterized by the use of electric guitar, sometimes slide guitar, harmonica, and a rhythm section of bass and drums. The saxophonist J. T. Brown played in bands led by Elmore James and by J. B. Lenoir, but the saxophone was used as a backing instrument for rhythmic support more than as a lead instrument.
Little Walter, Sonny Boy Williamson (Rice Miller) and Sonny Terry are well known harmonica (called "harp" by blues musicians) players of the early Chicago blues scene. Other harp players such as Big Walter Horton were also influential. Muddy Waters and Elmore James were known for their innovative use of slide electric guitar. Howlin' Wolf and Muddy Waters were known for their deep, "gravelly" voices.
The bassist and prolific songwriter and composer Willie Dixon played a major role on the Chicago blues scene. He composed and wrote many standard blues songs of the period, such as "Hoochie Coochie Man" and "I Just Want to Make Love to You" (both penned for Muddy Waters), and "Wang Dang Doodle" and "Back Door Man" for Howlin' Wolf. Most artists of the Chicago blues style recorded for the Chicago-based Chess Records and Checker Records labels. Smaller blues labels of this era included Vee-Jay Records and J.O.B. Records. During the early 1950s, the dominating Chicago labels were challenged by Sam Phillips' Sun Records company in Memphis, which recorded B. B. King and Howlin' Wolf before he moved to Chicago in 1960. After Phillips discovered Elvis Presley in 1954, the Sun label turned to the rapidly expanding white audience and started recording mostly rock 'n' roll.
In the 1950s, blues had a huge influence on mainstream American popular music. While popular musicians like Bo Diddley and Chuck Berry, both recording for Chess, were influenced by the Chicago blues, their enthusiastic playing styles departed from the melancholy aspects of blues. Chicago blues also influenced Louisiana's zydeco music, with Clifton Chenier using blues accents. Zydeco musicians used electric solo guitar and Cajun arrangements of blues standards.
In England, electric blues took root during a much-acclaimed Muddy Waters tour. Waters, unsuspecting of his audience's tendency towards skiffle, an acoustic, softer brand of blues, turned up his amp and started to play his Chicago brand of electric blues. Although the audience was largely jolted by it, the performance influenced local musicians such as Alexis Korner and Cyril Davies to emulate this louder style, inspiring the British Invasion of the Rolling Stones and the Yardbirds.
In the late 1950s, a new blues style emerged on Chicago's West Side, pioneered by Magic Sam, Buddy Guy and Otis Rush on Cobra Records. The "West Side sound" had strong rhythmic support from a rhythm guitar, bass guitar and drums and, as perfected by Guy, Freddie King, Magic Slim and Luther Allison, was dominated by amplified electric lead guitar. Expressive guitar solos were a key feature of this music.
Other blues artists, such as John Lee Hooker, had influences not directly related to the Chicago style. John Lee Hooker's blues is more "personal", based on Hooker's deep rough voice accompanied by a single electric guitar. Though not directly influenced by boogie woogie, his "groovy" style is sometimes called "guitar boogie". His first hit, "Boogie Chillen", reached number 1 on the R&B charts in 1949.
By the late 1950s, the swamp blues genre had developed near Baton Rouge, with performers such as Lightnin' Slim, Slim Harpo, Sam Myers and Jerry McCain around the producer J. D. "Jay" Miller and the Excello label. Strongly influenced by Jimmy Reed, swamp blues has a slower pace and a simpler use of the harmonica than that of Chicago blues performers such as Little Walter or Muddy Waters. Songs from this genre include "Scratch my Back", "She's Tough" and "I'm a King Bee". Alan Lomax's recordings of Mississippi Fred McDowell would eventually bring him wider attention on both the blues and folk circuit, with McDowell's droning style influencing North Mississippi hill country blues musicians.
By the beginning of the 1960s, genres influenced by African American music such as rock and roll and soul were part of mainstream popular music. White performers such as the Beatles had brought African-American music to new audiences, both within the U.S. and abroad. However, the blues wave that brought artists such as Muddy Waters to the foreground had stopped. Bluesmen such as Big Bill Broonzy and Willie Dixon started looking for new markets in Europe. Dick Waterman and the blues festivals he organized in Europe played a major role in propagating blues music abroad. In the UK, bands emulated U.S. blues legends, and UK blues rock-based bands had an influential role throughout the 1960s.
Blues performers such as John Lee Hooker and Muddy Waters continued to perform to enthusiastic audiences, inspiring new artists steeped in traditional blues, such as New York–born Taj Mahal. John Lee Hooker blended his blues style with rock elements and playing with younger white musicians, creating a musical style that can be heard on the 1971 album "Endless Boogie". B. B. King's singing and virtuoso guitar technique earned him the eponymous title "king of the blues". King introduced a sophisticated style of guitar soloing based on fluid string bending and shimmering vibrato that influenced many later electric blues guitarists.
In contrast to the Chicago style, King's band used strong brass support from a saxophone, trumpet, and trombone, instead of using slide guitar or harp. Tennessee-born Bobby "Blue" Bland, like B. B. King, also straddled the blues and R&B genres. During this period, Freddie King and Albert King often played with rock and soul musicians (Eric Clapton and Booker T & the MGs) and had a major influence on those styles of music.
The music of the civil rights movement and Free Speech Movement in the U.S. prompted a resurgence of interest in American roots music and early African American music. Festivals such as the Newport Folk Festival also brought traditional blues to a new audience, which helped to revive interest in prewar acoustic blues and performers such as Son House, Mississippi John Hurt, Skip James, and Reverend Gary Davis. Many compilations of classic prewar blues were republished by Yazoo Records. J. B. Lenoir from the Chicago blues movement in the 1950s recorded several LPs using acoustic guitar, sometimes accompanied by Willie Dixon on acoustic bass or drums. His songs, originally distributed only in Europe, commented on political issues such as racism and the Vietnam War, which was unusual for this period. His album "Alabama Blues" contained a song with the following lyric:
White audiences' interest in the blues during the 1960s increased due to the Chicago-based Paul Butterfield Blues Band featuring guitarist Michael Bloomfield, and the British blues movement. The style of British blues developed in the UK, when bands such as the Animals, Fleetwood Mac, John Mayall & the Bluesbreakers, the Rolling Stones, the Yardbirds, the supergroup Cream and the Irish musician Rory Gallagher performed classic blues songs from the Delta or Chicago blues traditions.
In 1963, LeRoi Jones, later known as Amiri Baraka, was the first to write a book on the social history of the blues in "Blues People: The Negro Music in White America". The British and blues musicians of the early 1960s inspired a number of American blues rock fusion performers, including the Doors, Canned Heat, the early Jefferson Airplane, Janis Joplin, Johnny Winter, The J. Geils Band, Ry Cooder, and the Allman Brothers Band. One blues rock performer, Jimi Hendrix, was a rarity in his field at the time: a black man who played psychedelic rock. Hendrix was a skilled guitarist, and a pioneer in the innovative use of distortion and audio feedback in his music. Through these artists and others, blues music influenced the development of rock music.
In the early 1970s, the Texas rock-blues style emerged, which used guitars in both solo and rhythm roles. In contrast with the West Side blues, the Texas style is strongly influenced by the British rock-blues movement. Major artists of the Texas style are Johnny Winter, Stevie Ray Vaughan, the Fabulous Thunderbirds (led by harmonica player and singer-songwriter Kim Wilson), and ZZ Top. These artists all began their musical careers in the 1970s but they did not achieve international success until the next decade.
Since the 1980s there has been a resurgence of interest in the blues among a certain part of the African-American population, particularly around Jackson, Mississippi and other deep South regions. Often termed "soul blues" or "Southern soul", the music at the heart of this movement was given new life by the unexpected success of two particular recordings on the Jackson-based Malaco label: Z. Z. Hill's "Down Home Blues" (1982) and Little Milton's "The Blues is Alright" (1984). Contemporary African-American performers who work in this style of the blues include Bobby Rush, Denise LaSalle, Sir Charles Jones, Bettye LaVette, Marvin Sease, Peggy Scott-Adams, Mel Waiters, Clarence Carter, Dr. "Feelgood" Potts, O.B. Buchana, Ms. Jody, Shirley Brown, and dozens of others.
During the 1980s blues also continued in both traditional and new forms. In 1986 the album "Strong Persuader" announced Robert Cray as a major blues artist. The first Stevie Ray Vaughan recording "Texas Flood" was released in 1983, and the Texas-based guitarist exploded onto the international stage. John Lee Hooker's popularity was revived with the album "The Healer" in 1989. Eric Clapton, known for his performances with the Blues Breakers and Cream, made a comeback in the 1990s with his album "Unplugged", in which he played some standard blues numbers on acoustic guitar.
However, beginning in the 1990s, digital multitrack recording and other technological advances and new marketing strategies including video clip production increased costs, challenging the spontaneity and improvisation that are an important component of blues music.
In the 1980s and 1990s, blues publications such as "Living Blues" and "Blues Revue" were launched, major cities began forming blues societies, outdoor blues festivals became more common, and more nightclubs and venues for blues emerged.
In the 1990s, the largely ignored hill country blues gained minor recognition in both blues and alternative rock music circles with northern Mississippi artists R. L. Burnside and Junior Kimbrough. Blues performers explored a range of musical genres, as can be seen, for example, from the broad array of nominees of the yearly Blues Music Awards, previously named W.C. Handy Awards or of the Grammy Awards for Best Contemporary and Traditional Blues Album. The Billboard Blues Album chart provides an overview of current blues hits. Contemporary blues music is nurtured by several blues labels such as: Alligator Records, Ruf Records, Severn Records, Chess Records (MCA), Delmark Records, NorthernBlues Music, Fat Possum Records and Vanguard Records (Artemis Records). Some labels are famous for rediscovering and remastering blues rarities, including Arhoolie Records, Smithsonian Folkways Recordings (heir of Folkways Records), and Yazoo Records (Shanachie Records).
Blues musical styles, forms (12-bar blues), melodies, and the blues scale have influenced many other genres of music, such as rock and roll, jazz, and popular music. Prominent jazz, folk or rock performers, such as Louis Armstrong, Duke Ellington, Miles Davis, and Bob Dylan have performed significant blues recordings. The blues scale is often used in popular songs like Harold Arlen's "Blues in the Night", blues ballads like "Since I Fell for You" and "Please Send Me Someone to Love", and even in orchestral works such as George Gershwin's "Rhapsody in Blue" and "Concerto in F". Gershwin's second "Prelude" for solo piano is an interesting example of a classical blues, maintaining the form with academic strictness. The blues scale is ubiquitous in modern popular music and informs many modal frames, especially the ladder of thirds used in rock music (for example, in "A Hard Day's Night"). Blues forms are used in the theme to the televised "Batman", teen idol Fabian Forte's hit, "Turn Me Loose", country music star Jimmie Rodgers' music, and guitarist/vocalist Tracy Chapman's hit "Give Me One Reason".
Early country bluesmen such as Skip James, Charley Patton, and Georgia Tom Dorsey played country and urban blues and had influences from spiritual singing. Dorsey helped to popularize gospel music, which developed in the 1930s with the Golden Gate Quartet. In the 1950s, soul music by Sam Cooke, Ray Charles and James Brown used gospel and blues music elements. In the 1960s and 1970s, gospel and blues were merged in soul blues music. Funk music of the 1970s was influenced by soul; funk can be seen as an antecedent of hip-hop and contemporary R&B.
R&B music can be traced back to spirituals and blues. Musically, spirituals were a descendant of New England choral traditions, and in particular of Isaac Watts's hymns, mixed with African rhythms and call-and-response forms. Spirituals or religious chants in the African-American community are much better documented than the "low-down" blues. Spiritual singing developed because African-American communities could gather for mass or worship gatherings, which were called camp meetings.
Edward P. Comentale has noted how the blues was often used as a medium for art or self-expression, stating: "As heard from Delta shacks to Chicago tenements to Harlem cabarets, the blues proved—despite its pained origins—a remarkably flexible medium and a new arena for the shaping of identity and community."
Before World War II, the boundaries between blues and jazz were less clear. Usually jazz had harmonic structures stemming from brass bands, whereas blues had blues forms such as the 12-bar blues. However, the jump blues of the 1940s mixed both styles. After WWII, blues had a substantial influence on jazz. Bebop classics, such as Charlie Parker's "Now's the Time", used the blues form with the pentatonic scale and blue notes.
Bebop marked a major shift in the role of jazz, from a popular style of music for dancing, to a "high-art", less-accessible, cerebral "musician's music". The audience for both blues and jazz split, and the border between blues and jazz became more defined.
The blues' 12-bar structure and the blues scale was a major influence on rock and roll music. Rock and roll has been called "blues with a backbeat"; Carl Perkins called rockabilly "blues with a country beat". Rockabillies were also said to be 12-bar blues played with a bluegrass beat. "Hound Dog", with its unmodified 12-bar structure (in both harmony and lyrics) and a melody centered on flatted third of the tonic (and flatted seventh of the subdominant), is a blues song transformed into a rock and roll song. Jerry Lee Lewis's style of rock and roll was heavily influenced by the blues and its derivative boogie woogie. His style of music was not exactly rockabilly but it has been often called real rock and roll (this is a label he shares with several African American rock and roll performers).
Many early rock and roll songs are based on blues: "That's All Right Mama", "Johnny B. Goode", "Blue Suede Shoes", "Whole Lotta Shakin' Goin On", "Shake, Rattle, and Roll", and "Long Tall Sally". The early African American rock musicians retained the sexual themes and innuendos of blues music: "Got a gal named Sue, knows just what to do" ("Tutti Frutti", Little Richard) or "See the girl with the red dress on, She can do the Birdland all night long" ("What'd I Say", Ray Charles). The 12-bar blues structure can be found even in novelty pop songs, such as Bob Dylan's "Obviously Five Believers" and Esther and Abi Ofarim's "Cinderella Rockefella".
Early country music was infused with the blues. Jimmie Rodgers, Moon Mullican, Bob Wills, Bill Monroe and Hank Williams have all described themselves as blues singers and their music has a blues feel that is different, at first glance at least, from the later country pop of artists like Eddy Arnold. Yet, if one looks back further, Arnold also started out singing bluesy songs like 'I'll Hold You in My Heart'. A lot of the 1970s-era "outlaw" country music by Willie Nelson and Waylon Jennings also borrowed from the blues. When Jerry Lee Lewis returned to country after the decline of 1950s style rock and roll, he sang his country with a blues feel and often included blues standards on his albums.
Like jazz, rock and roll, heavy metal music, hip hop music, reggae, rap, country music, and pop music, blues has been accused of being the "devil's music" and of inciting violence and other poor behavior. In the early 20th century, the blues was considered disreputable, especially as white audiences began listening to the blues during the 1920s. In the early twentieth century, W.C. Handy was the first to popularize blues-influenced music among non-black Americans.
During the blues revival of the 1960s and '70s, acoustic blues artist Taj Mahal and legendary Texas bluesman Lightnin' Hopkins wrote and performed music that figured prominently in the popularly and critically acclaimed film "Sounder" (1972). The film earned Mahal a Grammy nomination for Best Original Score Written for a Motion Picture and a BAFTA nomination. Almost 30 years later, Mahal wrote blues for, and performed a banjo composition, claw-hammer style, in the 2001 movie release "Songcatcher", which focused on the story of the preservation of the roots music of Appalachia.
Perhaps the most visible example of the blues style of music in the late 20th century came in 1980, when Dan Aykroyd and John Belushi released the film "The Blues Brothers". The film drew many of the biggest living influencers of the rhythm and blues genre together, such as Ray Charles, James Brown, Cab Calloway, Aretha Franklin, and John Lee Hooker. The band formed for the film also began a successful tour under the Blues Brothers marquee. 1998 brought a sequel, "Blues Brothers 2000", which, while not as great a critical and financial success, featured a much larger number of blues artists, such as B.B. King, Bo Diddley, Erykah Badu, Eric Clapton, Steve Winwood, Charlie Musselwhite, Blues Traveler, Jimmie Vaughan, and Jeff Baxter.
In 2003, Martin Scorsese made significant efforts to promote the blues to a larger audience. He asked several famous directors such as Clint Eastwood and Wim Wenders to participate in a series of documentary films for PBS called "The Blues". He also participated in the rendition of compilations of major blues artists in a series of high-quality CDs. Blues guitarist and vocalist Keb' Mo' performed his blues rendition of "America, the Beautiful" in 2006 to close out the final season of the television series "The West Wing".
The blues was highlighted in the first episode of the 2012 season of "In Performance at The White House", entitled "Red, White and Blues". Hosted by President Obama and Mrs. Obama, the show featured performances by B.B. King, Buddy Guy, Gary Clark Jr., Jeff Beck, Derek Trucks, Keb Mo, and others. | https://en.wikipedia.org/wiki?curid=3352
Berlin
Berlin is the capital and largest city of Germany by both area and population. Its 3,769,495 (2019) inhabitants make it the most populous city proper of the European Union. The city is one of Germany's 16 federal states. It is surrounded by the state of Brandenburg, and contiguous with Potsdam, Brandenburg's capital. The two cities are at the center of the Berlin-Brandenburg capital region, which is, with about six million inhabitants and an area of more than 30,000 km2, Germany's third-largest metropolitan region after the Rhine-Ruhr and Rhine-Main regions.
Berlin straddles the banks of the River Spree, which flows into the River Havel (a tributary of the River Elbe) in the western borough of Spandau. Among the city's main topographical features are the many lakes in the western and southeastern boroughs formed by the Spree, Havel, and Dahme rivers (the largest of which is Lake Müggelsee). Due to its location in the European Plain, Berlin is influenced by a temperate seasonal climate. About one-third of the city's area is composed of forests, parks, gardens, rivers, canals and lakes. The city lies in the Central German dialect area, the Berlin dialect being a variant of the Lusatian-New Marchian dialects.
First documented in the 13th century and situated at the crossing of two important historic trade routes, Berlin became the capital of the Margraviate of Brandenburg (1417–1701), the Kingdom of Prussia (1701–1918), the German Empire (1871–1918), the Weimar Republic (1919–1933), and the Third Reich (1933–1945). Berlin in the 1920s was the third-largest municipality in the world. After World War II and its subsequent occupation by the victorious countries, the city was divided; West Berlin became a de facto West German exclave, surrounded by the Berlin Wall (1961–1989) and East German territory. East Berlin was declared capital of East Germany, while Bonn became the West German capital. Following German reunification in 1990, Berlin once again became the capital of all of Germany.
Berlin is a world city of culture, politics, media and science. Its economy is based on high-tech firms and the service sector, encompassing a diverse range of creative industries, research facilities, media corporations and convention venues. Berlin serves as a continental hub for air and rail traffic and has a highly complex public transportation network. The metropolis is a popular tourist destination. Significant industries also include IT, pharmaceuticals, biomedical engineering, clean tech, biotechnology, construction and electronics.
Berlin is home to world-renowned universities such as the Humboldt Universität zu Berlin (HU Berlin), the Technische Universität Berlin (TU Berlin), the Freie Universität Berlin (Free University of Berlin), the Universität der Künste (University of the Arts, UdK) and the Berlin School of Economics and Law. Its Zoological Garden is the most visited zoo in Europe and one of the most popular worldwide. With the world's oldest large-scale movie studio complex, Berlin is an increasingly popular location for international film productions. The city is well known for its festivals, diverse architecture, nightlife, contemporary arts and a very high quality of living. Since the 2000s Berlin has seen the emergence of a cosmopolitan entrepreneurial scene.
Berlin contains three World Heritage Sites: Museum Island; the Palaces and Parks of Potsdam and Berlin; and the Berlin Modernism Housing Estates. Other landmarks include the Brandenburg Gate, the Reichstag building, Potsdamer Platz, the Memorial to the Murdered Jews of Europe, the Berlin Wall Memorial, the East Side Gallery, the Berlin Victory Column, Berlin Cathedral and the Berlin Television Tower, the tallest structure in Germany. Berlin has numerous museums, galleries, libraries, orchestras and sporting events. These include the Old National Gallery, the Bode Museum, the Pergamon Museum, the German Historical Museum, the Jewish Museum Berlin, the Natural History Museum, the Humboldt Forum, which is scheduled to open in late 2020, the Berlin State Library, the Berlin Philharmonic and the Berlin Marathon.
Berlin lies in northeastern Germany, east of the River Elbe, which once constituted, together with the River (Saxon or Thuringian) Saale (from their confluence at Barby onwards), the eastern border of the Frankish Realm. While the Frankish Realm was primarily inhabited by Germanic tribes like the Franks and the Saxons, the regions east of the border rivers were inhabited by Slavic tribes. This is why most of the cities and villages in northeastern Germany bear Slavic-derived names (Germania Slavica). Typical Germanised place name suffixes of Slavic origin are "-ow", "-itz", "-vitz", "-witz", "-itzsch" and "-in", while typical prefixes are "Windisch" and "Wendisch". The name "Berlin" has its roots in the language of the West Slavic inhabitants of the area of today's Berlin, and may be related to the Old Polabian stem "berl-"/"birl-" ("swamp"). Since the "Ber-" at the beginning sounds like the German word "Bär" (bear), a bear appears in the coat of arms of the city. It is therefore an example of canting arms.
Of Berlin's twelve boroughs, five bear a (partly) Slavic-derived name: Pankow (the most populous), Steglitz-Zehlendorf, Marzahn-Hellersdorf, Treptow-Köpenick and Spandau (named Spandow until 1878). Of its ninety-six neighborhoods, twenty-two bear a (partly) Slavic-derived name: Altglienicke, Alt-Treptow, Britz, Buch, Buckow, Gatow, Karow, Kladow, Köpenick, Lankwitz, Lübars, Malchow, Marzahn, Pankow, Prenzlauer Berg, Rudow, Schmöckwitz, Spandau, Stadtrandsiedlung Malchow, Steglitz, Tegel and Zehlendorf. The neighborhood of Moabit bears a French-derived name, and Französisch Buchholz is named after the Huguenots.
The earliest evidence of settlement in the area of today's Berlin consists of remnants of a house foundation dated to 1174, found in excavations in Berlin Mitte, and a wooden beam dated to approximately 1192. The first written records of towns in the area of present-day Berlin date from the late 12th century. Spandau is first mentioned in 1197 and Köpenick in 1209, although these areas did not join Berlin until 1920. The central part of Berlin can be traced back to two towns. Cölln on the Fischerinsel is first mentioned in a 1237 document, and Berlin, across the Spree in what is now called the Nikolaiviertel, is referenced in a document from 1244. 1237 is considered the founding date of the city. The two towns over time formed close economic and social ties, and profited from the staple right on two important trade routes: the "Via Imperii" and the route from Bruges to Novgorod. In 1307, they formed an alliance with a common external policy, their internal administrations still being separated.
In 1415, Frederick I became the elector of the Margraviate of Brandenburg, which he ruled until 1440. During the 15th century, his successors established Berlin-Cölln as capital of the margraviate, and subsequent members of the Hohenzollern family ruled in Berlin until 1918, first as electors of Brandenburg, then as kings of Prussia, and eventually as German emperors. In 1443, Frederick II Irontooth started the construction of a new royal palace in the twin city Berlin-Cölln. The protests of the town citizens against the building culminated in 1448 in the "Berlin Indignation" ("Berliner Unwille"). This protest was not successful and the citizenry lost many of its political and economic privileges. After the royal palace was finished in 1451, it gradually came into use. From 1470, with the new elector Albrecht III Achilles, Berlin-Cölln became the new royal residence. Officially, the Berlin-Cölln palace became the permanent residence of the Brandenburg electors of the Hohenzollerns from 1486, when John Cicero came to power. Berlin-Cölln, however, had to give up its status as a free Hanseatic city. In 1539, the electors and the city officially became Lutheran.
The Thirty Years' War between 1618 and 1648 devastated Berlin. One third of its houses were damaged or destroyed, and the city lost half of its population. Frederick William, known as the "Great Elector", who had succeeded his father George William as ruler in 1640, initiated a policy of promoting immigration and religious tolerance. With the Edict of Potsdam in 1685, Frederick William offered asylum to the French Huguenots.
By 1700, approximately 30 percent of Berlin's residents were French, because of the Huguenot immigration. Many other immigrants came from Bohemia, Poland, and Salzburg.
Since 1618, the Margraviate of Brandenburg had been in personal union with the Duchy of Prussia. In 1701, the dual state formed the Kingdom of Prussia, as Frederick III, Elector of Brandenburg, crowned himself King Frederick I in Prussia. Berlin became the capital of the new kingdom, replacing Königsberg. This was a successful attempt to centralise the capital in the very far-flung state, and the city began to grow from this time on. In 1709, Berlin merged with the four cities of Cölln, Friedrichswerder, Friedrichstadt and Dorotheenstadt under the name Berlin, "Haupt- und Residenzstadt Berlin".
In 1740, Frederick II, known as Frederick the Great (1740–1786), came to power. Under the rule of Frederick II, Berlin became a center of the Enlightenment, but also, was briefly occupied during the Seven Years' War by the Russian army. Following France's victory in the War of the Fourth Coalition, Napoleon Bonaparte marched into Berlin in 1806, but granted self-government to the city. In 1815, the city became part of the new Province of Brandenburg.
The Industrial Revolution transformed Berlin during the 19th century; the city's economy and population expanded dramatically, and it became the main railway hub and economic centre of Germany. Additional suburbs soon developed and increased the area and population of Berlin. In 1861, neighbouring suburbs including Wedding, Moabit and several others were incorporated into Berlin. In 1871, Berlin became capital of the newly founded German Empire. In 1881, it became a city district separate from Brandenburg.
In the early 20th century, Berlin had become a fertile ground for the German Expressionist movement. In fields such as architecture, painting and cinema, new forms of artistic style were invented. At the end of the First World War in 1918, a republic was proclaimed by Philipp Scheidemann at the Reichstag building. In 1920, the Greater Berlin Act incorporated dozens of suburban cities, villages and estates around Berlin into an expanded city, greatly increasing Berlin's area. The population almost doubled, and Berlin had a population of around four million. During the Weimar era, Berlin underwent political unrest due to economic uncertainties, but also became a renowned centre of the Roaring Twenties. The metropolis experienced its heyday as a major world capital and was known for its leadership roles in science, technology, arts, the humanities, city planning, film, higher education, government and industries. Albert Einstein rose to public prominence during his years in Berlin, being awarded the Nobel Prize in Physics in 1921.
In 1933, Adolf Hitler and the Nazi Party came to power. NSDAP rule diminished Berlin's Jewish community from 160,000 (one-third of all Jews in the country) to about 80,000 as a result of emigration between 1933 and 1939. After Kristallnacht in 1938, thousands of the city's Jews were imprisoned in the nearby Sachsenhausen concentration camp. Starting in early 1943, many were shipped to death camps such as Auschwitz. Berlin is the most heavily bombed city in history. During World War II, large parts of Berlin were destroyed during Allied air raids and the 1945 Battle of Berlin. The Allies dropped 67,607 tons of bombs on the city, destroying 6,427 acres of the built-up area. Around 125,000 civilians were killed. After the end of the war in Europe in May 1945, Berlin received large numbers of refugees from the Eastern provinces. The victorious powers divided the city into four sectors, analogous to the occupation zones into which Germany was divided. The sectors of the Western Allies (the United States, the United Kingdom and France) formed West Berlin, while the Soviet sector formed East Berlin.
All four Allies shared administrative responsibilities for Berlin. However, in 1948, when the Western Allies extended the currency reform in the Western zones of Germany to the three western sectors of Berlin, the Soviet Union imposed a blockade on the access routes to and from West Berlin, which lay entirely inside Soviet-controlled territory. The Berlin airlift, conducted by the three western Allies, overcame this blockade by supplying food and other supplies to the city from June 1948 to May 1949. In 1949, the Federal Republic of Germany was founded in West Germany and eventually included all of the American, British and French zones, excluding those three countries' zones in Berlin, while the Marxist-Leninist German Democratic Republic was proclaimed in East Germany. West Berlin officially remained an occupied city, but it was politically aligned with the Federal Republic of Germany despite West Berlin's geographic isolation. Airline service to West Berlin was granted only to American, British and French airlines.
The founding of the two German states increased Cold War tensions. West Berlin was surrounded by East German territory, and East Germany proclaimed the Eastern part as its capital, a move the western powers did not recognize. East Berlin included most of the city's historic centre. The West German government established itself in Bonn. In 1961, East Germany began to build the Berlin Wall around West Berlin, and events escalated to a tank standoff at Checkpoint Charlie. West Berlin was now de facto a part of West Germany with a unique legal status, while East Berlin was de facto a part of East Germany. John F. Kennedy gave his "Ich bin ein Berliner" speech in 1963, underlining the US support for the Western part of the city. Berlin was completely divided. Although it was possible for Westerners to pass to the other side through strictly controlled checkpoints, for most Easterners travel to West Berlin or West Germany was prohibited by the government of East Germany. In 1971, a Four-Power agreement guaranteed access to and from West Berlin by car or train through East Germany.
In 1989, with the end of the Cold War and pressure from the East German population, the Berlin Wall fell on 9 November and was subsequently mostly demolished. Today, the East Side Gallery preserves a large portion of the wall. On 3 October 1990, the two parts of Germany were reunified as the Federal Republic of Germany and Berlin again became a reunified city. Walter Momper, the mayor of West Berlin, became the first mayor of the reunified city in the interim. City-wide elections in December 1990 resulted in the first "all Berlin" mayor being elected to take office in January 1991, with the separate offices of mayors in East and West Berlin expiring by that time; Eberhard Diepgen (a former mayor of West Berlin) thus became the first elected mayor of a reunited Berlin. On 18 June 1994, soldiers from the United States, France and Britain marched in a parade which was part of the ceremonies marking the withdrawal of Allied occupation troops from a reunified Berlin (the last Russian troops departed on 31 August, while the final departure of Western Allied forces was on 8 September 1994). On 20 June 1991, the Bundestag (German Parliament) voted to move the seat of the German capital from Bonn to Berlin, which was completed in 1999. Berlin's 2001 administrative reform merged several districts, reducing the number of boroughs from 23 to 12.
In 2002, the German parliament voted to allow the reconstruction of the Berlin Palace, which started in 2013 and was scheduled to be finished in 2019. In 2006, the FIFA World Cup Final was held in Berlin.
In a 2016 terrorist attack linked to ISIL, a truck was deliberately driven into a Christmas market next to the Kaiser Wilhelm Memorial Church, leaving 12 people dead and 56 others injured.
Berlin is in northeastern Germany, in an area of low-lying marshy woodlands with a mainly flat topography, part of the vast Northern European Plain which stretches all the way from northern France to western Russia. The "Berliner Urstromtal" (an ice age glacial valley), between the low Barnim Plateau to the north and the Teltow plateau to the south, was formed by meltwater flowing from ice sheets at the end of the last Weichselian glaciation. The Spree follows this valley now. In Spandau, a borough in the west of Berlin, the Spree empties into the river Havel, which flows from north to south through western Berlin. The course of the Havel is more like a chain of lakes, the largest being the Tegeler See and the Großer Wannsee. A series of lakes also feeds into the upper Spree, which flows through the Großer Müggelsee in eastern Berlin.
Substantial parts of present-day Berlin extend onto the low plateaus on both sides of the Spree Valley. Large parts of the boroughs Reinickendorf and Pankow lie on the Barnim Plateau, while most of the boroughs of Charlottenburg-Wilmersdorf, Steglitz-Zehlendorf, Tempelhof-Schöneberg, and Neukölln lie on the Teltow Plateau.
The borough of Spandau lies partly within the Berlin Glacial Valley and partly on the Nauen Plain, which stretches to the west of Berlin. Since 2015, the highest elevation in Berlin has been found on the Arkenberge hills in Pankow. Through the dumping of construction debris, they surpassed Teufelsberg, a hill made of rubble from the ruins of the Second World War. The highest natural elevation is found on the Müggelberge, and the lowest point at the Spektesee in Spandau.
Berlin has an oceanic climate (Köppen: "Cfb"); the eastern part of the city has a slight continental influence ("Dfb"), reflected particularly in the winter 0 °C isotherm, with annual rainfall varying according to the prevailing air masses and more abundant at certain times of the year. This type of climate features moderate, occasionally hot, summer temperatures and cold but rarely severe winters.
Due to its transitional climate zones, frosts are common in winter and there are larger temperature differences between seasons than is typical for many oceanic climates. Furthermore, Berlin is classified as having a temperate continental climate ("Dc") under the Trewartha climate scheme, as are the suburbs of New York City, although the Köppen system puts them in different types. Under the classification of Wincenty Okołowicz, Berlin has a warm-temperate climate at the centre of continental Europe, a "fusion" of different climatic features that, at the macro-regional level, is influenced mainly by western (maritime) conditions.
Summers are warm and sometimes humid, and winters are cool. Spring and autumn are generally chilly to mild. Berlin's built-up area creates a microclimate, with heat stored by the city's buildings and pavement; temperatures can be higher in the city than in the surrounding areas. Precipitation is moderate throughout the year, and snowfall mainly occurs from December through March. The hottest month on record in Berlin was July 1834 and the coldest was January 1709. The wettest month on record was July 1907, whereas the driest were October 1866, November 1902, October 1908 and September 1928.
Berlin's history has left the city with a polycentric organization and a highly eclectic array of architecture and buildings. The city's appearance today is predominantly shaped by the key role it played in Germany's history in the 20th century. Each of the national governments based in Berlin (the Kingdom of Prussia, the 2nd German Empire of 1871, the Weimar Republic, Nazi Germany, East Germany, and now the reunified Germany) initiated ambitious reconstruction programs, with each adding its own distinctive style to the city's architecture.
Berlin was devastated by bombing raids, fires and street battles during World War II, and many of the buildings that remained after the war were demolished in the post-war period in both West and East Berlin. Much of this demolition was initiated by municipal architecture programs to build new residential or business quarters and main roads. Many ornaments of pre-war buildings were destroyed following modernist dogmas, while in both systems and in reunified Berlin, various important heritage monuments were also (partly) reconstructed, including the "Forum Fridericianum" with e.g., the State Opera (1955), Charlottenburg Palace (1957), the main monuments of the Gendarmenmarkt (1980s), Kommandantur (2003) and the project to reconstruct the baroque façades of the City Palace. A number of new buildings are inspired by historical predecessors or the general classical style of Berlin, such as Hotel Adlon.
Clusters of high-rise buildings have emerged at dispersed locations, e.g. Potsdamer Platz, City West, and Alexanderplatz, the latter two representing the previous centers of West and East Berlin, respectively, and the former representing the new Berlin of the 21st century, built upon the previous no-man's land of the Berlin Wall. Berlin has three of the top 40 tallest buildings in Germany.
The Fernsehturm (TV tower) at Alexanderplatz in Mitte is among the tallest structures in the European Union. Built in 1969, it is visible throughout most of the central districts of Berlin. The city can be viewed from its observation floor. Starting here, the Karl-Marx-Allee heads east, an avenue lined by monumental residential buildings designed in the Socialist Classicism style. Adjacent to this area is the Rotes Rathaus (City Hall), with its distinctive red-brick architecture. In front of it is the Neptunbrunnen, a fountain featuring a mythological group of Tritons, personifications of the four main Prussian rivers, and Neptune on top of it.
The Brandenburg Gate is an iconic landmark of Berlin and Germany; it stands as a symbol of eventful European history and of unity and peace. The Reichstag building is the traditional seat of the German Parliament. It was remodelled by British architect Norman Foster in the 1990s and features a glass dome over the session area, which allows free public access to the parliamentary proceedings and magnificent views of the city.
The East Side Gallery is an open-air exhibition of art painted directly on the last existing portions of the Berlin Wall. It is the largest remaining evidence of the city's historical division.
The Gendarmenmarkt is a neoclassical square in Berlin, the name of which derives from the headquarters of the famous Gens d'armes regiment located here in the 18th century. It is bordered by two similarly designed cathedrals, the Französischer Dom with its observation platform and the Deutscher Dom. The Konzerthaus (Concert Hall), home of the Berlin Symphony Orchestra, stands between the two cathedrals.
The Museum Island in the River Spree houses five museums built from 1830 to 1930 and is a UNESCO World Heritage site. Restoration and construction of a main entrance to all museums, as well as reconstruction of the Stadtschloss, continues. Also on the island and next to the Lustgarten and palace is Berlin Cathedral, Emperor William II's ambitious attempt to create a Protestant counterpart to St. Peter's Basilica in Rome. A large crypt houses the remains of some of the earlier Prussian royal family. St. Hedwig's Cathedral is Berlin's Roman Catholic cathedral.
Unter den Linden is a tree-lined east–west avenue from the Brandenburg Gate to the site of the former Berliner Stadtschloss, and was once Berlin's premier promenade. Many Classical buildings line the street and part of Humboldt University is there. Friedrichstraße was Berlin's legendary street during the Golden Twenties. It combines 20th-century traditions with the modern architecture of today's Berlin.
Potsdamer Platz is an entire quarter built from scratch after the Wall came down. To the west of Potsdamer Platz is the Kulturforum, which houses the Gemäldegalerie, and is flanked by the Neue Nationalgalerie and the Berliner Philharmonie. The Memorial to the Murdered Jews of Europe, a Holocaust memorial, is to the north.
The area around Hackescher Markt is home to fashionable culture, with countless clothing outlets, clubs, bars, and galleries. This includes the Hackesche Höfe, a conglomeration of buildings around several courtyards, reconstructed around 1996. The nearby New Synagogue is the center of Jewish culture.
The Straße des 17. Juni, connecting the Brandenburg Gate and Ernst-Reuter-Platz, serves as the central east–west axis. Its name commemorates the uprisings in East Berlin of 17 June 1953. Approximately halfway from the Brandenburg Gate is the Großer Stern, a circular traffic island on which the Siegessäule (Victory Column) is situated. This monument, built to commemorate Prussia's victories, was relocated in 1938–39 from its previous position in front of the Reichstag.
The Kurfürstendamm is home to some of Berlin's luxurious stores with the Kaiser Wilhelm Memorial Church at its eastern end on Breitscheidplatz. The church was destroyed in the Second World War and left in ruins. Nearby on Tauentzienstraße is KaDeWe, claimed to be continental Europe's largest department store. The Rathaus Schöneberg, where John F. Kennedy made his famous "Ich bin ein Berliner!" speech, is in Tempelhof-Schöneberg.
West of the center, Bellevue Palace is the residence of the German President. Charlottenburg Palace, which was burnt out in the Second World War, is the largest historical palace in Berlin.
The Funkturm Berlin is a lattice radio tower in the fairground area, built between 1924 and 1926. It is the only observation tower which stands on insulators and has a restaurant and an observation deck above ground, which is reachable by a windowed elevator.
The Oberbaumbrücke over the Spree river is Berlin's most iconic bridge, connecting the now-combined boroughs of Friedrichshain and Kreuzberg. It carries vehicles, pedestrians, and the U1 Berlin U-Bahn line. The bridge was completed in a brick Gothic style in 1896, replacing the former wooden bridge, with an upper deck for the U-Bahn. The center portion was demolished in 1945 to stop the Red Army from crossing. After the war, the repaired bridge served as a checkpoint and border crossing between the Soviet and American sectors, and later between East and West Berlin. In the mid-1950s it was closed to vehicles, and after the construction of the Berlin Wall in 1961, pedestrian traffic was heavily restricted. Following German reunification, the center portion was reconstructed with a steel frame, and U-Bahn service resumed in 1995.
At the end of 2018, the city-state of Berlin had 3.75 million registered inhabitants. The city's population density was 4,206 inhabitants per km2. Berlin is the most populous city proper in the EU. The urban area of Berlin had about 4.1 million people in 2014, making it the sixth-most-populous urban area in the European Union. The urban agglomeration of the metropolis was home to about 4.5 million people and the functional urban area to about 5.2 million. The entire Berlin-Brandenburg capital region has a population of more than 6 million.
In 2014, the city-state of Berlin had 37,368 live births (+6.6%), a record number since 1991. The number of deaths was 32,314. Almost 2.0 million households were counted in the city; 54 percent of them were single-person households. More than 337,000 families with children under the age of 18 lived in Berlin. In 2014 the German capital registered a migration surplus of approximately 40,000 people.
National and international migration into the city has a long history. In 1685, after the revocation of the Edict of Nantes in France, the city responded with the Edict of Potsdam, which guaranteed religious freedom and tax-free status to French Huguenot refugees for ten years. The Greater Berlin Act in 1920 incorporated many suburbs and surrounding cities of Berlin. It formed most of the territory that comprises modern Berlin and increased the population from 1.9 million to 4 million.
Active immigration and asylum politics in West Berlin triggered waves of immigration in the 1960s and 1970s. Berlin is home to at least 180,000 Turkish and Turkish German residents, making it the largest Turkish community outside of Turkey. In the 1990s the "Aussiedlergesetze" enabled immigration to Germany of some residents from the former Soviet Union. Today ethnic Germans from countries of the former Soviet Union make up the largest portion of the Russian-speaking community. The last decade experienced an influx from various Western countries and some African regions. A portion of the African immigrants have settled in the Afrikanisches Viertel. Young Germans, EU-Europeans and Israelis have also settled in the city.
In December 2019, there were 777,345 registered residents of foreign nationality and another 542,975 German citizens with a "migration background" ("Migrationshintergrund", MH), meaning they or one of their parents immigrated to Germany after 1955. Foreign residents of Berlin originate from about 190 different countries. 48 percent of the residents under the age of 15 have a migration background. Berlin in 2009 was estimated to have 100,000 to 250,000 non-registered inhabitants. Boroughs of Berlin with a significant number of migrants or foreign-born residents are Mitte, Neukölln and Friedrichshain-Kreuzberg.
There are more than 20 non-indigenous communities with a population of at least 10,000 people, including Turkish, Polish, Russian, Lebanese, Palestinian, Serbian, Italian, Bosnian, Vietnamese, American, Romanian, Bulgarian, Croatian, Chinese, Austrian, Ukrainian, French, British, Spanish, Israeli, Thai, Iranian, Egyptian and Syrian communities.
German is the official and predominant spoken language in Berlin. It is a West Germanic language that derives most of its vocabulary from the Germanic branch of the Indo-European language family. German is one of 24 languages of the European Union, and one of the three working languages of the European Commission.
Berlinerisch or Berlinisch is not a dialect linguistically, but has features of Lausitzisch-neumärkisch dialects. It is spoken in Berlin and the surrounding metropolitan area. It originates from a Mark Brandenburgish variant. The dialect is now seen more as a sociolect, largely through increased immigration and trends among the educated population to speak standard German in everyday life.
The most-commonly-spoken foreign languages in Berlin are Turkish, Polish, English, Arabic, Italian, Bulgarian, Russian, Romanian, Kurdish, Serbo-Croatian, French, Spanish and Vietnamese. Turkish, Arabic, Kurdish and Serbo-Croatian are heard more often in the western part, due to the large Middle Eastern and former-Yugoslavian communities. Polish, English, Russian, and Vietnamese have more native speakers in East Berlin.
According to the 2011 census, approximately 37 percent of the population reported being members of a legally-recognized church or religious organization. The rest either did not belong to such an organization, or there was no information available about them.
The largest religious denomination recorded in 2010 was the Protestant regional church body—the Evangelical Church of Berlin-Brandenburg-Silesian Upper Lusatia (EKBO)—a United church. EKBO is a member of the Evangelical Church in Germany (EKD) and Union Evangelischer Kirchen (UEK). According to the EKBO, their membership accounted for 18.7 percent of the local population, while the Roman Catholic Church had 9.1 percent of residents registered as its members. About 2.7% of the population identify with other Christian denominations (mostly Eastern Orthodox, but also various Protestants). According to the Berlin residents register, in 2018 14.9 percent were members of the Evangelical Church, and 8.5 percent were members of the Catholic Church. The government keeps a register of members of these churches for tax purposes, because it collects church tax on behalf of the churches. It does not keep records of members of other religious organizations which may collect their own church tax, in this way.
In 2009, approximately 249,000 Muslims were reported by the Office of Statistics to be members of Mosques and Islamic religious organizations in Berlin, while in 2016, the newspaper "Der Tagesspiegel" estimated that about 350,000 Muslims observed Ramadan in Berlin. In 2018, more than 420,000 registered residents, about 11% of the total, reported having a migration background from Islamic countries. Between 1992 and 2011 the Muslim population almost doubled.
About 0.9% of Berliners belong to other religions. Of the estimated population of 30,000–45,000 Jewish residents, approximately 12,000 are registered members of religious organizations.
Berlin is the seat of the Roman Catholic archbishop of Berlin and EKBO's elected chairperson is titled the bishop of EKBO. Furthermore, Berlin is the seat of many Orthodox cathedrals, such as the Cathedral of St. Boris the Baptist, one of the two seats of the Bulgarian Orthodox Diocese of Western and Central Europe, and the Resurrection of Christ Cathedral of the Diocese of Berlin (Patriarchate of Moscow).
The faithful of the different religions and denominations maintain many places of worship in Berlin. The Independent Evangelical Lutheran Church has eight parishes of different sizes in Berlin. There are 36 Baptist congregations (within Union of Evangelical Free Church Congregations in Germany), 29 New Apostolic Churches, 15 United Methodist churches, eight Free Evangelical Congregations, four Churches of Christ, Scientist (1st, 2nd, 3rd, and 11th), six congregations of The Church of Jesus Christ of Latter-day Saints, an Old Catholic church, and an Anglican church in Berlin. Berlin has more than 80 mosques, ten synagogues, and two Buddhist temples.
Since the reunification on 3 October 1990, Berlin has been one of the three city-states among Germany's present 16 federal states. The House of Representatives ("Abgeordnetenhaus") functions as the city and state parliament, which currently has 141 seats. Berlin's executive body is the Senate of Berlin ("Senat von Berlin"). The Senate consists of the Governing Mayor ("Regierender Bürgermeister") and up to eight senators holding ministerial positions, one of them holding the title of "Mayor" ("Bürgermeister") as deputy to the Governing Mayor. The total annual state budget of Berlin in 2015 exceeded €24.5 ($30.0) billion, including a budget surplus of €205 ($240) million. The state owns extensive assets, including administrative and government buildings, real estate companies, as well as stakes in the Olympic Stadium, swimming pools, housing companies, and numerous public enterprises and subsidiary companies.
The Social Democratic Party (SPD) and The Left (Die Linke) took control of the city government after the 2001 state election and won another term in the 2006 state election. Since the 2016 state election, there has been a coalition between the Social Democratic Party, the Greens and the Left Party.
The Governing Mayor is simultaneously Lord Mayor of the City of Berlin ("Oberbürgermeister der Stadt") and Minister President of the Federal State of Berlin ("Ministerpräsident des Bundeslandes"). The office of the Governing Mayor is in the Rotes Rathaus (Red City Hall). Since 2014 this office has been held by Michael Müller of the Social Democrats.
Berlin is subdivided into 12 boroughs or districts ("Bezirke"). Each borough has a number of subdistricts or neighborhoods ("Ortsteile"), which have roots in much older municipalities that predate the formation of Greater Berlin on 1 October 1920. These subdistricts became urbanized and incorporated into the city later on. Many residents strongly identify with their neighbourhoods, colloquially called "Kiez". At present, Berlin consists of 96 subdistricts, which are commonly made up of several smaller residential areas or quarters.
Each borough is governed by a borough council ("Bezirksamt") consisting of five councilors ("Bezirksstadträte") including the borough's mayor ("Bezirksbürgermeister"). The council is elected by the borough assembly ("Bezirksverordnetenversammlung"). However, the individual boroughs are not independent municipalities, but subordinate to the Senate of Berlin. The borough's mayors make up the council of mayors ("Rat der Bürgermeister"), which is led by the city's Governing Mayor and advises the Senate. The neighborhoods have no local government bodies.
Berlin maintains official partnerships with 17 cities. Town twinning between Berlin and other cities began with its sister city Los Angeles in 1967. East Berlin's partnerships were canceled at the time of German reunification but later partially reestablished. West Berlin's partnerships had previously been restricted to the borough level. During the Cold War era, the partnerships had reflected the different power blocs, with West Berlin partnering with capitals in the Western World, and East Berlin mostly partnering with cities from the Warsaw Pact and its allies.
There are several joint projects with many other cities, such as Beirut, Belgrade, São Paulo, Copenhagen, Helsinki, Johannesburg, Mumbai, Oslo, Shanghai, Seoul, Sofia, Sydney, New York City and Vienna. Berlin participates in international city associations such as the Union of the Capitals of the European Union, Eurocities, Network of European Cities of Culture, Metropolis, Summit Conference of the World's Major Cities, and Conference of the World's Capital Cities. Berlin's official sister cities are:
Apart from sister cities, there are also several city and district partnerships that Berlin districts have established. For example, the district of Friedrichshain-Kreuzberg has a partnership with the Israeli city of Kiryat Yam.
Berlin is the capital of the Federal Republic of Germany. The President of Germany, whose functions are mainly ceremonial under the German constitution, has their official residence in Bellevue Palace. Berlin is the seat of the German Chancellor (Prime Minister), housed in the Chancellery building, the "Bundeskanzleramt". Facing the Chancellery is the Bundestag, the German Parliament, housed in the renovated Reichstag building since the government's relocation to Berlin in 1998. The Bundesrat ("federal council", performing the function of an upper house) is the representation of the federal states ("Bundesländer") of Germany and has its seat at the former Prussian House of Lords. The total annual federal budget managed by the German government exceeded €310 ($375) billion in 2013.
The relocation of the federal government and Bundestag to Berlin was mostly completed in 1999; however, some ministries as well as some minor departments stayed in the federal city Bonn, the former capital of West Germany. Discussions about moving the remaining ministries and departments to Berlin continue. The ministries and departments of Defence; Justice and Consumer Protection; Finance; Interior; Foreign Affairs; Economic Affairs and Energy; Labour and Social Affairs; Family Affairs, Senior Citizens, Women and Youth; Environment, Nature Conservation, Building and Nuclear Safety; Food and Agriculture; Economic Cooperation and Development; Health; Transport and Digital Infrastructure; and Education and Research are based in the capital.
Berlin hosts in total 158 foreign embassies as well as the headquarters of many think tanks, trade unions, non-profit organizations, lobbying groups, and professional associations. Due to the influence and international partnerships of the Federal Republic of Germany, the capital city has become a significant centre of German and European affairs. Frequent official visits, and diplomatic consultations among governmental representatives and national leaders are common in contemporary Berlin.
In 2018, the GDP of Berlin totaled €147 billion, an increase of 3.1% over the previous year. Berlin's economy is dominated by the service sector, with around 84% of all companies doing business in services. In 2015, the total labour force in Berlin was 1.85 million. The unemployment rate reached a 24-year low in November 2015 and stood at 10.0%. From 2012 to 2015 Berlin, as a German state, had the highest annual employment growth rate. Around 130,000 jobs were added in this period.
Important economic sectors in Berlin include life sciences, transportation, information and communication technologies, media and music, advertising and design, biotechnology, environmental services, construction, e-commerce, retail, hotel business, and medical engineering.
Research and development have economic significance for the city. Several major corporations like Volkswagen, Pfizer, and SAP operate innovation laboratories in the city.
The Science and Business Park in Adlershof is the largest technology park in Germany measured by revenue. Within the Eurozone, Berlin has become a center for business relocation and international investments.
Many German and international companies have business or service centers in the city. For several years Berlin has been recognized as a major center of business founders. In 2015, Berlin generated the most venture capital for young startup companies in Europe.
Among the 10 largest employers in Berlin are the City-State of Berlin, Deutsche Bahn, the hospital providers Charité and Vivantes, the Federal Government of Germany, the local public transport provider BVG, Siemens and Deutsche Telekom.
Siemens, a Global 500 and DAX-listed company, is partly headquartered in Berlin. Another DAX-listed company headquartered in Berlin is the property company Deutsche Wohnen. The national railway operator Deutsche Bahn, Europe's largest digital publisher Axel Springer, as well as the MDAX-listed firms Delivery Hero, Zalando, HelloFresh and Rocket Internet also have their main headquarters in the city. Among the largest international corporations that have their German or European headquarters in Berlin are Bombardier Transportation, Gazprom Germania, Coca-Cola, Pfizer, Sony and Total.
As of 2018, the three largest banks headquartered in the capital were Deutsche Kreditbank, Landesbank Berlin and Berlin Hyp.
Daimler manufactures cars, and BMW builds motorcycles in Berlin. The Pharmaceuticals division of Bayer and Berlin Chemie are major pharmaceutical companies in the city.
Berlin had 788 hotels with 134,399 beds in 2014. The city recorded 28.7 million overnight hotel stays and 11.9 million hotel guests in 2014. Tourism figures have more than doubled within the last ten years and Berlin has become the third-most-visited city destination in Europe. Some of the most visited places in Berlin include: Potsdamer Platz, Brandenburger Tor, the Berlin Wall, Alexanderplatz, Museumsinsel, Fernsehturm, the East Side Gallery, Schloss Charlottenburg, Zoologischer Garten, Siegessäule, Gedenkstätte Berliner Mauer, Mauerpark, the Botanical Garden, Französischer Dom, Deutscher Dom and the Holocaust-Mahnmal. The largest visitor groups are from Germany, the United Kingdom, the Netherlands, Italy, Spain and the United States.
According to figures from the International Congress and Convention Association, in 2015 Berlin became the leading organizer of conferences in the world, hosting 195 international meetings. Some of these congress events take place at venues such as CityCube Berlin or the Berlin Congress Center (bcc).
The Messe Berlin (also known as Berlin ExpoCenter City) is the main convention organizing company in the city and operates a large exhibition area. Several large-scale trade fairs, such as the consumer electronics trade fair IFA, the ILA Berlin Air Show, the Berlin Fashion Week (including the "Premium Berlin" and the "Panorama Berlin"), the Green Week, the "Fruit Logistica", the transport fair InnoTrans, the tourism fair ITB and the adult entertainment and erotic fair Venus, are held annually in the city, attracting a significant number of business visitors.
The creative arts and entertainment business is an important part of Berlin's economy. The sector comprises music, film, advertising, architecture, art, design, fashion, performing arts, publishing, R&D, software, TV, radio, and video games.
In 2014, around 30,500 creative companies operated in the Berlin-Brandenburg metropolitan region, predominantly SMEs. Generating revenue of €15.6 billion, or 6% of all private-sector sales, the culture industry grew from 2009 to 2014 at an average rate of 5.5% per year.
Berlin is an important centre in the European and German film industry. It is home to more than 1,000 film and television production companies and 270 movie theaters, and around 300 national and international co-productions are filmed in the region every year. The historic Babelsberg Studios and the production company UFA are adjacent to Berlin in Potsdam. The city is also home to the German Film Academy (Deutsche Filmakademie), founded in 2003, and the European Film Academy, founded in 1988.
Berlin is home to many magazine, newspaper, book and scientific/academic publishers, as well as their associated service industries. In addition, around 20 news agencies, more than 90 regional daily newspapers and their websites, as well as the Berlin offices of more than 22 national publications such as Der Spiegel and Die Zeit, reinforce the capital's position as Germany's epicenter for influential debate. As a result, many international journalists, bloggers and writers live and work in the city.
Berlin is the central location of several international and regional television and radio stations. The public broadcaster RBB has its headquarters in Berlin, as do the commercial broadcasters MTV Europe and Welt. German international public broadcaster Deutsche Welle has its TV production unit in Berlin, and most national German broadcasters, including ZDF and RTL, have a studio in the city.
Berlin has Germany's largest number of daily newspapers, with numerous local broadsheets ("Berliner Morgenpost", "Berliner Zeitung", "Der Tagesspiegel") and three major tabloids, as well as national dailies of varying sizes, each with a different political affiliation, such as "Die Welt", "Neues Deutschland", and "Die Tageszeitung". The "Exberliner", a monthly magazine, is Berlin's English-language periodical, and "La Gazette de Berlin" is a French-language newspaper.
Berlin is also the headquarters of major German-language publishing houses such as Walter de Gruyter, Springer, the Ullstein Verlagsgruppe (publishing group), Suhrkamp and Cornelsen, each of which publishes books, periodicals, and multimedia products.
According to Mercer, Berlin ranked number 13 in the Quality of living city ranking in 2019.
According to "Monocle", Berlin occupies the position of the 6th-most-livable city in the world. Economist Intelligence Unit ranks Berlin number 21 of all global cities. Berlin is number 8 at the Global Power City Index.
In 2019, Berlin had the best future prospects of all cities in Germany, according to HWWI and Berenberg Bank. According to the 2019 study by Forschungsinstitut Prognos, Berlin was ranked number 92 of all 401 regions in Germany, and was the best-ranked region in former East Germany after Jena, Dresden and Potsdam.
Berlin's transport infrastructure is highly complex, providing a diverse range of urban mobility. A total of 979 bridges cross the inner-city waterways, and an extensive network of roads, including motorways, runs through Berlin. In 2013, 1.344 million motor vehicles were registered in the city. With 377 cars per 1000 residents in 2013 (570/1000 in Germany), Berlin as a Western global city has one of the lowest numbers of cars per capita. In 2012, around 7,600 mostly beige-colored taxicabs were in service. Since 2011, a number of app-based e-car and e-scooter sharing services have evolved.
Long-distance rail lines connect Berlin with all of the major cities of Germany and with many cities in neighboring European countries. Regional rail lines provide access to the surrounding regions of Brandenburg and to the Baltic Sea. Berlin Hauptbahnhof is the largest grade-separated railway station in Europe. Deutsche Bahn runs high-speed Intercity-Express trains to domestic destinations such as Munich and Cologne, as well as an SXF airport express rail service and trains to several international destinations, including Vienna, Prague, Warsaw, Budapest and Amsterdam.
As in other German cities, there is an increasing number of intercity bus services. The city has more than 10 stations that run buses to destinations throughout Germany and Europe, the central bus station being the biggest.
The Berliner Verkehrsbetriebe (BVG) and the Deutsche Bahn (DB) manage several extensive urban public transport systems.
Travelers can access all modes of transport with a single ticket.
Berlin has two commercial international airports. Tegel Airport (TXL) is within the city limits, and Schönefeld Airport (SXF) is just outside Berlin's south-eastern border, in the state of Brandenburg. Both airports together handled 29.5 million passengers in 2015. In 2014, 67 airlines served 163 destinations in 50 countries from Berlin. Tegel Airport is a focus city for Lufthansa and Eurowings, while Schönefeld serves as an important destination for airlines like easyJet and Ryanair.
The new Berlin Brandenburg Airport (BER) began construction in 2006, with the intention of replacing the existing airports as the single commercial airport of Berlin. Previously set to open in 2012, after extensive delays and cost overruns it was tentatively estimated to open by October 2020, with an expanded Schönefeld Airport remaining in service until 2026. The planned initial capacity of around 27 million passengers per year is to be further developed to bring the terminal capacity to approximately 55 million per year by 2040.
Berlin is well known for its highly developed bicycle lane system. It is estimated Berlin has 710 bicycles per 1000 residents. Around 500,000 daily bike riders accounted for 13% of total traffic in 2010. Cyclists have access to an extensive network of bicycle paths, including mandatory bicycle paths, off-road bicycle routes, bicycle lanes on roads, shared bus lanes which are also open to cyclists, combined pedestrian/bike paths and marked bicycle lanes on roadside pavements (or sidewalks). Riders are allowed to carry their bicycles on S-Bahn and U-Bahn trains, on trams, and on night buses if a bike ticket is purchased.
From 1865 until 1976, Berlin had an extensive pneumatic postal network, which at its peak in 1940 totalled 400 kilometres in length. After 1949 the system was split into two separate networks. The West Berlin system remained in operation and open for public use until 1963, and for government use until 1972. The East Berlin system, which inherited the "Haupttelegraphenamt", the central hub of the system, remained in operation until 1976.
Berlin's two largest energy providers for private households are the Swedish firm Vattenfall and the Berlin-based company GASAG. Both offer electric power and natural gas supply. Some of the city's electric energy is imported from nearby power plants in southern Brandenburg.
In 1993 the power grid connections in the Berlin-Brandenburg capital region were renewed. In most of the inner districts of Berlin power lines are underground cables; only a 380 kV and a 110 kV line, which run from Reuter substation to the urban Autobahn, use overhead lines. The Berlin 380-kV electric line is the backbone of the city's energy grid.
Berlin has a long history of discoveries in medicine and innovations in medical technology. The modern history of medicine has been significantly influenced by scientists from Berlin. Rudolf Virchow was the founder of cellular pathology, while Robert Koch developed vaccines for anthrax, cholera, and tuberculosis.
The Charité complex (Universitätsklinik Charité) is the largest university hospital in Europe, tracing back its origins to the year 1710. More than half of all German Nobel Prize winners in Physiology or Medicine, including Emil von Behring, Robert Koch and Paul Ehrlich, have worked at the Charité. The Charité is spread over four campuses and comprises around 3,000 beds, 15,500 staff, 8,000 students, and more than 60 operating theaters, and it has a turnover of two billion euros annually. The Charité is a joint institution of the Freie Universität Berlin and the Humboldt University of Berlin, including a wide range of institutes and specialized medical centers.
Among them are the German Heart Center, one of the most renowned transplantation centers, the Max-Delbrück-Center for Molecular Medicine and the Max-Planck-Institute for Molecular Genetics. The scientific research at these institutions is complemented by many research departments of companies such as Siemens and Bayer. The World Health Summit and several international health related conventions are held annually in Berlin.
Since 2017, the digital television standard in Berlin and Germany is DVB-T2. This system transmits compressed digital audio, digital video and other data in an MPEG transport stream.
Berlin has installed several hundred free public Wireless LAN sites across the capital since 2016. The wireless networks are concentrated mostly in central districts; 650 hotspots (325 indoor and 325 outdoor access points) are installed. Deutsche Bahn is planning to introduce Wi-Fi services in long distance and regional trains in 2017.
The UMTS (3G) and LTE (4G) networks of the three major cellular operators Vodafone, T-Mobile and O2 enable the use of mobile broadband applications citywide.
The Fraunhofer Heinrich Hertz Institute develops mobile and stationary broadband communication networks and multimedia systems. Focal points are photonic components and systems, fiber optic sensor systems, and image signal processing and transmission. Future applications for broadband networks are developed as well.
Berlin had 878 schools, teaching 340,658 children in 13,727 classes and 56,787 trainees in businesses and elsewhere. The city has a 6-year primary education program. After completing primary school, students continue to a comprehensive school or a Gymnasium (college preparatory school). Berlin has a special bilingual school program in which children are taught the curriculum in German and a foreign language, starting in primary school and continuing in high school.
The Französisches Gymnasium Berlin, which was founded in 1689 to teach the children of Huguenot refugees, offers bilingual German/French instruction. The John F. Kennedy School, a bilingual German–American public school in Zehlendorf, is particularly popular with children of diplomats and the English-speaking expatriate community. 82 schools teach Latin and 8 teach Classical Greek.
The Berlin-Brandenburg capital region is one of the most prolific centres of higher education and research in Germany and Europe. Historically, 67 Nobel Prize winners are affiliated with the Berlin-based universities.
The city has four public research universities and more than 30 private, professional, and technical colleges "(Hochschulen)", offering a wide range of disciplines. A record number of 175,651 students were enrolled in the winter term of 2015/16. Among them around 18% have an international background.
The three largest universities combined have approximately 103,000 enrolled students. These are the Freie Universität Berlin "(Free University of Berlin, FU Berlin)" with about 33,000 students, the Humboldt Universität zu Berlin "(HU Berlin)" with 35,000 students, and the Technische Universität Berlin "(TU Berlin)" with 35,000 students. The Charité Medical School has around 8,000 students. The FU, the HU, the TU, and the Charité are part of the German Universities Excellence Initiative. The Universität der Künste "(UdK)" has about 4,000 students. The Berlin School of Economics and Law has an enrollment of about 11,000 students, the Beuth University of Applied Sciences Berlin of about 12,000 students, and the Hochschule für Technik und Wirtschaft (University of Applied Sciences for Engineering and Economics) of about 14,000 students.
The city has a high density of internationally renowned research institutions, such as the Fraunhofer Society, the Leibniz Association, the Helmholtz Association, and the Max Planck Society, which are independent of, or only loosely connected to its universities. In 2012, around 65,000 professional scientists were working in research and development in the city.
Berlin is home to one of the knowledge and innovation communities (KIC) of the European Institute of Innovation and Technology (EIT). The KIC is based at the Centre for Entrepreneurship at TU Berlin and focuses on the development of IT industries. It partners with major multinational companies such as Siemens, Deutsche Telekom, and SAP.
One of Europe's successful research, business and technology clusters is based at WISTA in Berlin-Adlershof, with more than 1,000 affiliated firms, university departments and scientific institutions.
In addition to the university-affiliated libraries, the Staatsbibliothek zu Berlin is a major research library. Its two main locations are on Potsdamer Straße and on Unter den Linden. There are also 86 public libraries in the city. ResearchGate, a global social networking site for scientists, is based in Berlin.
Berlin is known for its numerous cultural institutions, many of which enjoy international reputation. The diversity and vivacity of the metropolis led to a trendsetting atmosphere. An innovative music, dance and art scene has developed in the 21st century.
Young people, international artists and entrepreneurs continued to settle in the city, making Berlin a popular entertainment center worldwide.
The city's expanding cultural role was underscored by Universal Music Group's decision to move its headquarters to the banks of the River Spree. In 2005, Berlin was named "City of Design" by UNESCO and has been part of the Creative Cities Network ever since.
Berlin is home to 138 museums and more than 400 art galleries.
The ensemble on the Museum Island is a UNESCO World Heritage Site and is in the northern part of the Spree Island between the Spree and the Kupfergraben. As early as 1841 it was designated a "district dedicated to art and antiquities" by a royal decree. Subsequently, the Altes Museum was built in the Lustgarten. The Neues Museum, which displays the bust of Queen Nefertiti, Alte Nationalgalerie, Pergamon Museum, and Bode Museum were built there.
Apart from the Museum Island, there are many additional museums in the city. The Gemäldegalerie (Painting Gallery) focuses on the paintings of the "old masters" from the 13th to the 18th centuries, while the Neue Nationalgalerie (New National Gallery, built by Ludwig Mies van der Rohe) specializes in 20th-century European painting. The Hamburger Bahnhof, in Moabit, exhibits a major collection of modern and contemporary art. The expanded Deutsches Historisches Museum re-opened in the Zeughaus with an overview of German history spanning more than a millennium. The Bauhaus Archive is a museum of 20th century design from the famous Bauhaus school. Museum Berggruen houses the collection of noted 20th century collector Heinz Berggruen, and features an extensive assortment of works by Picasso, Matisse, Cézanne, and Giacometti, among others.
The Jewish Museum has a standing exhibition on two millennia of German-Jewish history. The German Museum of Technology in Kreuzberg has a large collection of historical technical artifacts. The "Museum für Naturkunde" (Berlin's natural history museum) exhibits natural history near Berlin Hauptbahnhof. It has the largest mounted dinosaur in the world (a "Giraffatitan" skeleton). A well-preserved specimen of "Tyrannosaurus rex" and the early bird "Archaeopteryx" are on display as well.
In Dahlem, there are several museums of world art and culture, such as the Museum of Asian Art, the Ethnological Museum, the Museum of European Cultures, as well as the Allied Museum. The Brücke Museum features one of the largest collections of works by artists of the early 20th-century expressionist movement. In Lichtenberg, on the grounds of the former East German Ministry for State Security, is the Stasi Museum. The site of Checkpoint Charlie, one of the most renowned crossing points of the Berlin Wall, is still preserved. A private museum venture exhibits a comprehensive documentation of detailed plans and strategies devised by people who tried to flee from the East. The Beate Uhse Erotic Museum claims to be the world's largest erotic museum.
The cityscape of Berlin displays large quantities of urban street art. It has become a significant part of the city's cultural heritage and has its roots in the graffiti scene of Kreuzberg of the 1980s. The Berlin Wall itself has become one of the largest open-air canvases in the world. The leftover stretch along the Spree river in Friedrichshain remains as the East Side Gallery. Berlin today is consistently rated as an important world city for street art culture.
Berlin is rich in contemporary art galleries. Mitte is home to the KW Institute for Contemporary Art, KOW, and Sprüth Magers; Kreuzberg hosts several galleries as well, such as Blain Southern, Esther Schipper, Future Gallery, and König Gallerie.
Berlin's nightlife has been celebrated as one of the most diverse and vibrant of its kind. In the 1970s and 80s the SO36 in Kreuzberg was a centre for punk music and culture. The "SOUND" and the "Dschungel" gained notoriety. Throughout the 1990s, people in their 20s from all over the world, particularly from Western and Central Europe, made Berlin's club scene a premier nightlife venue. After the fall of the Berlin Wall in 1989, vacant buildings in the former East became fertile ground for underground and counterculture gatherings. The central boroughs are home to many nightclubs, including the Watergate, Tresor and Berghain. The KitKatClub and several other locations are known for their sexually uninhibited parties.
Clubs are not required to close at a fixed time during the weekends, and many parties last well into the morning, or even all weekend. The "Weekend Club" near Alexanderplatz features a roof terrace that allows partying at night. Several venues have become a popular stage for the Neo-Burlesque scene.
Berlin has a long history of gay culture, and is an important birthplace of the LGBT rights movement. Same-sex bars and dance halls operated freely as early as the 1880s, and the first gay magazine, "Der Eigene", started in 1896. By the 1920s, gays and lesbians had an unprecedented visibility. Today, in addition to a positive atmosphere in the wider club scene, the city again has a huge number of queer clubs and festivals. The most famous and largest are Berlin Pride, the Christopher Street Day, the Lesbian and Gay City Festival in Berlin-Schöneberg, the Kreuzberg Pride and Hustlaball.
The annual Berlin International Film Festival (Berlinale) with around 500,000 admissions is considered to be the largest publicly attended film festival in the world. The Karneval der Kulturen ("Carnival of Cultures"), a multi-ethnic street parade, is celebrated every Pentecost weekend. Berlin is also well known for the cultural festival, Berliner Festspiele, which includes the jazz festival JazzFest Berlin. Several technology and media art festivals and conferences are held in the city, including Transmediale and Chaos Communication Congress. The annual Berlin Festival focuses on indie rock, electronic music and synthpop and is part of the International Berlin Music Week. Every year Berlin hosts one of the largest New Year's Eve celebrations in the world, attended by well over a million people. The focal point is the Brandenburg Gate, where midnight fireworks are centred, but various private fireworks displays take place throughout the entire city. Partygoers in Germany often toast the New Year with a glass of sparkling wine.
Berlin is home to 44 theaters and stages. The Deutsches Theater in Mitte was built in 1849–50 and has operated almost continuously since then. The Volksbühne at Rosa-Luxemburg-Platz was built in 1913–14, though the company had been founded in 1890. The Berliner Ensemble, famous for performing the works of Bertolt Brecht, was established in 1949. The Schaubühne was founded in 1962 and moved to the building of the former Universum Cinema on Kurfürstendamm in 1981. With a seating capacity of 1,895, the Friedrichstadt-Palast in Berlin Mitte is the largest show palace in Europe.
Berlin has three major opera houses: the Deutsche Oper, the Berlin State Opera, and the Komische Oper. The Berlin State Opera on Unter den Linden opened in 1742 and is the oldest of the three. Its musical director is Daniel Barenboim. The Komische Oper has traditionally specialized in operettas and is also at Unter den Linden. The Deutsche Oper opened in 1912 in Charlottenburg.
The city's main venues for musical theater performances are the Theater am Potsdamer Platz and the Theater des Westens (built in 1895). Contemporary dance can be seen at the "Radialsystem V". The Tempodrom is host to concerts and circus-inspired entertainment. It also houses a multi-sensory spa experience. The Admiralspalast in Mitte has a vibrant program of variety and music events.
There are seven symphony orchestras in Berlin. The Berlin Philharmonic Orchestra is one of the preeminent orchestras in the world; it is housed in the Berliner Philharmonie near Potsdamer Platz on a street named for the orchestra's longest-serving conductor, Herbert von Karajan. Simon Rattle is its principal conductor. The Konzerthausorchester Berlin was founded in 1952 as the orchestra for East Berlin. Ivan Fischer is its principal conductor. The Haus der Kulturen der Welt presents exhibitions dealing with intercultural issues and stages world music and conferences. The "Kookaburra" and the "Quatsch Comedy Club" are known for satire and stand-up comedy shows.
The cuisine and culinary offerings of Berlin vary greatly. Twelve restaurants in Berlin have been included in the Michelin Guide of 2015, which ranks the city at the top for the number of restaurants having this distinction in Germany. Berlin is well known for its offerings of vegetarian and vegan cuisine and is home to an innovative entrepreneurial food scene promoting cosmopolitan flavors, local and sustainable ingredients, pop-up street food markets, supper clubs, as well as food festivals, such as Berlin Food Week.
Many local foods originated from north German culinary traditions and include rustic and hearty dishes with pork, goose, fish, peas, beans, cucumbers, or potatoes. Typical Berliner fare includes popular street food like the "Currywurst" (which gained popularity with post-war construction workers rebuilding the city), "Buletten" and the "Berliner" doughnut, known in Berlin as "Pfannkuchen". German bakeries offering a variety of breads and pastries are widespread. One of Europe's largest delicatessen markets is found at the KaDeWe, and among the world's largest chocolate stores is "Fassbender & Rausch".
Berlin is also home to a diverse gastronomy scene reflecting the immigrant history of the city. Turkish and Arab immigrants brought their culinary traditions to the city, such as the lahmajoun and falafel, which have become common fast food staples. The modern fast food version of the doner kebab sandwich which evolved in Berlin in the 1970s, has since become a favorite dish in Germany and elsewhere in the world. Asian cuisine like Chinese, Vietnamese, Thai, Indian, Korean, and Japanese restaurants, as well as Spanish tapas bars, Italian, and Greek cuisine, can be found in many parts of the city.
Zoologischer Garten Berlin, the older of two zoos in the city, was founded in 1844. It is the most visited zoo in Europe and presents the most diverse range of species in the world. It was the home of the captive-born celebrity polar bear Knut. The city's other zoo, Tierpark Friedrichsfelde, was founded in 1955.
Berlin's Botanischer Garten includes the Botanic Museum Berlin. With around 22,000 different plant species, it is one of the largest and most diverse collections of botanical life in the world. Other gardens in the city include the Britzer Garten and the Gärten der Welt (Gardens of the World) in Marzahn.
The Tiergarten park in Mitte, with landscape design by Peter Joseph Lenné, is one of Berlin's largest and most popular parks. In Kreuzberg, the Viktoriapark provides a viewing point over the southern part of inner-city Berlin. Treptower Park, beside the Spree in Treptow, features a large Soviet War Memorial. The Volkspark in Friedrichshain, which opened in 1848, is the oldest park in the city, with monuments, a summer outdoor cinema and several sports areas. Tempelhofer Feld, the site of the former city airport, is the world's largest inner-city open space.
Potsdam is on the southwestern periphery of Berlin. The city was a residence of the Prussian kings and the German Kaiser until 1918. The area around Potsdam, in particular Sanssouci, is known for a series of interconnected lakes and cultural landmarks. The Palaces and Parks of Potsdam and Berlin form the largest World Heritage Site in Germany.
Berlin is also well known for its numerous cafés, street musicians, beach bars along the Spree River, flea markets, boutique shops and pop-up stores, which are a source of recreation and leisure.
Berlin has established a high profile as a host city of major international sporting events. The city hosted the 1936 Summer Olympics and was the host city for the 2006 FIFA World Cup final. The IAAF World Championships in Athletics was held in the Olympiastadion in 2009. The city hosted the Basketball Euroleague Final Four in 2009 and 2016, and was one of the hosts of FIBA EuroBasket 2015. In 2015 Berlin became the venue for the UEFA Champions League Final.
Berlin will host the 2023 Special Olympics World Summer Games. This will be the first time Germany has ever hosted the Special Olympics World Games.
The annual Berlin Marathon, whose course holds the most top-10 world record runs, and the ISTAF are well-established athletic events in the city. The Mellowpark in Köpenick is one of the biggest skate and BMX parks in Europe. A Fan Fest at the Brandenburg Gate, which attracts several hundred thousand spectators, has become popular during international football competitions, like the UEFA European Championship.
In 2013 around 600,000 Berliners were registered in one of the more than 2,300 sport and fitness clubs. The city of Berlin operates more than 60 public indoor and outdoor swimming pools. Berlin is the largest Olympic training centre in Germany. About 500 top athletes (15% of all German top athletes) are based there. Forty-seven elite athletes participated in the 2012 Summer Olympics. Berliners achieved seven gold, twelve silver and three bronze medals.
Several professional clubs representing the most important spectator team sports in Germany have their base in Berlin. The oldest and most popular division-1 team based in Berlin is the football club Hertha BSC. The team represented Berlin as a founding member of the Bundesliga, Germany's highest football league, in 1963. Other professional team sport clubs include: | https://en.wikipedia.org/wiki?curid=3354 |
Benjamin Lee Whorf
Benjamin Lee Whorf (April 24, 1897 – July 26, 1941) was an American linguist and fire prevention engineer. Whorf is widely known as an advocate for the idea that differences between the structures of different languages shape how their speakers perceive and conceptualize the world. This principle has frequently been called the "Sapir–Whorf hypothesis", after him and his mentor Edward Sapir, but Whorf called it the principle of linguistic relativity, because he saw the idea as having implications similar to Einstein's principle of physical relativity.
Throughout his life Whorf was a chemical engineer by profession, but as a young man he took up an interest in linguistics. At first this interest drew him to the study of Biblical Hebrew, but he quickly went on to study the indigenous languages of Mesoamerica on his own. Professional scholars were impressed by his work and in 1930 he received a grant to study the Nahuatl language in Mexico; on his return home he presented several influential papers on the language at linguistics conferences.
This led him to begin studying linguistics with Edward Sapir at Yale University while still maintaining his day job at the Hartford Fire Insurance Company. During his time at Yale he worked on the description of the Hopi language, and the historical linguistics of the Uto-Aztecan languages, publishing many influential papers in professional journals. He was chosen as the substitute for Sapir during his medical leave in 1938. Whorf taught his seminar on "Problems of American Indian Linguistics". In addition to his well-known work on linguistic relativity, he wrote a grammar sketch of Hopi and studies of Nahuatl dialects, proposed a deciphering of Maya hieroglyphic writing, and published the first attempt towards a reconstruction of Uto-Aztecan.
After his death from cancer in 1941 his manuscripts were curated by his linguist friends who also worked to spread the influence of Whorf's ideas on the relation between language, culture and cognition. Many of his works were published posthumously in the first decades after his death. In the 1960s Whorf's views fell out of favor and he became the subject of harsh criticisms by scholars who considered language structure to primarily reflect cognitive universals rather than cultural differences. Critics argued that Whorf's ideas were untestable and poorly formulated and that they were based on badly analyzed or misunderstood data.
In the late 20th century, interest in Whorf's ideas experienced a resurgence, and a new generation of scholars began reading Whorf's works, arguing that previous critiques had only engaged superficially with Whorf's actual ideas, or had attributed to him ideas he had never expressed. The field of linguistic relativity studies remains an active focus of research in psycholinguistics and linguistic anthropology, and continues to generate debate and controversy between proponents of relativism and proponents of universalism. By comparison, Whorf's other work in linguistics, the development of such concepts as the allophone and the cryptotype, and the formulation of "Whorf's law" in Uto-Aztecan historical linguistics, have met with broad acceptance.
The son of Harry Church Whorf and Sarah Edna Lee Whorf, Benjamin Lee Whorf was born on April 24, 1897 in Winthrop, Massachusetts. Harry Church Whorf was an artist, intellectual and designer – first working as a commercial artist and later as a dramatist. Benjamin had two younger brothers, John and Richard, who both went on to become notable artists. John became an internationally renowned painter and illustrator; Richard was an actor in films such as "Yankee Doodle Dandy" and later an Emmy-nominated television director of such shows as "The Beverly Hillbillies". Benjamin was the intellectual of the three and at a young age he conducted chemical experiments with his father's photographic equipment. He was also an avid reader, interested in botany, astrology, and Middle American prehistory. He read William H. Prescott's "Conquest of Mexico" several times. At the age of 17 he began to keep a copious diary in which he recorded his thoughts and dreams.
Whorf graduated from the Massachusetts Institute of Technology in 1918 with a degree in chemical engineering; his academic performance there was average. In 1920 he married Celia Inez Peckham, who became the mother of his three children, Raymond Ben, Robert Peckham and Celia Lee. Around the same time he began work as a fire prevention engineer (an inspector) for the Hartford Fire Insurance Company. He was particularly good at the job and was highly commended by his employers. His job required him to travel to production facilities throughout New England to inspect them. One anecdote describes his arrival at a chemical plant where the director denied him access because he would not allow anyone to see the production procedure, which was a trade secret. Having been told what the plant produced, Whorf wrote a chemical formula on a piece of paper, saying to the director: "I think this is what you're doing". The surprised director asked Whorf how he knew about the secret procedure, and he simply answered: "You couldn't do it in any other way."
Whorf helped to attract new customers to the Fire Insurance Company; they favored his thorough inspections and recommendations. Another famous anecdote from his job was used by Whorf to argue that language use affects habitual behavior. Whorf described a workplace in which full gasoline drums were stored in one room and empty ones in another; he said that because of flammable vapor the "empty" drums were more dangerous than those that were full, although workers handled them less carefully to the point that they smoked in the room with "empty" drums, but not in the room with full ones. Whorf argued that by habitually speaking of the vapor-filled drums as empty and by extension as inert, the workers were oblivious to the risk posed by smoking near the "empty drums".
Whorf was a spiritual man throughout his lifetime although what religion he followed has been the subject of debate. As a young man he produced a manuscript titled "Why I have discarded evolution", causing some scholars to describe him as a devout Methodist Episcopalian, who was impressed with fundamentalism, and perhaps supportive of creationism. However, throughout his life Whorf's main religious interest was theosophy, a nonsectarian organization based on Buddhist and Hindu teachings that promotes the view of the world as an interconnected whole and the unity and brotherhood of humankind "without distinction of race, creed, sex, caste or color". Some scholars have argued that the conflict between spiritual and scientific inclinations has been a driving force in Whorf's intellectual development, particularly in his attraction to ideas of linguistic relativity. Whorf said that "of all groups of people with whom I have come in contact, Theosophical people seem the most capable of becoming excited about ideas—new ideas."
Around 1924 Whorf first became interested in linguistics. Originally he analyzed Biblical texts, seeking to uncover hidden layers of meaning. Inspired by the esoteric work "La langue hebraïque restituée" by Antoine Fabre d'Olivet, he began a semantic and grammatical analysis of Biblical Hebrew. Whorf's early manuscripts on Hebrew and Maya have been described as exhibiting a considerable degree of mysticism, as he sought to uncover esoteric meanings of glyphs and letters.
Whorf studied Biblical linguistics mainly at the Watkinson Library (now Hartford Public Library). This library had an extensive collection of materials about Native American linguistics and folklore, originally collected by James Hammond Trumbull. It was at the Watkinson library that Whorf became friends with the young John B. Carroll, who later went on to study psychology under B. F. Skinner, and who in 1956 edited and published a selection of Whorf's essays as "Language, Thought and Reality". The collection rekindled Whorf's interest in Mesoamerican antiquity. He began studying the Nahuatl language in 1925, and later, beginning in 1928, he studied the collections of Maya hieroglyphic texts. Quickly becoming conversant with the materials, he began a scholarly dialog with Mesoamericanists such as Alfred Tozzer, the Maya archaeologist at Harvard University, and Herbert J. Spinden of the Brooklyn Museum.
In 1928 he presented his first paper at the International Congress of Americanists, offering his translation of a Nahuatl document held at the Peabody Museum at Harvard. He also began to study the comparative linguistics of the Uto-Aztecan language family, which Edward Sapir had recently demonstrated to be a linguistic family. In addition to Nahuatl, Whorf studied the Piman and Tepecano languages, while in close correspondence with linguist J. Alden Mason.
Because of the promise shown by his work on Uto-Aztecan, Tozzer and Spinden advised Whorf to apply for a grant with the Social Science Research Council (SSRC) to support his research. Whorf considered using the money to travel to Mexico to procure Aztec manuscripts for the Watkinson library, but Tozzer suggested he spend the time in Mexico documenting modern Nahuatl dialects. In his application Whorf proposed to establish the oligosynthetic nature of the Nahuatl language. Before leaving Whorf presented the paper "Stem series in Maya" at the Linguistic Society of America conference, in which he argued that in the Mayan languages syllables carry symbolic content. The SSRC awarded Whorf the grant and in 1930 he traveled to Mexico City where Professor Robert H Barlow put him in contact with several speakers of Nahuatl to serve as his informants, among whom were Mariano Rojas of Tepoztlán and Luz Jimenez of Milpa Alta. The outcome of the trip to Mexico was Whorf's sketch of Milpa Alta Nahuatl, published only after his death, and an article on a series of Aztec pictograms found at the Tepozteco monument at Tepoztlán, Morelos in which he noted similarities in form and meaning between Aztec and Maya day signs.
Until his return from Mexico in 1930 Whorf had been entirely an autodidact in linguistic theory and field methodology, yet he had already made a name for himself in Middle American linguistics. Whorf had met Sapir, the leading US linguist of the day, at professional conferences, and in 1931 Sapir came to Yale from the University of Chicago to take a position as Professor of Anthropology. Alfred Tozzer sent Sapir a copy of Whorf's paper on "Nahuatl tones and saltillo". Sapir replied stating that it "should by all means be published"; however, it was not until 1993 that it was prepared for publication by Lyle Campbell and Frances Karttunen.
Whorf took Sapir's first course at Yale on "American Indian Linguistics". He enrolled in a program of graduate studies, nominally working towards a PhD in linguistics, but he never actually attempted to obtain a degree, satisfying himself with participating in the intellectual community around Sapir. At Yale, Whorf joined the circle of Sapir's students that included such luminary linguists as Morris Swadesh, Mary Haas, Harry Hoijer, G. L. Trager and Charles F. Voegelin. Whorf took on a central role among Sapir's students and was well respected.
Sapir had a profound influence on Whorf's thinking. Sapir's earliest writings had espoused views of the relation between thought and language stemming from the Humboldtian tradition he acquired through Franz Boas, which regarded language as the historical embodiment of "volksgeist", or ethnic world view. But Sapir had since become influenced by a current of logical positivism, such as that of Bertrand Russell and the early Ludwig Wittgenstein, particularly through Ogden and Richards' "The Meaning of Meaning", from which he adopted the view that natural language potentially obscures, rather than facilitates, the mind's ability to perceive and describe the world as it really is. In this view, proper perception could only be accomplished through formal logic. During his stay at Yale, Whorf acquired this current of thought partly from Sapir and partly through his own readings of Russell and Ogden and Richards. As Whorf became more influenced by positivist science he also distanced himself from some approaches to language and meaning that he saw as lacking in rigor and insight. One of these was Polish philosopher Alfred Korzybski's General semantics, which was espoused in the US by Stuart Chase. Chase admired Whorf's work and frequently sought out a reluctant Whorf, who considered Chase to be "utterly incompetent by training and background to handle such a subject." Ironically, Chase would later write the foreword for Carroll's collection of Whorf's writings.
Sapir also encouraged Whorf to continue his work on the historical and descriptive linguistics of Uto-Aztecan. Whorf published several articles on that topic in this period, some of them with G. L. Trager, who had become his close friend. Whorf took a special interest in the Hopi language and started working with Ernest Naquayouma, a speaker of Hopi from Toreva village living in Manhattan, New York. Whorf credited Naquayouma as the source of most of his information on the Hopi language, although in 1938 he took a short field trip to the village of Mishongnovi, on the Second Mesa of the Hopi Reservation in Arizona.
In 1936, Whorf was appointed Honorary Research Fellow in Anthropology at Yale, and he was invited by Franz Boas to serve on the committee of the Society of American Linguistics (later Linguistic Society of America). In 1937, Yale awarded him the Sterling Fellowship. He was a lecturer in Anthropology from 1937 through 1938, replacing Sapir, who was gravely ill. Whorf gave graduate level lectures on "Problems of American Indian Linguistics". In 1938 with Trager's assistance he elaborated a report on the progress of linguistic research at the department of anthropology at Yale. The report includes some of Whorf's influential contributions to linguistic theory, such as the concept of the allophone and of covert grammatical categories. It has been argued that in this report Whorf's linguistic theories exist in a condensed form, and that it was mainly through this report that Whorf exerted influence on the discipline of descriptive linguistics.
In late 1938, Whorf's own health declined. After an operation for cancer he fell into an unproductive period. He was also deeply influenced by Sapir's death in early 1939. It was in the writings of his last two years that he laid out the research program of linguistic relativity. His 1939 memorial article for Sapir, "The Relation of Habitual Thought And Behavior to Language", in particular has been taken to be Whorf's definitive statement of the issue, and is his most frequently quoted piece.
In his last year Whorf also published three articles in the "MIT Technology Review" titled "Science and Linguistics", "Linguistics as an Exact Science" and "Language and Logic". He was also invited to contribute an article to a theosophical journal, "Theosophist", published in Madras, India, for which he wrote "Language, Mind and Reality". In these final pieces he offered a critique of Western science in which he suggested that non-European languages often referred to physical phenomena in ways that more directly reflected aspects of reality than many European languages, and that science ought to pay attention to the effects of linguistic categorization in its efforts to describe the physical world. He particularly criticized the Indo-European languages for promoting a mistaken essentialist world view, which had been disproved by advances in the sciences, whereas he suggested that other languages dedicated more attention to processes and dynamics rather than stable essences. Whorf argued that paying attention to how other physical phenomena are described in the study of linguistics could make valuable contributions to science by pointing out the ways in which certain assumptions about reality are implicit in the structure of language itself, and how language guides the attention of speakers towards certain phenomena in the world which risk becoming overemphasized while leaving other phenomena at risk of being overlooked.
At Whorf's death his friend G. L. Trager was appointed as curator of his unpublished manuscripts. Some of them were published in the years after his death by another of Whorf's friends, Harry Hoijer. In the decade following, Trager and particularly Hoijer did much to popularize Whorf's ideas about linguistic relativity, and it was Hoijer who coined the term "Sapir–Whorf hypothesis" at a 1954 conference. Trager then published an article titled "The systematization of the Whorf hypothesis", which contributed to the idea that Whorf had proposed a hypothesis that should be the basis for a program of empirical research. Hoijer also published studies of Indigenous languages and cultures of the American South West in which Whorf found correspondences between cultural patterns and linguistic ones. The term, even though technically a misnomer, went on to become the most widely known label for Whorf's ideas. According to John A. Lucy "Whorf's work in linguistics was and still is recognized as being of superb professional quality by linguists".
Whorf's work began to fall out of favor less than a decade after his death, and he was subjected to severe criticism from scholars of language, culture and psychology. In 1953 and 1954 psychologists Roger Brown and Eric Lenneberg criticized Whorf for his reliance on anecdotal evidence, formulating a hypothesis to scientifically test his ideas, which they limited to an examination of a causal relation between grammatical or lexical structure and cognition or perception. Whorf himself did not advocate a straight causality between language and thought; instead he wrote that "Language and culture had grown up together"; that both were mutually shaped by the other. Hence, it has been argued that because the aim of the formulation of the Sapir–Whorf hypothesis was to test simple causation, from the outset it failed to test Whorf's ideas.
Focusing on color terminology, with easily discernible differences between perception and vocabulary, Brown and Lenneberg published in 1954 a study of Zuni color terms that slightly supported a weak effect of semantic categorization of color terms on color perception. In doing so they began a line of empirical studies that investigated the principle of linguistic relativity.
Empirical testing of the Whorfian hypothesis declined in the 1960s to 1980s as Noam Chomsky began to redefine linguistics and much of psychology in formal universalist terms. Several studies from that period refuted Whorf's hypothesis, demonstrating that linguistic diversity is a surface veneer that masks underlying universal cognitive principles. Many studies were highly critical and disparaging in their language, ridiculing Whorf's analyses and examples or his lack of an academic degree. Throughout the 1980s most mentions of Whorf or of the Sapir–Whorf hypotheses continued to be disparaging, and led to a widespread view that Whorf's ideas had been proven wrong. Because Whorf was treated so severely in the scholarship during those decades, he has been described as "one of the prime whipping boys of introductory texts to linguistics". In the late 1980s, with the advent of cognitive linguistics and psycholinguistics some linguists sought to rehabilitate Whorf's reputation, as scholarship began to question whether earlier critiques of Whorf were justified.
By the 1960s analytical philosophers also became aware of the Sapir–Whorf hypothesis, and philosophers such as Max Black and Donald Davidson published scathing critiques of Whorf's strong relativist viewpoints. Black characterized Whorf's ideas about metaphysics as demonstrating "amateurish crudity". According to Black and Davidson, Whorf's viewpoint and the concept of linguistic relativity meant that translation between languages with different conceptual schemes would be impossible. Recent assessments such as those by Leavitt and Lee, however, consider Black and Davidson's interpretation to be based on an inaccurate characterization of Whorf's viewpoint, and even rather absurd given the time he spent trying to translate between different conceptual schemes. In their view the critiques are based on a lack of familiarity with Whorf's writings; according to these recent Whorf scholars a more accurate description of his viewpoint is that he thought translation to be possible, but only through careful attention to the subtle differences between conceptual schemes.
Eric Lenneberg, Noam Chomsky, and Steven Pinker have also criticized Whorf for failing to be sufficiently clear in his formulation of how language influences thought, and for failing to provide real evidence to support his assumptions. Generally Whorf's arguments took the form of examples that were anecdotal or speculative, and functioned as attempts to show how "exotic" grammatical traits were connected to what were considered equally exotic worlds of thought. Even Whorf's defenders admitted that his writing style was often convoluted and couched in neologisms – attributed to his awareness of language use, and his reluctance to use terminology that might have pre-existing connotations. It has been argued that Whorf was mesmerized by the foreignness of indigenous languages, and exaggerated and idealized them. According to Lakoff, Whorf's tendency to exoticize data must be judged in the historical context: Whorf and the other Boasians wrote at a time in which racism and jingoism were predominant, and when it was unthinkable to many that "savages" had redeeming qualities, or that their languages were comparable in complexity to those of Europe. For this alone, Lakoff argues, Whorf can be considered to be "Not just a pioneer in linguistics, but a pioneer as a human being".
Today many followers of universalist schools of thought continue to oppose the idea of linguistic relativity, seeing it as unsound or even ridiculous. For example, Steven Pinker argues in his book "The Language Instinct" that thought exists prior to language and independently of it, a view also espoused by philosophers of language such as Jerry Fodor, John Locke and Plato. In this interpretation, language is inconsequential to human thought because humans do not think in "natural" language, i.e. any language used for communication. Rather, we think in a meta-language that precedes natural language, which Pinker following Fodor calls "mentalese." Pinker attacks what he calls "Whorf's radical position", declaring, "the more you examine Whorf's arguments, the less sense they make." Scholars of a more "relativist" bent such as John A. Lucy and Stephen C. Levinson have criticized Pinker for misrepresenting Whorf's views and arguing against strawmen.
Linguistic relativity studies have experienced a resurgence since the 1990s, and a series of favorable experimental results have brought Whorfianism back into favor, especially in cultural psychology and linguistic anthropology. The first study directing positive attention towards Whorf's relativist position was George Lakoff's "Women, Fire and Dangerous Things", in which he argued that Whorf had been on the right track in his focus on differences in grammatical and lexical categories as a source of differences in conceptualization. In 1992 psychologist John A. Lucy published two books on the topic, one analyzing the intellectual genealogy of the hypothesis, arguing that previous studies had failed to appreciate the subtleties of Whorf's thinking; they had been unable to formulate a research agenda that would actually test Whorf's claims. Lucy proposed a new research design so that the hypothesis of linguistic relativity could be tested empirically, and to avoid the pitfalls of earlier studies which Lucy claimed had tended to presuppose the universality of the categories they were studying. His second book was an empirical study of the relation between grammatical categories and cognition in the Yucatec Maya language of Mexico.
In 1996 Penny Lee's reappraisal of Whorf's writings was published, reinstating Whorf as a serious and capable thinker. Lee argued that previous explorations of the Sapir–Whorf hypothesis had largely ignored Whorf's actual writings, and consequently asked questions very unlike those Whorf had asked. Also in that year the volume "Rethinking Linguistic Relativity", edited by John J. Gumperz and Stephen C. Levinson, gathered a range of researchers working in psycholinguistics, sociolinguistics and linguistic anthropology to bring renewed attention to the issue of how Whorf's theories could be updated, and a subsequent review of the new direction of the linguistic relativity paradigm cemented the development. Since then considerable empirical research into linguistic relativity has been carried out, especially at the Max Planck Institute for Psycholinguistics, with scholarship motivating two edited volumes of linguistic relativity studies, and at American institutions by scholars such as Lera Boroditsky and Dedre Gentner.
In turn, universalist scholars frequently dismiss as "dull" or "boring" the positive findings of influence of linguistic categories on thought or behavior, which are often subtle rather than spectacular, suggesting that Whorf's excitement about linguistic relativity had promised more spectacular findings than it was able to provide.
Whorf's views have been compared to those of philosophers such as Friedrich Nietzsche and the late Ludwig Wittgenstein, both of whom considered language to have important bearing on thought and reasoning. His hypotheses have also been compared to the views of psychologists such as Lev Vygotsky, whose social constructivism considers the cognitive development of children to be mediated by the social use of language. Vygotsky shared Whorf's interest in gestalt psychology, and he also read Sapir's works. Others have seen similarities between Whorf's work and the ideas of literary theorist Mikhail Bakhtin, who read Whorf and whose approach to textual meaning was similarly holistic and relativistic. Whorf's ideas have also been interpreted as a radical critique of positivist science.
Whorf is best known as the main proponent of what he called the principle of linguistic relativity, but which is often known as "the Sapir–Whorf hypothesis", named for him and Edward Sapir. Whorf never stated the principle in the form of a hypothesis, and the idea that linguistic categories influence perception and cognition was shared by many other scholars before him. But because Whorf, in his articles, gave specific examples of how he saw the grammatical categories of specific languages related to conceptual and behavioral patterns, he pointed towards an empirical research program that has been taken up by subsequent scholars, and which is often called "Sapir–Whorf studies".
Whorf and Sapir both drew explicitly on Albert Einstein's principle of general relativity; hence linguistic relativity refers to the concept of grammatical and semantic categories of a specific language providing a frame of reference as a medium through which observations are made. Following an original observation by Boas, Sapir demonstrated that speakers of a given language perceive sounds that are acoustically different as the same, if the sounds come from the same underlying phoneme and do not contribute to changes in semantic meaning. Furthermore, speakers of languages are attentive to sounds, particularly if the same two sounds come from different phonemes. Such differentiation is an example of how various observational frames of reference lead to different patterns of attention and perception.
Whorf was also influenced by gestalt psychology, believing that languages require their speakers to describe the same events as different gestalt constructions, which he called "isolates from experience". An example is how the action of cleaning a gun is different in English and Shawnee: English focuses on the instrumental relation between two objects and the purpose of the action (removing dirt); whereas the Shawnee language focuses on the movement—using an arm to create a dry space in a hole. The event described is the same, but the attention in terms of figure and ground are different.
If read superficially, some of Whorf's statements lend themselves to the interpretation that he supported linguistic determinism. For example, in an often-quoted passage Whorf writes that "we dissect nature along lines laid down by our native languages" and that observers with different linguistic backgrounds are led to different pictures of the universe.
The statements about the obligatory nature of the terms of language have been taken to suggest that Whorf meant that language completely determined the scope of possible conceptualizations. However neo-Whorfians argue that here Whorf is writing about the terms in which we speak of the world, not the terms in which we think of it. Whorf noted that to communicate thoughts and experiences with members of a speech community speakers must use the linguistic categories of their shared language, which requires moulding experiences into the shape of language to speak them—a process called "thinking for speaking". This interpretation is supported by Whorf's subsequent statement that "No individual is free to describe nature with absolute impartiality, but is constrained by certain modes of interpretation even when he thinks himself most free". Similarly the statement that observers are led to different pictures of the universe has been understood as an argument that different conceptualizations are incommensurable making translation between different conceptual and linguistic systems impossible. Neo-Whorfians argue this to be a misreading since throughout his work one of his main points was that such systems could be "calibrated" and thereby be made commensurable, but only when we become aware of the differences in conceptual schemes through linguistic analysis.
Whorf's study of Hopi time has been the most widely discussed and criticized example of linguistic relativity. In his analysis he argues that there is a relation between how the Hopi people conceptualize time, how they speak of temporal relations, and the grammar of the Hopi language. Whorf's most elaborate argument for the existence of linguistic relativity was based on what he saw as a fundamental difference in the understanding of time as a conceptual category among the Hopi. He argued that the Hopi language, in contrast to English and other SAE languages, does not treat the flow of time as a sequence of distinct countable instances, like "three days" or "five years", but rather as a single process. Because of this difference, the language lacks nouns that refer to units of time. He proposed that the Hopi view of time was fundamental in all aspects of their culture and furthermore explained certain patterns of behavior. In his 1939 memorial essay to Sapir he wrote that “... the Hopi language is seen to contain no words, grammatical forms, construction or expressions that refer directly to what we call 'time', or to past, present, or future...”
Linguist Ekkehart Malotki challenged Whorf's analyses of Hopi temporal expressions and concepts with numerous examples of how the Hopi language refers to time. Malotki argues that in the Hopi language the system of tenses consists of future and non-future, and that the single difference between the three-tense system of European languages and the Hopi system is that the latter combines past and present to form a single category.
Malotki's critique was widely cited as the final piece of evidence in refuting Whorf's ideas and his concept of linguistic relativity, while other scholars defended the analysis of Hopi, arguing that Whorf's claim was not that Hopi lacked words or categories to describe temporality, but that the Hopi concept of time is altogether different from that of English speakers. Whorf described the Hopi categories of tense, noting that time is not divided into past, present and future, as is common in European languages, but rather a single tense refers to both present and past while another refers to events that have not yet happened and may or may not happen in the future. He also described a large array of stems that he called "tensors" which describe aspects of temporality, but without referring to countable units of time as in English and most European languages.
Whorf's distinction between "overt" (phenotypical) and "covert" (cryptotypical) grammatical categories has become widely influential in linguistics and anthropology. British linguist Michael Halliday wrote about Whorf's notion of the "cryptotype", and the conception of "how grammar models reality", that it would "eventually turn out to be among the major contributions of twentieth century linguistics".
Furthermore, Whorf introduced the concept of the allophone, a term that describes positional phonetic variants of a single superordinate phoneme; in doing so he placed a cornerstone in consolidating early phoneme theory. The term was popularized by G. L. Trager and Bernard Bloch in a 1941 paper on English phonology and went on to become part of standard usage within the American structuralist tradition. Whorf considered allophones to be another example of linguistic relativity. The principle of allophony describes how acoustically different sounds can be treated as reflections of a single phoneme in a language. This sometimes makes the different sounds appear similar to native speakers of the language, even to the point that they are unable to distinguish them auditorily without special training. Whorf wrote that: "[allophones] are also relativistic. Objectively, acoustically, and physiologically the allophones of [a] phoneme may be extremely unlike, hence the impossibility of determining what is what. You always have to keep the observer in the picture. What linguistic pattern makes like is like, and what it makes unlike is unlike" (Whorf, 1940).
Central to Whorf's inquiries was the approach later described as metalinguistics by G. L. Trager, who in 1950 published four of Whorf's essays as "Four articles on Metalinguistics". Whorf was crucially interested in the ways in which speakers come to be aware of the language that they use, and become able to describe and analyze language using language itself to do so. Whorf saw that the ability to arrive at progressively more accurate descriptions of the world hinged partly on the ability to construct a metalanguage to describe how language affects experience, and thus to have the ability to calibrate different conceptual schemes. Whorf's endeavors have since been taken up in the development of the study of metalinguistics and metalinguistic awareness, first by Michael Silverstein who published a radical and influential rereading of Whorf in 1979 and subsequently in the field of linguistic anthropology.
Whorf conducted important work on the Uto-Aztecan languages, which Sapir had conclusively demonstrated as a valid language family in 1915. Working first on Nahuatl, Tepecano, and Tohono O'odham, he established familiarity with the language group before he met Sapir in 1928. During Whorf's time at Yale he published several articles on Uto-Aztecan linguistics, such as "Notes on the Tübatulabal language". In 1935 he published "The Comparative Linguistics of Uto-Aztecan", and a review of Kroeber's survey of Uto-Aztecan linguistics. Whorf's work served to further cement the foundations of comparative Uto-Aztecan studies.
The first Native American language Whorf studied was the Uto-Aztecan language Nahuatl, which he studied first from colonial grammars and documents, and which later became the subject of his first field work experience in 1930. Based on his studies of Classical Nahuatl, Whorf argued that Nahuatl was an oligosynthetic language, a typological category that he invented. In Mexico, working with native speakers, he studied the dialects of Milpa Alta and Tepoztlán. His grammar sketch of the Milpa Alta dialect of Nahuatl was not published during his lifetime, but it was published posthumously by Harry Hoijer, became quite influential, and was used as the basic description of "Modern Nahuatl" by many scholars. The description of the dialect is quite condensed and in some places difficult to understand because of Whorf's propensity for inventing his own unique terminology for grammatical concepts, but the work has generally been considered to be technically advanced. He also produced an analysis of the prosody of these dialects, which he related to the history of the glottal stop and vowel length in Nahuan languages. This work was prepared for publication by Lyle Campbell and Frances Karttunen in 1993, who also considered it a valuable description of the two endangered dialects, and the only one of its kind to include detailed phonetic analysis of supra-segmental phenomena.
In Uto-Aztecan linguistics one of Whorf's achievements was to determine the reason the Nahuatl language has the phoneme /tɬ/, not found in the other languages of the family. The existence of /tɬ/ in Nahuatl had puzzled previous linguists and caused Sapir to reconstruct a /tɬ/ phoneme for proto-Uto-Aztecan based only on evidence from Aztecan. In a 1937 paper published in the journal American Anthropologist, Whorf argued that the phoneme resulted from some of the Nahuan or Aztecan languages having undergone a sound change from the original */t/ to /tɬ/ in the position before */a/. This sound law is known as "Whorf's law", considered valid although a more detailed understanding of the precise conditions under which it took place has since been developed.
Also in 1937, Whorf and his friend G. L. Trager, published a paper in which they elaborated on the Azteco-Tanoan language family, proposed originally by Sapir as a family comprising the Uto-Aztecan and the Kiowa-Tanoan languages—(the Tewa and Kiowa languages).
In a series of published and unpublished studies in the 1930s, Whorf argued that Mayan writing was to some extent phonetic. While his work on deciphering the Maya script gained some support from Alfred Tozzer at Harvard, the main authority on Ancient Maya culture, J. E. S. Thompson, strongly rejected Whorf's ideas, saying that Mayan writing lacked a phonetic component and was therefore impossible to decipher based on a linguistic analysis. Whorf argued that it was exactly this reluctance to apply linguistic analysis to the Maya languages that had held the decipherment back. Whorf sought cues to phonetic values within the elements of the specific signs, and never realized that the system was logo-syllabic. Although Whorf's approach to understanding the Maya script is now known to have been misguided, his central claim that the script was phonetic and should be deciphered as such was vindicated by Yuri Knorozov's syllabic decipherment of Mayan writing in the 1950s. | https://en.wikipedia.org/wiki?curid=3355 |
Colin Powell
Colin Luther Powell (born April 5, 1937) is an American politician and retired four-star general in the United States Army. During his military career, Powell also served as National Security Advisor (1987–1989), as Commander of the U.S. Army Forces Command (1989) and as Chairman of the Joint Chiefs of Staff (1989–1993). He played major roles in the invasion of Panama in 1989 and especially the Persian Gulf War against Iraq in 1990–1991. He was the 65th United States Secretary of State, serving under Republican President George W. Bush, and was the first African-American to hold that office. His term was marked by controversy over his inaccurate justification for the United States' invasion of Iraq in 2003, and he stepped down after Bush was reelected in 2004.
Powell was born in New York City in 1937 and was raised in the South Bronx. His parents, Luther and Maud Powell, immigrated to the United States from Jamaica. Powell was educated in the New York City public schools, graduating from the City College of New York (CCNY), where he earned a bachelor's degree in geology. He also participated in ROTC at CCNY and received a commission as an Army second lieutenant upon graduation in June 1958. Powell was a professional soldier for 35 years, during which time he held many command and staff positions and rose to the rank of 4-star general. His last assignment, from October 1, 1989, to September 30, 1993, was as the 12th Chairman of the Joint Chiefs of Staff, the highest military position in the Department of Defense. During this time, he oversaw 28 crises, including Operation Desert Storm in the 1991 Persian Gulf War. He formulated the Powell Doctrine which limits American military action unless it satisfies criteria regarding American national security interests, overwhelming force, and widespread public support.
In retirement, Powell wrote his autobiography, "My American Journey". He pursued a career as a public speaker, addressing audiences across the country and abroad. Prior to his appointment as Secretary of State, Powell was the chairman of America's Promise – The Alliance for Youth, a national nonprofit organization dedicated to mobilizing people from every sector of American life to build the character and competence of young people. Powell is the recipient of numerous U.S. and foreign military awards and decorations. Powell's civilian awards include the Presidential Medal of Freedom (twice), the President's Citizens Medal, the Congressional Gold Medal, the Secretary of State Distinguished Service Medal, and the Secretary of Energy Distinguished Service Medal. Several schools and other institutions have been named in his honor and he holds honorary degrees from universities and colleges across the country.
In 2016, while not a candidate in that year's election, Powell received three electoral votes for the office of President of the United States. On June 7, 2020, Powell announced that he would be voting for former Vice President Joe Biden in the 2020 presidential election.
Powell was born on April 5, 1937, in Harlem, a neighborhood in the New York City borough of Manhattan, to Jamaican immigrants, Maud Arial (née McKoy) and Luther Theophilus Powell. His parents were both of mixed African and Scottish ancestry. Luther worked as a shipping clerk and Maud as a seamstress. Powell was raised in the South Bronx and attended Morris High School, from which he graduated in 1954. (This school has since closed.)
While at school, Powell worked at a local baby furniture store, where he picked up Yiddish from the eastern European Jewish shopkeepers and some of the customers. (He once spoke to a Jewish reporter in Yiddish, much to the man's surprise.) He also served as a Shabbos goy, helping Orthodox families with needed tasks on the Sabbath. He received a Bachelor of Science degree in Geology from the City College of New York in 1958 and has said he was a "C average" student. He later earned an MBA degree from the George Washington University in 1971, after his second tour in Vietnam.
Although his parents pronounced his name "KAH-lin", Powell has pronounced it "KOH-lin" since childhood, after the World War II flyer Colin P. Kelly Jr. Public officials and radio and television reporters have used Powell's preferred pronunciation.
Powell was a professional soldier for 35 years, holding a variety of command and staff positions and rising to the rank of general.
Powell described joining the Reserve Officers' Training Corps (ROTC) during college as one of the happiest experiences of his life; discovering something he loved and could do well, he felt he had "found himself." According to Powell: It was only once I was in college, about six months into college when I found something that I liked, and that was ROTC, Reserve Officer Training Corps in the military. And I not only liked it, but I was pretty good at it. That's what you really have to look for in life, something that you like, and something that you think you're pretty good at. And if you can put those two things together, then you're on the right track, and just drive on. Cadet Powell joined the Pershing Rifles, the ROTC fraternal organization and drill team begun by General John Pershing. Even after he had become a general, Powell kept on his desk a pen set he had won for a drill team competition.
Upon graduation, he received a commission as an Army second lieutenant. After attending basic training at Fort Benning, Powell was assigned to the 48th Infantry, in West Germany, as a platoon leader.
In his autobiography, Powell said he is haunted by the nightmare of the Vietnam War and felt that the leadership was very ineffective.
Captain Powell served a tour in Vietnam as a South Vietnamese Army (ARVN) advisor from 1962 to 1963. While on patrol in a Viet Cong-held area, he was wounded by stepping on a punji stake. The resulting large infection made it difficult for him to walk and caused his foot to swell for a short time, cutting his first tour short.
Powell returned to Vietnam as a major in 1968, serving as assistant chief of staff of operations in the 23rd (Americal) Infantry Division. During this second tour in Vietnam he was decorated with the Soldier's Medal for bravery after he survived a helicopter crash and single-handedly rescued three others, including division commander Major General Charles M. Gettys, from the burning wreckage.
Powell was charged with investigating a detailed letter by 11th Light Infantry Brigade soldier Tom Glen, which backed up rumored allegations of the My Lai Massacre. He wrote: "In direct refutation of this portrayal is the fact that relations between American soldiers and the Vietnamese people are excellent." Powell's assessment would later be described as whitewashing the news of the massacre, and questions about it remained undisclosed to the public. In May 2004, Powell said to television and radio host Larry King, "I was in a unit that was responsible for My Lai. I got there after My Lai happened. So, in war, these sorts of horrible things happen every now and again, but they are still to be deplored."
Powell served a White House Fellowship under President Richard Nixon from 1972 to 1973. During 1975–1976 he attended the National War College, Washington, D.C.
In his autobiography, "My American Journey", Powell named several officers he served under who inspired and mentored him. As a lieutenant colonel serving in South Korea, Powell was very close to General Henry "Gunfighter" Emerson. Powell said he regarded Emerson as one of the most caring officers he ever met. Emerson insisted his troops train at night to fight a possible North Korean attack, and made them repeatedly watch the television film "Brian's Song" to promote racial harmony. Powell always professed that what set Emerson apart was his great love of his soldiers and concern for their welfare. After a race riot occurred, in which African American soldiers almost killed a White officer, Powell was charged by Emerson to crack down on black militants; Powell's efforts led to the discharge of one soldier, and other efforts to reduce racial tensions. During 1976–1977 he commanded the 2nd Brigade of the 101st Airborne Division.
In the early 1980s, Powell served at Fort Carson, Colorado. After he left Fort Carson, Powell became senior military assistant to Secretary of Defense Caspar Weinberger, whom he assisted during the 1983 invasion of Grenada and the 1986 airstrike on Libya.
In 1986, Powell took over the command of V Corps in Frankfurt, Germany, from Robert Lewis "Sam" Wetzel. The next year, he served as United States Deputy National Security Advisor, under Frank Carlucci.
Following the Iran–Contra scandal, Powell became, at the age of 49, Ronald Reagan's National Security Advisor, serving from 1987 to 1989 while retaining his Army commission as a lieutenant general.
In April 1989, after his tenure with the National Security Council, Powell was promoted to four-star general under President George H. W. Bush and briefly served as the Commander in Chief, Forces Command (FORSCOM), headquartered at Fort McPherson, Georgia, overseeing all Army, Army Reserve, and National Guard units in the Continental U.S., Alaska, Hawaii, and Puerto Rico. He became the third general since World War II to reach four-star rank without ever serving as a division commander, joining Dwight D. Eisenhower and Alexander Haig.
Later that year, President George H. W. Bush selected him as Chairman of the Joint Chiefs of Staff.
Powell's last military assignment, from October 1, 1989, to September 30, 1993, was as the 12th Chairman of the Joint Chiefs of Staff, the highest military position in the Department of Defense. At age 52, he became the youngest officer, and first Afro-Caribbean American, to serve in this position. Powell was also the first JCS Chair who received his commission through ROTC.
During this time, he oversaw responses to 28 crises, including the invasion of Panama in 1989 to remove General Manuel Noriega from power and Operation Desert Storm in the 1991 Persian Gulf War. During these events, Powell earned his nickname, "the reluctant warrior." He rarely advocated military intervention as the first solution to an international crisis, and instead usually prescribed diplomacy and containment.
As a military strategist, Powell advocated an approach to military conflicts that maximizes the potential for success and minimizes casualties. A component of this approach is the use of overwhelming force, which he applied to Operation Desert Storm in 1991. His approach has been dubbed the "Powell Doctrine." Powell continued as chairman of the JCS into the Clinton presidency but as a dedicated "realist" he considered himself a bad fit for an administration largely made up of liberal internationalists. He clashed with then-U.S. ambassador to the United Nations Madeleine Albright over the Bosnian crisis, as he opposed any military interventions that didn't involve US interests.
During his chairmanship of the JCS, there was discussion of awarding Powell a fifth star, granting him the rank of General of the Army. But even in the wake of public and Congressional pressure to do so, Clinton-Gore presidential transition team staffers decided against it.
First printed in the August 13, 1989, issue of "Parade" magazine, these are Colin Powell's 13 Rules of Leadership.
Powell's experience in military matters made him a very popular figure with both American political parties. Many Democrats admired his moderate stance on military matters, while many Republicans saw him as a great asset associated with the successes of past Republican administrations. Put forth as a potential Democratic Vice Presidential nominee in the 1992 U.S. presidential election or even potentially replacing Vice President Dan Quayle as the Republican Vice Presidential nominee, Powell eventually declared himself a Republican and began to campaign for Republican candidates in 1995. He was touted as a possible opponent of Bill Clinton in the 1996 U.S. presidential election, possibly capitalizing on a split conservative vote in Iowa and even leading New Hampshire polls for the GOP nomination, but Powell declined, citing a lack of passion for politics. Powell defeated Clinton 50–38 in a hypothetical match-up proposed to voters in the exit polls conducted on Election Day. Despite not standing in the race, Powell won the Republican New Hampshire Vice-Presidential primary on write-in votes.
In 1997, Powell founded America's Promise with the objective of helping children from all socioeconomic sectors. That same year saw the establishment of The Colin L. Powell Center for Leadership and Service. The mission of the Center is to "prepare new generations of publicly engaged leaders from populations previously underrepresented in public service and policy circles, to build a strong culture of civic engagement at City College, and to mobilize campus resources to meet pressing community needs and serve the public good."
Powell was mentioned as a potential candidate in the 2000 U.S. presidential election, but again decided against running. Once Texas Governor George W. Bush secured the Republican nomination, Powell endorsed him for president and spoke at the 2000 Republican National Convention. Bush won the general election and appointed Powell as Secretary of State.
In the electoral college vote count of 2016, Powell received three votes for President from faithless electors from Washington.
As Secretary of State in the Bush administration, Powell was perceived as moderate. Powell was unanimously confirmed by the United States Senate. Over the course of his tenure he traveled less than any other U.S. Secretary of State in 30 years.
On September 11, 2001, Powell was in Lima, Peru, meeting with President Alejandro Toledo and US Ambassador John Hamilton, and attending the special session of the OAS General Assembly that subsequently adopted the Inter-American Democratic Charter. After the September 11 attacks, Powell's job became of critical importance in managing America's relationships with foreign countries in order to secure a stable coalition in the War on Terrorism.
Powell came under fire for his role in building the case for the 2003 Invasion of Iraq. In a press statement on February 24, 2001, he had said that sanctions against Iraq had prevented the development of any weapons of mass destruction by Saddam Hussein. As was the case in the days leading up to the Persian Gulf War, Powell was initially opposed to a forcible overthrow of Saddam, preferring to continue a policy of containment. However, Powell eventually agreed to go along with the Bush administration's determination to remove Saddam. He had often clashed with others in the administration, who were reportedly planning an Iraq invasion even before the September 11 attacks, an insight supported by testimony by former terrorism czar Richard Clarke in front of the 9/11 Commission. The main concession Powell wanted before he would offer his full support for the Iraq War was the involvement of the international community in the invasion, as opposed to a unilateral approach. He was also successful in persuading Bush to take the case of Iraq to the United Nations, and in moderating other initiatives. Powell was placed at the forefront of this diplomatic campaign.
Powell's chief role was to garner international support for a multi-national coalition to mount the invasion. To this end, Powell addressed a plenary session of the United Nations Security Council on February 5, 2003, to argue in favor of military action. Citing numerous anonymous Iraqi defectors, Powell asserted that "there can be no doubt that Saddam Hussein has biological weapons and the capability to rapidly produce more, many more." Powell also stated that there was "no doubt in my mind" that Saddam was working to obtain key components to produce nuclear weapons.
Most observers praised Powell's oratorical skills. However, Britain's "Channel 4 News" reported soon afterwards that a UK intelligence dossier that Powell had referred to as a "fine paper" during his presentation had been based on old material and plagiarized an essay by American graduate student Ibrahim al-Marashi.
A 2004 report by the Iraq Survey Group concluded that the evidence that Powell offered to support the allegation that the Iraqi government possessed weapons of mass destruction (WMDs) was inaccurate.
In an interview with Charlie Rose, Powell contended that prior to his UN presentation, he had merely four days to review the data concerning WMD in Iraq.
A Senate report on intelligence failures would later detail the intense debate that went on behind the scenes on what to include in Powell's speech. State Department analysts had found dozens of factual problems in drafts of the speech. Some of the claims were taken out, but others were left in, such as claims based on the yellowcake forgery. The administration came under fire for having acted on faulty intelligence, particularly intelligence single-sourced to the informant known as Curveball. Powell later recounted how Vice President Dick Cheney had joked with him before he gave the speech, telling him, "You've got high poll ratings; you can afford to lose a few points." Powell's longtime aide-de-camp and Chief of Staff from 1989 to 2003, Colonel Lawrence Wilkerson, later characterized Cheney's view of Powell's mission as to "go up there and sell it, and we'll have moved forward a peg or two. Fall on your damn sword and kill yourself, and I'll be happy, too."
In September 2005, Powell was asked about the speech during an interview with Barbara Walters and responded that it was a "blot" on his record. He went on to say, "It will always be a part of my record. It was painful. It's painful now."
Wilkerson said that he inadvertently participated in a hoax on the American people in preparing Powell's erroneous testimony before the United Nations Security Council.
Because Powell was seen as more moderate than most figures in the administration, he was spared many of the attacks that have been leveled at more controversial advocates of the invasion, such as Donald Rumsfeld and Paul Wolfowitz. At times, infighting among the Powell-led State Department, the Rumsfeld-led Defense Department, and Cheney's office had the effect of polarizing the administration on crucial issues, such as what actions to take regarding Iran and North Korea.
After Saddam Hussein had been deposed, Powell's new role was to once again establish a working international coalition, this time to assist in the rebuilding of post-war Iraq. On September 13, 2004, Powell testified before the Senate Governmental Affairs Committee, acknowledging that the sources who provided much of the information in his February 2003 UN presentation were "wrong" and that it was "unlikely" that any stockpiles of WMDs would be found. Claiming that he was unaware that some intelligence officials questioned the information prior to his presentation, Powell pushed for reform in the intelligence community, including the creation of a national intelligence director who would assure that "what one person knew, everyone else knew."
Additionally, Powell has been critical of other aspects of U.S. foreign policy in the past, such as its support for the 1973 Chilean coup d'état. In one of two separate interviews in 2003, Powell said of the 1973 events: "I can't justify or explain the actions and decisions that were made at that time. It was a different time. There was a great deal of concern about communism in this part of the world. Communism was a threat to the democracies in this part of the world. It was a threat to the United States." In the other interview, however, he stated simply: "With respect to your earlier comment about Chile in the 1970s and what happened with Mr. Allende, it is not a part of American history that we're proud of."
In November the president "forced Powell to resign," according to historian Walter LaFeber. Powell announced his resignation as Secretary of State on November 15, 2004, shortly after Bush was reelected; he had been asked to resign by the president's chief of staff, Andrew Card. Powell announced that he would stay on until the end of Bush's first term or until his replacement was confirmed by the Senate. The following day, Bush nominated National Security Advisor Condoleezza Rice as Powell's successor. News of Powell's departure spurred mixed reactions from politicians around the world — some upset at the loss of a statesman seen as a moderating factor within the Bush administration, but others hoping for Powell's successor to wield more influence within the cabinet.
In mid-November, Powell stated that he had seen new evidence suggesting that Iran was adapting missiles for a nuclear delivery system. The accusation came at the same time as the settlement of an agreement between Iran, the IAEA, and the European Union.
On December 31, 2004, Powell rang in the New Year by pressing a button in Times Square with New York City Mayor Michael Bloomberg to initiate the ball drop and 60-second countdown, ushering in the year 2005. He appeared on the networks that were broadcasting New Year's Eve specials and talked about this honor, as well as being a native of New York City.
Although biographer Jeffrey J. Matthews is highly critical of how Powell misled the United Nations Security Council regarding weapons of mass destruction in Iraq, he credits Powell with a series of achievements at the State Department. These include restoring morale to a demoralized corps of professional diplomats, leading the international HIV/AIDS initiative, resolving a crisis with China, and blocking efforts to tie Saddam Hussein to the 9/11 attacks on the United States.
After retiring from the role of Secretary of State, Powell returned to private life. In April 2005, he was privately telephoned by Republican senators Lincoln Chafee and Chuck Hagel; Powell expressed reservations and mixed reviews about the nomination of John R. Bolton as ambassador to the United Nations, but refrained from advising the senators to oppose Bolton (Powell had clashed with Bolton during Bush's first term). Powell's intervention was viewed as potentially dealing significant damage to Bolton's chances of confirmation. Bolton was ultimately put into the position via a recess appointment because of the strong opposition in the Senate.
On April 28, 2005, an opinion piece in "The Guardian" by Sidney Blumenthal (a former top aide to President Bill Clinton) claimed that Powell was in fact "conducting a campaign" against Bolton because of the acrimonious battles they had had while working together, which among other things had resulted in Powell cutting Bolton out of talks with Iran and Libya after complaints about Bolton's involvement from the British. Blumenthal added that "The foreign relations committee has discovered that Bolton made a highly unusual request and gained access to 10 intercepts by the National Security Agency. Staff members on the committee believe that Bolton was probably spying on Powell, his senior advisors and other officials reporting to him on diplomatic initiatives that Bolton opposed."
In July 2005, Powell joined Kleiner Perkins, a well-known Silicon Valley venture capital firm, with the title of "strategic limited partner."
In September 2005, Powell criticized the response to Hurricane Katrina. Powell said that thousands of people were not properly protected because they were poor, rather than because they were black.
On January 5, 2006, he participated in a meeting at the White House of former Secretaries of Defense and State to discuss United States foreign policy with Bush administration officials. In September 2006, Powell sided with more moderate Senate Republicans in supporting more rights for detainees and opposing President Bush's terrorism bill. He backed Senators John Warner, John McCain and Lindsey Graham in their statement that U.S. military and intelligence personnel in future wars will suffer for abuses committed in 2006 by the U.S. in the name of fighting terrorism. Powell stated that "The world is beginning to doubt the moral basis of [America's] fight against terrorism."
Also in 2006, Powell began appearing as a speaker at a series of motivational events called "Get Motivated", along with former New York Mayor Rudy Giuliani. In his speeches for the tour, he openly criticized the Bush Administration on a number of issues. Powell has been the recipient of mild criticism for his role with "Get Motivated" which has been called a "get-rich-quick-without-much-effort, feel-good schemology."
In 2007, he joined the board of directors of Steve Case's new company Revolution Health. Powell also serves on the Council on Foreign Relations Board of directors.
Powell, in honor of Martin Luther King Day, dropped the ceremonial first puck at a New York Islanders ice hockey game at Nassau Coliseum on January 21, 2008. On November 11, 2008, Powell again dropped the puck in recognition of Military Appreciation Day and Veterans Day.
In 2008, Powell encouraged young people to continue to use new technologies to their advantage in the future. In a speech at the Center for Strategic and International Studies to a room of young professionals, he said, "That's your generation...a generation that is hard-wired digital, a generation that understands the power of the information revolution and how it is transforming the world. A generation that you represent, and you're coming together to share; to debate; to decide; to connect with each other." At this event, he encouraged the next generation to involve themselves politically on the upcoming Next America Project, which uses online debate to provide policy recommendations for the upcoming administration.
In 2008, Powell served as a spokesperson for National Mentoring Month, a campaign held each January to recruit volunteer mentors for at-risk youth.
Soon after Barack Obama's election in 2008, Powell was mentioned as a possible cabinet member, but he was not nominated.
In September 2009, Powell advised President Obama against surging US forces in Afghanistan. The president announced the surge the following December.
On March 14, 2014, Salesforce.com announced that Powell had joined its board of directors.
A liberal Republican, Powell is known for his willingness to support liberal or centrist causes. He is pro-choice regarding abortion, and in favor of "reasonable" gun control. He stated in his autobiography that he supports affirmative action that levels the playing field, without giving a leg up to undeserving persons because of racial issues. Powell was also instrumental in the 1993 implementation of the military's don't ask, don't tell policy, though he later supported its repeal as proposed by Robert Gates and Admiral Mike Mullen in January 2010, saying "circumstances had changed."
The Vietnam War had a profound effect on Powell's views of the proper use of military force. These views are described in detail in the autobiography "My American Journey". The Powell Doctrine, as the views became known, was a central component of U.S. policy in the Persian Gulf War (the first U.S. war in Iraq) and U.S. invasion of Afghanistan (the overthrow of the Taliban regime in Afghanistan following the September 11 attacks). The hallmark of both operations was strong international cooperation, and the use of overwhelming military force.
Powell gained attention in 2004 when, in a conversation with British Foreign Secretary Jack Straw, he reportedly referred to neoconservatives within the Bush administration as "fucking crazies." In addition to being reported in the press (although the expletive was generally censored in the U.S. press), the quotation was used by British journalist James Naughtie in his book, "The Accidental American: Tony Blair and the Presidency", and by former Hong Kong governor Chris Patten in his book, "Cousins and Strangers: America, Britain, and Europe in a New Century".
In a September 2006 letter to Sen. John McCain, General Powell expressed opposition to President Bush's push for military tribunals of those formerly and currently classified as enemy combatants. Specifically, he objected to the effort in Congress to "redefine Common Article 3 of the Geneva Convention." He also asserted: "The world is beginning to doubt the moral basis of our fight against terrorism."
While Powell was wary of a military solution, he supported the decision to invade Iraq after the Bush administration concluded that diplomatic efforts had failed. After his departure from the State Department, Powell repeatedly emphasized his continued support for American involvement in the Iraq War.
At the 2007 Aspen Ideas Festival in Colorado, Powell revealed that he had spent two and a half hours explaining to President Bush "the consequences of going into an Arab country and becoming the occupiers." During this discussion, he insisted that the U.S. appeal to the United Nations first, but if diplomacy failed, he would support the invasion: "I also had to say to him that you are the President, you will have to make the ultimate judgment, and if the judgment is this isn't working and we don't think it is going to solve the problem, then if military action is undertaken I'm with you, I support you."
In a 2008 interview on CNN, Powell reiterated his support for the 2003 decision to invade Iraq in the context of his endorsement of Barack Obama, stating: "My role has been very, very straightforward. I wanted to avoid a war. The president [Bush] agreed with me. We tried to do that. We couldn't get it through the U.N. and when the president made the decision, I supported that decision. And I've never blinked from that. I've never said I didn't support a decision to go to war."
Powell's position on the Iraq War troop surge of 2007 has been less consistent. In December 2006, he expressed skepticism that the strategy would work and whether the U.S. military had enough troops to carry it out successfully. He stated: "I am not persuaded that another surge of troops into Baghdad for the purposes of suppressing this communitarian violence, this civil war, will work." Following his endorsement of Barack Obama in October 2008, however, Powell praised General David Petraeus and U.S. troops, as well as the Iraqi government, concluding that "it's starting to turn around." By mid-2009, he had concluded a surge of U.S. forces in Iraq should have come sooner, perhaps in late 2003. Throughout this period, Powell consistently argued that Iraqi political progress was essential, not just military force.
Powell donated the maximum allowable amount to John McCain's campaign in the summer of 2007 and in early 2008, his name was listed as a possible running mate for Republican nominee McCain's bid during the 2008 U.S. presidential election.
McCain won the Republican presidential nomination, but the Democrats nominated the first black candidate, Senator Barack Obama of Illinois. On October 19, 2008, Powell announced his endorsement of Obama during a "Meet the Press" interview, citing "his ability to inspire, because of the inclusive nature of his campaign, because he is reaching out all across America, because of who he is and his rhetorical abilities", in addition to his "style and substance." He additionally referred to Obama as a "transformational figure." Powell further questioned McCain's judgment in appointing Sarah Palin as the vice presidential candidate, stating that despite the fact that she is admired, "now that we have had a chance to watch her for some seven weeks, I don't believe she's ready to be president of the United States, which is the job of the vice president." He said that Obama's choice for vice-president, Joe Biden, was ready to be president. He also added that he was "troubled" by the "false intimations that Obama was Muslim." Powell stated that "[Obama] is a Christian—he's always been a Christian... But the really right answer is, what if he is? Is there something wrong with being a Muslim in this country? The answer's no, that's not America." Powell then mentioned Kareem Rashad Sultan Khan, a Muslim American soldier in the U.S. Army who served and died in the Iraq War. He later stated, "Over the last seven weeks, the approach of the Republican Party has become narrower and narrower [...] I look at these kind of approaches to the campaign, and they trouble me." Powell concluded his Sunday morning talk show comments, "It isn't easy for me to disappoint Sen. McCain in the way that I have this morning, and I regret that [...] I think we need a transformational figure. I think we need a president who is a generational change and that's why I'm supporting Barack Obama, not out of any lack of respect or admiration for Sen. John McCain." Later in a December 12, 2008, CNN interview with Fareed Zakaria, Powell reiterated his belief that during the last few months of the campaign, Palin pushed the Republican party further to the right and had a polarizing impact on it.
When asked why he is still a Republican on "Meet the Press" he said, "I'm still a Republican. And I think the Republican Party needs me more than the Democratic Party needs me. And you can be a Republican and still feel strongly about issues such as immigration, and improving our education system, and doing something about some of the social problems that exist in our society and our country. I don't think there's anything inconsistent with this."
In a July 2009 CNN interview with John King, Powell expressed concern over the growth of the federal government and the federal budget deficit under President Obama. In September 2010, he criticized the Obama administration for not focusing "like a razor blade" on the economy and job creation. Powell reiterated that Obama was a "transformational figure." In a video that aired on CNN.com in November 2011, Powell said in reference to Barack Obama, "many of his decisions have been quite sound. The financial system was put back on a stable basis."
On October 25, 2012, 12 days before the presidential election, he gave his endorsement to President Obama for re-election during a broadcast of CBS "This Morning". He cited success and forward progress in foreign and domestic policy arenas under the Obama administration, and made the following statement: "I voted for him in 2008 and I plan to stick with him in 2012 and I'll be voting for him and for Vice President Joe Biden next month."
As additional reasons for his endorsement, Powell cited Mitt Romney's shifting positions and perceived lack of thoughtfulness on foreign affairs, as well as concerns about the validity of Romney's economic plans.
In an interview with ABC's Diane Sawyer and George Stephanopoulos during ABC's coverage of President Obama's second inauguration, Powell criticized members of the Republican Party who "demonize[d] the president." He called on GOP leaders to publicly denounce such talk.
Powell has been very vocal on the state of the Republican Party. Speaking at a Washington Ideas forum in early October 2015, he warned the audience that the Republican Party had begun a move to the fringe right, lessening the chances of a Republican White House in the future. He also remarked on Republican presidential candidate Donald Trump's statements regarding immigrants, noting that there were many immigrants working in Trump hotels.
In March 2016, Powell denounced the "nastiness" of the 2016 Republican primaries during an interview on CBS "This Morning". He compared the race to reality television, and stated that the campaign had gone "into the mud."
In August 2016, Powell accused the Hillary Clinton campaign of trying to pin her email controversy on him. Speaking to "People" magazine, Powell said, "The truth is, she was using [the private email server] for a year before I sent her a memo telling her what I did."
On September 13, 2016, emails were obtained that revealed Powell's private communications regarding both Donald Trump and Hillary Clinton. Powell privately reiterated his comments regarding Clinton's email scandal, writing, "I have told Hillary's minions repeatedly that they are making a mistake trying to drag me in, yet they still try," and complaining that "Hillary's mafia keeps trying to suck me into it" in another email. In another email discussing Clinton's controversy, Powell said she should have told everyone what she did "two years ago", and said that she has not "been covering herself with glory." Writing on the 2012 Benghazi attack controversy surrounding Clinton, Powell said to then U.S. Ambassador Susan Rice, "Benghazi is a stupid witch hunt." Commenting on Clinton in a general sense, Powell mused that "Everything [Clinton] touches she kind of screws up with hubris", and in another email stated "I would rather not have to vote for her, although she is a friend I respect."
Powell referred to Donald Trump as a "national disgrace", with "no sense of shame." He wrote of Trump's role in the birther movement, which he referred to as "racist." Powell suggested that the media ignore Trump, saying, "To go on and call him an idiot just emboldens him." The emails were obtained by the media as the result of a hack.
Powell endorsed Clinton on October 25, 2016, stating it was "because I think she's qualified, and the other gentleman is not qualified."
Despite not running in the election, Powell received three electoral votes for president from faithless electors in Washington who had pledged to vote for Clinton, coming in third overall. After Barack Obama, Powell was only the second Black person to receive electoral votes in a presidential election. He was also the first Republican since 1984 to receive electoral votes from Washington in a presidential election, as well as the first Black Republican to do so.
In an interview in October 2019, Powell warned that the GOP needed to "get a grip" and put the country before their party, standing up to President Trump rather than worrying about political fallout. "When they see things that are not right, they need to say something about it because our foreign policy is in shambles right now, in my humble judgment, and I see things happening that are hard to understand," Powell said. On June 7, 2020, Powell announced he would be voting for Joe Biden in the 2020 presidential election.
Powell married Alma Johnson on August 25, 1962. Their son, Michael Powell, was the chairman of the Federal Communications Commission (FCC) from 2001 to 2005. His daughters are Linda Powell, an actress, and Annemarie Powell. As a hobby, Powell restores old Volvo and Saab cars. In 2013, he faced questions about a relationship with a Romanian diplomat, after a hacked AOL email account had been made public. He acknowledged a "very personal" email relationship but denied further involvement.
Powell's civilian awards include two Presidential Medals of Freedom (the second with distinction), the President's Citizens Medal, the Congressional Gold Medal, the Secretary of State Distinguished Service Medal, the Secretary of Energy Distinguished Service Medal, and the Ronald Reagan Freedom Award. Several schools and other institutions have been named in his honor and he holds honorary degrees from universities and colleges across the country.
Azure, two swords in saltire points downwards between four mullets Argent, on a chief of the Second a lion passant Gules. On a wreath of the Liveries is set for Crest the head of an American bald-headed eagle erased Proper. And in an escrol over the same this motto, "DEVOTED TO PUBLIC SERVICE."
The swords and stars refer to the former general's career, as does the crest, which is the badge of the 101st Airborne (which he served as a brigade commander in the mid-1970s). The lion may be an allusion to Scotland. The shield can be shown surrounded by the insignia of an honorary Knight Commander of the Most Honorable Order of the Bath (KCB), an award the General received after the first Gulf War. | https://en.wikipedia.org/wiki?curid=6984 |
Chlorophyll
Chlorophyll (also chlorophyl) is any of several related green pigments found in the mesosomes of cyanobacteria, as well as in the chloroplasts of algae and plants. Its name is derived from the Greek words "khloros" ("pale green") and "phyllon" ("leaf"). Chlorophyll is essential in photosynthesis, allowing plants to absorb energy from light.
Chlorophylls absorb light most strongly in the blue portion of the electromagnetic spectrum, as well as the red portion. Conversely, they are poor absorbers of the green and near-green portions of the spectrum, which they reflect, producing the green color of chlorophyll-containing tissues. Two types of chlorophyll exist in the photosystems of green plants: chlorophyll a and chlorophyll b.
Chlorophyll was first isolated and named by Joseph Bienaimé Caventou and Pierre Joseph Pelletier in 1817. | https://en.wikipedia.org/wiki?curid=6985 |
Carotene
The term carotene (also carotin, from the Latin "carota", "carrot") is used for many related unsaturated hydrocarbon substances having the formula C40Hx, which are synthesized by plants but in general cannot be made by animals (with the exception of some aphids and spider mites which acquired the synthesizing genes from fungi). Carotenes are photosynthetic pigments important for photosynthesis. Carotenes contain no oxygen atoms. They absorb ultraviolet, violet, and blue light and scatter orange or red light, and (in low concentrations) yellow light.
Carotenes are responsible for the orange colour of the carrot, for which this class of chemicals is named, and for the colours of many other fruits, vegetables and fungi (for example, sweet potatoes, chanterelle and orange cantaloupe melon). Carotenes are also responsible for the orange (but not all of the yellow) colours in dry foliage. They also (in lower concentrations) impart the yellow coloration to milk-fat and butter. Omnivorous animal species which are relatively poor converters of coloured dietary carotenoids to colourless retinoids have yellow-coloured body fat, as a result of the carotenoid retention from the vegetable portion of their diet. The typical yellow-coloured fat of humans and chickens is a result of fat storage of carotenes from their diets.
Carotenes contribute to photosynthesis by transmitting the light energy they absorb to chlorophyll. They also protect plant tissues by helping to absorb the energy from singlet oxygen, an excited form of the oxygen molecule O2 which is formed during photosynthesis.
β-Carotene is composed of two retinyl groups, and is broken down in the mucosa of the human small intestine by β-carotene 15,15'-monooxygenase to retinal, a form of vitamin A. β-Carotene can be stored in the liver and body fat and converted to retinal as needed, thus making it a form of vitamin A for humans and some other mammals. The carotenes α-carotene and γ-carotene, due to their single retinyl group (β-ionone ring), also have some vitamin A activity (though less than β-carotene), as does the xanthophyll carotenoid β-cryptoxanthin. All other carotenoids, including lycopene, have no beta-ring and thus no vitamin A activity (although they may have antioxidant activity and thus biological activity in other ways).
Animal species differ greatly in their ability to convert retinyl (beta-ionone) containing carotenoids to retinals. Carnivores in general are poor converters of dietary ionone-containing carotenoids. Pure carnivores such as ferrets lack β-carotene 15,15'-monooxygenase and cannot convert any carotenoids to retinals at all (resulting in carotenes not being a form of vitamin A for this species); while cats can convert a trace of β-carotene to retinol, although the amount is totally insufficient for meeting their daily retinol needs.
Chemically, carotenes are polyunsaturated hydrocarbons containing 40 carbon atoms per molecule, variable numbers of hydrogen atoms, and no other elements. Some carotenes are terminated by hydrocarbon rings, on one or both ends of the molecule. All are coloured to the human eye, due to extensive systems of conjugated double bonds. Structurally carotenes are tetraterpenes, meaning that they are synthesized biochemically from four 10-carbon terpene units, which in turn are formed from eight 5-carbon isoprene units.
Carotenes are found in plants in two primary forms designated by characters from the Greek alphabet: alpha-carotene (α-carotene) and beta-carotene (β-carotene). Gamma-, delta-, epsilon-, and zeta-carotene (γ, δ, ε, and ζ-carotene) also exist. Since they are hydrocarbons, and therefore contain no oxygen, carotenes are fat-soluble and insoluble in water (in contrast with other carotenoids, the xanthophylls, which contain oxygen and thus are less chemically hydrophobic).
The following foods contain carotenes in appreciable amounts:
Absorption from these foods is enhanced if eaten with fats, as carotenes are fat soluble, and if the food is cooked for a few minutes until the plant cell wall splits and the color is released into any liquid. 12 μg of dietary β-carotene supplies the equivalent of 1 μg of retinol, and 24 μg of α-carotene or β-cryptoxanthin provides the equivalent of 1 μg of retinol.
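As a worked illustration of these conversion ratios, the short Python sketch below computes the retinol equivalent of a mix of dietary carotenoids. Only the 12:1 and 24:1 ratios quoted above come from the text; the intake figures and function name are hypothetical.

```python
# Back-of-envelope conversion of provitamin A carotenoids to retinol
# equivalents, using the 12:1 and 24:1 ratios quoted above.
# The example intake values are illustrative, not measured data.

CONVERSION_UG_PER_UG_RETINOL = {
    "beta_carotene": 12.0,
    "alpha_carotene": 24.0,
    "beta_cryptoxanthin": 24.0,
}

def retinol_equivalents(intake_ug: dict) -> float:
    """Return micrograms of retinol supplied by the given carotenoid intakes."""
    return sum(
        amount / CONVERSION_UG_PER_UG_RETINOL[name]
        for name, amount in intake_ug.items()
    )

if __name__ == "__main__":
    # Hypothetical daily intake in micrograms.
    intake = {"beta_carotene": 6000.0, "alpha_carotene": 600.0}
    print(f"~{retinol_equivalents(intake):.0f} ug retinol equivalent")
    # 6000/12 + 600/24 = 500 + 25 = 525 ug
```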
The two primary isomers of carotene, α-carotene and β-carotene, differ in the position of a double bond (and thus a hydrogen) in the cyclic group at one end of the molecule.
β-Carotene is the more common form and can be found in yellow, orange, and green leafy fruits and vegetables. As a rule of thumb, the greater the intensity of the orange colour of the fruit or vegetable, the more β-carotene it contains.
Carotene protects plant cells against the destructive effects of ultraviolet light. β-Carotene is an antioxidant.
An article from the American Cancer Society states that the Cancer Research Campaign has called for warning labels on β-carotene supplements to caution smokers that such supplements may increase the risk of lung cancer.
The New England Journal of Medicine published an article in 1994 about a trial which examined the relationship between daily supplementation of β-carotene and vitamin E (α-tocopherol) and the incidence of lung cancer. The study was done using supplements and researchers were aware of the epidemiological correlation between carotenoid-rich fruits and vegetables and lower lung cancer rates. The research concluded that no reduction in lung cancer was found in the participants using these supplements, and furthermore, these supplements may, in fact, have harmful effects.
The Journal of the National Cancer Institute and The New England Journal of Medicine published articles in 1996 about a trial with a goal to determine if vitamin A (in the form of retinyl palmitate) and β-carotene (at about 30 mg/day, which is 10 times the Reference Daily Intake) supplements had any beneficial effects to prevent cancer. The results indicated an "increased" risk of lung and prostate cancers for the participants who consumed the β-carotene supplement and who had lung irritation from smoking or asbestos exposure, causing the trial to be stopped early.
A review of all randomized controlled trials in the scientific literature by the Cochrane Collaboration published in "JAMA" in 2007 found that synthetic β-carotene "increased" mortality by 1–8% (Relative Risk 1.05, 95% confidence interval 1.01–1.08). However, this meta-analysis included two large studies of smokers, so it is not clear that the results apply to the general population. The review only studied the influence of synthetic antioxidants and the results should not be translated to potential effects of fruits and vegetables.
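To see how the 1–8% range maps onto the reported relative risk and its confidence interval, the following minimal sketch applies the standard conversion excess risk = (RR − 1) × 100; only the values quoted above are used.

```python
# Convert a relative risk and its 95% CI into percentage increases over baseline.
# Values below are the ones quoted in the Cochrane/JAMA review above.

rr_point, rr_low, rr_high = 1.05, 1.01, 1.08

def excess_percent(rr: float) -> float:
    """Percentage increase over baseline implied by a relative risk."""
    return (rr - 1.0) * 100.0

print(f"point estimate: +{excess_percent(rr_point):.0f}%")                            # +5%
print(f"95% CI: +{excess_percent(rr_low):.0f}% to +{excess_percent(rr_high):.0f}%")   # +1% to +8%
```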
A recent report demonstrated that 50 mg of β-carotene every other day prevented cognitive decline in a study of over 4,000 physicians, with a mean treatment duration of 18 years.
Oral β-carotene is prescribed to people suffering from erythropoietic protoporphyria. It provides them some relief from photosensitivity.
Carotenemia or hypercarotenemia is excess carotene, but unlike excess vitamin A, carotene is non-toxic. Although hypercarotenemia is not particularly dangerous, it can lead to an oranging of the skin (carotenodermia), but not the conjunctiva of eyes (thus easily distinguishing it visually from jaundice). It is most commonly associated with consumption of an abundance of carrots, but it also can be a medical sign of more dangerous conditions.
β-Carotene and lycopene molecules can be encapsulated into carbon nanotubes enhancing the optical properties of carbon nanotubes. Efficient energy transfer occurs between the encapsulated dye and nanotube — light is absorbed by the dye and without significant loss is transferred to the single wall carbon nanotube (SWCNT). Encapsulation increases chemical and thermal stability of carotene molecules; it also allows their isolation and individual characterization.
Most of the world's synthetic supply of carotene comes from a manufacturing complex located in Freeport, Texas and owned by DSM. The other major supplier BASF also uses a chemical process to produce β-carotene. Together these suppliers account for about 85% of the β-carotene on the market. In Spain Vitatene produces natural β-carotene from fungus Blakeslea trispora, as does DSM but at much lower amount when compared to its synthetic β-carotene operation. In Australia, organic β-carotene is produced by Aquacarotene Limited from dried marine algae "Dunaliella salina" grown in harvesting ponds situated in Karratha, Western Australia. BASF Australia is also producing β-carotene from microalgae grown in two sites in Australia that are the world's largest algae farms. In Portugal, the industrial biotechnology company Biotrend is producing natural all-"trans"-β-carotene from a non-genetically-modified bacteria of the genus "Sphingomonas" isolated from soil.
Carotenes are also found in palm oil, corn, and in the milk of dairy cows, causing cow's milk to be light yellow, depending on the feed of the cattle, and the amount of fat in the milk (high-fat milks, such as those produced by Guernsey cows, tend to be yellower because their fat content causes them to contain more carotene).
Carotenes are also found in some species of termites, where they apparently have been picked up from the diet of the insects.
There are currently two commonly used methods of total synthesis of β-carotene. The first was developed by BASF and is based on the Wittig reaction, with Wittig himself as patent holder.
The second is a Grignard reaction, elaborated by Hoffman-La Roche from the original synthesis of Inhoffen et al. They are both symmetrical; the BASF synthesis is C20 + C20, and the Hoffman-La Roche synthesis is C19 + C2 + C19.
Carotenes are carotenoids containing no oxygen. Carotenoids containing some oxygen are known as xanthophylls.
The two ends of the β-carotene molecule are structurally identical, and are called β-rings. Specifically, the group of nine carbon atoms at each end form a β-ring.
The α-carotene molecule has a β-ring at one end; the other end is called an ε-ring. There is no such thing as an "α-ring".
These and similar names for the ends of the carotenoid molecules form the basis of a systematic naming scheme, according to which:
ζ-Carotene is the biosynthetic precursor of neurosporene, which is the precursor of lycopene, which, in turn, is the precursor of the carotenes α through ε.
Carotene is also used as a substance to colour products such as juice, cakes, desserts, butter and margarine. It is approved for use as a food additive in the EU (listed as additive E160a), in Australia and New Zealand (listed as 160a), and in the US. | https://en.wikipedia.org/wiki?curid=6986 |
Cyclic adenosine monophosphate
Cyclic adenosine monophosphate (cAMP, cyclic AMP, or 3',5'-cyclic adenosine monophosphate) is a second messenger important in many biological processes. cAMP is a derivative of adenosine triphosphate (ATP) and used for intracellular signal transduction in many different organisms, conveying the cAMP-dependent pathway. It should not be confused with 5'-AMP-activated protein kinase (AMP-activated protein kinase).
Earl Sutherland of Vanderbilt University won a Nobel Prize in Physiology or Medicine in 1971 "for his discoveries concerning the mechanisms of the action of hormones", especially epinephrine, via second messengers (such as cyclic adenosine monophosphate, cyclic AMP).
Cyclic AMP is synthesized from ATP by adenylate cyclase located on the inner side of the plasma membrane and anchored at various locations in the interior of the cell. Adenylate cyclase is "activated" by a range of signaling molecules through the activation of adenylate cyclase stimulatory G (Gs)-protein-coupled receptors. Adenylate cyclase is "inhibited" by agonists of adenylate cyclase inhibitory G (Gi)-protein-coupled receptors. Liver adenylate cyclase responds more strongly to glucagon, and muscle adenylate cyclase responds more strongly to adrenaline.
cAMP decomposition into AMP is catalyzed by the enzyme phosphodiesterase.
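To make the interplay between these two enzymes concrete, here is a minimal toy model assuming simple first-order kinetics; the rate constants, time step, and the notion of a scalar "activation" level are hypothetical illustrations, not measured cellular values. It shows only that the steady-state cAMP level reflects the balance of adenylate cyclase synthesis and phosphodiesterase degradation.

```python
# Toy model of intracellular cAMP turnover: adenylate cyclase synthesizes cAMP
# at a rate scaled by receptor activation, and phosphodiesterase degrades it.
# d[cAMP]/dt = k_syn * activation - k_deg * [cAMP], integrated with Euler steps.

def simulate_camp(activation: float, k_syn: float = 2.0, k_deg: float = 0.5,
                  dt: float = 0.01, steps: int = 2000) -> float:
    """Return the cAMP level after integrating the toy kinetics for steps*dt time units."""
    camp = 0.0
    for _ in range(steps):
        camp += (k_syn * activation - k_deg * camp) * dt
    return camp  # approaches the steady state k_syn * activation / k_deg

if __name__ == "__main__":
    for activation in (0.0, 0.5, 1.0):  # e.g. increasing Gs-coupled receptor occupancy
        print(f"activation={activation:.1f} -> cAMP ~ {simulate_camp(activation):.2f}")
```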
cAMP is a second messenger, used for intracellular signal transduction, such as transferring into cells the effects of hormones like glucagon and adrenaline, which cannot pass through the plasma membrane. It is also involved in the activation of protein kinases. In addition, cAMP binds to and regulates the function of ion channels such as the HCN channels and a few other cyclic nucleotide-binding proteins such as Epac1 and RAPGEF2.
cAMP is associated with kinases function in several biochemical processes, including the regulation of glycogen, sugar, and lipid metabolism.
In eukaryotes, cyclic AMP works by activating protein kinase A (PKA, or cAMP-dependent protein kinase). PKA is normally inactive as a tetrameric holoenzyme, consisting of two catalytic and two regulatory units (C2R2), with the regulatory units blocking the catalytic centers of the catalytic units.
Cyclic AMP binds to specific locations on the regulatory units of the protein kinase, and causes dissociation between the regulatory and catalytic subunits, thus enabling those catalytic units to phosphorylate substrate proteins.
The active subunits catalyze the transfer of phosphate from ATP to specific serine or threonine residues of protein substrates. The phosphorylated proteins may act directly on the cell's ion channels, or may become activated or inhibited enzymes. Protein kinase A can also phosphorylate specific proteins that bind to promoter regions of DNA, causing increases in transcription. Not all protein kinases respond to cAMP. Several classes of protein kinases, including protein kinase C, are not cAMP-dependent.
Further effects mainly depend on cAMP-dependent protein kinase, and they vary based on the type of cell.
Still, there are some minor PKA-independent functions of cAMP, e.g., activation of calcium channels, providing a minor pathway by which growth hormone-releasing hormone causes a release of growth hormone.
However, the view that the majority of the effects of cAMP are controlled by PKA is an outdated one. In 1998 a family of cAMP-sensitive proteins with guanine nucleotide exchange factor (GEF) activity was discovered. These are termed Exchange proteins activated by cAMP (Epac) and the family comprises Epac1 and Epac2. The mechanism of activation is similar to that of PKA: the GEF domain is usually masked by the N-terminal region containing the cAMP binding domain. When cAMP binds, the domain dissociates and exposes the now-active GEF domain, allowing Epac to activate small Ras-like GTPase proteins, such as Rap1.
In the species "Dictyostelium discoideum", cAMP acts outside the cell as a secreted signal. The chemotactic aggregation of cells is organized by periodic waves of cAMP that propagate between cells over distances as large as several centimetres. The waves are the result of a regulated production and secretion of extracellular cAMP and a spontaneous biological oscillator that initiates the waves at centers of territories.
In bacteria, the level of cAMP varies depending on the medium used for growth. In particular, cAMP is low when glucose is the carbon source. This occurs through inhibition of the cAMP-producing enzyme, adenylate cyclase, as a side-effect of glucose transport into the cell. The transcription factor cAMP receptor protein (CRP) also called CAP (catabolite gene activator protein) forms a complex with cAMP and thereby is activated to bind to DNA. CRP-cAMP increases expression of a large number of genes, including some encoding enzymes that can supply energy independent of glucose.
cAMP, for example, is involved in the positive regulation of the lac operon. In an environment with a low glucose concentration, cAMP accumulates and binds to the allosteric site on CRP (cAMP receptor protein), a transcription activator protein. The protein assumes its active shape and binds to a specific site upstream of the lac promoter, making it easier for RNA polymerase to bind to the adjacent promoter to start transcription of the lac operon, increasing the rate of lac operon transcription. With a high glucose concentration, the cAMP concentration decreases, and the CRP disengages from the lac operon.
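As a rough, qualitative illustration of this catabolite-repression logic (not a quantitative model), the toy sketch below encodes the rules described above; the thresholds, units, and function name are hypothetical.

```python
# Toy rule-based sketch of lac operon regulation: CRP is activated when cAMP is
# high (glucose scarce), and transcription is strong only when CRP-cAMP is bound
# AND lactose is present to relieve the lac repressor.

def lac_operon_output(glucose: float, lactose: float) -> str:
    camp_high = glucose < 0.2           # low glucose -> cAMP accumulates
    crp_bound = camp_high               # CRP-cAMP complex binds upstream of the lac promoter
    repressor_released = lactose > 0.2  # allolactose inactivates the lac repressor
    if crp_bound and repressor_released:
        return "high transcription"
    if repressor_released:
        return "basal transcription"    # RNA polymerase binds poorly without CRP-cAMP
    return "repressed"

if __name__ == "__main__":
    for glucose, lactose in [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.0, 0.0)]:
        print(f"glucose={glucose}, lactose={lactose} -> {lac_operon_output(glucose, lactose)}")
```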
Since cyclic AMP is a second messenger and plays a vital role in cell signalling, it has been implicated in various disorders, including but not limited to those described below:
Some research has suggested that a deregulation of cAMP pathways and an aberrant activation of cAMP-controlled genes is linked to the growth of some cancers.
Recent research suggests that cAMP affects the function of higher-order thinking in the prefrontal cortex through its regulation of ion channels called hyperpolarization-activated cyclic nucleotide-gated (HCN) channels. When cAMP stimulates the HCN channels, they open, closing the brain cell to communication and thus interfering with the function of the prefrontal cortex. This research is of particular interest to researchers studying cognitive deficits in age-related illnesses and ADHD. | https://en.wikipedia.org/wiki?curid=6988 |
Corporatocracy
Corporatocracy (from "corporate" and the Greek-derived suffix "-cracy", meaning "rule"; short form corpocracy) is a recent term used to refer to an economic and political system controlled by corporations or corporate interests. It is a form of plutocracy.
The concept has been used in explanations of bank bailouts, excessive pay for CEOs as well as complaints such as the exploitation of national treasuries, people and natural resources. It has been used by critics of globalization, sometimes in conjunction with criticism of the World Bank or unfair lending practices as well as criticism of "free trade agreements".
Historian Howard Zinn argues that during the Gilded Age in the United States, the U.S. government was acting exactly as Karl Marx described capitalist states: "pretending neutrality to maintain order, but serving the interests of the rich".
According to economist Joseph Stiglitz, there has been a severe increase in market power of corporations, largely due to U.S. antitrust laws being weakened by neoliberal reforms, leading to growing income inequality and a generally underperforming economy. He states that to improve the economy, it is necessary to decrease the influence of money on U.S. politics.
In his 1956 book "The Power Elite", sociologist C. Wright Mills states that, together with the military and political establishment, leaders of the biggest corporations form a "power elite" that is in control of the U.S.
Economist Jeffrey Sachs described the United States as a corporatocracy in "The Price of Civilization" (2011). He suggested that it arose from four trends: weak national parties and strong political representation of individual districts, the large U.S. military establishment after World War II, large corporations using money to finance election campaigns, and globalization tilting the balance of power away from workers.
In 2013, economist Edmund Phelps criticised the economic system of the U.S. and other western countries in recent decades as being what he calls "the new corporatism", which he characterises as a system in which the state is far too involved in the economy, tasked with "protecting everyone against everyone else", but in which at the same time big companies have a great deal of influence on the government, with lobbyists' suggestions being "welcome, especially if they come with bribes".
During the Gilded Age in the United States, corruption was rampant as business leaders spent significant amounts of money ensuring that government did not regulate their activities.
Corporations have significant influence on the regulations and regulators that monitor them. For example, Senator Elizabeth Warren explained in December 2014 how an omnibus spending bill required to fund the government was modified late in the process to weaken banking regulations. The modification made it easier to allow taxpayer-funded bailouts of banking "swaps entities", which the Dodd-Frank banking regulations prohibited. She singled out Citigroup, one of the largest banks, which had a role in modifying the legislation. She also explained how both Wall Street bankers and members of the government that formerly had worked on Wall Street stopped bi-partisan legislation that would have broken up the largest banks. She repeated President Theodore Roosevelt's warnings regarding powerful corporate entities that threatened the "very foundations of Democracy."
In a 2015 interview, former President Jimmy Carter stated that the United States is now "an oligarchy with unlimited political bribery" due to the "Citizens United v. FEC" ruling which effectively removed limits on donations to political candidates. Wall Street spent a record $2 billion trying to influence the 2016 United States elections.
With regard to income inequality, the 2014 income analysis of University of California, Berkeley economist Emmanuel Saez confirms that relative growth of income and wealth is not occurring among small and mid-sized entrepreneurs and business owners (who generally populate the lower half of the top one percent in income), but instead only among the top 0.1 percent of the income distribution, who earn $2,000,000 or more every year.
Corporate power can also increase income inequality. Nobel laureate in economics Joseph Stiglitz wrote in May 2011: "Much of today’s inequality is due to manipulation of the financial system, enabled by changes in the rules that have been bought and paid for by the financial industry itself—one of its best investments ever. The government lent money to financial institutions at close to zero percent interest and provided generous bailouts on favorable terms when all else failed. Regulators turned a blind eye to a lack of transparency and to conflicts of interest." Stiglitz explained that the top 1% got nearly "one-quarter" of the income and own approximately 40% of the wealth.
Measured relative to GDP, total compensation and its component wages and salaries have been declining since 1970. This indicates a shift in income from labor (persons who derive income from hourly wages and salaries) to capital (persons who derive income via ownership of businesses, land and assets).
Larry Summers estimated in 2007 that the lower 80% of families were receiving $664 billion less income than they would be with a 1979 income distribution, or approximately $7,000 per family. Not receiving this income may have led many families to increase their debt burden, a significant factor in the 2007–2009 subprime mortgage crisis, as highly leveraged homeowners suffered a much larger reduction in their net worth during the crisis. Further, since lower income families tend to spend relatively more of their income than higher income families, shifting more of the income to wealthier families may slow economic growth.
Some large U.S. corporations have used a strategy called tax inversion to change their headquarters to a non-U.S. country to reduce their tax liability. About 46 companies have reincorporated in low-tax countries since 1982, including 15 since 2012. Six more also planned to do so in 2015.
One indication of increasing corporate power was the removal of restrictions on their ability to buy back stock, contributing to increased income inequality. Writing in the "Harvard Business Review" in September 2014, William Lazonick blamed record corporate stock buybacks for reduced investment in the economy and a corresponding impact on prosperity and income inequality. Between 2003 and 2012, the 449 companies in the S&P 500 used 54% of their earnings ($2.4 trillion) to buy back their own stock. An additional 37% was paid to stockholders as dividends. Together, these were 91% of profits. This left little for investment in productive capabilities or higher income for employees, shifting more income to capital rather than labor. He blamed executive compensation arrangements, which are heavily based on stock options, stock awards and bonuses for meeting earnings per share (EPS) targets. EPS increases as the number of outstanding shares decreases. Legal restrictions on buybacks were greatly eased in the early 1980s. He advocates changing these incentives to limit buybacks.
In the 12 months to March 31, 2014, S&P 500 companies increased their stock buyback payouts by 29% year on year, to $534.9 billion. U.S. companies are projected to increase buybacks to $701 billion in 2015 according to Goldman Sachs, an 18% increase over 2014. For scale, annual non-residential fixed investment (a proxy for business investment and a major GDP component) was estimated to be about $2.1 trillion for 2014.
Brid Brennan of the Transnational Institute explained how concentration of corporations increases their influence over government: "It’s not just their size, their enormous wealth and assets that make the TNCs [transnational corporations] dangerous to democracy. It’s also their concentration, their capacity to influence, and often infiltrate, governments and their ability to act as a genuine international social class in order to defend their commercial interests against the common good. It is such decision making power as well as the power to impose deregulation over the past 30 years, resulting in changes to national constitutions, and to national and international legislation which has created the environment for corporate crime and impunity." Brennan concludes that this concentration in power leads to again more concentration of income and wealth.
An example of such industry concentration is in banking. The top 5 U.S. banks had approximately 30% of the U.S. banking assets in 1998; this rose to 45% by 2008 and to 48% by 2010, before falling to 47% in 2011.
The Economist also explained how an increasingly profitable corporate financial and banking sector caused Gini coefficients to rise in the U.S. since 1980: "Financial services' share of GDP in America doubled to 8% between 1980 and 2000; over the same period their profits rose from about 10% to 35% of total corporate profits, before collapsing in 2007–09. Bankers are being paid more, too. In America the compensation of workers in financial services was similar to average compensation until 1980. Now it is twice that average." | https://en.wikipedia.org/wiki?curid=6997 |
Culture of Canada
The culture of Canada embodies the artistic, culinary, literary, humour, musical, political and social elements that are representative of Canada and Canadians. Throughout Canada's history, its culture has been influenced by European culture and traditions, especially British and French, and by its own indigenous cultures. Over time, elements of the cultures of Canada's immigrant populations have become incorporated to form a Canadian cultural mosaic. The population has also been influenced by American culture because of a shared language, proximity, television and migration between the two countries.
Canada is often characterized as being "very progressive, diverse, and multicultural". Canada's federal government has often been described as the instigator of multicultural ideology because of its public emphasis on the social importance of immigration. Canada's culture draws from its broad range of constituent nationalities, and policies that promote a just society are constitutionally protected. Canadian Government policies—such as publicly funded health care; higher and more progressive taxation; outlawing capital punishment; strong efforts to eliminate poverty; an emphasis on cultural diversity; strict gun control; the legalization of same-sex marriage, pregnancy terminations, euthanasia and cannabis — are social indicators of the country's political and cultural values. Canadians identify with the country's institutions of health care, military peacekeeping, the national park system and the "Canadian Charter of Rights and Freedoms".
The Canadian government has influenced culture with programs, laws and institutions. It has created crown corporations to promote Canadian culture through media, such as the Canadian Broadcasting Corporation (CBC) and the National Film Board of Canada (NFB), and promotes many events which it considers to promote Canadian traditions. It has also tried to protect Canadian culture by setting legal minimums on Canadian content in many media using bodies like the Canadian Radio-television and Telecommunications Commission (CRTC).
For thousands of years Canada has been inhabited by indigenous peoples from a variety of different cultures and of several major linguistic groupings. Although not without conflict and bloodshed, early European interactions with First Nations and Inuit populations in what is now Canada were arguably peaceful. First Nations and Métis peoples played a critical part in the development of European colonies in Canada, particularly for their role in assisting European coureur des bois and voyageurs in the exploration of the continent during the North American fur trade. Combined with late economic development in many regions, this comparably nonbelligerent early history allowed indigenous Canadians to have a lasting influence on the national culture (see: The Canadian Crown and Aboriginal peoples). Over the course of three centuries, countless North American Indigenous words, inventions, concepts, and games have become an everyday part of Canadian language and use. Many places in Canada, both natural features and human habitations, use indigenous names. The name "Canada" itself derives from the St. Lawrence Iroquoian word meaning "village" or "settlement". The name of Canada's capital city Ottawa comes from the Algonquin language term "adawe" meaning "to trade".
The French originally settled New France along the shores of the Atlantic Ocean and Saint Lawrence River during the early part of the 17th century. The British conquest of New France during the mid-18th century brought 70,000 Francophones under British rule, creating a need for compromise and accommodation. The migration of 40,000 to 50,000 United Empire Loyalists from the Thirteen Colonies during the American Revolution (1775–1783) brought American colonial influences. Following the War of 1812, a large wave of Irish, Scottish and English settlers arrived in Upper Canada and Lower Canada.
The Canadian Forces and overall civilian participation in the First World War and Second World War helped to foster Canadian nationalism; however, in 1917 and 1944, conscription crises highlighted the considerable rift along ethnic lines between Anglophones and Francophones. As a result of the First and Second World Wars, the Government of Canada became more assertive and less deferential to British authority. Canada until the 1940s saw itself in terms of English and French cultural, linguistic and political identities, and to some extent aboriginal.
Legislative restrictions on immigration (such as the Continuous journey regulation and "Chinese Immigration Act") that had favoured British, American and other European immigrants (such as Dutch, German, Italian, Polish, Swedish and Ukrainian) were amended during the 1960s, resulting in an influx of diverse people from Asia, Africa, and the Caribbean. By the end of the 20th century, immigrants were increasingly Chinese, Indian, Vietnamese, Jamaican, Filipino, Lebanese and Haitian. As of 2006, Canada has grown to have thirty-four ethnic groups with at least one hundred thousand members each, of which eleven have over 1,000,000 people and numerous others are represented in smaller numbers. 16.2% of the population self-identify as a visible minority. The Canadian public as well as the major political parties support immigration.
Themes and symbols of pioneers, trappers, and traders played an important part in the early development of Canadian culture. Modern Canadian culture, as it is understood today, can be traced to its period of westward expansion and nation building. Contributing factors include Canada's unique geography, climate, and cultural makeup. Because Canada is a cold country with long winter nights for most of the year, certain unique leisure activities developed there during this period, including hockey and the embrace of the summer indigenous game of lacrosse.
By the 19th century Canadians came to believe themselves possessed of a unique "northern character," due to the long, harsh winters that only those of hardy body and mind could survive. This hardiness was claimed as a Canadian trait, and such sports as snowshoeing and cross-country skiing that reflected this were asserted as characteristically Canadian. During this period the churches tried to steer leisure activities, by preaching against drinking and scheduling annual revivals and weekly club activities. In a society in which most middle-class families now owned a harmonium or piano, and standard education included at least the rudiments of music, the result was often an original song. Such stirrings frequently occurred in response to noteworthy events, and few local or national excitements were allowed to pass without some musical comment.
By the 1930s radio played a major role in uniting Canadians behind their local or regional teams. Rural areas were especially influenced by sports coverage and the propagation of national myths. Outside the sports and music arena Canadians express the national characteristics of being hard working, peaceful, orderly and polite.
French Canada's early development was relatively cohesive during the 17th and 18th centuries, and this was preserved by the "Quebec Act" of 1774, which allowed Roman Catholics to hold offices and practice their faith. In 1867, the "Constitution Act" was thought to meet the growing calls for Canadian autonomy while avoiding the overly strong decentralization that contributed to the Civil War in the United States. The compromises reached during this time between the English- and French-speaking Fathers of Confederation set Canada on a path to bilingualism which in turn contributed to an acceptance of diversity. The English and French languages have had limited constitutional protection since 1867 and full official status since 1969. Section 133 of the Constitution Act of 1867 (BNA Act) guarantees that both languages may be used in the Parliament of Canada. Canada adopted its "first Official Languages Act" in 1969, giving English and French equal status in the government of Canada. Doing so makes them "official" languages, having preferred status in law over all other languages used in Canada.
Prior to the advent of the "Canadian Bill of Rights" in 1960 and its successor the "Canadian Charter of Rights and Freedoms" in 1982, the laws of Canada did not provide much in the way of civil rights and this issue was typically of limited concern to the courts. Canada since the 1960s has placed emphasis on equality and inclusiveness for all people. Multiculturalism in Canada was adopted as the official policy of the Canadian government and is enshrined in Section 27 of the Canadian Charter of Rights and Freedoms. In 1995, the Supreme Court of Canada ruled in "Egan v. Canada" that sexual orientation should be "read in" to Section Fifteen of the Canadian Charter of Rights and Freedoms, a part of the Constitution of Canada guaranteeing equal rights to all Canadians. Following a series of decisions by provincial courts and the Supreme Court of Canada, on July 20, 2005, the "Civil Marriage Act" (Bill C-38) received Royal Assent, legalizing same-sex marriage in Canada. Furthermore, sexual orientation was included as a protected status in the human-rights laws of the federal government and of all provinces and territories.
Canadian governments at the federal level have a tradition of liberalism, and govern with a moderate, centrist political ideology. Canada's egalitarian approach to governance, emphasizing social justice and multiculturalism, is based on selective immigration, social integration, and suppression of far-right politics, and has wide public and political support. Peace, order, and good government are constitutional goals of the Canadian government.
Canada has a multi-party system in which many of its legislative customs derive from the unwritten conventions of and precedents set by the Westminster parliament of the United Kingdom. The country has been dominated by two parties, the centre-left Liberal Party of Canada and the centre-right Conservative Party of Canada. The historically predominant Liberals position themselves at the centre of the political scale, with the Conservatives sitting on the right and the New Democratic Party occupying the left. Smaller parties like the Quebec nationalist Bloc Québécois and the Green Party of Canada have also been able to exert their influence over the political process by representation at the federal level.
In general, Canadian nationalists are highly concerned about the protection of Canadian sovereignty and loyalty to the Canadian State, placing them in the civic nationalist category. It has likewise often been suggested that anti-Americanism plays a prominent role in Canadian nationalist ideologies. A unified, bi-cultural, tolerant and sovereign Canada remains an ideological inspiration to many Canadian nationalists. Alternatively French Canadian nationalism and support for maintaining French Canadian culture would inspire Quebec nationalists, many of whom were supporters of the Quebec sovereignty movement during the late-20th century.
Cultural protectionism in Canada has, since the mid-20th century, taken the form of conscious, interventionist attempts on the part of various Canadian governments to promote Canadian cultural production. Sharing a large border and (for the majority) a common language with the United States, Canada faces a difficult position in regard to American culture, be it direct attempts at the Canadian market or the general diffusion of American culture in the globalized media arena. While Canada tries to maintain its cultural differences, it also must balance this with responsibility in trade arrangements such as the General Agreement on Tariffs and Trade (GATT) and the North American Free Trade Agreement (NAFTA).
Canadian values are the perceived commonly shared ethical and human values of Canadians. The major political parties have claimed explicitly that they uphold Canadian values, but use generalities to specify them. Historian Ian MacKay argues that, thanks to the long-term political impact of "Rebels, Reds, and Radicals", and allied leftist political elements, "egalitarianism, social equality, and peace... are now often simply referred to...as 'Canadian values.'"
A 2013 Statistics Canada survey found that an "overwhelming majority" of Canadians shared the values of human rights (with 92% of respondents agreeing that they are a shared Canadian value), respect for the law (92%) and gender equality (91%). Universal access to publicly funded health services "is often considered by Canadians as a fundamental value that ensures national health care insurance for everyone wherever they live in the country."
The Canadian Charter of Rights and Freedoms was intended to be a source of Canadian values and national unity. The 15th prime minister, Pierre Trudeau, wrote in his "Memoirs" that:
Numerous scholars, beginning in the 1940s with American sociologist Seymour Martin Lipset, have tried to identify, measure and compare these values with those of other countries, especially the United States. However, critics say that such a task is practically impossible.
Denis Stairs, a professor of political science at Dalhousie University, links the concept of Canadian values with nationalism: "[Canadians typically]...believe, in particular, that they subscribe to a distinctive set of values - 'Canadian' values - and that those values are special in the sense of being unusually virtuous."
Canada's large geographic size, the presence of a significant number of indigenous peoples, the conquest of one European linguistic population by another and relatively open immigration policy have led to an extremely diverse society. As a result, the issue of Canadian identity remains under scrutiny.
Canada has constitutional protection for policies that promote multiculturalism rather than cultural assimilation or a single national myth. In Quebec, cultural identity is strong, and many commentators speak of a French Canadian culture as distinguished from English Canadian culture. However, as a whole, Canada is, in theory, a cultural mosaic—a collection of several regional and ethnic subcultures. Political philosopher Charles Blattberg suggests that Canada is a "multinational country": all Canadians are members of Canada as a civic or political community, a community of citizens, and this is a community that contains many other kinds within it. These include not only communities of ethnic, regional, religious, and civic (the provincial and municipal governments) sorts, but also national communities, which often include or overlap with many of the other kinds.
Journalist and author Richard Gwyn has suggested that "tolerance" has replaced "loyalty" as the touchstone of Canadian identity. Journalist and professor Andrew Cohen wrote in 2007:
Canada's 15th prime minister, Pierre Trudeau, stated with regard to uniformity:
The question of Canadian identity was traditionally dominated by three fundamental themes: first, the often conflicted relations between English Canadians and French Canadians stemming from the French Canadian imperative for cultural and linguistic survival; secondly, the generally close ties between English Canadians and the British Empire, resulting in a gradual political process towards complete independence from the imperial power; and finally, the close proximity of English-speaking Canadians to the United States. Much of the debate over contemporary Canadian identity is argued in political terms, and defines Canada as a country defined by its government policies, which are thought to reflect deeper cultural values.
In 2013, more than 90% of Canadians believed that the "Canadian Charter of Rights and Freedoms" and the national flag were the top symbols of Canadian identity. Next highest were the national anthem, the Royal Canadian Mounted Police and hockey.
Western alienation is the notion that the western provinces have historically been alienated, and in extreme cases excluded, from mainstream Canadian political affairs in favour of Eastern Canada or, more specifically, the central provinces. Western alienation claims that these latter two are politically represented, and economically favoured, more significantly than the former, which has given rise to the sentiment of alienation among many western Canadians. Likewise, the Quebec sovereignty movement of the late 20th century, which led to the Québécois nation being recognized as a "distinct society" within Canada, highlights the sharp divisions between the Anglophone and Francophone populations.
Though more than half of Canadians live in just two provinces, Ontario and Quebec, each province is largely self-contained due to provincial economic self-sufficiency. Only 15 percent of Canadians live in a different province from where they were born, and only 10 percent go to another province for university. Canada has always been like that, standing in sharp contrast to the United States, where internal mobility is much higher: for example, 30 percent of Americans live in a different state from where they were born, and 30 percent go away for university. Scott Gilmore in "Maclean's" argues that "Canada is a nation of strangers", in the sense that for most individuals, the rest of Canada outside their province is little-known. Another factor is the cost of internal travel. Intra-Canadian airfares are high—it is cheaper and more common to visit the United States than to visit another province. Gilmore argues that the mutual isolation makes it difficult to muster national responses to major national issues.
Canadian humour is an integral part of the Canadian Identity. There are several traditions in Canadian humour in both English and French. While these traditions are distinct and at times very different, there are common themes that relate to Canadians' shared history and geopolitical situation in the Western Hemisphere and the world. Various trends can be noted in Canadian comedy. One trend is the portrayal of a "typical" Canadian family in an ongoing radio or television series. Other trends include outright absurdity, and political and cultural satire. Irony, parody, satire, and self-deprecation are arguably the primary characteristics of Canadian humour.
The beginnings of Canadian national radio comedy date to the late 1930s with the debut of "The Happy Gang", a long-running weekly variety show that was regularly sprinkled with corny jokes in between tunes. Canadian television comedy begins with Wayne and Shuster, a sketch comedy duo who performed as a comedy team during the Second World War, and moved their act to radio in 1946 before moving on to television. "Second City Television", otherwise known as "SCTV", "Royal Canadian Air Farce", "This Hour Has 22 Minutes", "The Kids in the Hall" and more recently "Trailer Park Boys" are regarded as television shows which were very influential on the development of Canadian humour. Canadian comedians have had great success in the film industry and are amongst the most recognized in the world.
Humber College in Toronto and the École nationale de l'humour in Montreal offer post-secondary programmes in comedy writing and performance. Montreal is also home to the bilingual (English and French) Just for Laughs festival and to the Just for Laughs Museum, a bilingual, international museum of comedy. Canada has a national television channel, The Comedy Network, devoted to comedy. Many Canadian cities feature comedy clubs and showcases, most notably The Second City branch in Toronto (originally housed at The Old Fire Hall) and the Yuk Yuk's national chain. The Canadian Comedy Awards were founded in 1999 by the Canadian Comedy Foundation for Excellence, a not-for-profit organization.
Predominant symbols of Canada include the maple leaf, beaver, and the Canadian horse. Many official symbols of the country such as the Flag of Canada have been changed or modified over the past few decades to Canadianize them and de-emphasise or remove references to the United Kingdom. Other prominent symbols include the sports of hockey and lacrosse, the Canada goose, the Royal Canadian Mounted Police, the Canadian Rockies, and more recently the totem pole and Inuksuk. Material items such as Canadian beer, maple syrup, tuques, canoes, Nanaimo bars, butter tarts and the Quebec dish of poutine are also defined as uniquely Canadian. Symbols of the Canadian monarchy continue to be featured in, for example, the Arms of Canada, the armed forces, and the prefix Her Majesty's Canadian Ship. The designation "Royal" remains for institutions as varied as the Royal Canadian Armed Forces, Royal Canadian Mounted Police and the Royal Winnipeg Ballet.
Indigenous artists were producing art in the territory that is now called Canada for thousands of years prior to the arrival of European settler colonists and the eventual establishment of Canada as a nation state. Like the peoples that produced them, indigenous art traditions spanned territories that extended across the current national boundaries between Canada and the United States. The majority of indigenous artworks preserved in museum collections date from the period after European contact and show evidence of the creative adoption and adaptation of European trade goods such as metal and glass beads. Canadian sculpture has been enriched by the walrus ivory, muskox horn and caribou antler and soapstone carvings by the Inuit artists. These carvings show objects and activities from the daily life, myths and legends of the Inuit. Inuit art since the 1950s has been the traditional gift given to foreign dignitaries by the Canadian government.
The works of most early Canadian painters followed European trends. During the mid-19th century, Cornelius Krieghoff, a Dutch-born artist in Quebec, painted scenes of the life of the "habitants" (French-Canadian farmers). At about the same time, the Canadian artist Paul Kane painted pictures of indigenous life in western Canada. A group of landscape painters called the Group of Seven developed the first distinctly Canadian style of painting. All these artists painted large, brilliantly coloured scenes of the Canadian wilderness.
Since the 1930s, Canadian painters have developed a wide range of highly individual styles. Emily Carr became famous for her paintings of totem poles in British Columbia. Other noted painters have included the landscape artist David Milne, the painters Jean-Paul Riopelle, Harold Town and Charles Carson and multi-media artist Michael Snow. The abstract art group Painters Eleven, particularly the artists William Ronald and Jack Bush, also had an important impact on modern art in Canada. Government support has played a vital role in their development enabling visual exposure through publications and periodicals featuring Canadian art, as has the establishment of numerous art schools and colleges across the country.
Canadian literature is often divided into French- and English-language literatures, which are rooted in the literary traditions of France and Britain, respectively. Canada's early literature, whether written in English or French, often reflects the Canadian perspective on nature, frontier life, and Canada's position in the world, for example the poetry of Bliss Carman or the memoirs of Susanna Moodie and Catherine Parr Traill. These themes, and Canada's literary history, inform the writing of successive generations of Canadian authors, from Leonard Cohen to Margaret Atwood.
By the mid-20th century, Canadian writers were exploring national themes for Canadian readers. Authors were trying to find a distinctly Canadian voice, rather than merely emulating British or American writers. Canadian identity is closely tied to its literature. The question of national identity recurs as a theme in much of Canada's literature, from Hugh MacLennan's "Two Solitudes" (1945) to Alistair MacLeod's "No Great Mischief" (1999). Canadian literature is often categorized by region or province; by the socio-cultural origins of the author (for example, Acadians, indigenous peoples, LGBT, and Irish Canadians); and by literary period, such as "Canadian postmoderns" or "Canadian Poets Between the Wars".
Canadian authors have accumulated numerous international awards. In 1992, Michael Ondaatje became the first Canadian to win the Man Booker Prize for "The English Patient". Margaret Atwood won the Booker in 2000 for "The Blind Assassin" and Yann Martel won it in 2002 for the "Life of Pi". Carol Shields's "The Stone Diaries" won the Governor General's Awards in Canada in 1993, the 1995 Pulitzer Prize for Fiction, and the 1994 National Book Critics Circle Award. In 2013, Alice Munro was the first Canadian to be awarded the Nobel Prize in Literature for her work as "master of the modern short story". Munro is also a recipient of the Man Booker International Prize for her lifetime body of work, and three-time winner of Canada's Governor General's Award for fiction.
Canada has had a thriving stage theatre scene since the late 1800s. Theatre festivals draw many tourists in the summer months, especially the Stratford Shakespeare Festival in Stratford, Ontario, and the Shaw Festival in Niagara-on-the-Lake, Ontario. The Famous People Players are only one of many touring companies that have also developed an international reputation. Canada also hosts one of the largest fringe festivals, the Edmonton International Fringe Festival.
Canada's largest cities host a variety of modern and historical venues. The Toronto Theatre District is Canada's largest, as well as being the third largest English-speaking theatre district in the world. In addition to original Canadian works, shows from the West End and Broadway frequently tour in Toronto. Toronto's Theatre District includes the venerable Roy Thomson Hall; the Princess of Wales Theatre; the Tim Sims Playhouse; The Second City; the Canon Theatre; the Panasonic Theatre; the Royal Alexandra Theatre; historic Massey Hall; and the city's new opera house, the Sony Centre for the Performing Arts. Toronto's Theatre District also includes the Theatre Museum Canada.
Montreal's theatre district ("Quartier des Spectacles") is the scene of performances that are mainly French-language, although the city also boasts a lively anglophone theatre scene, such as the Centaur Theatre. Large French theatres in the city include Théâtre Saint-Denis and Théâtre du Nouveau Monde.
Vancouver is host to, among others, the Vancouver Fringe Festival, the Arts Club Theatre Company, Carousel Theatre, Bard on the Beach, Theatre Under the Stars and Studio 58.
Calgary is home to Theatre Calgary, a mainstream regional theatre; Alberta Theatre Projects, a major centre for new play development in Canada; the Calgary Animated Objects Society; and One Yellow Rabbit, a touring company.
There are three major theatre venues in Ottawa; the Ottawa Little Theatre, originally called the Ottawa Drama League at its inception in 1913, is the longest-running community theatre company in Ottawa. Since 1969, Ottawa has been the home of the National Arts Centre, a major performing-arts venue that houses four stages and is home to the National Arts Centre Orchestra, the Ottawa Symphony Orchestra and Opera Lyra Ottawa. Established in 1975, the Great Canadian Theatre Company specializes in the production of Canadian plays at a local level.
Canadian television, especially supported by the Canadian Broadcasting Corporation, is the home of a variety of locally produced shows. French-language television, like French Canadian film, is buffered from excessive American influence by the fact of language, and likewise supports a host of home-grown productions. The success of French-language domestic television in Canada often exceeds that of its English-language counterpart. In recent years nationalism has been used to promote products on television. The "I Am Canadian" campaign by Molson beer, most notably the commercial featuring Joe Canadian, fused domestically brewed beer and nationalism.
Canada's television industry is in full expansion as a site for Hollywood productions. Since the 1980s, Canada, and Vancouver in particular, has become known as Hollywood North. The American TV series "Queer as Folk" was filmed in Toronto. Canadian producers have been very successful in the field of science fiction since the mid-1990s, with such shows as "The X-Files", "Stargate SG-1", the new "Battlestar Galactica", "My Babysitter's A Vampire", "Smallville", and "The Outer Limits", all filmed in Vancouver.
The CRTC's Canadian content regulations dictate that a certain percentage of a domestic broadcaster's transmission time must include content that is produced by Canadians, or covers Canadian subjects. These regulations also apply to US cable television channels such as MTV and the Discovery Channel, which have local versions of their channels available on Canadian cable networks. Similarly, BBC Canada, while showing primarily BBC shows from the United Kingdom, also carries Canadian output.
A number of Canadian pioneers in early Hollywood significantly contributed to the creation of the motion picture industry in the early days of the 20th century. Over the years, many Canadians have made enormous contributions to the American entertainment industry, although they are frequently not recognized as Canadians.
Canada has developed a vigorous film industry that has produced a variety of well-known films, actors and actresses. Indeed, its frequent eclipse by Hollywood may sometimes be credited for the bizarre and innovative directions of some works, such as those of auteurs Atom Egoyan ("The Sweet Hereafter", 1997) and David Cronenberg ("The Fly", "Naked Lunch", "A History of Violence") and the "avant-garde" work of Michael Snow and Jack Chambers. Also, the distinct French-Canadian society permits the work of directors such as Denys Arcand and Denis Villeneuve, while First Nations cinema has produced acclaimed films of its own. At the 76th Academy Awards, Arcand's "The Barbarian Invasions" became Canada's first film to win the Academy Award for Best Foreign Language Film.
The National Film Board of Canada is 'a public agency that produces and distributes films and other audiovisual works which reflect Canada to Canadians and the rest of the world'. Canada has produced many popular documentaries such as "The Corporation", "Nanook of the North", and "Final Offer". The Toronto International Film Festival (TIFF) is considered by many to be one of the most prevalent film festivals for Western cinema. It is the première film festival in North America from which the Oscars race begins.
The music of Canada has reflected the multi-cultural influences that have shaped the country. Indigenous peoples, the French, and the British have all made historical contributions to the musical heritage of Canada. The country has produced its own composers, musicians and ensembles since the mid-1600s. From the 17th century onward, Canada has developed a music infrastructure that includes church halls, chamber halls, conservatories, academies, performing arts centres, record companies, radio stations, and television music-video channels. The music has subsequently been heavily influenced by American culture because of proximity and migration between the two countries. Canadian rock has had a considerable impact on the development of modern popular music and the development of the most popular subgenres.
Patriotic music in Canada dates back over 200 years as a distinct category from British patriotism, preceding the first legal steps to independence by over 50 years. The earliest known song, "The Bold Canadian", was written in 1812. The national anthem of Canada, "O Canada" adopted in 1980, was originally commissioned by the Lieutenant Governor of Quebec, the Honourable Théodore Robitaille, for the 1880 Saint-Jean-Baptiste Day ceremony. Calixa Lavallée wrote the music, which was a setting of a patriotic poem composed by the poet and judge Sir Adolphe-Basile Routhier. The text was originally only in French, before English lyrics were written in 1906.
Music broadcasting in the country is regulated by the Canadian Radio-television and Telecommunications Commission (CRTC). The Canadian Academy of Recording Arts and Sciences presents Canada's music industry awards, the Juno Awards, which were first awarded in a ceremony during the summer of 1970.
Canada has a well-developed media sector, but its cultural output—particularly in English films, television shows, and magazines—is often overshadowed by imports from the United States. Television, magazines, and newspapers are primarily for-profit corporations based on advertising, subscription, and other sales-related revenues. Nevertheless, both the television broadcasting and publications sectors require a number of government interventions to remain profitable, ranging from regulation that bars foreign companies in the broadcasting industry to tax laws that limit foreign competition in magazine advertising.
The promotion of multicultural media in Canada began in the late 1980s as the multicultural policy was legislated in 1988. In the "Multiculturalism Act", the federal government proclaimed the recognition of the diversity of Canadian culture. Thus, multicultural media became an integral part of Canadian media overall. Following numerous government reports showing a lack of minority representation or minority misrepresentation, the Canadian government stressed that separate provision be made to allow minorities and ethnicities of Canada to have their own voice in the media.
Sports in Canada consists of a variety of games. Although there are many contests that Canadians value, the most common are ice hockey, box lacrosse, Canadian football, basketball, soccer, curling, baseball and ringette. All but curling and soccer are considered domestic sports as they were either invented by Canadians or trace their roots to Canada.
Ice hockey, referred to as simply "hockey", is Canada's most prevalent winter sport, its most popular spectator sport, and its most successful sport in international competition. It is Canada's official national winter sport. Lacrosse, a sport with indigenous origins, is Canada's oldest and official summer sport. Canadian football is Canada's second most popular spectator sport, and the Canadian Football League's annual championship, the Grey Cup, is the country's largest annual sports event.
While other sports have a larger spectator base, association football, known in Canada as "soccer" in both English and French, has the most registered players of any team sport in Canada, and is the most played sport with all demographics, including ethnic origin, ages and genders. Professional teams exist in many cities in Canada – with a trio of teams in North America's top pro league, Major League Soccer – and international soccer competitions such as the FIFA World Cup, UEFA Euro and the UEFA Champions League attract some of the biggest audiences in Canada. Other popular team sports include curling, street hockey, cricket, rugby league, rugby union, softball and Ultimate frisbee. Popular individual sports include auto racing, boxing, karate, kickboxing, hunting, sport shooting, fishing, cycling, golf, hiking, horse racing, ice skating, skiing, snowboarding, swimming, triathlon, disc golf, water sports, and several forms of wrestling.
As a country with a generally cool climate, Canada has enjoyed greater success at the Winter Olympics than at the Summer Olympics, although significant regional variations in climate allow for a wide variety of both team and individual sports. Great achievements in Canadian sports are recognized by Canada's Sports Hall of Fame, while the Lou Marsh Trophy is awarded annually to Canada's top athlete by a panel of journalists. There are numerous other Sports Halls of Fame in Canada.
Canadian cuisine varies widely depending on the region. The former Canadian prime minister Joe Clark has been paraphrased to have noted: "Canada has a cuisine of cuisines. Not a stew pot, but a smorgasbord." While there are considerable overlaps between Canadian food and the rest of North American cuisine, many unique dishes (or versions of certain dishes) are found and available only in the country. Common contenders for the Canadian national food include poutine and butter tarts. Other popular Canadian-made foods include the indigenous fried bread bannock, French tourtière, Kraft Dinner, ketchup chips, date squares, Nanaimo bars, back bacon, and the Caesar cocktail. Canada is the birthplace and world's largest producer of maple syrup.
The three earliest cuisines of Canada have First Nations, English, and French roots, with the traditional cuisine of English Canada closely related to British and American cuisine, while the traditional cuisine of French Canada has evolved from French cuisine and the winter provisions of fur traders. With subsequent waves of immigration in the 18th and 19th centuries from Central, Southern, and Eastern Europe, and then from Asia, Africa and the Caribbean, the regional cuisines were subsequently augmented. Jewish immigrants to Canada during the late 1800s also played a significant role in shaping foods in Canada. The Montreal-style bagel and Montreal-style smoked meat are both food items originally developed by Jewish communities living in Montreal.
In a 2002 interview with the "Globe and Mail", Aga Khan, the 49th Imam of the Ismaili Muslims, described Canada as "the most successful pluralist society on the face of our globe", citing it as "a model for the world". A 2007 poll ranked Canada as the country with the most positive influence in the world. 28,000 people in 27 countries were asked to rate 12 countries as either having a positive or negative worldwide influence. Canada's overall influence rating topped the list with 54 per cent of respondents rating it mostly positive and only 14 per cent mostly negative. A global opinion poll for the BBC saw Canada ranked the second most positively viewed nation in the world (behind Germany) in 2013 and 2014.
The United States is home to a number of perceptions about Canadian culture, due to the countries' partially shared heritage and the relatively large number of cultural features common to both the US and Canada. For example, the average Canadian may be perceived as more reserved than his or her American counterpart. Canada and the United States are often inevitably compared as sibling countries, and the perceptions that arise from this oft-held contrast have gone on to shape the advertised worldwide identities of both nations: the United States is seen as the rebellious child of the British Crown, forged in the fires of violent revolution; Canada is the calmer offspring of the United Kingdom, known for a more relaxed national demeanour. | https://en.wikipedia.org/wiki?curid=6999 |
List of companies of Canada
Canada is a country in the northern part of North America.
Canada is the world's tenth-largest economy, with a nominal GDP of approximately US$1.52 trillion. It is a member of the Organisation for Economic Co-operation and Development (OECD) and the Group of Seven (G7), and is one of the world's top ten trading nations, with a highly globalized economy. Canada is a mixed economy, ranking above the US and most western European nations on The Heritage Foundation's index of economic freedom, and experiencing a relatively low level of income disparity. The country's average household disposable income per capita is over US$23,900, higher than the OECD average. Furthermore, the Toronto Stock Exchange is the seventh-largest stock exchange in the world by market capitalization, listing over 1,500 companies with a combined market capitalization of over US$2 trillion.
For further information on the types of business entities in this country and their abbreviations, see "Business entities in Canada".
This list shows firms in the Fortune Global 500, which ranks firms by total revenues reported before March 31, 2017. Only the top five firms (if available) are included as a sample.
This list includes notable companies with primary headquarters located in the country. The industry and sector follow the Industry Classification Benchmark taxonomy. Organizations which have ceased operations are included and noted as defunct. | https://en.wikipedia.org/wiki?curid=7000 |
Cauchy distribution
The Cauchy distribution, named after Augustin Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution. The Cauchy distribution $f(x; x_0, \gamma)$ is the distribution of the x-intercept of a ray issuing from $(x_0, \gamma)$ with a uniformly distributed angle. It is also the distribution of the ratio of two independent normally distributed random variables with mean zero.
The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined (but see below). The Cauchy distribution does not have finite moments of order greater than or equal to one; only fractional absolute moments exist. The Cauchy distribution has no moment generating function.
In mathematics, it is closely related to the Poisson kernel, which is the fundamental solution for the Laplace equation in the upper half-plane.
It is one of the few distributions that is stable and has a probability density function that can be expressed analytically, the others being the normal distribution and the Lévy distribution.
Functions with the form of the density function of the Cauchy distribution were studied by mathematicians in the 17th century, but in a different context and under the title of the witch of Agnesi. Despite its name, the first explicit analysis of the properties of the Cauchy distribution was published by the French mathematician Poisson in 1824, with Cauchy only becoming associated with it during an academic controversy in 1853. As such, the name of the distribution is a case of Stigler's Law of Eponymy. Poisson noted that if the mean of observations following such a distribution were taken, the mean error did not converge to any finite number. As such, Laplace's use of the Central Limit Theorem with such a distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast to Bienaymé, who was to engage Cauchy in a long dispute over the matter.
The Cauchy distribution has the probability density function (PDF)
$$f(x; x_0, \gamma) = \frac{1}{\pi\gamma\left[1 + \left(\frac{x - x_0}{\gamma}\right)^{2}\right]} = \frac{1}{\pi}\left[\frac{\gamma}{(x - x_0)^{2} + \gamma^{2}}\right],$$
where $x_0$ is the location parameter, specifying the location of the peak of the distribution, and $\gamma$ is the scale parameter which specifies the half-width at half-maximum (HWHM); alternatively, $2\gamma$ is the full width at half maximum (FWHM). $\gamma$ is also equal to half the interquartile range and is sometimes called the probable error. Augustin-Louis Cauchy exploited such a density function in 1827 with an infinitesimal scale parameter, defining what would now be called a Dirac delta function.
The maximum value or amplitude of the Cauchy PDF is $\frac{1}{\pi\gamma}$, located at $x = x_0$.
It is sometimes convenient to express the PDF in terms of the complex parameter $\psi = x_0 + i\gamma$:
$$f(x; \psi) = \frac{1}{\pi}\,\operatorname{Im}\!\left(\frac{1}{x - \psi}\right).$$
The special case when $x_0 = 0$ and $\gamma = 1$ is called the standard Cauchy distribution, with the probability density function
$$f(x; 0, 1) = \frac{1}{\pi\left(1 + x^{2}\right)}.$$
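As a quick numerical check of the density above, the following minimal Python sketch (assuming NumPy and SciPy are available; the function name and parameter values are illustrative) evaluates the Cauchy PDF directly from the closed-form expression and compares it with `scipy.stats.cauchy`, which uses `loc` for $x_0$ and `scale` for $\gamma$:

```python
import numpy as np
from scipy.stats import cauchy

def cauchy_pdf(x, x0=0.0, gamma=1.0):
    """Cauchy PDF evaluated directly from the closed-form expression."""
    return 1.0 / (np.pi * gamma * (1.0 + ((x - x0) / gamma) ** 2))

x = np.linspace(-5.0, 5.0, 11)
x0, gamma = 1.0, 2.0

direct = cauchy_pdf(x, x0, gamma)
reference = cauchy.pdf(x, loc=x0, scale=gamma)

# The two evaluations should agree to floating-point precision.
assert np.allclose(direct, reference)

# The peak value is 1/(pi*gamma), attained at x = x0.
print(cauchy_pdf(x0, x0, gamma), 1.0 / (np.pi * gamma))
```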
In physics, a three-parameter Lorentzian function is often used:
$$f(x; x_0, \gamma, I) = I\left[\frac{\gamma^{2}}{(x - x_0)^{2} + \gamma^{2}}\right],$$
where $I$ is the height of the peak. The three-parameter Lorentzian function indicated is not, in general, a probability density function, since it does not integrate to 1, except in the special case where $I = \frac{1}{\pi\gamma}.$
The cumulative distribution function of the Cauchy distribution is:
$$F(x; x_0, \gamma) = \frac{1}{\pi}\arctan\!\left(\frac{x - x_0}{\gamma}\right) + \frac{1}{2},$$
and the quantile function (inverse cdf) of the Cauchy distribution is
$$Q(p; x_0, \gamma) = x_0 + \gamma\,\tan\!\left[\pi\left(p - \tfrac{1}{2}\right)\right].$$
It follows that the first and third quartiles are $(x_0 - \gamma,\, x_0 + \gamma)$, and hence the interquartile range is $2\gamma$.
For the standard distribution, the cumulative distribution function simplifies to the arctangent function $\arctan(x)$:
$$F(x; 0, 1) = \frac{1}{\pi}\arctan(x) + \frac{1}{2}.$$
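The closed-form CDF and quantile function are easy to verify numerically. The sketch below (again assuming SciPy; names and values are illustrative) compares the arctangent/tangent expressions with `scipy.stats.cauchy.cdf` and `.ppf`, and confirms that the quartiles sit at $x_0 \pm \gamma$:

```python
import numpy as np
from scipy.stats import cauchy

x0, gamma = 1.0, 2.0

def cauchy_cdf(x):
    return np.arctan((x - x0) / gamma) / np.pi + 0.5

def cauchy_quantile(p):
    return x0 + gamma * np.tan(np.pi * (p - 0.5))

x = np.linspace(-10.0, 10.0, 21)
p = np.array([0.1, 0.25, 0.5, 0.75, 0.9])

assert np.allclose(cauchy_cdf(x), cauchy.cdf(x, loc=x0, scale=gamma))
assert np.allclose(cauchy_quantile(p), cauchy.ppf(p, loc=x0, scale=gamma))

# First and third quartiles are x0 - gamma and x0 + gamma,
# so the interquartile range equals 2 * gamma.
print(cauchy_quantile(np.array([0.25, 0.75])))  # -> [-1.  3.]
```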
The entropy of the Cauchy distribution is given by:
$$H(\gamma) = \log(4\pi\gamma).$$
The derivative of the quantile function, the quantile density function, for the Cauchy distribution is:
$$Q'(p; \gamma) = \gamma\,\pi\,\sec^{2}\!\left[\pi\left(p - \tfrac{1}{2}\right)\right].$$
The differential entropy of a distribution can be defined in terms of its quantile density, specifically:
$$h(X) = \int_{0}^{1} \log\!\left(Q'(p)\right)\,dp.$$
The Cauchy distribution is the maximum entropy probability distribution for a random variate $X$ for which
or, alternatively, for a random variate $X$ for which
In its standard form, it is the maximum entropy probability distribution for a random variate $X$ for which
The Kullback-Leibler divergence between two Cauchy distributions has the following symmetric closed-form formula:
$$D_{\mathrm{KL}}\!\left(p_{x_{0,1},\gamma_{1}} \,\big\|\, p_{x_{0,2},\gamma_{2}}\right) = \log\frac{\left(\gamma_{1} + \gamma_{2}\right)^{2} + \left(x_{0,1} - x_{0,2}\right)^{2}}{4\,\gamma_{1}\gamma_{2}}.$$
The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined. Its mode and median are well defined and are both equal to $x_0$.
When $U$ and $V$ are two independent normally distributed random variables with expected value 0 and variance 1, then the ratio $U/V$ has the standard Cauchy distribution.
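A minimal simulation sketch of this characterization (assuming NumPy and SciPy; the sample size and seed are arbitrary) draws two independent standard normal samples and checks that their ratio is statistically indistinguishable from a standard Cauchy sample using a Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy.stats import cauchy, kstest

rng = np.random.default_rng(0)
n = 100_000

u = rng.standard_normal(n)
v = rng.standard_normal(n)
ratio = u / v  # ratio of two independent N(0, 1) variables

# Compare the empirical distribution of the ratio with the standard Cauchy CDF.
result = kstest(ratio, cauchy.cdf)
print(result.statistic, result.pvalue)  # a large p-value is consistent with Cauchy
```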
If $\Sigma$ is a $p \times p$ positive-semidefinite covariance matrix with strictly positive diagonal entries, then for independent and identically distributed $X, Y \sim N(0, \Sigma)$ and any random $p$-vector $w$ independent of $X$ and $Y$ such that $w_{1} + \cdots + w_{p} = 1$ and $w_{j} \geq 0$ for $j = 1, \ldots, p$ (defining a categorical distribution) it holds that
$$\sum_{j=1}^{p} w_{j}\,\frac{X_{j}}{Y_{j}} \sim \mathrm{Cauchy}(0, 1).$$
If $X_{1}, X_{2}, \ldots, X_{n}$ are independent and identically distributed random variables, each with a standard Cauchy distribution, then the sample mean $\overline{X} = \frac{1}{n}\sum_{i} X_{i}$ has the same standard Cauchy distribution. To see that this is true, compute the characteristic function of the sample mean:
$$\varphi_{\overline{X}}(t) = \mathrm{E}\!\left[e^{i\overline{X}t}\right] = \prod_{j=1}^{n} \mathrm{E}\!\left[e^{iX_{j}t/n}\right] = \left(e^{-|t|/n}\right)^{n} = e^{-|t|},$$
where $\overline{X}$ is the sample mean. This example serves to show that the hypothesis of finite variance in the central limit theorem cannot be dropped. It is also an example of a more generalized version of the central limit theorem that is characteristic of all stable distributions, of which the Cauchy distribution is a special case.
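The failure of averaging to improve matters can be seen directly in simulation. The sketch below (assuming NumPy; the seed and sample sizes are arbitrary) compares sample means of standard Cauchy draws with sample means of standard normal draws as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(42)

for n in (10, 1_000, 100_000):
    cauchy_mean = rng.standard_cauchy(n).mean()
    normal_mean = rng.standard_normal(n).mean()
    # The Cauchy sample mean remains as variable as a single observation,
    # while the normal sample mean concentrates around 0 as n grows.
    print(n, cauchy_mean, normal_mean)
```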
The Cauchy distribution is an infinitely divisible probability distribution. It is also a strictly stable distribution.
The standard Cauchy distribution coincides with the Student's "t"-distribution with one degree of freedom.
Like all stable distributions, the location-scale family to which the Cauchy distribution belongs is closed under linear transformations with real coefficients. In addition, the Cauchy distribution is closed under linear fractional transformations with real coefficients. In this connection, see also McCullagh's parametrization of the Cauchy distributions.
Let $X$ denote a Cauchy distributed random variable. The characteristic function of the Cauchy distribution is given by
$$\varphi_{X}(t) = \mathrm{E}\!\left[e^{iXt}\right] = e^{i x_{0} t - \gamma |t|},$$
which is just the Fourier transform of the probability density. The original probability density may be expressed in terms of the characteristic function, essentially by using the inverse Fourier transform:
$$f(x; x_{0}, \gamma) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \varphi_{X}(t)\, e^{-ixt}\, dt.$$
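One way to illustrate the characteristic function is to compare its closed form with an empirical average of $e^{itX}$ over simulated draws. The following sketch (assuming NumPy; parameter values and sample size are arbitrary) does exactly that, up to Monte Carlo error:

```python
import numpy as np

rng = np.random.default_rng(1)
x0, gamma = 1.0, 2.0

# Cauchy(x0, gamma) variates via a location-scale transform of standard Cauchy draws.
samples = x0 + gamma * rng.standard_cauchy(100_000)

t = np.linspace(-3.0, 3.0, 13)
empirical_cf = np.exp(1j * np.outer(t, samples)).mean(axis=1)
closed_form = np.exp(1j * x0 * t - gamma * np.abs(t))

# The maximum discrepancy should be small (Monte Carlo noise only).
print(np.max(np.abs(empirical_cf - closed_form)))
```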
The "n"th moment of a distribution is the "n"th derivative of the characteristic function evaluated at formula_55. Observe that the characteristic function is not differentiable at the origin: this corresponds to the fact that the Cauchy distribution does not have well-defined moments higher than the zeroth moment.
If a probability distribution has a density function $f(x)$, then the mean, if it exists, is given by
$$\mathrm{E}[X] = \int_{-\infty}^{\infty} x f(x)\,dx. \qquad (1)$$
We may evaluate this two-sided improper integral by computing the sum of two one-sided improper integrals. That is,
$$\int_{-\infty}^{a} x f(x)\,dx + \int_{a}^{\infty} x f(x)\,dx \qquad (2)$$
for an arbitrary real number $a$.
For the integral to exist (even as an infinite value), at least one of the terms in this sum should be finite, or both should be infinite and have the same sign. But in the case of the Cauchy distribution, both the terms in this sum (2) are infinite and have opposite sign. Hence (1) is undefined, and thus so is the mean.
Note that the Cauchy principal value of the mean of the Cauchy distribution is
$$\lim_{a\to\infty}\int_{-a}^{a} x f(x)\,dx,$$
which is zero. On the other hand, the related integral
$$\lim_{a\to\infty}\int_{-2a}^{a} x f(x)\,dx$$
is "not" zero, as can be seen easily by computing the integral. This again shows that the mean (1) cannot exist.
Various results in probability theory about expected values, such as the strong law of large numbers, fail to hold for the Cauchy distribution.
The Cauchy distribution does not have finite moments of any order. Some of the higher raw moments do exist and have a value of infinity, for example the raw second moment (shown here for the standard Cauchy distribution):
$$\mathrm{E}[X^{2}] = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{x^{2}}{1 + x^{2}}\,dx = \frac{1}{\pi}\int_{-\infty}^{\infty}\left(1 - \frac{1}{1 + x^{2}}\right) dx = \infty.$$
By re-arranging the formula, one can see that the second moment is essentially the infinite integral of a constant (here 1). Higher even-powered raw moments will also evaluate to infinity. Odd-powered raw moments, however, are undefined, which is distinctly different from existing with the value of infinity. The odd-powered raw moments are undefined because their values are essentially equivalent to $\infty - \infty$ since the two halves of the integral both diverge and have opposite signs. The first raw moment is the mean, which, being odd, does not exist. (See also the discussion above about this.) This in turn means that all of the central moments and standardized moments are undefined, since they are all based on the mean. The variance—which is the second central moment—is likewise non-existent (despite the fact that the raw second moment exists with the value infinity).
The results for higher moments follow from Hölder's inequality, which implies that higher moments (or halves of moments) diverge if lower ones do.
Consider the truncated distribution defined by restricting the standard Cauchy distribution to a bounded interval. Such a truncated distribution has all moments (and the central limit theorem applies for i.i.d. observations from it); yet for almost all practical purposes it behaves like a Cauchy distribution.
Because the parameters of the Cauchy distribution do not correspond to a mean and variance, attempting to estimate the parameters of the Cauchy distribution by using a sample mean and a sample variance will not succeed. For example, if an i.i.d. sample of size "n" is taken from a Cauchy distribution, one may calculate the sample mean as:
Although the sample values formula_65 will be concentrated about the central value formula_4, the sample mean will become increasingly variable as more observations are taken, because of the increased probability of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the observations themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of formula_4 than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more observations are taken.
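This behavior is easy to see by simulation. The following Python sketch (not from the article; the seed and sample sizes are arbitrary) compares the sample mean and sample median of standard Cauchy draws at increasing sample sizes: the mean keeps fluctuating wildly, while the median stabilizes near the central value.

```python
# Illustrative sketch: the sample mean of Cauchy data does not settle down,
# whereas the sample median does.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_cauchy(1_000_000)

for n in (100, 10_000, 1_000_000):
    sample = data[:n]
    print(f"n={n:>9}: mean={np.mean(sample):10.3f}  median={np.median(sample):7.3f}")
```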
Therefore, more robust means of estimating the central value formula_4 and the scaling parameter formula_5 are needed. One simple method is to take the median value of the sample as an estimator of formula_4 and half the sample interquartile range as an estimator of formula_5. Other, more precise and robust methods have been developed. For example, the truncated mean of the middle 24% of the sample order statistics produces an estimate for formula_4 that is more efficient than using either the sample median or the full sample mean. However, because of the fat tails of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used.
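As an illustration of these simple robust estimators, the following Python sketch (using hypothetical parameter values and an arbitrary seed, not data from the article) computes the sample median, half the interquartile range, and the mean of the middle 24% of the order statistics for simulated Cauchy data.

```python
# Illustrative sketch: robust estimation of the Cauchy location and scale.
import numpy as np

rng = np.random.default_rng(1)
x0, gamma, n = 3.0, 2.0, 10_000            # hypothetical location and scale
sample = x0 + gamma * rng.standard_cauchy(n)

q25, q50, q75 = np.percentile(sample, [25, 50, 75])
loc_median = q50                            # sample median estimates the location
scale_iqr = (q75 - q25) / 2.0               # half the IQR estimates the scale

middle = np.sort(sample)[int(0.38 * n):int(0.62 * n)]   # central 24% of order statistics
loc_trimmed = middle.mean()

print(f"median estimate of location:   {loc_median:.3f}")
print(f"24% truncated-mean estimate:   {loc_trimmed:.3f}")
print(f"half-IQR estimate of scale:    {scale_iqr:.3f}")
```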
Maximum likelihood can also be used to estimate the parameters formula_4 and formula_5. However, this is complicated by the fact that it requires finding the roots of a high-degree polynomial, which can have multiple roots representing local maxima. Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples. The log-likelihood function for the Cauchy distribution for sample size formula_75 is:
Maximizing the log likelihood function with respect to formula_4 and formula_5 produces the following system of equations:
Note that
is a monotone function in formula_5 and that the solution formula_5 must satisfy
Solving just for formula_4 requires solving a polynomial of degree formula_86, and solving just for formula_87 requires solving a polynomial of degree formula_88. Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating formula_4 using the sample median is only about 81% as asymptotically efficient as estimating formula_4 by maximum likelihood. The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of formula_4 as the maximum likelihood estimate. When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for formula_4.
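The following Python sketch illustrates one way such a numerical solution can be carried out. It uses a general-purpose optimizer rather than Newton's method, with the sample median and half the interquartile range as a robust starting point; the data, seed, and "true" parameter values are illustrative assumptions rather than anything from the article.

```python
# Illustrative sketch: numerical maximum likelihood for the Cauchy parameters,
# initialized from robust estimates (median and half-IQR).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import cauchy

rng = np.random.default_rng(2)
x0_true, gamma_true = 3.0, 2.0                     # hypothetical "true" parameters
data = x0_true + gamma_true * rng.standard_cauchy(5_000)

def neg_log_likelihood(params):
    loc, log_scale = params
    scale = np.exp(log_scale)                      # parametrize so the scale stays positive
    return -np.sum(cauchy.logpdf(data, loc=loc, scale=scale))

q25, q50, q75 = np.percentile(data, [25, 50, 75])
start = np.array([q50, np.log((q75 - q25) / 2.0)]) # robust starting values

result = minimize(neg_log_likelihood, start, method="Nelder-Mead")
loc_hat, scale_hat = result.x[0], float(np.exp(result.x[1]))
print(f"MLE estimates: location {loc_hat:.3f}, scale {scale_hat:.3f}")

# SciPy's built-in fit performs the same maximum likelihood estimation:
print("scipy.stats.cauchy.fit:", cauchy.fit(data))
```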
A random vector formula_93 is said to have the multivariate Cauchy distribution if every linear combination of its components formula_94 has a Cauchy distribution. That is, for any constant vector formula_95, the random variable formula_96 should have a univariate Cauchy distribution. The characteristic function of a multivariate Cauchy distribution is given by:
where formula_98 and formula_99 are real functions with formula_98 a homogeneous function of degree one and formula_99 a positive homogeneous function of degree one. More formally:
for all formula_104.
An example of a bivariate Cauchy distribution can be given by:
Note that in this example, even though there is no analogue to a covariance matrix, formula_106 and formula_107 are not statistically independent.
This formula can also be written for a complex variable. The probability density function of the complex Cauchy distribution is then:
Analogous to the univariate density, the multidimensional Cauchy density also relates to the multivariate Student distribution. They are equivalent when the degrees of freedom parameter is equal to one. The density of a formula_109-dimensional Student distribution with one degree of freedom becomes:
Properties and details for this density can be obtained by taking it as a particular case of the multivariate Student density.
where formula_59, formula_125, formula_126 and formula_127 are real numbers.
The Cauchy distribution is the stable distribution of index 1. The Lévy–Khintchine representation of such a stable distribution of parameter formula_131 is given, for formula_132 by:
where
and formula_135 can be expressed explicitly. In the case formula_136 of the Cauchy distribution, one has formula_137.
This last representation is a consequence of the formula
In nuclear and particle physics, the energy profile of a resonance is described by the relativistic Breit–Wigner distribution, while the Cauchy distribution is the (non-relativistic) Breit–Wigner distribution.
It has been shown (https://doi.org/10.1214/aoms/1177706450) that, for estimators of formula_153 in the equation formula_154 where the maximum likelihood estimator is found using ordinary least squares, the sampling distribution of the test statistic is the Cauchy distribution. | https://en.wikipedia.org/wiki?curid=7003 |
Control engineering
Control engineering or control systems engineering is an engineering discipline that applies control theory to design systems with desired behaviors in control environments. The discipline of controls overlaps and is usually taught along with electrical engineering and mechanical engineering at many institutions around the world.
The practice uses sensors and detectors to measure the output performance of the process being controlled; these measurements are used to provide corrective feedback helping to achieve the desired performance. Systems designed to perform without requiring human input are called automatic control systems (such as cruise control for regulating the speed of a car). Multi-disciplinary in nature, control systems engineering activities focus on implementation of control systems mainly derived by mathematical modeling of a diverse range of systems.
Modern day control engineering is a relatively new field of study that gained significant attention during the 20th century with the advancement of technology. It can be broadly defined or classified as practical application of control theory. Control engineering plays an essential role in a wide range of control systems, from simple household washing machines to high-performance F-16 fighter aircraft. It seeks to understand physical systems, using mathematical modelling, in terms of inputs, outputs and various components with different behaviors; to use control system design tools to develop controllers for those systems; and to implement controllers in physical systems employing available technology. A system can be mechanical, electrical, fluid, chemical, financial or biological, and its mathematical modelling, analysis and controller design uses control theory in one or many of the time, frequency and complex-s domains, depending on the nature of the design problem.
Automatic control systems were first developed over two thousand years ago. The first feedback control device on record is thought to be the water clock of Ktesibios in Alexandria, Egypt, around the third century B.C.E. It kept time by regulating the water level in a vessel and, therefore, the water flow from that vessel. It was certainly a successful design, as water clocks of similar design were still being made in Baghdad when the Mongols captured the city in 1258 A.D. A variety of automatic devices have been used over the centuries to accomplish useful tasks or simply to entertain. The latter includes the automata, popular in Europe in the 17th and 18th centuries, featuring dancing figures that would repeat the same task over and over again; these automata are examples of open-loop control. Milestones among feedback, or "closed-loop" automatic control devices, include the temperature regulator of a furnace attributed to Drebbel, circa 1620, and the centrifugal flyball governor used for regulating the speed of steam engines by James Watt in 1788.
In his 1868 paper "On Governors", James Clerk Maxwell was able to explain instabilities exhibited by the flyball governor using differential equations to describe the control system. This demonstrated the importance and usefulness of mathematical models and methods in understanding complex phenomena, and it signaled the beginning of mathematical control and systems theory. Elements of control theory had appeared earlier but not as dramatically and convincingly as in Maxwell's analysis.
Control theory made significant strides over the next century. New mathematical techniques, as well as advancements in electronic and computer technologies, made it possible to control significantly more complex dynamical systems than the original flyball governor could stabilize. New mathematical techniques included developments in optimal control in the 1950s and 1960s followed by progress in stochastic, robust, adaptive, nonlinear control methods in the 1970s and 1980s. Applications of control methodology have helped to make possible space travel and communication satellites, safer and more efficient aircraft, cleaner automobile engines, and cleaner and more efficient chemical processes.
Before it emerged as a unique discipline, control engineering was practiced as a part of mechanical engineering, and control theory was studied as a part of electrical engineering, since electrical circuits can often be easily described using control theory techniques. In the very first control relationships, a current output was represented by a voltage control input. However, not having adequate technology to implement electrical control systems, designers were left with the option of less efficient and slow-responding mechanical systems. A very effective mechanical controller that is still widely used in some hydro plants is the governor. Later, before the advent of modern power electronics, process control systems for industrial applications were devised by mechanical engineers using pneumatic and hydraulic control devices, many of which are still in use today.
There are two major divisions in control theory, namely, classical and modern, which have direct implications for the control engineering applications.
The scope of classical control theory is limited to single-input and single-output (SISO) system design, except when analyzing for disturbance rejection using a second input. The system analysis is carried out in the time domain using differential equations, in the complex-s domain with the Laplace transform, or in the frequency domain by transforming from the complex-s domain. Many systems may be assumed to have a second-order, single-variable response in the time domain. A controller designed using classical theory often requires on-site tuning due to incorrect design approximations. Yet, due to the easier physical implementation of classical controller designs as compared to systems designed using modern control theory, these controllers are preferred in most industrial applications. The most common controllers designed using classical control theory are PID controllers. A less common implementation may include a lead filter, a lag filter, or both. The end goal is to meet requirements typically specified in the time domain as the step response, or at times in the frequency domain as the open-loop response. The step response characteristics applied in a specification are typically percent overshoot, settling time, etc. The open-loop response characteristics applied in a specification are typically gain margin, phase margin, and bandwidth. These characteristics may be evaluated through simulation including a dynamic model of the system under control coupled with the compensation model.
Modern control theory is carried out in the state space, and can deal with multiple-input and multiple-output (MIMO) systems. This overcomes the limitations of classical control theory in more sophisticated design problems, such as fighter aircraft control, with the limitation that no frequency domain analysis is possible. In modern design, a system is represented to the greatest advantage as a set of decoupled first-order differential equations defined using state variables. Nonlinear, multivariable, adaptive and robust control theories come under this division. Matrix methods are significantly limited for MIMO systems where linear independence cannot be assured in the relationship between inputs and outputs. Being fairly new, modern control theory has many areas yet to be explored. Scholars such as Rudolf E. Kalman and Aleksandr Lyapunov are among those who have shaped modern control theory.
Control engineering is the engineering discipline that focuses on the modeling of a diverse range of dynamic systems (e.g. mechanical systems) and the design of controllers that will cause these systems to behave in the desired manner. Although such controllers need not be electrical, many are and hence control engineering is often viewed as a subfield of electrical engineering. However, the falling price of microprocessors is making the actual implementation of a control system essentially trivial. As a result, focus is shifting back to the mechanical and process engineering discipline, as intimate knowledge of the physical system being controlled is often desired.
Electrical circuits, digital signal processors and microcontrollers can all be used to implement control systems. Control engineering has a wide range of applications from the flight and propulsion systems of commercial airliners to the cruise control present in many modern automobiles.
In most cases, control engineers utilize feedback when designing control systems. This is often accomplished using a PID controller system. For example, in an automobile with cruise control the vehicle's speed is continuously monitored and fed back to the system, which adjusts the motor's torque accordingly. Where there is regular feedback, control theory can be used to determine how the system responds to such feedback. In practically all such systems stability is important and control theory can help ensure stability is achieved.
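The cruise-control example can be made concrete with a short simulation. The following Python sketch (not from the article) drives a crude first-order vehicle model with a discrete PID controller; the model, gains, and saturation limit are illustrative assumptions, not values from any real system.

```python
# Illustrative sketch: a discrete PID controller regulating the speed of a
# simplified cruise-control model dv/dt = (u - b*v) / m.

def simulate_cruise_control(setpoint=25.0, dt=0.1, steps=600,
                            kp=800.0, ki=100.0, kd=20.0,
                            m=1200.0, b=50.0, u_max=4000.0):
    """Simulate a PID speed controller; returns the speed history in m/s."""
    v = 0.0                      # current speed
    integral = 0.0               # accumulated error (I term)
    prev_error = setpoint - v
    history = []
    for _ in range(steps):
        error = setpoint - v                     # feedback: target minus measured speed
        derivative = (error - prev_error) / dt
        u_unsat = kp * error + ki * integral + kd * derivative
        u = max(0.0, min(u_unsat, u_max))        # throttle force with actuator saturation
        if u == u_unsat:                         # naive anti-windup: integrate only when unsaturated
            integral += error * dt
        v += (u - b * v) / m * dt                # crude vehicle dynamics
        prev_error = error
        history.append(v)
    return history

speeds = simulate_cruise_control()
print(f"speed after 60 s: {speeds[-1]:.2f} m/s (setpoint 25 m/s)")
```

The proportional term reacts to the current speed error, the integral term removes the steady-state error left by drag, and the derivative term damps the approach to the setpoint.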
Although feedback is an important aspect of control engineering, control engineers may also work on the control of systems without feedback. This is known as open loop control. A classic example of open loop control is a washing machine that runs through a pre-determined cycle without the use of sensors.
At many universities around the world, control engineering courses are taught primarily in electrical engineering and mechanical engineering, but some courses can be instructed in mechatronics engineering, and aerospace engineering. In others, control engineering is connected to computer science, as most control techniques today are implemented through computers, often as embedded systems (as in the automotive field). The field of control within chemical engineering is often known as process control. It deals primarily with the control of variables in a chemical process in a plant. It is taught as part of the undergraduate curriculum of any chemical engineering program and employs many of the same principles in control engineering. Other engineering disciplines also overlap with control engineering as it can be applied to any system for which a suitable model can be derived. However, specialised control engineering departments do exist, for example, the Department of Automatic Control and Systems Engineering at the University of Sheffield and the Department of Robotics and Control Engineering at the United States Naval Academy.
Control engineering has diversified applications that include science, finance management, and even human behavior. Students of control engineering may start with a linear control system course dealing with the time and complex-s domain, which requires a thorough background in elementary mathematics and Laplace transform, called classical control theory. In linear control, the student does frequency and time domain analysis. Digital control and nonlinear control courses require Z transformation and algebra respectively, and could be said to complete a basic control education.
A control engineer's career starts with a bachelor's degree and can continue through graduate study. Control engineering degrees pair well with an electrical or mechanical engineering degree. Control engineers usually get jobs in technical management, where they typically lead interdisciplinary projects. There are many job opportunities in aerospace companies, manufacturing companies, automobile companies, power companies, and government agencies. Employers of control engineers include companies such as Rockwell Automation, NASA, Ford, and Goodrich. Reported annual salaries range from about $66,000 at Lockheed Martin Corp. to as much as $96,000 at General Motors Corporation.
According to a "Control Engineering" survey, most respondents practice control engineering in some form, even though relatively few job titles are explicitly labeled "control engineer"; most are specific roles that bear only a partial resemblance to the overarching discipline. A majority of the control engineers who took the 2019 survey were system or product designers, or control or instrument engineers. Most of the jobs involve process engineering, production, or maintenance, and all are some variation of control engineering.
Originally, control engineering was concerned entirely with continuous systems. The development of computer control tools created a need for discrete control system engineering, because the communication between the computer-based digital controller and the physical system is governed by a computer clock. The equivalent of the Laplace transform in the discrete domain is the Z-transform. Today, many control systems are computer controlled, consisting of both digital and analog components.
Therefore, at the design stage either digital components are mapped into the continuous domain and the design is carried out in the continuous domain, or analog components are mapped into the discrete domain and the design is carried out there. The first of these two methods is more commonly encountered in practice because many industrial systems have many continuous-system components, including mechanical, fluid, biological and analog electrical components, with a few digital controllers.
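The second route, mapping an analog component into the discrete domain, can be illustrated with a short sketch. The following Python example (not from the article) discretizes a hypothetical first-order lag with the bilinear (Tustin) transform using SciPy; the time constant and sampling period are illustrative assumptions.

```python
# Illustrative sketch: discretizing an analog first-order lag G(s) = 1 / (tau*s + 1)
# with the bilinear (Tustin) transform.
import numpy as np
from scipy import signal

tau = 0.5   # assumed time constant of the analog plant (seconds)
dt = 0.05   # assumed sampling period of the digital controller (seconds)

num, den = [1.0], [tau, 1.0]          # continuous-time transfer function coefficients

# Bilinear mapping s -> (2/dt) * (z - 1) / (z + 1)
numd, dend, _ = signal.cont2discrete((num, den), dt, method='bilinear')
numd = np.squeeze(numd) / dend[0]     # normalize so the leading denominator coefficient is 1
dend = np.asarray(dend) / dend[0]

# Hand-derived result of the same substitution, for comparison:
# G(z) = (z + 1) / ((2*tau/dt + 1) * z + (1 - 2*tau/dt))
a = 2.0 * tau / dt
expected_num = np.array([1.0, 1.0]) / (a + 1.0)
expected_den = np.array([1.0, (1.0 - a) / (a + 1.0)])

print("SciPy (Tustin):", numd, dend)
print("Hand-derived:  ", expected_num, expected_den)
```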
Similarly, the design technique has progressed from paper-and-ruler based manual design to computer-aided design, and now to computer-automated design (CAutoD), which has been made possible by evolutionary computation. CAutoD can be applied not just to tuning a predefined control scheme, but also to controller structure optimisation, system identification and the invention of novel control systems, based purely upon a performance requirement, independent of any specific control scheme.
Resilient control systems extend the traditional focus on planned disturbances to frameworks that attempt to address multiple types of unexpected disturbance; in particular, they adapt and transform the behavior of the control system in response to malicious actors, abnormal failure modes, undesirable human action, etc. | https://en.wikipedia.org/wiki?curid=7011 |
Chagas disease
Chagas disease, also known as American trypanosomiasis, is a tropical parasitic disease caused by "Trypanosoma cruzi". It is spread mostly by insects known as "Triatominae", or "kissing bugs". The symptoms change over the course of the infection. In the early stage, symptoms are typically either not present or mild, and may include fever, swollen lymph nodes, headaches, or swelling at the site of the bite. After four to eight weeks, individuals enter the chronic phase of disease, which in most cases does not result in further symptoms. Up to 45% of people develop heart disease 10–30 years after the initial infection, which can lead to heart failure. Digestive complications, including an enlarged esophagus or an enlarged colon, may also occur in up to 21% of people, and up to 10% of people may experience nerve damage.
Prevention focuses on eliminating kissing bugs and avoiding their bites. This may involve the use of insecticides or bed-nets. Other preventive efforts include screening blood used for transfusions. A vaccine has not yet been developed. Early infections are treatable with the medications benznidazole or nifurtimox, which usually cure the disease if given shortly after the person is infected, but become less effective the longer a person has had Chagas disease. When used in chronic disease, medication may delay or prevent the development of end-stage symptoms. Benznidazole and nifurtimox often cause side effects, including skin disorders, digestive system irritation, and neurological symptoms, which can result in treatment being discontinued. New drugs for Chagas disease are under development, and experimental vaccines have been studied in animal models.
It is estimated that 6.2 million people, mostly in Mexico, Central America and South America, have Chagas disease as of 2017, resulting in an estimated 7,900 deaths. Most people with the disease are poor, and most do not realize they are infected. Large-scale population movements have increased the areas where Chagas disease is found and these include many European countries and the United States. The disease was first described in 1909 by the Brazilian physician Carlos Chagas, after whom it is named. Chagas disease is classified as a neglected tropical disease.
Chagas disease occurs in two stages: an acute stage, which develops one to two weeks after the insect bite, and a chronic stage that develops over many years. The acute stage is often symptom-free. When present, the symptoms are typically minor and not specific to any particular disease. Signs and symptoms include fever, malaise, headache, and enlargement of the liver, spleen, and lymph nodes. Rarely, people develop a swollen nodule at the site of infection, which is called "Romaña's sign" if it is on the eyelid, or a "chagoma" if it is elsewhere on the skin. In rare cases (less than 1–5%), infected individuals develop severe acute disease, which can cause life-threatening fluid accumulation around the heart, or inflammation of the heart or brain and surrounding tissues. The acute phase typically lasts four to eight weeks and resolves without treatment.
Unless they are treated with antiparasitic drugs, individuals remain chronically infected with "T. cruzi" after recovering from the acute phase. Most chronic infections are asymptomatic, which is referred to as "indeterminate" chronic Chagas disease. However, over decades with chronic Chagas disease, 30–40% of people develop organ dysfunction ("determinate" chronic Chagas disease), which most often affects the heart or digestive system.
The most common manifestation is heart disease, which occurs in 14–45% of people with chronic Chagas disease. People with Chagas heart disease often experience heart palpitations and sometimes fainting due to irregular heart function. By electrocardiogram, people with Chagas heart disease most frequently have arrhythmias. As the disease progresses, the heart's ventricles become enlarged (dilated cardiomyopathy), which reduces its ability to pump blood. In many cases the first sign of Chagas heart disease is heart failure, thromboembolism, or chest pain associated with abnormalities in the microvasculature.
Also common in chronic Chagas disease is damage to the digestive system, particularly enlargement of the esophagus or colon, which affects 10–21% of people. Those with enlarged esophagus often experience pain (odynophagia) or trouble swallowing (dysphagia), acid reflux, cough, and weight loss. Individuals with enlarged colon often experience constipation, which can lead to severe blockage of the intestine or its blood supply. Up to 10% of chronically infected individuals develop nerve damage that can result in numbness and altered reflexes or movement. While chronic disease typically develops over decades, some individuals with Chagas disease (less than 10%) progress to heart damage directly after acute disease.
Signs and symptoms differ for people infected with "T. cruzi" through less common routes. People infected through ingestion of parasites tend to develop severe disease within three weeks of consumption, with symptoms including fever, vomiting, shortness of breath, cough, and pain in the chest, abdomen, and muscles. Those infected congenitally typically have few to no symptoms, but can have mild non-specific symptoms, or severe symptoms such as jaundice, respiratory distress, and heart problems. People infected through organ transplant or blood transfusion tend to have symptoms similar to those of vector-borne disease, but the symptoms may not manifest for anywhere from a week to five months. Chronically infected individuals who become immunosuppressed due to HIV infection can suffer particularly severe and distinct disease, most commonly characterized by inflammation in the brain and surrounding tissue or brain abscesses. Symptoms vary widely based on the size and location of brain abscesses, but typically include fever, headaches, seizures, loss of sensation, or other neurological issues that indicate particular sites of nervous system damage. Occasionally, these individuals also experience acute heart inflammation, skin lesions, and disease of the stomach, intestine, or peritoneum.
Chagas disease is caused by infection with the protozoan parasite "T. cruzi", which is typically introduced into humans through the bite of triatomine bugs, also called "kissing bugs". At the bite site, motile forms called trypomastigotes invade various host cells. Inside a host cell, the parasite transforms into a replicative form called an amastigote, which undergoes several rounds of replication. The replicated amastigotes transform back into trypomastigotes, which burst the host cell and are released into the bloodstream. Trypomastigotes then disseminate throughout the body to various tissues, where they invade cells and replicate. Over many years, cycles of parasite replication and immune response can severely damage these tissues, particularly the heart and digestive tract.
"T. cruzi" can be transmitted by various triatomine bugs in the genera "Triatoma", "Panstrongylus", and "Rhodnius". The primary vectors for human infection are the species of triatomine bugs that inhabit human dwellings, namely "Triatoma infestans", "Rhodnius prolixus", "Triatoma dimidiata" and "Panstrongylus megistus". These insects are known by a number of local names, including "vinchuca" in Argentina, Bolivia, Chile and Paraguay, "barbeiro" (the barber) in Brazil, "pito" in Colombia, "chinche" in Central America, and "chipo" in Venezuela. The bugs tend to feed at night, preferring moist surfaces near the eyes or mouth. A triatomine bug can become infected with "T. cruzi" when it feeds on an infected host. "T. cruzi" replicates in the insect's intestinal tract and is shed in the bug's feces. When an infected triatomine feeds, it pierces the skin and takes in a blood meal, defecating at the same time to make room for the new meal. The bite is typically painless, but causes itching. Scratching at the bite introduces the "T. cruzi"-laden feces into the bite wound, initiating infection.
In addition to classical vector spread, Chagas disease can be transmitted through food or drink contaminated with triatomine insects or their feces. Since heating or drying kills the parasites, drinks and especially fruit juices are the most frequent source of infection. This route of transmission has been implicated in several outbreaks, where it led to unusually severe symptoms, likely due to infection with a higher parasite load than from the bite of a triatomine bug.
"T. cruzi" can also be transmitted independent of the triatomine bug during blood transfusion, following organ transplantation, or across the placenta during pregnancy. Transfusion with the blood of an infected donor infects the recipient 10–25% of the time. To prevent this, blood donations are screened for in many countries with endemic Chagas disease, as well as the United States. Similarly, transplantation of solid organs from an infected donor can transmit to the recipient. This is especially true for heart transplant, which transmits "T. cruzi" 75–100% of the time, and less so for transplantation of the liver (0–29%) or a kidney (0–19%). An infected mother can also pass to her child through the placenta; this occurs in up to 15% of births by infected mothers. As of 2019, 22.5% of new infections occurred through congenital transmission.
In the acute phase of the disease, signs and symptoms are caused directly by the replication of and the immune system's response to it. During this phase, can be found in various tissues throughout the body and circulating in the blood. During the initial weeks of infection, parasite replication is brought under control by production of antibodies and activation of the host's inflammatory response, particularly cells that target intracellular pathogens such as NK cells and macrophages, driven by inflammation-signaling molecules like TNF-α and IFN-γ.
During chronic Chagas disease, long-term organ damage develops over years due to continued replication of the parasite and damage from the immune system. Early in the course of the disease, is found frequently in the striated muscle fibers of the heart. As disease progresses, the heart becomes generally enlarged, with substantial regions of cardiac muscle fiber replaced by scar tissue and fat. Areas of active inflammation are scattered throughout the heart, with each housing inflammatory immune cells, typically macrophages and T cells. Late in the disease, parasites are rarely detected in the heart, and may be present at only very low levels.
In the heart, colon, and esophagus, chronic disease also leads to a massive loss of nerve endings. In the heart, this may contribute to arrhythmias and other cardiac dysfunction. In the colon and esophagus, loss of nervous system control is the major driver of organ dysfunction. Loss of nerves impairs the movement of food through the digestive tract, which can lead to blockage of the esophagus or colon and restriction of their blood supply.
The presence of "T. cruzi" is diagnostic of Chagas disease. During the acute phase of infection, it can be detected by microscopic examination of fresh anticoagulated blood, or its buffy coat, for motile parasites; or by preparation of thin and thick blood smears stained with Giemsa, for direct visualization of parasites. Blood smear examination detects parasites in 34–85% of cases. Techniques such as microhematocrit centrifugation can be used to concentrate the blood, which makes the test more sensitive. On microscopic examination, trypomastigotes have a slender body, often in the shape of an S or U, with a flagellum connected to the body by an undulating membrane.
Alternatively, "T. cruzi" DNA can be detected by polymerase chain reaction (PCR). In acute and congenital Chagas disease, PCR is more sensitive than microscopy, and it is more reliable than antibody-based tests for the diagnosis of congenital disease because it is not affected by transfer of antibodies against from a mother to her baby (passive immunity). PCR is also used to monitor levels in organ transplant recipients and immunosuppressed people, which allows infection or reactivation to be detected at an early stage.
During the chronic phase, microscopic diagnosis is unreliable and PCR is less sensitive because the level of parasites in the blood is low. Chronic Chagas disease is usually diagnosed using serological tests, which detect Immunoglobulin G antibodies against in the person's blood. The most common test methodologies are ELISA, indirect immunofluorescence, and indirect hemagglutination. Two positive test results are required to confirm the diagnosis. If the test results are inconclusive, additional testing methods such as Western blot can be used. antigens may also be detected in tissue samples using immunohistochemistry techniques.
Various rapid diagnostic tests for Chagas disease are available. These tests are easily transported and can be performed by people without special training. They are useful for screening large numbers of people and testing people who cannot access healthcare facilities, but their sensitivity is relatively low, and it is recommended that a second method is used to confirm a positive result.
"T. cruzi" can be isolated from samples through blood culture or xenodiagnosis, or by inoculating animals with the person's blood. In the blood culture method, the person's red blood cells are separated from the plasma and added to a specialized growth medium to encourage multiplication of the parasite. It can take up to six months to obtain the result. Xenodiagnosis involves feeding the person's blood to triatomine insects, then examining their feces for the parasite 30 to 60 days later. These methods are not routinely used, as they are slow and have low sensitivity.
Efforts to prevent Chagas disease have largely focused on vector control to limit exposure to triatomine bugs. Insecticide-spraying programs have been the mainstay of vector control, consisting of spraying homes and the surrounding areas with residual insecticides. This was originally done with organochlorine, organophosphate, and carbamate insecticides, which were supplanted in the 1980s with pyrethroids. These programs have drastically reduced transmission in Brazil and Chile, and eliminated major vectors from certain regions: "Triatoma infestans" from Brazil, Chile, Uruguay, and parts of Peru and Paraguay, as well as "Rhodnius prolixus" from Central America. Vector control in some regions has been hindered by the development of insecticide resistance among triatomine bugs. In response, vector control programs have implemented alternative insecticides (e.g. fenitrothion and bendiocarb in Argentina and Bolivia), treatment of domesticated animals (which are also fed on by triatomine bugs) with pesticides, pesticide-impregnated paints, and other experimental approaches. In areas with triatomine bugs, transmission of "T. cruzi" can be prevented by sleeping under bed nets and by housing improvements that prevent triatomine bugs from colonizing houses.
Blood transfusion was formerly the second-most common mode of transmission for Chagas disease. "T. cruzi" can survive in refrigerated stored blood, and can survive freezing and thawing, allowing it to persist in whole blood, packed red blood cells, granulocytes, cryoprecipitate, and platelets. The development and implementation of blood bank screening tests has dramatically reduced the risk of infection during blood transfusion. Nearly all blood donations in Latin American countries undergo Chagas screening. Widespread screening is also common in non-endemic nations with significant populations of immigrants from endemic areas including the United Kingdom (implemented in 1999), Spain (2005), the United States (2007), France and Sweden (2009), Switzerland (2012), and Belgium (2013). Blood is tested using serological tests, typically ELISAs, to detect antibodies against "T. cruzi" proteins.
Other modes of transmission have also been targeted by Chagas disease prevention programs. Treating "T. cruzi"-infected mothers during pregnancy reduces the risk of congenital transmission of the infection. To this end, many countries in Latin America have implemented routine screening of pregnant women and infants for "T. cruzi" infection, and the World Health Organization recommends screening all children born to infected mothers to prevent congenital infection from developing into chronic disease. Similarly to blood transfusions, many countries with endemic Chagas disease screen organs for transplantation with serological tests.
There is no vaccine against Chagas disease. Several experimental vaccines have been tested in animals infected with "T. cruzi" and were able to reduce parasite numbers in the blood and heart, but no vaccine candidates had undergone clinical trials in humans as of 2016.
Chagas disease is managed using antiparasitic drugs to eliminate "T. cruzi" from the body and symptomatic treatment to address the effects of the infection. As of 2018, benznidazole and nifurtimox were the antiparasitic drugs of choice for treating Chagas disease, though benznidazole is the only drug available in most of Latin America. For either drug, treatment typically consists of two to three oral doses per day for 60 to 90 days. Antiparasitic treatment is most effective early in the course of infection: it eliminates "T. cruzi" from 50–80% of people in the acute phase, but only 20–60% of those in the chronic phase. Treatment of chronic disease is more effective in children than in adults, and the cure rate for congenital disease approaches 100% if treated in the first year of life. Antiparasitic treatment can also slow the progression of the disease and reduce the possibility of congenital transmission. Elimination of "T. cruzi" does not cure the cardiac and gastrointestinal damage caused by chronic Chagas disease, so these conditions must be treated separately. Antiparasitic treatment is not recommended for people who have already developed dilated cardiomyopathy.
Benznidazole is usually considered the first-line treatment because it has milder adverse effects than nifurtimox and its efficacy is better understood. Both benznidazole and nifurtimox have common side effects that can result in treatment being discontinued. The most common side effects of benznidazole are skin rash, digestive problems, decreased appetite, weakness, headache, and sleeping problems. These side effects can sometimes be treated with antihistamines or corticosteroids, and are generally reversed when treatment is stopped. However, benznidazole is discontinued in up to 29% of cases. Nifurtimox has more frequent side effects, affecting up to 97.5% of individuals taking the drug. The most common side effects are loss of appetite, weight loss, nausea and vomiting, and various neurological disorders including mood changes, insomnia, paresthesia and peripheral neuropathy. Treatment is discontinued in up to 75% of cases. Both drugs are contraindicated for use in pregnant women and people with liver or kidney failure. As of 2019, resistance to these drugs has been reported.
In the chronic stage, treatment involves managing the clinical manifestations of the disease. The treatment of Chagas cardiomyopathy is similar to that of other forms of heart disease. Beta blockers and ACE inhibitors may be prescribed, but some people with Chagas disease may not be able to take the standard dose of these drugs because they have low blood pressure or a low heart rate. To manage irregular heartbeats, people may be prescribed anti-arrhythmic drugs such as amiodarone, or have a pacemaker implanted. Blood thinners may be used to prevent thromboembolism and stroke. Chronic heart disease caused by Chagas is a common reason for heart transplantation surgery. Because transplant recipients take immunosuppressive drugs to prevent organ rejection, they are monitored using PCR to detect reactivation of the disease. People with Chagas disease who undergo heart transplantation have higher survival rates than the average heart transplant recipient.
In the early stages of gastrointestinal disease, esophageal symptoms can be managed by taking drugs that relax the esophageal sphincter. Surgical treatment, such as severing the muscles of the lower esophageal sphincter (cardiomyotomy), is indicated in more severe cases. Eating a high-fiber diet and taking laxatives or enemas can mitigate constipation caused by megacolon. Surgical removal of the affected part of the organ may be required for advanced megacolon and megaesophagus.
In 2017, an estimated 6.2 million people worldwide had Chagas disease, with approximately 162,000 new infections and 7,900 deaths each year. This resulted in a global annual economic burden estimated at US$7.2 billion, 86% of which is borne by endemic countries. Chagas disease results in the loss of over 800,000 disability-adjusted life years each year.
Chagas is endemic to 21 countries in continental Latin America: Argentina, Belize, Bolivia, Brazil, Chile, Colombia, Costa Rica, Ecuador, El Salvador, French Guiana, Guatemala, Guyana, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru, Suriname, Uruguay, and Venezuela. The endemic area ranges from the southern United States to northern Chile and Argentina, with Bolivia (6.1%), Argentina (3.6%), and Paraguay (2.1%) exhibiting the highest prevalence of the disease. In endemic areas, due largely to vector control efforts and screening of blood donations, annual infections and deaths have fallen by 67% and more than 73% respectively from their peaks in the 1980s to 2010. Transmission by insect vector and blood transfusion has been completely interrupted in Uruguay (1997), Chile (1999), and Brazil (2006), and in Argentina, vectorial transmission has been interrupted in 13 of the 19 endemic provinces. During Venezuela's humanitarian crisis, vectorial transmission has begun occurring in areas where it had previously been interrupted and Chagas disease seroprevalence rates have increased. Transmission rates have also risen in the Gran Chaco region due to insecticide resistance and in the Amazon basin due to oral transmission.
While the rate of vector-transmitted Chagas disease has declined throughout most of Latin America, the rate of orally transmitted disease has risen, possibly due to increasing urbanization and deforestation bringing people into closer contact with triatomines and altering the distribution of triatomine species. Orally transmitted Chagas disease is of particular concern in Venezuela, where 16 outbreaks have been recorded between 2007 and 2018.
Chagas exists in two different ecological zones: In the Southern Cone region, the main vector lives in and around human homes. In Central America and Mexico, the main vector species lives both inside dwellings and in uninhabited areas. In both zones, Chagas occurs almost exclusively in rural areas, where "T. cruzi" also circulates in wild and domestic animals. "T. cruzi" commonly infects more than 100 species of mammals across Latin America including opossums, armadillos, marmosets, bats, and various rodents, all of which can be infected by the vectors or orally by eating triatomine bugs and other infected animals.
Though Chagas is traditionally considered a disease of rural Latin America, international migration has dispersed those suffering from the disease to numerous non-endemic countries, primarily in North America and Europe. As of 2020, approximately 300,000 infected people are living in the United States, about 30,000 to 40,000 of whom have Chagas cardiomyopathy. The vast majority of Chagas infections in the United States occur in immigrants from Latin America, but local transmission is possible. Eleven triatomine species are native to the United States and some southern states have persistent cycles of disease transmission between insect vectors and animal reservoirs, which include woodrats, possums, raccoons, armadillos and skunks. However, locally acquired infection is very rare: only 28 cases were documented from 1955 to 2015. As of 2013, the cost of treatment in the United States was estimated to be US$900 million annually (global cost $7 billion), which included hospitalization and medical devices such as pacemakers.
Chagas disease affects approximately 68,000 to 123,000 people in Europe as of 2019. Spain, which has a high rate of immigration from Latin America, has the highest prevalence of the disease. It is estimated that 50,000 to 70,000 Spanish people are living with the disease, which accounts for 75% of European cases. The prevalence of Chagas varies widely within European countries due to differing immigration patterns. Italy has the second highest prevalence, followed by the Netherlands, the United Kingdom, and Germany.
"T. cruzi" likely circulated in South American mammals long before the arrival of humans on the continent. has been detected in ancient human remains across South America, from a 9000-year-old Chinchorro mummy in the Atacama Desert, to remains of various ages in Minas Gerais, to an 1100-year-old mummy as far north as the Chihuahuan Desert near the Rio Grande. Many early written accounts describe symptoms consistent with Chagas disease, with early descriptions of the disease sometimes attributed to Miguel Diaz Pimenta (1707), (1735), and Theodoro J. H. Langgaard (1842).
The formal description of Chagas disease was made by Carlos Chagas in 1909 after examining a two-year-old girl with fever, swollen lymph nodes, and an enlarged spleen and liver. Upon examination of her blood, Chagas saw trypanosomes identical to those he had recently identified from the hindgut of triatomine bugs and named "Trypanosoma cruzi" in honor of his mentor, Brazilian physician Oswaldo Cruz. He sent infected triatomine bugs to Cruz in Rio de Janeiro, who showed the bite of the infected triatomine could transmit to marmoset monkeys as well. In just two years, 1908 and 1909, Chagas published descriptions of the disease, the organism that caused it, and the insect vector required for infection. Almost immediately thereafter, at the suggestion of , then professor of the , the disease was widely referred to as "Chagas disease". Chagas' discovery brought him national and international renown, but in highlighting the inadequacies of the Brazilian government's response to the disease, Chagas attracted criticism to himself and to the disease that bore his name, stifling research on his discovery and likely frustrating his nomination for the Nobel Prize in 1921.
In the 1930s, Salvador Mazza rekindled Chagas disease research, describing over a thousand cases in Argentina's Chaco Province.
In Argentina, the disease is known as "mal de Chagas-Mazza" in his honor. Serological tests for Chagas disease were introduced in the 1940s, demonstrating that infection with "T. cruzi" was widespread across Latin America. This, combined with successes eliminating the malaria vector through insecticide use, spurred the creation of public health campaigns focused on treating houses with insecticides to eradicate triatomine bugs. The 1950s saw the discovery that treating blood with crystal violet could eradicate the parasite, leading to its widespread use in transfusion screening programs in Latin America. Large-scale control programs began to take form in the 1960s, first in São Paulo, then various locations in Argentina, then national-level programs across Latin America. These programs received a major boost in the 1980s with the introduction of pyrethroid insecticides, which did not leave stains or odors after application and were longer-lasting and more cost-effective. Regional bodies dedicated to controlling Chagas disease arose through support of the Pan American Health Organization, with the Initiative of the Southern Cone for the Elimination of Chagas Diseases launching in 1991, followed by the Initiative of the Andean countries (1997), Initiative of the Central American countries (1997), and the Initiative of the Amazon countries (2004).
Fexinidazole, an antiparasitic drug approved for treating African trypanosomiasis, has shown activity against Chagas disease in animal models. As of 2019, it is undergoing phase II clinical trials for chronic Chagas disease in Spain. Other drug candidates include GNF6702, a proteasome inhibitor that is effective against Chagas disease in mice and is undergoing preliminary toxicity studies, and AN4169, which has had promising results in animal models.
A number of experimental vaccines have been tested in animals. Some approaches have used inoculation with dead or attenuated parasites or non-pathogenic organisms that share antigens with "T. cruzi", such as "Trypanosoma rangeli" or "Phytomonas serpens". DNA vaccination has also been explored. As of 2019, vaccine research has mainly been limited to small animal models, and further testing in large animals is needed.
As of 2018, standard diagnostic tests for Chagas disease were limited in their ability to measure response to antiparasitic treatment. Serological tests, for example, may remain positive for years after "T. cruzi" is eliminated from the body, and PCR may give false negative results when parasitemia is low. Various potential biomarkers of treatment response are under investigation, such as immunoassays against specific "T. cruzi" antigens, flow cytometry testing to detect antibodies against different life stages of "T. cruzi", and markers of physiological changes caused by the parasite, such as alterations in coagulation and lipid metabolism.
Another research area is the use of biomarkers to predict the progression of chronic Chagas disease. Blood levels of tumor necrosis factor alpha, brain and atrial natriuretic peptide, and angiotensin converting enzyme 2, markers of heart damage and inflammation, have been found to correlate with the severity of Chagas cardiomyopathy. Endothelin-1 has been studied as a prognostic marker in animal models.
"T. cruzi" shed acute-phase antigen (SAPA), which can be detected in blood using ELISA or Western blot, has been used as an indicator of early acute and congenital infection. A novel assay for antigens in urine has been developed to diagnose congenital disease. | https://en.wikipedia.org/wiki?curid=7012 |
Christiaan Barnard
Christiaan Neethling Barnard (8 November 1922 – 2 September 2001) was a South African cardiac surgeon who performed the world's first human-to-human heart transplant operation and the first one in which the patient regained consciousness. On 3 December 1967, Barnard transplanted the heart of accident-victim Denise Darvall into the chest of 54-year-old Louis Washkansky, with Washkansky regaining full consciousness and being able to talk easily with his wife, before dying 18 days later of pneumonia. The anti-rejection drugs that suppressed his immune system were a major contributing factor to his death. Barnard had told Mr. and Mrs. Washkansky that the operation had an 80% chance of success, a claim which has been criticised as misleading. Barnard's second transplant patient Philip Blaiberg, whose operation was performed at the beginning of 1968, lived for a year and a half and was able to go home from the hospital.
Born in Beaufort West, Cape Province, Barnard studied medicine and practised for several years in his native South Africa. As a young doctor experimenting on dogs, Barnard developed a remedy for the infant defect of intestinal atresia. His technique saved the lives of ten babies in Cape Town and was adopted by surgeons in Britain and the United States. In 1955, he travelled to the United States and was initially assigned further gastrointestinal work by Owen Harding Wangensteen. He was introduced to the heart-lung machine, and Barnard was allowed to transfer to the service run by open heart surgery pioneer Walt Lillehei. Upon returning to South Africa in 1958, Barnard was appointed head of the Department of Experimental Surgery at the Groote Schuur Hospital, Cape Town.
He retired as Head of the Department of Cardiothoracic Surgery in Cape Town in 1983 after developing rheumatoid arthritis in his hands which ended his surgical career. He became interested in anti-aging research, and in 1986 his reputation suffered when he promoted "Glycel", an expensive "anti-aging" skin cream, whose approval was withdrawn by the United States Food and Drug Administration soon thereafter. During his remaining years, he established the Christiaan Barnard Foundation, dedicated to helping underprivileged children throughout the world. He died in 2001 at the age of 78 after an asthma attack.
Barnard grew up in Beaufort West, Cape Province, Union of South Africa. His father, Adam Barnard, was a minister in the Dutch Reformed Church. One of his four brothers, Abraham, was a "blue baby" who died of a heart problem at the age of three (Barnard would later guess that it was tetralogy of Fallot). The family also experienced the loss of a daughter who was stillborn and who had been the fraternal twin of Barnard's older brother Johannes, who was twelve years older than Chris. Barnard matriculated from the Beaufort West High School in 1940, and went to study medicine at the University of Cape Town Medical School, where he obtained his MB ChB in 1945.
His father served as a missionary to mixed-race peoples. His mother, the former Maria Elisabeth de Swart, instilled in the surviving brothers the belief that they could do anything they set their minds to.
Barnard did his internship and residency at the Groote Schuur Hospital in Cape Town, after which he worked as a general practitioner in Ceres, a rural town in the Cape Province. In 1951, he returned to Cape Town where he worked at the City Hospital as a Senior Resident Medical Officer, and in the Department of Medicine at the Groote Schuur Hospital as a registrar. He completed his master's degree, receiving Master of Medicine in 1953 from the University of Cape Town. In the same year he obtained a doctorate in medicine (MD) from the same university for a dissertation titled "The treatment of tuberculous meningitis".
Soon after qualifying as a doctor, Barnard performed experiments on dogs while investigating intestinal atresia, a birth defect which allows life-threatening gaps to develop in the intestines. He followed a medical hunch that this was caused by inadequate blood flow to the fetus. After nine months and forty-three attempts, Barnard was able to reproduce this condition in a puppy fetus by tying off some of the blood supply to the puppy's intestines and then placing the animal back in the womb, after which it was born some two weeks later, with the condition of intestinal atresia. He was also able to cure the condition by removing the piece of intestine with inadequate blood supply. The mistake of previous surgeons had been attempting to reconnect ends of intestine which themselves still had inadequate blood supply. To be successful, it was typically necessary to remove between 15 and 20 centimeters of intestine (6 to 8 inches). Jannie Louw used this innovation in a clinical setting, and Barnard's method saved the lives of ten babies in Cape Town. This technique was also adapted by surgeons in Britain and the US. In addition, Barnard analyzed 259 cases of tubercular meningitis.
Owen Wangensteen in Minnesota had been impressed by the work of Alan Thal, a young South African doctor working in Minnesota. He asked Groote Schuur Head of Medicine John Brock if he might recommend any similarly talented South Africans and Brock recommended Barnard. In December 1955, Barnard travelled to the University of Minnesota, Minneapolis, United States, to begin a two-year scholarship under Chief of Surgery Wangensteen, who assigned Barnard more work on the intestines, which Barnard accepted even though he wanted to move onto something new. Simply by luck, whenever Barnard needed a break from this work, he could wander across the hall and talk with Vince Gott who ran the lab for open-heart surgery pioneer Walt Lillehei. Gott had begun to develop a technique of running blood backwards through the veins of the heart so Lillehei could more easily operate on the aortic valve (McRae writes, "It was the type of inspired thinking that entranced Barnard"). In March 1956, Gott asked Barnard to help him run the heart-lung machine for an operation. Shortly thereafter, Wangensteen agreed to let Barnard switch to Lillehei's service. It was during this time that Barnard first became acquainted with fellow future heart transplantation surgeon Norman Shumway. Barnard also became friendly with Gil Campbell who had demonstrated that a dog's lung could be used to oxygenate blood during open-heart surgery. (The year before Barnard arrived, Lillehei and Campbell had used this procedure for twenty minutes during surgery on a 13-year-old boy with ventricular septal defect, and the boy had made a full recovery.) Barnard and Campbell met regularly for early breakfast. In 1958, Barnard received a Master of Science in Surgery for a thesis titled "The aortic valve – problems in the fabrication and testing of a prosthetic valve". The same year he was awarded a Ph.D. for his dissertation titled "The aetiology of congenital intestinal atresia". Barnard described the two years he spent in the United States as "the most fascinating time in my life."
Upon returning to South Africa in 1958, Barnard was appointed head of the Department of Experimental Surgery at Groote Schuur Hospital, while also holding a joint post at the University of Cape Town. He was promoted to full-time lecturer and Director of Surgical Research at the University of Cape Town. In 1960, he flew to Moscow in order to meet Vladimir Demikhov, a top expert on organ transplants (he later credited Demikhov's accomplishments, saying that "if there is a father of heart and lung transplantation then Demikhov certainly deserves this title"). In 1961 he was appointed Head of the Division of Cardiothoracic Surgery at the teaching hospitals of the University of Cape Town. He rose to the position of Associate Professor in the Department of Surgery at the University of Cape Town in 1962. Barnard's younger brother Marius, who also studied medicine, eventually became Barnard's right-hand man in the Department of Cardiac Surgery. Over time, Barnard became known as a brilliant surgeon with many contributions to the treatment of cardiac diseases, such as the Tetralogy of Fallot and Ebstein's anomaly. He was promoted to Professor of Surgical Science in the Department of Surgery at the University of Cape Town in 1972. In 1981, Barnard became a founding member of the World Cultural Council. Among the many awards he received over the years, he was named Professor Emeritus in 1984.
Following the first successful kidney transplant, performed in the United States in 1953, Barnard carried out the second kidney transplant in South Africa in October 1967; the country's first had been performed in Johannesburg the previous year.
On 23 January 1964, James Hardy at the University of Mississippi Medical Center in Jackson, Mississippi, performed the world's first heart transplant and world's first cardiac xenotransplant by transplanting the heart of a chimpanzee into a desperately ill and dying man. This heart did beat in the patient's chest for approximately 60 to 90 minutes. The patient, Boyd Rush, died without ever regaining consciousness.
Barnard had experimentally transplanted forty-eight hearts into dogs, which was about a fifth the number that Adrian Kantrowitz had performed at Maimonides Medical Center in New York and about a sixth the number Norman Shumway had performed at Stanford University in California. Barnard had no dogs which had survived longer than ten days, unlike Kantrowitz and Shumway who had had dogs survive for more than a year.
With the availability of new breakthroughs introduced by several pioneers, also including Richard Lower at the Medical College of Virginia, several surgical teams were in a position to prepare for a human heart transplant. Barnard had a patient willing to undergo the procedure, but as with other surgeons, he needed a suitable donor.
During the Apartheid era in South Africa, non-white persons and citizens were not given equal opportunities in the medical professions. At Groote Schuur Hospital, Hamilton Naki was an informally taught surgeon. He started out as a gardener and cleaner. One day he was asked to help out with an experiment on a giraffe. From this modest beginning, Naki became principal lab technician and taught hundreds of surgeons, and assisted with Barnard's organ transplant program. Barnard said, "Hamilton Naki had better technical skills than I did. He was a better craftsman than me, especially when it came to stitching, and had very good hands in the theatre". A popular myth, propagated principally by a widely discredited documentary film called "Hidden Heart" and an erroneous newspaper article, maintains incorrectly that Naki was present during the Washkansky transplant.
Barnard performed the world's first human-to-human heart transplant operation in the early morning hours of Sunday 3 December 1967. Louis Washkansky, a 54-year-old grocer who was suffering from diabetes and incurable heart disease, was the patient. Barnard was assisted by his brother Marius Barnard, as well as a team of thirty staff members. The operation lasted approximately five hours.
Barnard stated to Washkansky and his wife Ann Washkansky that the transplant had an 80% chance of success. This has been criticised by the ethicists Peter Singer and Helga Kuhse as making claims for chances of success to the patient and family which were "unfounded" and "misleading".
Barnard later wrote, "For a dying man it is not a difficult decision because he knows he is at the end. If a lion chases you to the bank of a river filled with crocodiles, you will leap into the water, convinced you have a chance to swim to the other side." The donor heart came from a young woman, Denise Darvall, who had been rendered brain dead in an accident on 2 December 1967, while crossing a street in Cape Town. On examination at Groote Schuur hospital, Darvall had two serious fractures in her skull, with no electrical activity in her brain detected, and no sign of pain when ice water was poured into her ear. Coert Venter and Bertie Bosman requested permission from Darvall's father for Denise's heart to be used in the transplant attempt. The afternoon before his first transplant, Barnard dozed at his home while listening to music. When he awoke, he decided to modify Shumway and Lower's technique. Instead of cutting straight across the back of the atrial chambers of the donor heart, he would avoid damage to the septum and instead cut two small holes for the venae cavae and pulmonary veins. Prior to the transplant, rather than wait for Darvall's heart to stop beating, at his brother Marius Barnard's urging, Christiaan had injected potassium into her heart to paralyse it and render her technically dead by the whole-body standard. Twenty years later, Marius Barnard recounted, "Chris stood there for a few moments, watching, then stood back and said, 'It works.'"
Washkansky survived the operation and lived for 18 days before succumbing to pneumonia, to which he was vulnerable because of the immunosuppressive drugs he was taking.
Barnard and his patient received worldwide publicity.
As a 2017 BBC retrospective article describes, "Journalists and film crews flooded into Cape Town's Groote Schuur Hospital, soon making Barnard and Washkansky household names." Barnard himself was described as "charismatic" and "photogenic." And the operation was initially reported as "successful" even though Washkansky only lived a further 18 days.
Worldwide, approximately 100 transplants were performed by various doctors during 1968. However, only a third of these patients lived longer than three months. Many medical centers stopped performing transplants. In fact, a U.S. National Institutes of Health publication states, "Within several years, only Shumway's team at Stanford was attempting transplants."
Barnard's second transplant operation was conducted on 2 January 1968, and the patient, Philip Blaiberg, survived for 19 months. Blaiberg's heart was donated by Clive Haupt, a 24-year-old black man who suffered a stroke, inciting controversy (especially in the African-American press) during the time of South African apartheid. Dirk van Zyl, who received a new heart in 1971, was the longest-lived recipient, surviving over 23 years.
Between December 1967 and November 1974 at Groote Schuur Hospital in Cape Town, South Africa, ten heart transplants were performed, as well as a heart and lung transplant in 1971. Of these ten patients, four lived longer than 18 months, with two of these four becoming long-term survivors. One patient lived for over thirteen years and another for over twenty-four years.
Full recovery of donor heart function often takes place over hours or days, during which time considerable damage can occur. Other patient deaths can result from preexisting conditions. For example, in pulmonary hypertension the patient's right ventricle has often adapted to the higher pressure over time and, although diseased and hypertrophied, is often capable of maintaining circulation to the lungs. Barnard conceived the idea of the heterotopic (or "piggyback") transplant, in which the patient's diseased heart is left in place while the donor heart is added, essentially forming a "double heart". Barnard performed the first such heterotopic heart transplant in 1974.
From November 1974 through December 1983, 49 consecutive heterotopic heart transplants on 43 patients were performed at Groote Schuur. The survival rate for patients at one year was over 60%, as compared to less than 40% with standard transplants, and the survival rate at five years was over 36% as compared to less than 20% with standard transplants.
Many surgeons gave up cardiac transplantation due to poor results, often due to rejection of the transplanted heart by the patient's immune system. Barnard persisted until the advent of cyclosporine, an effective immunosuppressive drug, which helped revive the operation throughout the world. He also attempted xenotransplantation in a human patient, while attempting to save the life of a girl who was unable to leave artificial life support after her second aortic valve replacement.
Barnard was an outspoken opponent of South Africa's laws of apartheid, and was not afraid to criticise his nation's government, although he had to temper his remarks to some extent to travel abroad. Rather than leaving his homeland, he used his fame to campaign for a change in the law. Christiaan's brother, Marius Barnard, went into politics and was elected to the legislature for the Progressive Federal Party. Barnard later stated that the reason he never won the Nobel Prize in Physiology or Medicine was probably because he was a "white South African".
Shortly before his visit to Kenya in 1978, the following was written about his views on race relations in South Africa:
"While he believes in the participation of Africans in the political process of South Africa, he is opposed to a one-man-one-vote system in South Africa".
In answering a hypothetical question on how he would solve the race problem were he a "benevolent dictator in South Africa", Barnard stated the following in a long interview at the Weekly Review:
The interview ended with the following summary from Barnard himself:
"I often say that, like King Lear, South Africa is a country more sinned against than sinning."
Barnard's first marriage was to Aletta Gertruida Louw, a nurse, whom he married in 1948 while practising medicine in Ceres. The couple had two children: Deirdre (born 1950) and Andre (1951–1984). International fame took a toll on his personal life, and in 1969, Barnard and his wife divorced. In 1970, he married heiress Barbara Zoellner when she was 19, the same age as his son, and they had two children: Frederick (born 1972) and Christiaan Jr. (born 1974). He divorced Zoellner in 1982. Barnard married for a third time in 1988 to Karin Setzkorn, a young model. They also had two children, Armin (born 1989) and Lara (born 1997), but this last marriage also ended in divorce in 2000.
In his autobiography "The Second Life", Barnard described a one-night extramarital affair with the Italian film star Gina Lollobrigida that occurred in January 1968. During that visit to Rome he was received in audience by Pope Paul VI.
In October 2016, U.S. Congresswoman Ann McLane Kuster (D-NH) stated that Barnard sexually assaulted her when she was 23 years old. According to Kuster, he attempted to grope her under her skirt, while seated at a business luncheon with Rep. Pete McCloskey (R-CA), whom she worked for at the time.
Barnard retired as Head of the Department of Cardiothoracic Surgery in Cape Town in 1983 after developing rheumatoid arthritis in his hands which ended his surgical career. He had struggled with arthritis since 1956, when it was diagnosed during his postgraduate work in the United States. After retirement, he spent two years as the Scientist-In-Residence at the Oklahoma Transplantation Institute in the United States and as an acting consultant for various institutions.
He had by this time become very interested in anti-aging research, and his reputation suffered in 1986 when he promoted "Glycel", an expensive "anti-aging" skin cream, whose approval was withdrawn by the United States Food and Drug Administration soon thereafter. He also spent time as a research advisor to the Clinique la Prairie, in Switzerland, where the controversial "rejuvenation therapy" was practised.
Barnard divided the remainder of his years between Austria, where he established the Christiaan Barnard Foundation, dedicated to helping underprivileged children throughout the world, and his game farm in Beaufort West, South Africa.
Christiaan Barnard died on 2 September 2001, while on holiday in Paphos, Cyprus. Early reports stated that he had died of a heart attack, but an autopsy showed his death was caused by a severe asthma attack.
Christiaan Barnard wrote two autobiographies. His first book, "One Life", was published in 1969 and sold copies worldwide. Some of the proceeds were used to set up the Chris Barnard Fund for research into heart disease and heart transplants in Cape Town. His second autobiography, "The Second Life", was published in 1993, eight years before his death.
Apart from his autobiographies, Dr Barnard also wrote several other books including: | https://en.wikipedia.org/wiki?curid=7015 |
Concubinage
Concubinage is an interpersonal and sexual relationship between a man and a woman in which the couple are not or cannot be married. The inability to marry may be due to multiple factors such as differences in social rank status, an existing marriage, religious or professional prohibitions, or a lack of recognition by appropriate authorities. The woman in such a relationship is referred to as a concubine. A concubine among polygynous peoples is a secondary wife, usually of inferior rank.
The prevalence of concubinage and the status of rights and expectations of a concubine have varied across cultures and time periods, as have the rights of children of a concubine. Whatever the status and rights of the concubine, they were always inferior to those of the wife, and typically neither she nor her children had rights of inheritance. Especially among royalty and nobility, the woman in such relationships was commonly described as a mistress. The children of such relationships were counted as illegitimate and in some societies were barred from inheriting the father's title or estates, even in the absence of legitimate heirs.
Concubinage was widespread across East Asia before the early 20th century. Its main function was to produce additional heirs, as well as to provide pleasure for the man. Children of concubines had lesser inheritance rights, which were regulated by the Dishu system.
Polygyny and concubinage were very common in Mongol society especially for powerful Mongol men. Genghis Khan, Ögedei Khan, Jochi, Tolui, and Kublai Khan (among others) all had many wives and concubines.
Genghis Khan frequently acquired wives and concubines from empires and societies that he had conquered; these women were often princesses or queens who had been taken captive or gifted to him. Genghis Khan's most famous concubine was Möge Khatun, who, according to the Persian historian Ata-Malik Juvayni, was "given to Chinggis Khan by a chief of the Bakrin tribe, and he loved her very much." After Genghis Khan died, Möge Khatun became a wife of Ögedei Khan. Ögedei also favored her as a wife, and she frequently accompanied him on his hunting expeditions.
In China, successful men often had concubines until the practice was outlawed when the Communist Party of China came to power in 1949. The standard Chinese term translated as "concubine" was "qiè", a term that has been used since ancient times and means "concubine; I, your servant (deprecating self-reference)". Concubinage resembled marriage in that concubines were recognized sexual partners of a man and were expected to bear children for him. Unofficial concubines were of lower status, and their children were considered illegitimate. The English term concubine is also used for what the Chinese refer to as "pínfēi", or "consorts of emperors", an official position often carrying a very high rank.
In premodern China it was illegal and socially disreputable for a man to have more than one wife at a time, but it was acceptable to have concubines. In the earliest records a man could have as many concubines as he could afford. From the Eastern Han period (AD 25–220) onward, the number of concubines a man could have was limited by law. The higher rank and the more noble identity a man possessed, the more concubines he was permitted to have.
A concubine's treatment and situation were variable and were influenced by the social status of the male to whom she was attached, as well as the attitude of his wife. In the "Book of Rites" chapter on "The Pattern of the Family" it says, “If there were betrothal rites, she became a wife; and if she went without these, a concubine.” Wives brought a dowry to a relationship, but concubines did not. A concubinage relationship could be entered into without the ceremonies used in marriages, and neither remarriage nor a return to her natal home in widowhood was allowed to a concubine.
The position of the concubine was generally inferior to that of the wife. Although a concubine could produce heirs, her children would be inferior in social status to a wife's children, although they were of higher status than illegitimate children. The child of a concubine had to show filial duty to two women, their biological mother and their legal mother—the wife of their father. After the death of a concubine, her sons would make an offering to her, but these offerings were not continued by the concubine's grandsons, who only made offerings to their grandfather's wife.
There are early records of concubines allegedly being buried alive with their masters to "keep them company in the afterlife". Until the Song dynasty (960–1276), it was considered a serious breach of social ethics to promote a concubine to a wife.
During the Qing dynasty (1644–1911), the status of concubines improved. It became permissible to promote a concubine to wife, if the original wife had died and the concubine was the mother of the only surviving sons. Moreover, the prohibition against forcing a widow to remarry was extended to widowed concubines. During this period tablets for concubine-mothers seem to have been more commonly placed in family ancestral altars, and genealogies of some lineages listed concubine-mothers.
Imperial concubines, kept by emperors in the Forbidden City, had different ranks and were traditionally guarded by eunuchs to ensure that they could not be impregnated by anyone but the emperor. In Ming China (1368–1644) there was an official system to select concubines for the emperor. The age of the candidates ranged mainly from 14 to 16. Virtues, behavior, character, appearance and body condition were the selection criteria.
Despite the limitations imposed on Chinese concubines, there are several examples in history and literature of concubines who achieved great power and influence. Lady Yehenara, otherwise known as Empress Dowager Cixi, was arguably one of the most successful concubines in Chinese history. Cixi first entered the court as a concubine to Xianfeng Emperor and gave birth to his only surviving son, who later became Tongzhi Emperor. She eventually became the "de facto" ruler of Qing China for 47 years after her husband's death.
Concubinage features prominently in one of the Four Great Classical Novels, "Dream of the Red Chamber" (believed to be a semi-autobiographical account of author Cao Xueqin's family life). Three generations of the Jia family are supported by one notable concubine of the emperor, Jia Yuanchun, the full elder sister of the male protagonist Jia Baoyu. In contrast, their younger half-siblings by the concubine Zhao, Jia Tanchun and Jia Huan, develop distorted personalities because they are the children of a concubine.
Emperors' concubines and harems are emphasized in 21st-century romantic novels written for female readers and set in ancient times. As a plot element, the children of concubines are depicted with a status much inferior to that in actual history. The zhai dou (residential intrigue) and gong dou (harem intrigue) genres show concubines and wives, as well as their children, scheming secretly to gain power. "Empresses in the Palace", a "gong dou" novel and TV drama, has had great success in 21st-century China.
Hong Kong officially abolished the Great Qing Legal Code in 1971, thereby making concubinage illegal. Casino magnate Stanley Ho of Macau took his "second wife" as his official concubine in 1957, while his "third and fourth wives" retain no official status.
Before monogamy was legally imposed in the Meiji period, concubinage was common among the nobility. Its purpose was to ensure male heirs. For example, the son of an Imperial concubine often had a chance of becoming emperor. Yanagihara Naruko, a high-ranking concubine of Emperor Meiji, gave birth to Emperor Taishō, who was later legally adopted by Empress Haruko, Emperor Meiji's formal wife. Even among merchant families, concubinage was occasionally used to ensure heirs. Asako Hirooka, an entrepreneur who was the daughter of a concubine, worked hard to help her husband's family survive after the Meiji Restoration. She lost her fertility giving birth to her only daughter, Kameko; so her husband—with whom she got along well—took Asako's maid-servant as a concubine and fathered three daughters and a son with her. Kameko, as the child of the formal wife, married a noble man and matrilineally carried on the family name.
A Samurai could take concubines but their backgrounds were checked by higher-ranked samurai. In many cases, taking a concubine was akin to a marriage. Kidnapping a concubine, although common in fiction, would have been shameful, if not criminal. If the concubine was a commoner, a messenger was sent with betrothal money or a note for exemption of tax to ask for her parents' acceptance. Even though the woman would not be a legal wife, a situation normally considered a demotion, many wealthy merchants believed that being the concubine of a samurai was superior to being the legal wife of a commoner. When a merchant's daughter married a samurai, her family's money erased the samurai's debts, and the samurai's social status improved the standing of the merchant family. If a samurai's commoner concubine gave birth to a son, the son could inherit his father's social status.
Concubines sometimes wielded significant influence. Nene, wife of Toyotomi Hideyoshi, was known to overrule her husband's decisions at times and Yodo-dono, his concubine, became the "de facto" master of Osaka castle and the Toyotomi clan after Hideyoshi's death.
Joseon monarchs kept a harem which contained concubines of different ranks. Empress Myeongseong managed to have sons, preventing the sons of concubines from gaining power.
Children of concubines were often regarded as less desirable in marriage. A concubine's daughter could not marry a wife-born son of the same class. For example, Jang Nok-su, the concubine-born daughter of a mayor, was initially married to a slave-servant and later became a high-ranking concubine of Yeonsangun.
Polygyny was common among Vikings, and rich and powerful Viking men tended to have many wives and concubines. Viking men would often buy or capture women and make them into their wives or concubines. Researchers have suggested that Vikings may have originally started sailing and raiding due to a need to seek out women from foreign lands. Polygynous relationships in Viking society may have led to a shortage of eligible women for the average male, because polygyny increases male-male competition by creating a pool of unmarried men willing to engage in risky status-elevating and sex-seeking behaviors. As a result, the average Viking man could have been forced to take greater risks to gain the wealth and power needed to find suitable women. The concept was expressed in the 11th century by the historian Dudo of Saint-Quentin in his semi-imaginary "History of The Normans". The Annals of Ulster depicts raptio and states that in 821 the Vikings plundered an Irish village and "carried off a great number of women into captivity".
While most Ancient Egyptians were monogamous, a male pharaoh would have had other, lesser wives and concubines in addition to the Great Royal Wife. This arrangement would allow the pharaoh to enter into diplomatic marriages with the daughters of allies, as was the custom of ancient kings. Concubinage was a common occupation for women in ancient Egypt, especially for talented women. A request for forty concubines by Amenhotep III (c. 1386-1353 BCE) to a man named Milkilu, Prince of Gezer, states: "Behold, I have sent you Hanya, the commissioner of the archers, with merchandise in order to have beautiful concubines, i.e. weavers. Silver, gold, garments, all sort of precious stones, chairs of ebony, as well as all good things, worth 160 deben. In total: forty concubines - the price of every concubine is forty of silver. Therefore, send very beautiful concubines without blemish." (Lewis, 146) Concubines would be kept in the pharaoh's harem. Amenhotep III kept his concubines in his palace at Malkata, which was one of the most opulent in the history of Egypt. The king was considered to be deserving of many women as long as he cared for his Great Royal Wife as well.
In Ancient Greece the practice of keeping a concubine ("pallakís") was common among the upper classes. Concubines were for the most part women who were slaves or foreigners, but occasionally free-born women placed by family arrangement (typically from poor families). Children produced by slave concubines remained slaves, while the status of children by non-slave concubines varied over time; sometimes they had the possibility of citizenship. The law prescribed that a man could kill another man caught attempting a relationship with his concubine. By the mid 4th century concubines could inherit property, but, like wives, they were treated as sexual property. While references to the sexual exploitation of maidservants appear in literature, it was considered disgraceful for a man to keep such women under the same roof as his wife. Apollodorus of Acharnae said that "hetaera" were concubines when they had a permanent relationship with a single man, but nonetheless used the two terms interchangeably.
Concubinage was an institution practiced in ancient Rome that allowed a man to enter into an informal but recognized relationship with a woman ("concubina", plural "concubinae") who was not his wife, most often a woman whose lower social status was an obstacle to marriage. Concubinage was "tolerated to the degree that it did not threaten the religious and legal integrity of the family". It was not considered derogatory to be called a "concubina", as the title was often inscribed on tombstones.
In Judaism, a concubine is a marital companion of inferior status to a wife. Among the Israelites, men commonly acknowledged their concubines, and such women enjoyed the same rights in the house as legitimate wives.
The term concubine did not necessarily refer to women after the first wife. A man could have many wives and concubines. Legally, any children born to a concubine were considered to be the children of the wife she was under. Sarah had to get Ishmael out of her house because legally Ishmael would always be the first born son even though Isaac was her natural child.
The concubine may not have commanded the same degree of respect as the wife. In the Levitical rules on sexual relations, the Hebrew word that is commonly translated as "wife" is distinct from the Hebrew word that means "concubine". However, on at least one other occasion the term is used to refer to a woman who is not a wife, specifically the handmaiden of Jacob's wife. In the Levitical code, sexual intercourse between a man and a wife of a different man was forbidden and punishable by death for both persons involved. Since it was regarded as the highest blessing to have many children, wives often gave their maids to their husbands if they were barren, as in the cases of Sarah and Hagar, and Rachel and Bilhah. The children of the concubine often had equal rights with those of the wife; for example, King Abimelech was the son of Gideon and his concubine. Later biblical figures such as Gideon and Solomon had concubines in addition to many childbearing wives. For example, the Books of Kings say that Solomon had 700 wives and 300 concubines.
The account of the unnamed Levite in Judges 19–20 shows that the taking of concubines was not the exclusive preserve of kings or patriarchs in Israel during the time of the Judges, and that the rape of a concubine was completely unacceptable to the Israelite nation and led to a civil war. In the story, the Levite appears to be an ordinary member of the tribe, whose concubine was a woman from Bethlehem in Judah. This woman was unfaithful, and eventually abandoned him to return to her paternal household. However, after four months, the Levite, referred to as her husband, decided to travel to her father's house to persuade his concubine to return. She is amenable to returning with him, and the father-in-law is very welcoming. The father-in-law convinces the Levite to remain several additional days, until the party leaves behind schedule in the late evening. The group pass up a nearby non-Israelite town to arrive very late in the city of Gibeah, which is in the land of the Benjaminites. The group sit around the town square, waiting for a local to invite them in for the evening, as was the custom for travelers. A local old man invites them to stay in his home, offering them guest right by washing their feet and offering them food. A band of wicked townsmen attack the house and demand the host send out the Levite man so they can have sex with him. The host offers to send out his virgin daughter as well as the Levite's concubine for them to rape, to avoid breaking guest right towards the Levite. Eventually, to ensure his own safety and that of his host, the Levite gives the men his concubine, who is raped and abused through the night, until she is left collapsed against the front door at dawn. In the morning, the Levite finds her when he tries to leave. When she fails to respond to her husband's order to get up, most likely because she is dead, the Levite places her on his donkey and continues home. Once home, he dismembers her body and distributes the 12 parts throughout the nation of Israel. The Israelites gather to learn why they were sent such grisly gifts, and are told of the sadistic rape of his concubine by the Levite. The crime is considered outrageous by the Israelite tribesmen, who then wreak total retribution on the men of Gibeah, as well as the surrounding tribe of Benjamin when they support the Gibeans, killing them without mercy and burning all their towns. The inhabitants of (the town of) Jabesh Gilead are then slaughtered as a punishment for not joining the eleven tribes in their war against the Benjaminites, and their four hundred unmarried daughters given in forced marriage to the six hundred Benjamite survivors. Finally, the two hundred Benjaminite survivors who still have no wives are granted a mass marriage by abduction by the other tribes.
In Judaism, concubines are referred to by the Hebrew term pilegesh. The term is a loanword from Ancient Greek meaning "a mistress staying in house".
According to the Babylonian Talmud, the difference between a concubine and a legitimate wife was that the latter received a ketubah and her marriage ("nissu'in") was preceded by an erusin ("formal betrothal"), which was not the case for a concubine. One opinion in the Jerusalem Talmud argues that the concubine should also receive a "marriage contract", but without a clause specifying a divorce settlement. According to Rashi, "wives with kiddushin and ketubbah, concubines with kiddushin but without ketubbah"; this reading is from the Jerusalem Talmud.
Certain Jewish thinkers, such as Maimonides, believed that concubines were strictly reserved for royal leadership and thus that a commoner may not have a concubine. Indeed, such thinkers argued that commoners may not engage in any type of sexual relations outside of a marriage.
Maimonides was not the first Jewish thinker to criticise concubinage. For example, Leviticus Rabbah severely condemns the custom. Other Jewish thinkers, such as Nahmanides, Samuel ben Uri Shraga Phoebus, and Jacob Emden, strongly objected to the idea that concubines should be forbidden.
In the Hebrew of the contemporary State of Israel, "pilegesh" is often used as the equivalent of the English word "mistress"—i.e., the female partner in extramarital relations—regardless of legal recognition. Attempts have been initiated to popularise "pilegesh" as a form of premarital, non-marital or extramarital relationship (which, according to the perspective of the enacting person(s), is permitted by Jewish law).
The permissibility of concubinage under Islamic sexual jurisprudence is disputed; some Muslim scholars vehemently reject it, on the grounds that any sex outside marriage is prohibited. Historically, however, concubinage was not considered prostitution and was very common during the Arab slave trade throughout the Middle Ages and early modern period, when women and girls from the Caucasus, Africa, Central Asia and Europe were captured and served as concubines in the harems of the Arab World. Ibn Battuta tells us several times that he was given or purchased female slaves.
Classical Islamic law permitted and regulated concubinage. Al-Muminun 6 and Al-Maarij 30 both, in identical wording, draw a distinction between spouses and "those whom one's right hands possess" (concubines), saying "أَزْوَاجِهِمْ أَوْ مَا مَلَكَتْ أَيْمَانُهُمْ" (literally, "their spouses or what their right hands possess"), while clarifying that sexual intercourse with either is permissible. However, the literal wording of both these surahs does not specifically use the term "wife" but instead the more general, gender-inclusive term "spouse" in the grammatically masculine plural (azwajihim). Mohammad Asad, in his commentary on both these surahs, therefore rules out concubinage, arguing that "since the term azwaj ("spouses"), too, denotes both the male and the female partners in marriage, there is no reason for attributing to the phrase aw ma malakat aymanuhum the meaning of "their female slaves"; and since, on the other hand, it is out of the question that female and male slaves could have been referred to here, it is obvious that this phrase does not relate to slaves at all, but has the same meaning as in 4:24 namely, "those whom they rightfully possess through wedlock" with the significant difference that in the present context this expression relates to both husbands and wives, who "rightfully possess" one another by virtue of marriage". Following this approach, Mohammad Asad's translation of the verses in question gives a different picture: "with any but their spouses - that is, those whom they rightfully possess [through wedlock]". Sayyid Abul Ala Maududi, by contrast, explains that "two categories of women have been excluded from the general command of guarding the private parts: (a) wives, (b) women who are legally in one's possession". "Concubine" ("surriyya") refers to the female slave ("jāriya"), whether Muslim or non-Muslim, with whom her master engages in sexual intercourse. The word "surriyya" is not mentioned in the Qur'an. However, the expression "Ma malakat aymanukum" (that which your right hands own), which occurs fifteen times in the sacred book, refers to slaves and therefore, though not necessarily, to concubines. Concubinage was a pre-Islamic custom that was allowed to continue under Islam; a master could take a Jewish or other non-Muslim woman as a concubine, and was encouraged to teach and instruct her well, to free her, and to marry her. The rationale given for the recognition of concubinage in Islam is that "it satisfied the sexual desire of the female slaves and thereby prevented the spread of immorality in the Muslim community." Most schools restrict concubinage to a relationship in which the female slave is required to be monogamous to her master (though the master's monogamy to her is not required), but according to Sikainga, "in reality, however, female slaves in many Muslim societies were prey for [male] members of their owners' household, their [owner's male] neighbors, and their [owner's male] guests."
Concubines were common in pre-Islamic Arabia, and when Islam arrived it inherited a society in which concubinage was practised. Islam introduced legal restrictions on concubinage and encouraged manumission. Islam furthermore endorsed educating, freeing and marrying female slaves. Verse 23:6 of the Quran has been interpreted both as permitting sexual intercourse with concubines and as restricting intercourse to marriage, since Islam forbids sexual intercourse outside of marriage.
Children of former concubines were generally declared as legitimate with or without wedlock, and the mother of a free child was considered free upon the death of the male partner.
According to Shia Muslims, Muhammad sanctioned Nikah mut‘ah (fixed-term marriage, called muta'a in Iraq and sigheh in Iran) which has instead been used as a legitimizing cover for sex workers, in a culture where prostitution is otherwise forbidden. Some Western writers have argued that mut'ah approximates prostitution. Julie Parshall writes that mut'ah is legalised prostitution which has been sanctioned by the Twelver Shia authorities. She quotes the Oxford encyclopedia of modern Islamic world to differentiate between marriage (nikah) and Mut'ah, and states that while nikah is for procreation, mut'ah is just for sexual gratification. According to Zeyno Baran, this kind of temporary marriage provides Shi'ite men with a religiously sanctioned equivalent to prostitution. According to Elena Andreeva's observation published in 2007, Russian travellers to Iran consider mut'ah to be "legalized profligacy" which is indistinguishable from prostitution. Religious supporters of mut'ah argue that temporary marriage is different from prostitution for a couple of reasons, including the necessity of iddah in case the couple have sexual intercourse. It means that if a woman marries a man in this way and has sex, she has to wait for a number of months before marrying again and therefore, a woman cannot marry more than 3 or 4 times in a year.
In ancient times, two sources for concubines were permitted under an Islamic regime. Primarily, non-Muslim women taken as prisoners of war were made concubines as happened after the Battle of the Trench, or in numerous later Caliphates.
It was encouraged to manumit slave women who rejected their initial faith and embraced Islam, or to bring them into formal marriage.
According to the rules of Islamic Fiqh, what is "halal" (permitted) by Allah in the Quran cannot be altered by any authority or individual.
It is further clarified that female domestic and organizational employees are not concubines in this era, and hence sex with them is forbidden unless Nikah, Nikah mut‘ah or Nikah Misyar is contracted through the proper channels.
When slavery became institutionalized in the North American colonies, white men, whether or not they were married, sometimes took enslaved women as concubines; children of such unions remained slaves. Marriage between the races was prohibited by law in the colonies and the later United States. Many colonies and states also had laws against miscegenation, or any interracial relations. From 1662 the Colony of Virginia, followed by others, incorporated into law the principle that children took their mother's status, i.e., the principle of "partus sequitur ventrem". This led to generations of multiracial slaves, some of whom were otherwise considered legally white (one-eighth or less African, equivalent to a great-grandparent) before the American Civil War.
In some cases, men had long-term relationships with enslaved women, giving them and their mixed-race children freedom and providing their children with apprenticeships, education and transfer of capital. A relationship between Thomas Jefferson and Sally Hemings is an example of this. Such arrangements were more prevalent in the Southern states during the antebellum years.
In Louisiana and former French territories, a formalized system of concubinage called "plaçage" developed. European men took enslaved or free women of color as mistresses after making arrangements to give them a dowry, house or other transfer of property, and sometimes, if they were enslaved, offering freedom and education for their children. A third class of free people of color developed, especially in New Orleans. Many became educated, artisans and property owners. French-speaking and practicing Catholicism, these women combined French and African-American culture and created an elite between those of European descent and the slaves. Today, descendants of the free people of color are generally called Louisiana Creole people. | https://en.wikipedia.org/wiki?curid=7016 |
Central Plaza (Hong Kong)
Central Plaza is a 78-storey skyscraper completed in August 1992 at 18 Harbour Road, in Wan Chai on Hong Kong Island in Hong Kong. It is the third tallest tower in the city after 2 International Finance Centre in Central and the ICC in West Kowloon. It was the tallest building in Asia from 1992 to 1996, until Shun Hing Square was built in the neighbouring city of Shenzhen. Central Plaza surpassed the Bank of China Tower as the tallest building in Hong Kong and remained so until the completion of 2 IFC.
Central Plaza was also the tallest reinforced concrete building in the world, until it was surpassed by CITIC Plaza, Guangzhou. The building uses a triangular floor plan. On the top of the tower is a four-bar neon clock that indicates the time by displaying different colours for 15-minute periods, blinking at the change of the quarter.
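The quarter-hour behaviour of the clock described above can be illustrated with a short sketch. This is only a hedged illustration of the stated behaviour: the article does not name the actual colours used on the tower, so the palette below is hypothetical, and only the 15-minute-period logic and the blink at each quarter-hour change are taken from the description.

```python
from datetime import datetime

# Hypothetical palette: the article states only that the four-bar neon display
# shows a different colour for each 15-minute period; the real colours are not
# specified here, so these names are placeholders.
QUARTER_COLOURS = ["colour_0", "colour_1", "colour_2", "colour_3"]

def clock_display(now: datetime) -> dict:
    """Map a time of day to the display state described in the article:
    one colour per 15-minute period, with a blink exactly on each quarter-hour."""
    quarter = now.minute // 15                      # 0..3 within the current hour
    blink = now.minute % 15 == 0 and now.second == 0
    return {"colour": QUARTER_COLOURS[quarter], "blink": blink}

# Example: at 18:45:00 the display shows the fourth period's colour and blinks.
print(clock_display(datetime(2024, 1, 1, 18, 45, 0)))
```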
An anemometer is installed on the tip of the building's mast, at above sea level. The mast has a height of . It also houses the world's highest church inside a skyscraper, Sky City Church.
The land upon which Central Plaza sits was reclaimed from Victoria Harbour in the 1970s. The site was auctioned off by the Hong Kong Government at City Hall Theatre on 25 January 1989. It was sold for a record HK$3.35 billion to a joint venture called "Cheer City Properties", owned 50 per cent by Sun Hung Kai Properties and 50 per cent by fellow real estate conglomerate Sino Land and their major shareholder the Ng Teng Fong family. A third developer, Ryoden Development, joined the consortium afterward. Ryoden Development disposed of its 5% interest in 1995, in exchange for 190,790 square feet of office space in New Kowloon Plaza from Sun Hung Kai.
The first major tenant to sign a lease was the Provisional Airport Authority, who on 2 August 1991 agreed to lease the 24th to 26th floors. A topping-out ceremony, presided over by Sir David Ford, was held on 9 April 1992.
Central Plaza is made up of two principal components: a free-standing office tower and a podium block attached to it. The tower comprises three sections: a tower base forming the main entrance and public circulation spaces; a tower body containing 57 office floors, a sky lobby and five mechanical plant floors; and a tower top consisting of six mechanical plant floors and a tower mast.
The ground-level public area, along with the public sitting-out area, forms a landscaped garden with a fountain, trees and artificial stone paving. No commercial element is included in the podium. The first level is a public thoroughfare for three pedestrian bridges linking the Mass Transit Railway, the Convention and Exhibition Centre and the China Resources Building. By turning these spaces over to public use, the building gained a 20% bonus plot ratio. The shape of the tower is not truly triangular: its three corners are cut off to provide better internal office space.
Central Plaza was designed by the Hong Kong architectural firm Ng Chun Man and Associates and engineered by Arup. The main contractor was a joint venture, comprising the contracting firms Sanfield (a subsidiary of Sun Hung Kai) and Tat Lee, called Manloze Ltd.
The building was designed to be triangular in shape because this would allow 20% more of the office area to enjoy the harbour view compared with a square or rectangular building. From an architectural point of view, this arrangement provides better floor area utilisation, offering an internal column-free office area with a clear depth of and an overall usable floor area efficiency of 81%.
Nonetheless, the triangular building plan causes the air handling unit (AHU) room in the internal core to also assume a triangular configuration. With only limited space, this makes the adoption of a standard AHU not feasible. Furthermore, all air-conditioning ducting, electrical trunking and piping gathered inside the core area has to be squeezed into a very narrow and congested corridor ceiling void.
As the building is situated opposite the Hong Kong Convention and Exhibition Centre, the only way to gain more sea views unobstructed by the neighbouring high-rise buildings was to build the tower tall enough. However, a tall building brings many difficulties to structural and building services design, for example excessive static pressure in the water systems, high line voltage drop and long vertical transportation distances. All these problems can increase the capital cost of the building systems and impair the safe operation of the building.
As a general practice, for achieving a clear height of , a floor-to-floor height of would be required. However, because of the high wind load on such a super high-rise building in Hong Kong, every additional metre of building height increases the structural cost by more than HK$1 million (HK$304,800 per ft). Therefore, a comprehensive study was conducted and a floor height of was finally adopted. From this decision alone, the estimated construction cost saving over the 58 office floors was around HK$30 million. At the same time, a maximum ceiling height of in the office areas could still be achieved with careful coordination and dedicated integration.
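As a rough cross-check of the figures quoted above (a sketch only, since the exact floor heights are not stated here), the saving scales with the number of floors, the height saved per floor, and the cost per metre of height:

$$\text{saving} \approx N_{\text{floors}} \times \Delta h \times c = 58 \times \Delta h \times (\text{HK\$1 million per metre}),$$

so a total saving of about HK$30 million implies a per-floor reduction on the order of $\Delta h \approx 30/58 \approx 0.5$ m, or somewhat less if the cost per metre exceeds HK$1 million.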
A steel structure is more commonly adopted in high-rise buildings. In the original scheme, an externally cross-braced framed tube was proposed, with primary and secondary beams carrying metal decking and a reinforced concrete slab. The core was also of steelwork, designed to carry vertical load only. Later, after a financial review, the developer decided to reduce the height of the superstructure by increasing the size of the floor plate, so as to reduce the complex architectural requirements of the tower base, which meant that a high-strength concrete solution became possible.
In the final scheme, columns at centres and floor edge beams were used to replace the large steel corner columns. Because climbing-form and table-form construction methods and efficient construction management were used on this project, the reinforced concrete structure took no longer to construct than the steel structure would have. Most attractively, the reinforced concrete scheme saved HK$230 million compared with the steel structure. The reinforced concrete structure was therefore adopted, and Central Plaza is now one of the tallest reinforced concrete buildings in the world.
In the reinforced concrete structure scheme, the core has a similar arrangement to the steel scheme and the wind shear is taken out from the core at the lowest basement level and transferred to the perimeter diaphragm walls. In order to reduce large shear reversals in the core walls in the basement, and at the top of the tower base level, the ground floor, basement levels 1 and 2 and the 5th and 6th floors, the floor slabs and beams are separated horizontally from the core walls.
Another advantage of using reinforced concrete structure is that it is more flexible to cope with changes in structural layout, sizes and height according to the site conditions by using table form system.
This skyscraper was visited in the seventh leg of the reality TV show "The Amazing Race 2", which described Central Plaza as "the tallest building in Hong Kong". Although contestants were told to reach the top floor, the actual task was performed on the 46th floor. | https://en.wikipedia.org/wiki?curid=7017 |
Caravaggio
Michelangelo Merisi (Michele Angelo Merigi or Amerighi) da Caravaggio (29 September 1571 – 18 July 1610) was an Italian painter active in Rome for most of his artistic life. During the final four years of his life he moved between Naples, Malta, and Sicily until his death. His paintings combine a realistic observation of the human state, both physical and emotional, with a dramatic use of lighting, which had a formative influence on Baroque painting.
Caravaggio employed close physical observation with a dramatic use of chiaroscuro that came to be known as tenebrism. He made the technique a dominant stylistic element, darkening shadows and transfixing subjects in bright shafts of light. Caravaggio vividly expressed crucial moments and scenes, often featuring violent struggles, torture, and death. He worked rapidly, with live models, preferring to forgo drawings and work directly onto the canvas. His influence on the new Baroque style that emerged from Mannerism was profound. It can be seen directly or indirectly in the work of Peter Paul Rubens, Jusepe de Ribera, Gian Lorenzo Bernini, and Rembrandt, and artists in the following generation heavily under his influence were called the "Caravaggisti" or "Caravagesques", as well as tenebrists or "tenebrosi" ("shadowists").
Caravaggio trained as a painter in Milan before moving in his twenties to Rome. He developed a considerable name as an artist, and as a violent, touchy and provocative man. A brawl led to a death sentence for murder and forced him to flee to Naples. There he again established himself as one of the most prominent Italian painters of his generation. He traveled in 1607 to Malta and on to Sicily, and pursued a papal pardon for his sentence. In 1609 he returned to Naples, where he was involved in a violent clash; his face was disfigured and rumours of his death circulated. Questions about his mental state arose from his erratic and bizarre behavior. He died in 1610 under uncertain circumstances while on his way from Naples to Rome. Reports stated that he died of a fever, but suggestions have been made that he was murdered or that he died of lead poisoning.
Caravaggio's innovations inspired Baroque painting, but the Baroque incorporated the drama of his chiaroscuro without the psychological realism. The style evolved and fashions changed, and Caravaggio fell out of favor. In the 20th century interest in his work revived, and his importance to the development of Western art was reevaluated. The 20th-century art historian André Berne-Joffroy stated, "What begins in the work of Caravaggio is, quite simply, modern painting."
Caravaggio (Michelangelo Merisi or Amerighi) was born in Milan, where his father, Fermo (Fermo Merixio), was a household administrator and architect-decorator to the Marchese of Caravaggio, a town not far from the city of Bergamo. In 1576 the family moved to Caravaggio (Caravaggius) to escape a plague that ravaged Milan, and Caravaggio's father and grandfather both died there on the same day in 1577. It is assumed that the artist grew up in Caravaggio, but his family kept up connections with the Sforzas and with the powerful Colonna family, who were allied by marriage with the Sforzas and destined to play a major role later in Caravaggio's life.
Caravaggio's mother died in 1584, the same year he began his four-year apprenticeship to the Milanese painter Simone Peterzano, described in the contract of apprenticeship as a pupil of Titian. Caravaggio appears to have stayed in the Milan-Caravaggio area after his apprenticeship ended, but it is possible that he visited Venice and saw the works of Giorgione, whom Federico Zuccari later accused him of imitating, and Titian. He would also have become familiar with the art treasures of Milan, including Leonardo da Vinci's "Last Supper", and with the regional Lombard art, a style that valued simplicity and attention to naturalistic detail and was closer to the naturalism of Germany than to the stylised formality and grandeur of Roman Mannerism.
Following his initial training under Simone Peterzano, in 1592 Caravaggio left Milan for Rome, in flight after "certain quarrels" and the wounding of a police officer. The young artist arrived in Rome "naked and extremely needy ... without fixed address and without provision ... short of money." During this period he stayed with the miserly Pandolfo Pucci, known as "monsignor Insalata". A few months later he was performing hack-work for the highly successful Giuseppe Cesari, Pope Clement VIII's favourite artist, "painting flowers and fruit" in his factory-like workshop.
In Rome there was demand for paintings to fill the many huge new churches and palazzi being built at the time. It was also a period when the Church was searching for a stylistic alternative to Mannerism in religious art that was tasked to counter the threat of Protestantism. Caravaggio's innovation was a radical naturalism that combined close physical observation with a dramatic, even theatrical, use of chiaroscuro that came to be known as tenebrism (the shift from light to dark with little intermediate value).
Known works from this period include a small "Boy Peeling a Fruit" (his earliest known painting), a "Boy with a Basket of Fruit", and the "Young Sick Bacchus", supposedly a self-portrait done during convalescence from a serious illness that ended his employment with Cesari. All three demonstrate the physical particularity for which Caravaggio was to become renowned: the fruit-basket-boy's produce has been analysed by a professor of horticulture, who was able to identify individual cultivars right down to "... a large fig leaf with a prominent fungal scorch lesion resembling anthracnose ("Glomerella cingulata")."
Caravaggio left Cesari, determined to make his own way after a heated argument. At this point he forged some extremely important friendships, with the painter Prospero Orsi, the architect Onorio Longhi, and the sixteen-year-old Sicilian artist Mario Minniti. Orsi, established in the profession, introduced him to influential collectors; Longhi, more balefully, introduced him to the world of Roman street-brawls. Minniti served Caravaggio as a model and, years later, would be instrumental in helping him to obtain important commissions in Sicily. Ostensibly, the first archival reference to Caravaggio in a contemporary document from Rome is the listing of his name, with that of Prospero Orsi as his partner, as an 'assistante' in a procession in October 1594 in honour of St. Luke. The earliest informative account of his life in the city is a court transcript dated 11 July 1597, when Caravaggio and Prospero Orsi were witnesses to a crime near San Luigi de' Francesi.
An early published notice on Caravaggio, dating from 1604 and describing his lifestyle three years previously, recounts that "after a fortnight's work he will swagger about for a month or two with a sword at his side and a servant following him, from one ball-court to the next, ever ready to engage in a fight or an argument, so that it is most awkward to get along with him." In 1606 he killed a young man in a brawl, possibly unintentionally, and fled from Rome with a death sentence hanging over him.
"The Fortune Teller", his first composition with more than one figure, shows a boy, likely Minniti, having his palm read by a gypsy girl, who is stealthily removing his ring as she strokes his hand. The theme was quite new for Rome, and proved immensely influential over the next century and beyond. This, however, was in the future: at the time, Caravaggio sold it for practically nothing. "The Cardsharps"—showing another naïve youth of privilege falling the victim of card cheats—is even more psychologically complex, and perhaps Caravaggio's first true masterpiece. Like "The Fortune Teller", it was immensely popular, and over 50 copies survive. More importantly, it attracted the patronage of Cardinal Francesco Maria del Monte, one of the leading connoisseurs in Rome. For Del Monte and his wealthy art-loving circle, Caravaggio executed a number of intimate chamber-pieces—"The Musicians", "The Lute Player", a tipsy "Bacchus", an allegorical but realistic "Boy Bitten by a Lizard"—featuring Minniti and other adolescent models.
Caravaggio's first paintings on religious themes combined his realism with the emergence of a remarkable spirituality. The first of these was the "Penitent Magdalene", showing Mary Magdalene at the moment when she has turned from her life as a courtesan and sits weeping on the floor, her jewels scattered around her. "It seemed not a religious painting at all ... a girl sitting on a low wooden stool drying her hair ... Where was the repentance ... suffering ... promise of salvation?" It was understated, in the Lombard manner, not histrionic in the Roman manner of the time. It was followed by others in the same style: "Saint Catherine"; "Martha and Mary Magdalene"; "Judith Beheading Holofernes"; a "Sacrifice of Isaac"; a "Saint Francis of Assisi in Ecstasy"; and a "Rest on the Flight into Egypt". These works, while viewed by a comparatively limited circle, increased Caravaggio's fame with both connoisseurs and his fellow artists. But a true reputation would depend on public commissions, and for these it was necessary to look to the Church.
Already evident was the intense realism or naturalism for which Caravaggio is now famous. He preferred to paint his subjects as the eye sees them, with all their natural flaws and defects instead of as idealised creations. This allowed a full display of his virtuosic talents. This shift from accepted standard practice and the classical idealism of Michelangelo was very controversial at the time. Caravaggio also dispensed with the lengthy preparations then traditional in central Italy. Instead, he preferred the Venetian practice of working in oils directly from the subject—half-length figures and still life. "Supper at Emmaus", from c. 1600–1601, is a characteristic work of this period.
In 1599, presumably through the influence of Del Monte, Caravaggio was contracted to decorate the Contarelli Chapel in the church of San Luigi dei Francesi. The two works making up the commission, "The Martyrdom of Saint Matthew" and "The Calling of Saint Matthew", delivered in 1600, were an immediate sensation. Thereafter he never lacked commissions or patrons.
Caravaggio's tenebrism (a heightened chiaroscuro) brought high drama to his subjects, while his acutely observed realism brought a new level of emotional intensity. Opinion among his artist peers was polarised. Some denounced him for various perceived failings, notably his insistence on painting from life, without drawings, but for the most part he was hailed as a great artistic visionary: "The painters then in Rome were greatly taken by this novelty, and the young ones particularly gathered around him, praised him as the unique imitator of nature, and looked on his work as miracles."
Caravaggio went on to secure a string of prestigious commissions for religious works featuring violent struggles, grotesque decapitations, torture and death, most notable and most technically masterful among them "The Taking of Christ" of circa 1602 for the Mattei Family, recently rediscovered in Ireland after two centuries. For the most part each new painting increased his fame, but a few were rejected by the various bodies for whom they were intended, at least in their original forms, and had to be re-painted or find new buyers. The essence of the problem was that while Caravaggio's dramatic intensity was appreciated, his realism was seen by some as unacceptably vulgar.
His first version of "Saint Matthew and the Angel", featuring the saint as a bald peasant with dirty legs attended by a lightly clad over-familiar boy-angel, was rejected and a second version had to be painted as "The Inspiration of Saint Matthew". Similarly, "The Conversion of Saint Paul" was rejected, and while another version of the same subject, the "Conversion on the Way to Damascus", was accepted, it featured the saint's horse's haunches far more prominently than the saint himself, prompting this exchange between the artist and an exasperated official of Santa Maria del Popolo: "Why have you put a horse in the middle, and Saint Paul on the ground?" "Because!" "Is the horse God?" "No, but he stands in God's light!"
Other works included "Entombment", the "Madonna di Loreto" ("Madonna of the Pilgrims"), the "Grooms' Madonna", and the "Death of the Virgin". The history of these last two paintings illustrates the reception given to some of Caravaggio's art, and the times in which he lived. The "Grooms' Madonna", also known as "Madonna dei palafrenieri", painted for a small altar in Saint Peter's Basilica in Rome, remained there for just two days, and was then taken off. A cardinal's secretary wrote: "In this painting there are but vulgarity, sacrilege, impiousness and disgust...One would say it is a work made by a painter that can paint well, but of a dark spirit, and who has been for a lot of time far from God, from His adoration, and from any good thought..."
The "Death of the Virgin", commissioned in 1601 by a wealthy jurist for his private chapel in the new Carmelite church of Santa Maria della Scala, was rejected by the Carmelites in 1606. Caravaggio's contemporary Giulio Mancini records that it was rejected because Caravaggio had used a well-known prostitute as his model for the Virgin. Giovanni Baglione, another contemporary, tells us it was due to Mary's bare legs—a matter of decorum in either case. Caravaggio scholar John Gash suggests that the problem for the Carmelites may have been theological rather than aesthetic, in that Caravaggio's version fails to assert the doctrine of the Assumption of Mary, the idea that the Mother of God did not die in any ordinary sense but was assumed into Heaven. The replacement altarpiece commissioned (from one of Caravaggio's most able followers, Carlo Saraceni), showed the Virgin not dead, as Caravaggio had painted her, but seated and dying; and even this was rejected, and replaced with a work showing the Virgin not dying, but ascending into Heaven with choirs of angels. In any case, the rejection did not mean that Caravaggio or his paintings were out of favour. The "Death of the Virgin" was no sooner taken out of the church than it was purchased by the Duke of Mantua, on the advice of Rubens, and later acquired by Charles I of England before entering the French royal collection in 1671.
One secular piece from these years is "Amor Vincit Omnia", in English also called "Amor Victorious", painted in 1602 for Vincenzo Giustiniani, a member of Del Monte's circle. The model was named in a memoir of the early 17th century as "Cecco", the diminutive for Francesco. He is possibly Francesco Boneri, identified with an artist active in the period 1610–1625 and known as Cecco del Caravaggio ('Caravaggio's Cecco'). In the painting he carries a bow and arrows and tramples symbols of the warlike and peaceful arts and sciences underfoot. He is unclothed, and it is difficult to accept this grinning urchin as the Roman god Cupid—as difficult as it was to accept Caravaggio's other semi-clad adolescents as the various angels he painted in his canvases, wearing much the same stage-prop wings. The point, however, is the intense yet ambiguous reality of the work: it is simultaneously Cupid and Cecco, as Caravaggio's Virgins were simultaneously the Mother of Christ and the Roman courtesans who modeled for them.
Caravaggio led a tumultuous life. He was notorious for brawling, even in a time and place when such behavior was commonplace, and the transcripts of his police records and trial proceedings fill several pages. On 29 May 1606, he killed, possibly unintentionally, a young man named Ranuccio Tomassoni from Terni (Umbria). The circumstances of the brawl and the death of Ranuccio Tomassoni remain mysterious. Several contemporary "avvisi" referred to a quarrel over a gambling debt and a tennis game, and this explanation has become established in the popular imagination. But recent scholarship has made it clear that more was involved. Good modern accounts are to be found in Peter Robb's "M" and Helen Langdon's "Caravaggio: A Life". A theory relating the death to Renaissance notions of honour and symbolic wounding has been advanced by art historian Andrew Graham-Dixon. Whatever the details, it was a serious matter. Previously, his high-placed patrons had protected him from the consequences of his escapades, but this time they could do nothing. Caravaggio, outlawed, fled to Naples.
Following the death of Tomassoni, Caravaggio fled first to the estates of the Colonna family south of Rome, then on to Naples, where Costanza Colonna Sforza, widow of Francesco Sforza, in whose household Caravaggio's father had held a position, maintained a palace. In Naples, outside the jurisdiction of the Roman authorities and protected by the Colonna family, the most famous painter in Rome became the most famous in Naples.
His connections with the Colonnas led to a stream of important church commissions, including the "Madonna of the Rosary", and "The Seven Works of Mercy". "The Seven Works of Mercy" depicts the seven corporal works of mercy as a set of compassionate acts concerning the material needs of others. The painting was made for, and is still housed in, the church of Pio Monte della Misericordia in Naples. Caravaggio combined all seven works of mercy in one composition, which became the church's altarpiece. Alessandro Giardino has also established the connection between the iconography of "The Seven Works of Mercy" and the cultural, scientific and philosophical circles of the painting's commissioners.
Despite his success in Naples, after only a few months in the city Caravaggio left for Malta, the headquarters of the Knights of Malta. Fabrizio Sforza Colonna, Costanza's son, was a Knight of Malta and general of the Order's galleys. He appears to have facilitated Caravaggio's arrival on the island in 1607 (and his escape the next year). Caravaggio presumably hoped that the patronage of Alof de Wignacourt, Grand Master of the Knights of Saint John, could help him secure a pardon for Tomassoni's death. De Wignacourt was so impressed at having the famous artist as official painter to the Order that he inducted him as a Knight, and the early biographer Bellori records that the artist was well pleased with his success.
Major works from his Malta period include the "Beheading of Saint John the Baptist", his largest ever work, and the only painting to which he put his signature, "Saint Jerome Writing" (both housed in Saint John's Co-Cathedral, Valletta, Malta) and a "Portrait of Alof de Wignacourt and his Page", as well as portraits of other leading Knights. According to Andrea Pomella, "The Beheading of Saint John the Baptist" is widely considered "one of the most important works in Western painting." Completed in 1608, the painting had been commissioned by the Knights of Malta as an altarpiece and, measuring 150 by 200 inches, was the largest altarpiece Caravaggio ever painted. It still hangs in St. John's Co-Cathedral, for which it was commissioned and where Caravaggio himself was inducted and briefly served as a knight.
Yet, by late August 1608, he was arrested and imprisoned, likely the result of yet another brawl, this time with an aristocratic knight, during which the door of a house was battered down and the knight seriously wounded. Caravaggio was imprisoned by the Knights at Valletta, but he managed to escape. By December, he had been expelled from the Order "as a foul and rotten member", a formal phrase used in all such cases.
Caravaggio made his way to Sicily where he met his old friend Mario Minniti, who was now married and living in Syracuse. Together they set off on what amounted to a triumphal tour from Syracuse to Messina and, maybe, on to the island capital, Palermo. In Syracuse and Messina Caravaggio continued to win prestigious and well-paid commissions. Among other works from this period are "Burial of St. Lucy", "The Raising of Lazarus", and "Adoration of the Shepherds". His style continued to evolve, now showing friezes of figures isolated against vast empty backgrounds. "His great Sicilian altarpieces isolate their shadowy, pitifully poor figures in vast areas of darkness; they suggest the desperate fears and frailty of man, and at the same time convey, with a new yet desolate tenderness, the beauty of humility and of the meek, who shall inherit the earth." Contemporary reports depict a man whose behaviour was becoming increasingly bizarre, which included sleeping fully armed and in his clothes, ripping up a painting at a slight word of criticism, and mocking local painters.
Caravaggio displayed bizarre behaviour from very early in his career. Mancini describes him as "extremely crazy", a letter of Del Monte notes his strangeness, and Minniti's 1724 biographer says that Mario left Caravaggio because of his behaviour. The strangeness seems to have increased after Malta. Susinno's early-18th-century "Le vite de' pittori Messinesi" ("Lives of the Painters of Messina") provides several colourful anecdotes of Caravaggio's erratic behaviour in Sicily, and these are reproduced in modern full-length biographies such as Langdon and Robb. Bellori writes of Caravaggio's "fear" driving him from city to city across the island and finally, "feeling that it was no longer safe to remain", back to Naples. Baglione says Caravaggio was being "chased by his enemy", but like Bellori does not say who this enemy was.
After only nine months in Sicily, Caravaggio returned to Naples in the late summer of 1609. According to his earliest biographer he was being pursued by enemies while in Sicily and felt it safest to place himself under the protection of the Colonnas until he could secure his pardon from the pope (now Paul V) and return to Rome. In Naples he painted "The Denial of Saint Peter", a final "John the Baptist (Borghese)", and his last picture, "The Martyrdom of Saint Ursula". His style continued to evolve—Saint Ursula is caught in a moment of highest action and drama, as the arrow fired by the king of the Huns strikes her in the breast, unlike earlier paintings that had all the immobility of the posed models. The brushwork was also much freer and more impressionistic.
In October 1609 he was involved in a violent clash, an attempt on his life, perhaps ambushed by men in the pay of the knight he had wounded in Malta or some other faction of the Order. His face was seriously disfigured and rumours circulated in Rome that he was dead. He painted a "Salome with the Head of John the Baptist (Madrid)", showing his own head on a platter, and sent it to de Wignacourt as a plea for forgiveness. Perhaps at this time, he painted also a "David with the Head of Goliath", showing the young David with a strangely sorrowful expression gazing on the severed head of the giant, which is again Caravaggio. This painting he may have sent to his patron, the unscrupulous art-loving Cardinal Scipione Borghese, nephew of the pope, who had the power to grant or withhold pardons. Caravaggio hoped Borghese could mediate a pardon, in exchange for works by the artist.
News from Rome encouraged Caravaggio, and in the summer of 1610 he took a boat northwards to receive the pardon, which seemed imminent thanks to his powerful Roman friends. With him were three last paintings, the gifts for Cardinal Scipione. What happened next is the subject of much confusion and conjecture, shrouded in mystery.
The bare facts seem to be that on 28 July an anonymous "avviso" (private newsletter) from Rome to the ducal court of Urbino reported that Caravaggio was dead. Three days later another "avviso" said that he had died of fever on his way from Naples to Rome. A poet friend of the artist later gave 18 July as the date of death, and a recent researcher claims to have discovered a death notice showing that the artist died on that day of a fever in Porto Ercole, near Grosseto in Tuscany.
Caravaggio had a fever at the time of his death, and what killed him has been a matter of historical debate and study. Historians have long thought he died of syphilis. Some have said he had malaria, or possibly brucellosis from unpasteurised dairy. Some scholars have argued that Caravaggio was actually attacked and killed by the same "enemies" that had been pursuing him since he fled Malta, possibly Wignacourt and/or factions of the Knights.
Human remains found in a church in Porto Ercole in 2010 are believed to almost certainly belong to Caravaggio. The findings come after a year-long investigation using DNA, carbon dating and other analyses. Initial tests suggested Caravaggio might have died of lead poisoning—paints used at the time contained high amounts of lead salts, and Caravaggio is known to have indulged in violent behavior of the kind that lead poisoning can cause. Later research suggested he died as the result of a wound sustained in a brawl in Naples, specifically from sepsis. Recently released Vatican documents (2002) also indicate that fatal wounds may have been sustained as a result of a vendetta, perpetrated after Caravaggio had murdered a love rival in a botched attempt at castration.
Caravaggio never married and had no known children, and Howard Hibbard notes the absence of erotic female figures from the artist's oeuvre: "In his entire career he did not paint a single female nude." On the other hand, the cabinet-pieces from the Del Monte period are replete with "full-lipped, languorous boys ... who seem to solicit the onlooker with their offers of fruit, wine, flowers—and themselves" suggesting an erotic interest in the male form. At the same time, however, a connection with a certain Lena is mentioned in a 1605 court deposition by Pasqualone, where she is described as "Michelangelo's girl". According to G.B. Passeri, this 'Lena' was Caravaggio's model for the "Madonna di Loreto"; and according to Catherine Puglisi, 'Lena' may have been the same person as the courtesan Maddalena di Paolo Antognetti, who named Caravaggio as an "intimate friend" in her own testimony in 1604. Caravaggio also probably enjoyed close relationships with other "whores and courtesans" such as Fillide Melandroni, of whom he painted a portrait.
Nevertheless, since the 1970s both art scholars and historians have debated the inferences of homoeroticism in Caravaggio's works as a way to better understand the man. The model of "Amor vincit omnia", for example, is known to have been Cecco di Caravaggio. Cecco stayed with Caravaggio even after he was obliged to leave Rome in 1606, and the two may have been lovers.
Caravaggio's sexuality also received early speculation due to claims about the artist by Honoré Gabriel Riqueti, comte de Mirabeau. Writing in 1783, Mirabeau contrasted the personal life of Caravaggio directly with the writings of St Paul in the Book of Romans, arguing that "Romans" excessively practised sodomy or homosexuality. The Holy Mother Catholic Church teachings on morality (and so on; short book title) contains the Latin phrase "Et fœminæ eorum immutaverunt naturalem usum in eum usum qui est contra naturam." The phrase, according to Mirabeau, entered Caravaggio's thoughts, and he claimed that such an "abomination" could be witnessed through a particular painting housed at the Museum of the Grand Duke of Tuscany—featuring a rosary of a blasphemous nature, in which a circle of thirty men ("turpiter ligati") are intertwined in embrace and presented in unbridled composition. Mirabeau notes that the affectionate nature of Caravaggio's depiction reflects the voluptuous glow of the artist's sexuality. By the late nineteenth century, Sir Richard Francis Burton identified the painting as Caravaggio's painting of St. Rosario. Burton also identified both St. Rosario and this painting with the practices of Tiberius mentioned by Seneca the Younger. The survival status and location of Caravaggio's painting are unknown. No such painting appears in his or his school's catalogues.
Aside from the paintings, evidence also comes from the libel trial brought against Caravaggio by Giovanni Baglione in 1603. Baglione accused Caravaggio and his friends of writing and distributing scurrilous doggerel attacking him; the pamphlets, according to Baglione's friend and witness Mao Salini, had been distributed by a certain Giovanni Battista, a "bardassa," or boy prostitute, shared by Caravaggio and his friend Onorio Longhi. Caravaggio denied knowing any young boy of that name, and the allegation was not followed up.
Baglione's painting of "Divine Love" has also been seen as a visual accusation of sodomy against Caravaggio. Such accusations were damaging and dangerous as sodomy was a capital crime at the time. Even though the authorities were unlikely to investigate such a well-connected person as Caravaggio, "Once an artist had been smeared as a pederast, his work was smeared too." Francesco Susinno in his later biography additionally relates the story of how the artist was chased by a school-master in Sicily for spending too long gazing at the boys in his care. Susinno presents it as a misunderstanding, but Caravaggio may indeed have been seeking sexual solace; and the incident could explain one of his most homoerotic paintings: his last depiction of St John the Baptist.
The art historian Andrew Graham-Dixon has summarised the debate:
A lot has been made of Caravaggio's presumed homosexuality, which has in more than one previous account of his life been presented as the single key that explains everything, both the power of his art and the misfortunes of his life. There is no absolute proof of it, only strong circumstantial evidence and much rumour. The balance of probability suggests that Caravaggio did indeed have sexual relations with men. But he certainly had female lovers. Throughout the years that he spent in Rome he kept close company with a number of prostitutes. The truth is that Caravaggio was as uneasy in his relationships as he was in most other aspects of life. He likely slept with men. He did sleep with women. He settled with no one... [but] the idea that he was an early martyr to the drives of an unconventional sexuality is an anachronistic fiction.
Caravaggio "put the oscuro (shadows) into chiaroscuro." Chiaroscuro was practiced long before he came on the scene, but it was Caravaggio who made the technique a dominant stylistic element, darkening the shadows and transfixing the subject in a blinding shaft of light. With this came the acute observation of physical and psychological reality that formed the ground both for his immense popularity and for his frequent problems with his religious commissions.
He worked at great speed, from live models, scoring basic guides directly onto the canvas with the end of the brush handle; very few of Caravaggio's drawings appear to have survived, and it is likely that he preferred to work directly on the canvas. The approach was anathema to the skilled artists of his day, who decried his refusal to work from drawings and to idealise his figures. Yet the models were basic to his realism. Some have been identified, including Mario Minniti and Francesco Boneri, both fellow artists, Minniti appearing as various figures in the early secular works, the young Boneri as a succession of angels, Baptists and Davids in the later canvases. His female models include Fillide Melandroni, Anna Bianchini, and Maddalena Antognetti (the "Lena" mentioned in court documents of the "artichoke" case as Caravaggio's concubine), all well-known prostitutes, who appear as female religious figures including the Virgin and various saints. Caravaggio himself appears in several paintings, his final self-portrait being as the witness on the far right to the "Martyrdom of Saint Ursula".
Caravaggio had a noteworthy ability to express in one scene of unsurpassed vividness the passing of a crucial moment. "The Supper at Emmaus" depicts the recognition of Christ by his disciples: a moment before he is a fellow traveler, mourning the passing of the Messiah, as he never ceases to be to the inn-keeper's eyes; the second after, he is the Saviour. In "The Calling of St Matthew", the hand of the Saint points to himself as if he were saying "who, me?", while his eyes, fixed upon the figure of Christ, have already said, "Yes, I will follow you". With "The Raising of Lazarus", he goes a step further, giving us a glimpse of the actual physical process of resurrection. The body of Lazarus is still in the throes of rigor mortis, but his hand, facing and recognising that of Christ, is alive. Other major Baroque artists would travel the same path, for example Bernini, fascinated with themes from Ovid's "Metamorphoses".
The installation of the St. Matthew paintings in the Contarelli Chapel had an immediate impact among the younger artists in Rome, and Caravaggism became the cutting edge for every ambitious young painter. The first Caravaggisti included Orazio Gentileschi and Giovanni Baglione. Baglione's Caravaggio phase was short-lived; Caravaggio later accused him of plagiarism and the two were involved in a long feud. Baglione went on to write the first biography of Caravaggio. In the next generation of Caravaggisti there were Carlo Saraceni, Bartolomeo Manfredi and Orazio Borgianni. Gentileschi, despite being considerably older, was the only one of these artists to live much beyond 1620, and ended up as court painter to Charles I of England. His daughter Artemisia Gentileschi was also stylistically close to Caravaggio, and one of the most gifted of the movement. Yet in Rome and in Italy it was not Caravaggio, but the influence of his rival Annibale Carracci, blending elements from the High Renaissance and Lombard realism, which ultimately triumphed.
Caravaggio's brief stay in Naples produced a notable school of Neapolitan Caravaggisti, including Battistello Caracciolo and Carlo Sellitto. The Caravaggisti movement there ended with a terrible outbreak of plague in 1656, but the Spanish connection—Naples was a possession of Spain—was instrumental in forming the important Spanish branch of his influence.
A group of Catholic artists from Utrecht, the "Utrecht Caravaggisti", travelled to Rome as students in the first years of the 17th century and were profoundly influenced by the work of Caravaggio, as Bellori describes. On their return to the north this trend had a short-lived but influential flowering in the 1620s among painters like Hendrick ter Brugghen, Gerrit van Honthorst, Andries Both and Dirck van Baburen. In the following generation the effects of Caravaggio, although attenuated, are to be seen in the work of Rubens (who purchased one of his paintings for the Gonzaga of Mantua and painted a copy of the "Entombment of Christ"), Vermeer, Rembrandt and Velázquez, the last of whom presumably saw his work during his various sojourns in Italy.
Caravaggio's innovations inspired the Baroque, but the Baroque took the drama of his chiaroscuro without the psychological realism. While he directly influenced the style of the artists mentioned above, and, at a distance, the Frenchmen Georges de La Tour and Simon Vouet, and the Spaniard Giuseppe Ribera, within a few decades his works were being ascribed to less scandalous artists, or simply overlooked. The Baroque, to which he contributed so much, had evolved, and fashions had changed, but perhaps more pertinently Caravaggio never established a workshop as the Carracci did, and thus had no school to spread his techniques. Nor did he ever set out his underlying philosophical approach to art, the psychological realism that may only be deduced from his surviving work.
Thus his reputation was doubly vulnerable to the critical demolition-jobs done by two of his earliest biographers, Giovanni Baglione, a rival painter with a personal vendetta, and the influential 17th-century critic Gian Pietro Bellori, who had not known him but was under the influence of the earlier Giovanni Battista Agucchi and Bellori's friend Poussin, in preferring the "classical-idealistic" tradition of the Bolognese school led by the Carracci. Baglione, his first biographer, played a considerable part in creating the legend of Caravaggio's unstable and violent character, as well as his inability to draw.
In the 1920s, art critic Roberto Longhi brought Caravaggio's name once more to the foreground, and placed him in the European tradition: "Ribera, Vermeer, La Tour and Rembrandt could never have existed without him. And the art of Delacroix, Courbet and Manet would have been utterly different". The influential Bernard Berenson agreed: "With the exception of Michelangelo, no other Italian painter exercised so great an influence."
Caravaggio's epitaph was composed by his friend Marzio Milesi.
He was commemorated on the front of the Banca d'Italia 100,000-lire banknote in the 1980s and 90s (before Italy switched to the Euro) with the back showing his "Basket of Fruit".
There is disagreement as to the size of Caravaggio's oeuvre, with counts as low as 40 and as high as 80. In his biography, Caravaggio scholar Alfred Moir writes "The forty-eight colorplates in this book include almost all of the surviving works accepted by every Caravaggio expert as autograph, and even the least demanding would add fewer than a dozen more". One, "The Calling of Saints Peter and Andrew", was recently authenticated and restored; it had been in storage in Hampton Court, mislabeled as a copy. Richard Francis Burton writes of a "picture of St. Rosario (in the museum of the Grand Duke of Tuscany), showing a circle of thirty men "turpiter ligati"" ("lewdly banded"), which is not known to have survived. The rejected version of "Saint Matthew and the Angel", intended for the Contarelli Chapel in San Luigi dei Francesi in Rome, was destroyed during the bombing of Dresden, though black and white photographs of the work exist. In June 2011 it was announced that a previously unknown Caravaggio painting of Saint Augustine dating to about 1600 had been discovered in a private collection in Britain. Called a "significant discovery", the painting had never been published and is thought to have been commissioned by Vincenzo Giustiniani, a patron of the painter in Rome.
A painting believed by some experts to be Caravaggio's second version of "Judith Beheading Holofernes", tentatively dated between 1600 and 1610, was discovered in an attic in Toulouse in 2014. An export ban was placed on the painting by the French government while tests were carried out to establish its provenance. In February 2019 it was announced that the painting would be sold at auction after the Louvre had turned down the opportunity to purchase it for €100 million.
In October 1969, two thieves entered the Oratory of Saint Lawrence in Palermo, Sicily and stole Caravaggio's "Nativity with St. Francis and St. Lawrence" from its frame. Experts estimated its value at $20 million.
Following the theft, Italian police set up an art theft task force with the specific aim of re-acquiring lost and stolen art works. Since the creation of this task force, many leads have been followed regarding the "Nativity". Former Italian mafia members have stated that "Nativity with St. Francis and St. Lawrence" was stolen by the Sicilian Mafia and displayed at important mafia gatherings. Former mafia members have said that the "Nativity" was damaged and has since been destroyed.
The whereabouts of the artwork are still unknown. A reproduction currently hangs in its place in the Oratory of San Lorenzo.
Caravaggio's work has been widely influential in late-20th-century American gay culture, with frequent references to male sexual imagery in paintings such as "The Musicians" and "Amor Victorious". British filmmaker Derek Jarman made a critically applauded biopic entitled "Caravaggio" in 1986. Several poems written by Thom Gunn were responses to specific Caravaggio paintings.
The main primary sources for Caravaggio's life have all been reprinted in Howard Hibbard's "Caravaggio" and in the appendices to Catherine Puglisi's "Caravaggio". | https://en.wikipedia.org/wiki?curid=7018 |
Jean-Baptiste-Siméon Chardin
Jean-Baptiste-Siméon Chardin (November 2, 1699 – December 6, 1779) was an 18th-century French painter. He is considered a master of still life, and is also noted for his genre paintings which depict kitchen maids, children, and domestic activities. Carefully balanced composition, soft diffusion of light, and granular impasto characterize his work.
Chardin was born in Paris, the son of a cabinetmaker, and rarely left the city. He lived on the Left Bank near Saint-Sulpice until 1757, when Louis XV granted him a studio and living quarters in the Louvre.
Chardin entered into a marriage contract with Marguerite Saintard in 1723, whom he did not marry until 1731. He served apprenticeships with the history painters Pierre-Jacques Cazes and Noël-Nicolas Coypel, and in 1724 became a master in the Académie de Saint-Luc.
According to one nineteenth-century writer, at a time when it was hard for unknown painters to come to the attention of the Royal Academy, he first found notice by displaying a painting at the "small Corpus Christi" (held eight days after the regular one) on the Place Dauphine (by the Pont Neuf). Van Loo, passing by in 1720, bought it and later assisted the young painter.
Upon presentation of "The Ray" and "The Buffet" in 1728, he was admitted to the Académie Royale de Peinture et de Sculpture. The following year he ceded his position in the Académie de Saint-Luc. He made a modest living by "produc[ing] paintings in the various genres at whatever price his customers chose to pay him", and by such work as the restoration of the frescoes at the Galerie François I at Fontainebleau in 1731.
In November 1731 his son Jean-Pierre was baptized, and a daughter, Marguerite-Agnès, was baptized in 1733. In 1735 his wife Marguerite died, and within two years Marguerite-Agnès had died as well.
Beginning in 1737 Chardin exhibited regularly at the Salon. He would prove to be a "dedicated academician", regularly attending meetings for fifty years, and functioning successively as counsellor, treasurer, and secretary, overseeing in 1761 the installation of Salon exhibitions.
Chardin's work gained popularity through reproductive engravings of his genre paintings (made by artists such as François-Bernard Lépicié and P.-L. Surugue), which brought Chardin income in the form of "what would now be called royalties". In 1744 he entered his second marriage, this time to Françoise-Marguerite Pouget. The union brought a substantial improvement in Chardin's financial circumstances. In 1745 a daughter, Angélique-Françoise, was born, but she died in 1746.
In 1752 Chardin was granted a pension of 500 livres by Louis XV. At the Salon of 1759 he exhibited nine paintings; it was the first Salon to be commented upon by Denis Diderot, who would prove to be a great admirer and public champion of Chardin's work. Beginning in 1761, his responsibilities on behalf of the Salon, simultaneously arranging the exhibitions and acting as treasurer, resulted in a diminution of productivity in painting, and the showing of 'replicas' of previous works. In 1763 his services to the Académie were acknowledged with an extra 200 livres in pension. In 1765 he was unanimously elected associate member of the Académie des Sciences, Belles-Lettres et Arts of Rouen, but there is no evidence that he left Paris to accept the honor. By 1770 Chardin was the 'Premier peintre du roi', and his pension of 1,400 livres was the highest in the Academy.
In 1772 Chardin's son, also a painter, drowned in Venice, a probable suicide. The artist's last known oil painting was dated 1776; his final Salon participation was in 1779, and featured several pastel studies. Gravely ill by November of that year, he died in Paris on December 6, at the age of 80.
Chardin worked very slowly and painted only slightly more than 200 pictures (about four a year) in total.
Chardin's work had little in common with the Rococo painting that dominated French art in the 18th century. At a time when history painting was considered the supreme classification for public art, Chardin's subjects of choice were viewed as minor categories. He favored simple yet beautifully textured still lifes, and sensitively handled domestic interiors and genre paintings. Simple, even stark, paintings of common household items ("Still Life with a Smoker's Box") and an uncanny ability to portray children's innocence in an unsentimental manner ("Boy with a Top") nevertheless found an appreciative audience in his time, and account for his timeless appeal.
Largely self-taught, Chardin was greatly influenced by the realism and subject matter of the 17th-century Low Country masters. Despite his unconventional portrayal of the ascendant bourgeoisie, early support came from patrons in the French aristocracy, including Louis XV. Though his popularity rested initially on paintings of animals and fruit, by the 1730s he introduced kitchen utensils into his work ("The Copper Cistern", ca. 1735, Louvre). Soon figures populated his scenes as well, supposedly in response to a portrait painter who challenged him to take up the genre. "Woman Sealing a Letter" (ca. 1733), which may have been his first attempt, was followed by half-length compositions of children saying grace, as in "Le Bénédicité", and kitchen maids in moments of reflection. These humble scenes deal with simple, everyday activities, yet they also have functioned as a source of documentary information about a level of French society not hitherto considered a worthy subject for painting. The pictures are noteworthy for their formal structure and pictorial harmony. Chardin said about painting, "Who said one paints with colors? One "employs" colors, but one paints with "feeling"."
A child playing was a favourite subject of Chardin. He depicted an adolescent building a house of cards on at least four occasions. The version at Waddesdon Manor is the most elaborate. Scenes such as these derived from 17th-century Netherlandish vanitas works, which bore messages about the transitory nature of human life and the worthlessness of material ambitions, but Chardin's also display a delight in the ephemeral phases of childhood for their own sake.
Chardin frequently painted replicas of his compositions—especially his genre paintings, nearly all of which exist in multiple versions which in many cases are virtually indistinguishable. Beginning with "The Governess" (1739, in the National Gallery of Canada, Ottawa), Chardin shifted his attention from working-class subjects to slightly more spacious scenes of bourgeois life.
In 1756 Chardin returned to the subject of the still life. In the 1770s his eyesight weakened and he took to painting in pastels, a medium in which he executed portraits of his wife and himself. His works in pastels are now highly valued. Chardin's extant paintings, which number about 200, are in many major museums, including the Louvre.
Chardin's influence on the art of the modern era was wide-ranging, and has been well-documented. Édouard Manet's half-length "Boy Blowing Bubbles" and the still lifes of Paul Cézanne are equally indebted to their predecessor. He was one of Henri Matisse's most admired painters; as an art student Matisse made copies of four Chardin paintings in the Louvre. Chaim Soutine's still lifes looked to Chardin for inspiration, as did the paintings of Georges Braque, and later, Giorgio Morandi. In 1999 Lucian Freud painted and etched several copies after "The Young Schoolmistress" (National Gallery, London).
Marcel Proust, in the chapter "How to open your eyes?" from "In Search of Lost Time" ("À la recherche du temps perdu"), describes a melancholic young man sitting at his simple breakfast table. The only comfort he finds is in the imaginary ideas of beauty depicted in the great masterpieces of the Louvre, materializing fancy palaces, rich princes, and the like. The author tells the young man to follow him to another section of the Louvre where the pictures of Jean-Baptiste Chardin are. There he would see the beauty in still life at home and in everyday activities like peeling turnips. | https://en.wikipedia.org/wiki?curid=7019 |
Crookes radiometer
The Crookes radiometer (also known as a light mill) consists of an airtight glass bulb containing a partial vacuum, with a set of vanes which are mounted on a spindle inside. The vanes rotate when exposed to light, with faster rotation for more intense light, providing a quantitative measurement of electromagnetic radiation intensity.
The reason for the rotation was a cause of much scientific debate in the ten years following the invention of the device, but in 1879 the currently accepted explanation for the rotation was published. Today the device is mainly used in physics education as a demonstration of a heat engine run by light energy.
It was invented in 1873 by the chemist Sir William Crookes as the by-product of some chemical research. In the course of very accurate quantitative chemical work, he was weighing samples in a partially evacuated chamber to reduce the effect of air currents, and noticed the weighings were disturbed when sunlight shone on the balance. Investigating this effect, he created the device named after him.
It is still manufactured and sold as an educational aid or curiosity.
The radiometer is made from a glass bulb from which much of the air has been removed to form a partial vacuum. Inside the bulb, on a low friction spindle, is a rotor with several (usually four) vertical lightweight vanes spaced equally around the axis. The vanes are polished or white on one side and black on the other.
When exposed to sunlight, artificial light, or infrared radiation (even the heat of a hand nearby can be enough), the vanes turn with no apparent motive power, the dark sides retreating from the radiation source and the light sides advancing.
Cooling the radiometer causes rotation in the opposite direction.
The effect begins to be observed at partial vacuum pressures of several hundred pascals (or a few torr), reaches a peak at around 1 pascal (7.5 × 10⁻³ torr) and has disappeared by the time the vacuum reaches 10⁻⁴ pascal (7.5 × 10⁻⁷ torr). At these very high vacuums the effect of photon radiation pressure on the vanes can be observed in very sensitive apparatus (see Nichols radiometer) but this is insufficient to cause rotation.
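As a quick check of the equivalences quoted above, pressures can be converted between pascals and torr using the standard definition 1 torr = 101325/760 Pa (about 133.322 Pa). The short Python sketch below is an illustration added here, not part of the original article; the function name and the three sample pressures are arbitrary choices taken from the range just described.

    # Convert the radiometer's quoted operating pressures from pascals to torr.
    # Standard definition: 1 torr = 101325/760 Pa (approximately 133.322 Pa).
    PA_PER_TORR = 101325 / 760

    def pa_to_torr(pressure_pa: float) -> float:
        """Return the pressure in torr for a given pressure in pascals."""
        return pressure_pa / PA_PER_TORR

    # Onset (a few hundred Pa), peak (about 1 Pa) and cut-off (about 1e-4 Pa):
    for p in (300.0, 1.0, 1e-4):
        print(f"{p:g} Pa ≈ {pa_to_torr(p):.2g} torr")
    # Prints: 300 Pa ≈ 2.3 torr, 1 Pa ≈ 0.0075 torr, 0.0001 Pa ≈ 7.5e-07 torr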
The prefix "" in the title originates from the combining form of Latin "radius", a ray: here it refers to electromagnetic radiation. A Crookes radiometer, consistent with the suffix "" in its title, can provide a quantitative measurement of electromagnetic radiation intensity. This can be done, for example, by visual means (e.g., a spinning slotted disk, which functions as a simple stroboscope) without interfering with the measurement itself.
Radiometers are now commonly sold worldwide as a novelty ornament, needing no batteries but only light to get the vanes to turn. They come in various forms and are often used in science museums to illustrate "radiation pressure" – a scientific principle that they do not in fact demonstrate.
When a radiant energy source is directed at a Crookes radiometer, the radiometer becomes a heat engine. The operation of a heat engine is based on a difference in temperature that is converted to a mechanical output. In this case, the black side of the vane becomes hotter than the other side, as radiant energy from a light source warms the black side by black-body absorption faster than the silver or white side. The internal air molecules are heated up when they touch the black side of the vane. The details of exactly how this moves the warmer side of the vane forward are given in the section below.
The internal temperature rises as the black vanes impart heat to the air molecules, but the molecules are cooled again when they touch the bulb's glass surface, which is at ambient temperature. This heat loss through the glass keeps the internal bulb temperature steady with the result that the two sides of the vanes develop a temperature difference. The white or silver sides of the vanes are slightly warmer than the internal air temperature but cooler than the black sides, as some heat conducts through the vane from the black side. The two sides of each vane must be thermally insulated to some degree so that the polished or white side does not immediately reach the temperature of the black side. If the vanes are made of metal, then the black or white paint can be the insulation. The glass stays much closer to ambient temperature than the temperature reached by the black side of the vanes. The external air helps conduct heat away from the glass.
The air pressure inside the bulb needs to strike a balance between too low and too high. A strong vacuum inside the bulb does not permit motion, because there are not enough air molecules to cause the air currents that propel the vanes and transfer heat to the outside before both sides of each vane reach thermal equilibrium by heat conduction through the vane material. High inside pressure inhibits motion because the temperature differences are not enough to push the vanes through the higher concentration of air: there is too much air resistance for "eddy currents" to occur, and any slight air movement caused by the temperature difference is damped by the higher pressure before the currents can "wrap around" to the other side.
When the radiometer is heated in the absence of a light source, it turns in the forward direction (i.e. black sides trailing). If a person's hands are placed around the glass without touching it, the vanes will turn slowly or not at all, but if the glass is touched to warm it quickly, they will turn more noticeably. Directly heated glass gives off enough infrared radiation to turn the vanes, but glass blocks much of the far-infrared radiation from a source of warmth not in contact with it. However, near-infrared and visible light more easily penetrate the glass.
If the glass is cooled quickly in the absence of a strong light source by putting ice on the glass or placing it in the freezer with the door almost closed, it turns backwards (i.e. the silver sides trail). This demonstrates black-body radiation from the black sides of the vanes rather than black-body absorption. The wheel turns backwards because the net exchange of heat between the black sides and the environment initially cools the black sides faster than the white sides. Upon reaching equilibrium, typically after a minute or two, reverse rotation ceases. This contrasts with sunlight, with which forward rotation can be maintained all day.
Over the years, there have been many attempts to explain how a Crookes radiometer works.
To rotate, a light mill does not have to be coated with different colors across each vane. In 2009, researchers at the University of Texas, Austin created a monocolored light mill which has four curved vanes; each vane forms a convex and a concave surface. The light mill is uniformly coated by gold nanocrystals, which are a strong light absorber. Upon exposure, due to a geometric effect, the convex side of the vane receives more photon energy than the concave side does, and subsequently the gas molecules receive more heat from the convex side than from the concave side. At rough vacuum, this asymmetric heating effect generates a net gas movement across each vane, from the concave side to the convex side, as shown by the researchers' Direct Simulation Monte Carlo (DSMC) modeling. The gas movement causes the light mill to rotate with the concave side moving forward, due to Newton's Third Law.
This monocolored design promotes the fabrication of micrometer- or nanometer-scaled light mills, as it is difficult to pattern materials of distinct optical properties within a very narrow, three-dimensional space.
The thermal creep from the hot side of a vane to the cold side has been demonstrated in a mill with horizontal vanes that have a two-tone surface with a black half and a white half. This design is called a Hettner radiometer. This radiometer's angular speed was found to be limited by the behavior of the drag force due to the gas in the vessel more than by the behavior of the thermal creep force. This design does not experience the Einstein effect because the faces are parallel to the temperature gradient.
In 2010 researchers at the University of California, Berkeley succeeded in building a nanoscale light mill that works on an entirely different principle to the Crookes radiometer. A gold light mill, only 100 nanometers in diameter, was built and illuminated by laser light that had been tuned. The possibility of doing this had been suggested by the Princeton physicist Richard Beth in 1936. The torque was greatly enhanced by the resonant coupling of the incident light to plasmonic waves in the gold structure. | https://en.wikipedia.org/wiki?curid=7021 |
Cold Chisel
Cold Chisel are an Australian pub rock band, formed in Adelaide in 1973 by mainstay members Ian Moss on guitar and vocals, Steve Prestwich on drums and Don Walker on piano and keyboards. They were soon joined by Jimmy Barnes on lead vocals and, in 1975, Phil Small became their bass guitarist. The group disbanded in late 1983 but subsequently reformed several times. Musicologist Ian McFarlane wrote that they became "one of Australia's best-loved groups" as well as "one of the best live bands", fusing "a combination of rockabilly, hard rock and rough-house soul'n'blues that was defiantly Australian in outlook."
Seven of their studio albums have reached the Australian top five: "Breakfast at Sweethearts" (February 1979), "East" (June 1980), "Circus Animals" (March 1982, No. 1), "Twentieth Century" (April 1984, No. 1), "The Last Wave of Summer" (October 1998, No. 1), "No Plans" (April 2012) and "The Perfect Crime" (October 2015). Their top 10 singles are "Forever Now" (1982), "Hands Out of My Pocket" (1994) and "The Things I Love in You" (1998).
At the ARIA Music Awards of 1993 they were inducted into the Hall of Fame. In 2001 the Australasian Performing Right Association (APRA) listed their single "Khe Sanh" (May 1978) at No. 8 among the all-time best Australian songs. "Circus Animals" was listed at No. 4 in the book, "100 Best Australian Albums" (October 2010), while "East" appeared at No. 53. They won The Ted Albert Award for Outstanding Services to Australian Music at the APRA Music Awards of 2016. Cold Chisel's popularity is largely restricted to Australia and New Zealand, with their songs and musicianship highlighting working class life. Their early bass guitarist (1973–75), Les Kaczmarek, died in December 2008; Steve Prestwich died of a brain tumour in January 2011.
Cold Chisel were originally formed in Adelaide in 1973 as Orange, a heavy metal band, with Ted Broniecki on keyboards, Les Kaczmarek on bass guitar, Ian Moss on guitar and vocals, Steve Prestwich on drums and Don Walker on piano. Their early material included cover versions of songs by Free and Deep Purple. Broniecki left by September 1973 and the seventeen-year-old singer Jimmy Barnes – called Jim Barnes during their initial career – joined in December.
The group changed its name several times before settling on Cold Chisel in 1974 after Walker's song of that title. Barnes' relationship with the others was volatile: he often came to blows with Prestwich and left the band several times. During these periods Moss would handle vocals until Barnes returned. Walker emerged as the group's primary songwriter and spent 1974 in Armidale, completing his studies in quantum mechanics. Barnes' older brother, John Swan, was a member of Cold Chisel around this time, providing backing vocals and percussion. After several violent incidents, including beating up a roadie, he was fired. In mid-1975 Barnes left to join Fraternity as Bon Scott's replacement on lead vocals, alongside Swan on drums and vocals. Kaczmarek left Cold Chisel during 1975 and was replaced by Phil Small on bass guitar. In November of that year, without Barnes, they recorded their early demos.
In May 1976 Cold Chisel relocated to Melbourne, but "frustrated by their lack of progress," they moved on to Sydney in November. In May of the following year Barnes told his fellow members that he would leave again. From July he joined Feather for a few weeks, on co-lead vocals with Swan – they were a Sydney-based hard rock group, which had evolved from Blackfeather. A farewell performance for Cold Chisel, with Barnes aboard, went so well that the singer changed his mind and returned. In the following month the Warner Music Group signed the group.
In the early months of 1978 Cold Chisel recorded their self-titled debut album with their manager and producer, Peter Walker (ex-Bakery). All tracks were written by Don Walker, except "Juliet", where Barnes composed its melody and Walker the lyrics. "Cold Chisel" was released in April and included guest studio musicians: Dave Blight on harmonica (who became a regular on-stage guest) and saxophonists Joe Camilleri and Wilbur Wilde (from Jo Jo Zep & The Falcons). Australian musicologist, Ian McFarlane, described how, "[it] failed to capture the band's renowned live firepower, despite the presence of such crowd favourites as 'Khe Sanh', 'Home and Broken Hearted' and 'One Long Day'." It reached the top 40 on the Kent Music Report and was certified as a gold record for shipment of 35000 units.
In May 1978, "Khe Sanh", was released as their debut single but it was declared too offensive for commercial radio due to the sexual implication of the lyrics, "Their legs were often open/But their minds were always closed." However, it was played regularly on Sydney youth radio station, Double J, which was not subject to the restrictions as it was part of the Australian Broadcasting Corporation (ABC). Another ABC program, "Countdown"s producers asked them to change the lyric but they refused. Despite such setbacks, "Khe Sanh" reached No. 41 on the Kent Music Report singles chart. It became Cold Chisel's signature tune and was popular among their fans. They later remixed the track, with re-recorded vocals, for inclusion on the international version of their third album, "East" (June 1980).
The band's next release was a live five-track extended play, "You're Thirteen, You're Beautiful, and You're Mine", in November 1978. McFarlane observed, "It captured the band in its favoured element, fired by raucous versions of Walker's 'Merry-Go-Round' and Chip Taylor's 'Wild Thing'." It was recorded at Sydney's Regent Theatre in 1977, when they had Midnight Oil as one of the support acts. Australian writer, Ed Nimmervoll, described a typical performance by Cold Chisel, "Everybody was talking about them anyway, drawn by the songs, and Jim Barnes' presence on stage, crouched, sweating, as he roared his vocals into the microphone at the top of his lungs." The EP peaked at No. 35 on the Kent Music Report Singles Chart.
"Merry Go Round" was re-recorded for their second studio album, "Breakfast at Sweethearts" (February 1979). This was recorded between July 1978 and January 1979 with producer, Richard Batchens, who had previously worked with Richard Clapton, Sherbet and Blackfeather. Batchens smoothed out the band's rough edges and attempted to give their songs a sophisticated sound. With regards to this approach, the band were unsatisfied with the finished product. It peaked at No. 4 and was the top selling album in Australia by a locally based artist for that year; it was certified platinum for shipment of 70000 copies. The majority of its tracks were written by Walker, with Barnes and Walker on the lead single, "Goodbye (Astrid, Goodbye)" (September 1978), and Moss contributed to "Dresden". "Goodbye (Astrid, Goodbye)" became a live favourite, and was covered by U2 during Australian tours in the 1980s.
Cold Chisel had gained national chart success and a growing fan base without significant commercial radio airplay or support from "Countdown". The members developed reputations for wild behaviour, particularly Barnes, who claimed to have had sex with over 1000 women and who consumed more than a bottle of vodka each night while performing. In late 1979, severing their relationship with Batchens, Cold Chisel chose Mark Opitz to produce the next single, "Choirgirl" (November). It is a Walker composition dealing with a young woman's experience with abortion. Despite the subject matter it reached No. 14.
"Choirgirl" paved the way for the group's third studio album, "East" (June 1980), with Opitz producing. Recorded over two months in early 1980, "East", reached No. 2 and is the second highest selling album by an Australian artist for that year. "The Australian Women's Weekly"s Gregg Flynn noticed, "[they are] one of the few Australian bands in which each member is capable of writing hit songs." Despite the continued dominance of Walker, the other members contributed more tracks to their play list, and this was their first album to have songs written by each one. McFarlane described it as, "a confident, fully realised work of tremendous scope." Nimmervoll explained how, "This time everything fell into place, the sound, the songs, the playing... "East" was a triumph. [The group] were now the undisputed No. 1 rock band in Australia."
The album varied from straight ahead rock tracks, "Standing on the Outside" and "My Turn to Cry", to rockabilly-flavoured work-outs ("Rising Sun", written about Barnes' relationship with his then-girlfriend Jane Mahoney) and pop-laced love songs ("My Baby", featuring Joe Camilleri on saxophone) to a poignant piano ballad about prison life, "Four Walls". The cover art showed Barnes reclined in a bathtub wearing a kamikaze bandanna in a room littered with junk and was inspired by Jacques-Louis David's 1793 painting, "The Death of Marat". The Ian Moss-penned "Never Before" was chosen as the first song to air on the ABC's youth radio station, Triple J, when it switched to the FM band that year. Supporting the release of "East", Cold Chisel embarked on the Youth in Asia Tour from May 1980, which took its name from a lyric in "Star Hotel".
The Youth in Asia Tour performances were used for Cold Chisel's double live album, "Swingshift" (March 1981). Nimmervoll declared, "[the group] rammed what they were all about with [this album]." In March 1981 the band won seven categories at the "Countdown"/"TV Week" pop music awards for 1980: Best Australian Album, Most Outstanding Achievement, Best Recorded Song Writer, Best Australian Producer, Best Australian Record Cover Design, Most Popular Group and Most Popular Record. They attended the ceremony at the Sydney Entertainment Centre and were due to perform; however, as a protest against a TV magazine's involvement, they refused to accept any trophy and finished the night with "My Turn to Cry". After one verse and chorus, they smashed up the set and left the stage.
"Swingshift" debuted at No 1, which demonstrated their status as the highest selling local act. With a slightly different track-listing, "East", was issued in the United States and they undertook their first US tour in mid-1981. Ahead of the tour they had issued, "My Baby", for the North America market and it reached the top 40 on "Billboard"s chart, Mainstream Rock. They were generally popular as a live act there, but the US branch of their label did little to promote the album. According to Barnes' biographer, Toby Creswell, at one point they were ushered into an office to listen to the US master tape to find it had substantial hiss and other ambient noise, which made it almost unable to be released. Notwithstanding, the album reached the lower region of the "Billboard" 200 in July. The group were booed off stage after a lacklustre performance in Dayton, Ohio in May 1981 opening for Ted Nugent. Other support slots they took were for Cheap Trick, Joe Walsh, Heart and the Marshall Tucker Band. European audiences were more accepting of the Australian band and they developed a fan base in Germany.
In August 1981 Cold Chisel began work on a fourth studio album, "Circus Animals" (March 1982), again with Opitz producing. To launch the album, the band performed under a circus tent at Wentworth Park in Sydney and toured heavily once more, including a show in Darwin that attracted more than 10 percent of the city's population. It peaked at No. 1 in both Australia and on the Official New Zealand Music Chart. In October 2010 it was listed at No. 4 in the book, "100 Best Australian Albums", by music journalists, Creswell, Craig Mathieson and John O'Donnell.
Its lead single, "You Got Nothing I Want" (November 1981), is an aggressive Barnes-penned hard rock track, which attacked the US industry for its handling of the band on their recent tour. The song caused problems for Barnes when he later attempted to break into the US market as a solo performer; senior music executives there continued to hold it against him. Like its predecessor, "Circus Animals", contained songs of contrasting styles, with harder-edged tracks like "Bow River" and "Hound Dog" in place beside more expansive ballads such as the next two singles, "Forever Now" (March 1982) and "When the War Is Over" (August), both are written by Prestwich. "Forever Now" is their highest charting single in two Australasian markets: No. 4 on the Kent Music Report Singles Chart and No. 2 on the Official New Zealand Music Chart.
"When the War Is Over" is the most covered Cold Chisel track – Uriah Heep included a version on their 1989 album, "Raging Silence"; John Farnham recorded it while he and Prestwich were members of Little River Band in the mid-1980s and again for his 1990 solo album, "Age of Reason". The song was also a No. 1 hit for former "Australian Idol" contestant, Cosima De Vito, in 2004 and was performed by Bobby Flynn during that show's 2006 season. "Forever Now" was covered, as a country waltz, by Australian band, the Reels.
Success outside Australasia continued to elude Cold Chisel and friction occurred between the members. According to McFarlane, "[the] failed attempts to break into the American market represented a major blow... [their] earthy, high-energy rock was overlooked." In early 1983 they toured Germany, but the shows went so badly that midway through the tour Walker up-ended his keyboard and stormed off stage during one performance. After returning to Australia, Prestwich was fired and replaced by Ray Arnott, formerly of the 1970s progressive rockers, Spectrum, and country rockers, the Dingoes.
After this, Barnes requested a large advance from management. Now married with a young child, he was almost broke after a period of exorbitant spending. His request was refused as there was a standing arrangement that any advance to one band member had to be paid to all the others. After a meeting on 17 August during which Barnes quit the band, it was decided that the group would split up. A farewell concert series, The Last Stand, was planned and a final studio album, "Twentieth Century" (February 1984), was recorded. Prestwich returned for that tour, which began in October. Before the last four scheduled shows in Sydney, Barnes lost his voice and those dates were postponed to mid-December.
The band's final performances were at the Sydney Entertainment Centre from 12 to 15 December 1983 – ten years since their first live appearance as Cold Chisel in Adelaide – the group then disbanded. The Sydney shows formed the basis of a concert film, "The Last Stand" (July 1984), which became the biggest-selling cinema-released concert documentary by an Australian band to that time. Other recordings from the tour were used on a live album, "" (1984), whose title references the pseudonym the group occasionally used when playing warm-up shows before tours. Some were also used as B-sides for a three-CD singles package, "Three Big XXX Hits", issued ahead of the release of their 1994 compilation album, "Teenage Love".
During breaks in the tour, "Twentieth Century" was recorded. It was a fragmentary process, spread across various studios and sessions as the individual members often refused to work together – both Arnott (on ten tracks) and Prestwich (on three tracks) are credited as drummers. The album reached No. 1 and provided the singles, "Saturday Night" (March 1984) and "Flame Trees" (August), both of which remain radio staples. "Flame Trees", co-written by Prestwich and Walker, took its title from the BBC series, "The Flame Trees of Thika", although it was lyrically inspired by keyboardist Walker's hometown of Grafton. Barnes later recorded an acoustic version for his 1993 solo album, "Flesh and Wood", and it was also covered by Sarah Blasko in 2006.
Barnes launched his solo career in January 1984; it has provided nine Australian number-one studio albums and an array of hit singles, including "Too Much Ain't Enough Love", which peaked at No. 1. He has recorded with INXS, Tina Turner, Joe Cocker and John Farnham to become one of the country's most popular male rock singers. Prestwich joined Little River Band in 1984 and appeared on the albums, "Playing to Win" and "No Reins", before departing in 1986 to join Farnham's touring band. Moss, Small and Walker took extended breaks from music.
Small maintained a low profile as a member of a variety of minor groups: Pound, the Earls of Duke and the Outsiders. Walker formed Catfish in 1988, ostensibly a solo band with a variable membership, which included Moss, Charlie Owen and Dave Blight at times. Catfish incorporated a modern jazz element and the recordings from this phase attracted little commercial success. During 1988 and 1989 Walker wrote several tracks for Moss, including the singles "Tucker's Daughter" (November 1988) and "Telephone Booth" (June 1989), which appeared on Moss' debut solo album, "Matchbook" (August 1989). Both the album and "Tucker's Daughter" peaked at No. 1. Moss won five trophies at the ARIA Music Awards of 1990. His other solo albums met with less chart or award success.
Throughout the 1980s and most of the 1990s, Cold Chisel were courted to re-form but refused, at one point reportedly turning down a $5 million offer to play a sole show in each of the major Australian state capitals. Moss and Walker often collaborated on projects, but neither worked with Barnes until Walker wrote "Stone Cold" for the singer's sixth studio album, "Heat" (October 1993). The pair recorded an acoustic version for "Flesh and Wood" (December). Thanks primarily to continued radio airplay and Barnes' solo success, Cold Chisel's legacy remained solidly intact. By the early 1990s the group had surpassed 3 million album sales, most sold since 1983. The 1991 compilation album, "Chisel", was re-issued and re-packaged several times, once with the long-deleted 1978 EP as a bonus disc and a second time in 2001 as a double album. The "Last Stand" soundtrack album was finally released in 1992. In 1994 a complete album of previously unreleased demo and rare live recordings, "Teenage Love", was released, which provided three singles.
Cold Chisel reunited in October 1997, with the line-up of Barnes, Moss, Prestwich, Small and Walker. They recorded their sixth studio album, "The Last Wave of Summer" (October 1998), from February to July with the band members co-producing. They supported it with a national tour. The album debuted at number one on the ARIA Albums Chart. In 2003 they re-grouped for the Ringside Tour and in 2005 again to perform at a benefit for the victims of the Boxing Day tsunami at the Myer Music Bowl in Melbourne. Founding bass guitarist, Les Kaczmarek, died of liver failure on 5 December 2008, aged 53. Walker described him as, "a wonderful and beguiling man in every respect."
On 10 September 2009 Cold Chisel announced they would reform for a one-off performance at the Sydney 500 V8 Supercars event on 5 December. The band performed at ANZ Stadium to the largest crowd of its career, with more than 45,000 fans in attendance. They played a single live show in 2010: at the Deniliquin ute muster in October. In December Moss confirmed that Cold Chisel were working on new material for an album.
In January 2011 Steve Prestwich was diagnosed with a brain tumour; he underwent surgery on 14 January but never regained consciousness and died two days later, aged 56. All six of Cold Chisel's studio albums were re-released in digital and CD formats in mid-2011. Three digital-only albums were released, "Never Before", "Besides" and "Covered", as well as a new compilation album, "The Best of Cold Chisel: All for You", which peaked at number 2 on the ARIA Charts. The thirty-date Light the Nitro Tour was announced in July along with the news that former Divinyls and Catfish drummer, Charley Drayton, had replaced Prestwich. Most shows on the tour sold out within days and new dates were later announced for early 2012.
"No Plans", their seventh studio album, was released in April 2012, with Kevin Shirley producing, which peaked at No. 2. "The Australian"s Stephen Fitzpatrick rated it as four-and-a-half out-of-five and found its lead track, "All for You", "speaks of redemption; of a man's ability to make something of himself through love." The track, "I Got Things to Do", was written and sung by Prestwich, which Fitzpatrick described as, "the bittersweet finale" and had "a vocal track the other band members did not know existed until after his death." Midway through 2012 they had a short UK tour and played with Soundgarden and Mars Volta at Hard Rock Calling at London's Hyde Park.
The group's eighth studio album, "The Perfect Crime", appeared in October 2015, again with Shirley producing; it peaked at No. 2. Martin Boulton of "The Sydney Morning Herald" rated it four-out-of-five stars and explained that "[they] work incredibly hard, not take any shortcuts and play the hell out of the songs" on an album that "delves further back to their rock'n'roll roots with chief songwriter [Walker] carving up the keys, guitarist [Moss] both gritty and sublime and the [Small/Drayton] engine room firing on every cylinder. Barnes' voice sounds worn, wonderful and better than ever."
The band's latest album, "Blood Moon", was released in December 2019. It debuted at number one on the ARIA Albums Chart, the band's fifth album to reach the top. Half of the songs had lyrics written by Barnes and music by Walker, a new combination for Cold Chisel, with Barnes noting his increased confidence after writing two autobiographies.
McFarlane described Cold Chisel's early career in his "Encyclopedia of Australian Rock and Pop" (1999): "after ten years on the road, [they] called it a day. Not that the band split up for want of success; by that stage [they] had built up a reputation previously uncharted in Australian rock history. By virtue of the profound effect the band's music had on the many thousands of fans who witnessed its awesome power, Cold Chisel remains one of Australia's best-loved groups. As one of the best live bands of its day, [they] fused a combination of rockabilly, hard rock and rough-house soul'n'blues that was defiantly Australian in outlook." "The Canberra Times"s Luis Feliu, in July 1978, observed, "This is not just another Australian rock band, no mediocrity here, and their honest, hard-working approach looks like paying off", adding that "the range of styles tackled and done convincingly, from hard rock to blues, boogie, rhythm and blues, is where the appeal lies".
Influences from blues and early rock'n'roll were broadly apparent, fostered by Moss, Barnes and Walker's love of those styles. Small and Prestwich contributed strong pop sensibilities. This allowed volatile rock songs like "You Got Nothing I Want" and "Merry-Go-Round" to stand beside thoughtful ballads like "Choirgirl", pop-flavoured love songs like "My Baby" and caustic political statements like "Star Hotel", an attack on the late 1970s government of Malcolm Fraser inspired by the Star Hotel riot in Newcastle.
The songs were not overtly political but rather observations of everyday life within Australian society and culture, which the members, with their various backgrounds (Moss was from Alice Springs, Walker grew up in rural New South Wales, Barnes and Prestwich were working-class immigrants from the UK), were well placed to provide.
Cold Chisel's songs were about distinctly Australian experiences, a factor often cited as a major reason for the band's lack of international appeal. "Saturday Night" and "Breakfast at Sweethearts" were observations of the urban experience of Sydney's Kings Cross district, where Walker lived for many years. "Misfits", which featured on the B-side to "My Baby", was about homeless kids in the suburbs surrounding Sydney. Songs like "Shipping Steel" and "Standing on the Outside" were working-class anthems and many others featured characters trapped in mundane, everyday existences, yearning for the good times of the past ("Flame Trees") or for something better from life ("Bow River").
Alongside contemporaries like The Angels and Midnight Oil, Cold Chisel was renowned as one of the most dynamic live acts of their day and from early in their career concerts routinely became sell-out events. But the band was also famous for its wild lifestyle, particularly the hard-drinking Barnes, who played his role as one of the wild men of Australian rock to the hilt, never seen on stage without at least one bottle of vodka and often so drunk he could barely stand upright. Despite this, by 1982 he was a devoted family man who refused to tour without his wife and daughter. All the other band members were also settled or married; Ian Moss had a long-term relationship with the actress Megan Williams (who sang on "Twentieth Century"), whose own public persona could hardly have been more different.
It was the band's public image that often had them compared less favourably with other important acts like Midnight Oil, whose music and politics (while rather more overt) were often similar but whose image and reputation were more clean-cut. Cold Chisel remained hugely popular, however, and by the mid-1990s they continued to sell records at such a consistent rate that they became the first Australian band to achieve higher sales after their split than during their active years.
At the ARIA Music Awards of 1993 they were inducted into the Hall of Fame. While repackages and compilations accounted for much of these sales, two of the singles from 1994's "Teenage Love" were top ten hits. When the group finally reformed in 1998 the resultant album was also a major hit and the follow-up tour sold out almost immediately. In 2001 the Australasian Performing Right Association (APRA) listed their single, "Khe Sanh" (May 1978), at No. 8 among the all-time best Australian songs.
Cold Chisel were one of the first Australian acts to become the subject of a major tribute album. In 2007, "Standing on the Outside: The Songs of Cold Chisel" was released, featuring a collection of the band's songs as performed by artists including The Living End, Evermore, Something for Kate, Pete Murray, Katie Noonan, You Am I, Paul Kelly, Alex Lloyd, Thirsty Merc and Ben Lee, many of whom were children when Cold Chisel first disbanded and some, like the members of Evermore, had not even been born. "Circus Animals" was listed at No. 4 in the book, "100 Best Australian Albums" (October 2010), while "East" appeared at No. 53. They won the Ted Albert Award for Outstanding Services to Australian Music at the APRA Music Awards of 2016.
Current members
Former members
Additional musicians
Confederate States of America
The Confederate States of America (CSA), commonly referred to as the Confederate States (C.S. or CS) or the Confederacy, was an unrecognized republic that fought against the United States during the American Civil War.
Existing from 1861 to 1865, the Confederacy was originally formed by secession of seven slave-holding states—South Carolina, Mississippi, Florida, Alabama, Georgia, Louisiana, and Texas—in the Lower South region of the United States, whose economy was heavily dependent upon agriculture, particularly cotton, and a plantation system that relied upon the labor of African-American slaves. Convinced that white supremacy and the institution of slavery were threatened by the November 1860 election of Republican candidate Abraham Lincoln to the U.S. presidency on a platform which opposed the expansion of slavery into the western territories, the Confederacy declared its secession in rebellion against the United States, with the loyal states becoming known as the Union during the ensuing American Civil War. In a speech known today as the Cornerstone Address, Confederate Vice President Alexander H. Stephens described its ideology as being centrally based "upon the great truth that the negro is not equal to the white man; that slavery, subordination to the superior race, is his natural and normal condition".
Before Lincoln took office in March, a new Confederate government was established in February 1861 which was considered illegal by the United States federal government. Confederate states volunteered militia units, and the new government hastened to form its own Confederate States Army practically overnight. After war began in April, four slave states of the Upper South—Virginia, Arkansas, Tennessee, and North Carolina—also seceded and joined the Confederacy. The Confederacy later accepted Missouri and Kentucky as members, although neither officially declared secession nor were they ever largely controlled by Confederate forces, despite the efforts of Confederate shadow governments which were eventually expelled. The government of the United States (the Union) rejected the claims of secession as illegitimate.
The Civil War began on April 12, 1861, when the Confederates attacked Fort Sumter, a Union fort in the harbor of Charleston, South Carolina. No foreign government ever officially recognized the Confederacy as an independent country, although Great Britain and France granted it belligerent status, which allowed Confederate agents to contract with private concerns for arms and other supplies. In early 1865, after four years of heavy fighting and 620,000–850,000 military deaths, all Confederate forces surrendered, most symbolically in Confederate general Robert E. Lee's surrender to Ulysses S. Grant at Appomattox on April 9, 1865. The war lacked a formal end; nearly all Confederate forces had been forced into surrender or deliberately disbanded by the end of 1865, when Confederate President Jefferson Davis lamented that the Confederacy had "disappeared". President Lincoln was assassinated by Confederate sympathizer John Wilkes Booth on April 15, 1865.
After the war, Confederate states were readmitted to the Union during the Reconstruction era, after each ratified the 13th Amendment to the U.S. Constitution, which outlawed slavery. "Lost Cause" ideology—an idealized view of the Confederacy as valiantly fighting for a just cause—emerged in the decades after the war among former Confederate generals and politicians, as well as organizations such as the Sons of Confederate Veterans and the United Daughters of the Confederacy. Particularly intense periods of Lost Cause activity came around the time of World War I, as the last Confederate veterans began to die and a push was made to preserve their memory, and then during the Civil Rights Movement of the 1950s and 1960s, in reaction to growing public support for racial equality. Through activities such as building prominent Confederate monuments and writing school history textbooks to paint the Confederacy in a favorable light, Lost Cause advocates such as the United Daughters of the Confederacy sought to ensure future generations of Southern whites would continue to support white supremacist policies such as the Jim Crow laws. The modern display of Confederate flags primarily started in the late 1940s with Senator Strom Thurmond's Dixiecrats in opposition to the Civil Rights Movement, and has continued to the present day.
On February 22, 1862, the Confederate Constitution of seven state signatories – Mississippi, South Carolina, Florida, Alabama, Georgia, Louisiana, and Texas – replaced the Provisional Constitution of February 8, 1861, with one stating in its preamble a desire for a "permanent federal government". Four additional slave-holding states – Virginia, Arkansas, Tennessee, and North Carolina – declared their secession and joined the Confederacy following a call by U.S. President Abraham Lincoln for troops from each state to recapture Sumter and other seized federal properties in the South.
Missouri and Kentucky were represented by partisan factions adopting the forms of state governments without control of substantial territory or population in either case. The antebellum state governments in both maintained their representation in the Union. Also fighting for the Confederacy were two of the "Five Civilized Tribes" – the Choctaw and the Chickasaw – in Indian Territory and a new, but uncontrolled, Confederate Territory of Arizona. Efforts by certain factions in Maryland to secede were halted by federal imposition of martial law; Delaware, though of divided loyalty, did not attempt it. A Unionist government was formed in opposition to the secessionist state government in Richmond and administered the western parts of Virginia that had been occupied by Federal troops. The Restored Government of Virginia later recognized the new state of West Virginia, which was admitted to the Union during the war on June 20, 1863, and relocated to Alexandria for the rest of the war.
Confederate control over its claimed territory and population in congressional districts steadily shrank from three-quarters to a third during the course of the American Civil War due to the Union's successful overland campaigns, its control of inland waterways into the South, and its blockade of the southern coast. With the Emancipation Proclamation on January 1, 1863, the Union made abolition of slavery a war goal (in addition to reunion). As Union forces moved southward, large numbers of plantation slaves were freed. Many joined the Union lines, enrolling in service as soldiers, teamsters and laborers. The most notable advance was Sherman's "March to the Sea" in late 1864. Much of the Confederacy's infrastructure was destroyed, including telegraphs, railroads and bridges. Plantations in the path of Sherman's forces were severely damaged. Internal movement within the Confederacy became increasingly difficult, weakening its economy and limiting army mobility.
These losses created an insurmountable disadvantage in men, materiel, and finance. Public support for Confederate President Jefferson Davis's administration eroded over time due to repeated military reverses, economic hardships, and allegations of autocratic government. After four years of campaigning, Richmond was captured by Union forces in April 1865. A few days later General Robert E. Lee surrendered to Union General Ulysses S. Grant, effectively signalling the collapse of the Confederacy. President Davis was captured on May 10, 1865, and jailed for treason, but no trial was ever held.
The Confederacy was established in the Montgomery Convention in February 1861 by seven states (South Carolina, Mississippi, Alabama, Florida, Georgia, Louisiana, adding Texas in March before Lincoln's inauguration), expanded in May–July 1861 (with Virginia, Arkansas, Tennessee, North Carolina), and disintegrated in April–May 1865. It was formed by delegations from seven slave states of the Lower South that had proclaimed their secession from the Union. After the fighting began in April, four additional slave states seceded and were admitted. Later, two slave states (Missouri and Kentucky) and two territories were given seats in the Confederate Congress.
Swelling Southern nationalism and pride supported the new founding. Confederate nationalism prepared men to fight for "the Cause". For the duration of its existence, the Confederacy underwent trial by war. The "Southern Cause" transcended the ideology of states' rights, tariff policy, and internal improvements. This "Cause" supported, or derived from, cultural and financial dependence on the South's slavery-based economy. The convergence of race and slavery, politics, and economics raised almost all South-related policy questions to the status of moral questions over way of life, commingling love of things Southern and hatred of things Northern. Not only did national political parties split, but national churches and interstate families as well divided along sectional lines as the war approached. According to historian John M. Coski,
Southern Democrats had chosen John Breckinridge as their candidate during the U.S. presidential election of 1860, but in no Southern state (other than South Carolina, where the legislature chose the electors) was support for him unanimous; all the other states recorded at least some popular votes for one or more of the other three candidates (Abraham Lincoln, Stephen A. Douglas and John Bell). Support for these candidates, collectively, ranged from significant to an outright majority, with extremes running from 25% in Texas to 81% in Missouri. There were minority views everywhere, especially in the upland and plateau areas of the South, being particularly concentrated in western Virginia and eastern Tennessee.
Following South Carolina's unanimous 1860 secession vote, no other Southern states considered the question until 1861, and when they did none had a unanimous vote. All had residents who cast significant numbers of Unionist votes in either the legislature, conventions, popular referendums, or in all three. Voting to remain in the Union did not necessarily mean that individuals were sympathizers with the North. Once hostilities began, many of those who had voted to remain in the Union, particularly in the Deep South, accepted the majority decision and supported the Confederacy.
Many writers have evaluated the Civil War as an American tragedy—a "Brothers' War", pitting "brother against brother, father against son, kin against kin of every degree".
According to historian Avery O. Craven in 1950, the Confederate States of America nation, as a state power, was created by secessionists in Southern slave states, who believed that the federal government was making them second-class citizens and refused to honor their belief – that slavery was beneficial to the Negro. They judged the agents of change to be abolitionists and anti-slavery elements in the Republican Party, whom they believed used repeated insult and injury to subject them to intolerable "humiliation and degradation". The "Black Republicans" (as the Southerners called them) and their allies soon dominated the U.S. House, Senate, and Presidency. On the U.S. Supreme Court, Chief Justice Roger B. Taney (a presumed supporter of slavery) was 83 years old and ailing.
During the campaign for president in 1860, some secessionists, including William L. Yancey, threatened disunion should Lincoln (who opposed the expansion of slavery into the territories) be elected. Yancey toured the North calling for secession as Stephen A. Douglas toured the South calling for union in the event of Lincoln's election. To the secessionists the Republican intent was clear: to contain slavery within its present bounds and, eventually, to eliminate it entirely. A Lincoln victory presented them with a momentous choice (as they saw it), even before his inauguration – "the Union without slavery, or slavery without the Union".
The immediate catalyst for secession was the victory of the Republican Party and the election of Abraham Lincoln as president in the 1860 elections. American Civil War historian James M. McPherson suggested that, for Southerners, the most ominous feature of the Republican victories in the congressional and presidential elections of 1860 was the magnitude of those victories: Republicans captured over 60 percent of the Northern vote and three-fourths of its Congressional delegations. The Southern press said that such Republicans represented the anti-slavery portion of the North, "a party founded on the single sentiment ... of hatred of African slavery", and now the controlling power in national affairs. The "Black Republican party" could overwhelm conservative Yankees. "The New Orleans Delta" said of the Republicans, "It is in fact, essentially, a revolutionary party" to overthrow slavery.
By 1860, sectional disagreements between North and South concerned primarily the maintenance or expansion of slavery in the United States. Historian Drew Gilpin Faust observed that "leaders of the secession movement across the South cited slavery as the most compelling reason for southern independence". Although most white Southerners did not own slaves, the majority supported the institution of slavery and benefited indirectly from the slave society. For struggling yeomen and subsistence farmers, the slave society provided a large class of people ranked lower in the social scale than themselves. Secondary differences related to issues of free speech, runaway slaves, expansion into Cuba, and states' rights.
Historian Emory Thomas assessed the Confederacy's self-image by studying correspondence sent by the Confederate government in 1861–62 to foreign governments. He found that Confederate diplomacy projected multiple contradictory self-images:
In what later became known as the Cornerstone Speech, Confederate Vice President Alexander H. Stephens declared that the "cornerstone" of the new government "rest[ed] upon the great truth that the negro is not equal to the white man; that slavery – subordination to the superior race – is his natural and normal condition. This, our new government, is the first, in the history of the world, based upon this great physical, philosophical, and moral truth". After the war Stephens tried to qualify his remarks, claiming they were extemporaneous, metaphorical, and intended to refer to public sentiment rather than "the principles of the new Government on this subject".
Four of the seceding states, the Deep South states of South Carolina, Mississippi, Georgia, and Texas, issued formal declarations of the causes of their decision, each of which identified the threat to slaveholders' rights as the cause of, or a major cause of, secession. Georgia also claimed a general Federal policy of favoring Northern over Southern economic interests. Texas mentioned slavery 21 times, but also listed the failure of the federal government to live up to its obligations, in the original annexation agreement, to protect settlers along the exposed western frontier. Texas resolutions further stated that governments of the states and the nation were established "exclusively by the white race, for themselves and their posterity". They also stated that although equal civil and political rights applied to all white men, they did not apply to those of the "African race", further opining that the end of racial enslavement would "bring inevitable calamities upon both [races] and desolation upon the fifteen slave-holding states".
Alabama did not provide a separate declaration of causes. Instead, the Alabama ordinance stated "the election of Abraham Lincoln ... by a sectional party, avowedly hostile to the domestic institutions and to the peace and security of the people of the State of Alabama, preceded by many and dangerous infractions of the Constitution of the United States by many of the States and people of the northern section, is a political wrong of so insulting and menacing a character as to justify the people of the State of Alabama in the adoption of prompt and decided measures for their future peace and security". The ordinance invited "the slaveholding States of the South, who may approve such purpose, in order to frame a provisional as well as a permanent Government upon the principles of the Constitution of the United States" to participate in a February 4, 1861 convention in Montgomery, Alabama.
The secession ordinances of the remaining two states, Florida and Louisiana, simply declared their severing ties with the federal Union, without stating any causes. Afterward, the Florida secession convention formed a committee to draft a declaration of causes, but the committee was discharged before completion of the task. Only an undated, untitled draft remains.
Four of the Upper South states (Virginia, Arkansas, Tennessee, and North Carolina) rejected secession until after the clash at Ft. Sumter. Virginia's ordinance stated a kinship with the slave-holding states of the Lower South, but did not name the institution itself as a primary reason for its course.
Arkansas's secession ordinance encompassed a strong objection to the use of military force to preserve the Union as its motivating reason. Prior to the outbreak of war, the Arkansas Convention had on March 20 given as their first resolution: "The people of the Northern States have organized a political party, purely sectional in its character, the central and controlling idea of which is hostility to the institution of African slavery, as it exists in the Southern States; and that party has elected a President ... pledged to administer the Government upon principles inconsistent with the rights and subversive of the interests of the Southern States."
North Carolina and Tennessee limited their ordinances to simply withdrawing, although Tennessee went so far as to make clear they wished to make no comment at all on the "abstract doctrine of secession".
In a message to the Confederate Congress on April 29, 1861, Jefferson Davis cited both the tariff and slavery as causes of the South's secession.
The pro-slavery "Fire-Eaters" group of Southern Democrats, calling for immediate secession, were opposed by two factions. "Cooperationists" in the Deep South would delay secession until several states left the union, perhaps in a Southern Convention. Under the influence of men such as Texas Governor Sam Houston, delay would have the effect of sustaining the Union. "Unionists", especially in the Border South, often former Whigs, appealed to sentimental attachment to the United States. Southern Unionists' favorite presidential candidate was John Bell of Tennessee, sometimes running under an "Opposition Party" banner.
Many secessionists were active politically. Governor William Henry Gist of South Carolina corresponded secretly with other Deep South governors, and most southern governors exchanged clandestine commissioners. Charleston's secessionist "1860 Association" published over 200,000 pamphlets to persuade the youth of the South. The most influential were: "The Doom of Slavery" and "The South Alone Should Govern the South", both by John Townsend of South Carolina; and James D. B. De Bow's "The Interest of Slavery of the Southern Non-slaveholder".
Developments in South Carolina started a chain of events. The foreman of a jury refused the legitimacy of federal courts, so Federal Judge Andrew Magrath ruled that U.S. judicial authority in South Carolina was vacated. A mass meeting in Charleston celebrating the Charleston and Savannah railroad and state cooperation led the South Carolina legislature to call for a Secession Convention. U.S. Senator James Chesnut, Jr. resigned, as did Senator James Henry Hammond.
Elections for Secessionist conventions were heated to "an almost raving pitch, no one dared dissent", according to historian William W. Freehling. Even once-respected voices, including the Chief Justice of South Carolina, John Belton O'Neall, lost election to the Secession Convention on a Cooperationist ticket. Across the South mobs expelled Yankees and (in Texas) executed German-Americans suspected of loyalty to the United States. Generally, the seceding conventions which followed did not call for a referendum to ratify, although Texas, Arkansas, and Tennessee did, as well as Virginia's second convention. Kentucky declared neutrality, while Missouri had its own civil war until the Unionists took power and drove the Confederate legislators out of the state.
In the antebellum months, the Corwin Amendment was an unsuccessful attempt by the Congress to bring the seceding states back to the Union and to convince the border slave states to remain. It was a proposed amendment to the United States Constitution by Ohio Congressman Thomas Corwin that would shield "domestic institutions" of the states (which in 1861 included slavery) from the constitutional amendment process and from abolition or interference by Congress.
It was passed by the 36th Congress on March 2, 1861. The House approved it by a vote of 133 to 65 and the United States Senate adopted it, with no changes, on a vote of 24 to 12. It was then submitted to the state legislatures for ratification. In his inaugural address Lincoln endorsed the proposed amendment.
The text was as follows:
Had it been ratified by the required number of states prior to 1865, it would have made institutionalized slavery immune to the constitutional amendment procedures and to interference by Congress.
The first secession state conventions from the Deep South sent representatives to meet at the Montgomery Convention in Montgomery, Alabama, on February 4, 1861. There the fundamental documents of government were promulgated, a provisional government was established, and a representative Congress met for the Confederate States of America.
The new 'provisional' Confederate President Jefferson Davis issued a call for 100,000 men from the various states' militias to defend the newly formed Confederacy. All Federal property was seized, along with gold bullion and coining dies at the U.S. mints in Charlotte, North Carolina; Dahlonega, Georgia; and New Orleans. The Confederate capital was moved from Montgomery to Richmond, Virginia, in May 1861. On February 22, 1862, Davis was inaugurated as president with a term of six years.
The newly inaugurated Confederate administration pursued a policy of national territorial integrity, continuing earlier state efforts in 1860 and early 1861 to remove U.S. government presence from within their boundaries. These efforts included taking possession of U.S. courts, custom houses, post offices, and most notably, arsenals and forts. But after the Confederate attack and capture of Fort Sumter in April 1861, Lincoln called up 75,000 of the states' militia to muster under his command. The stated purpose was to re-occupy U.S. properties throughout the South, as the U.S. Congress had not authorized their abandonment. The resistance at Fort Sumter signaled his change of policy from that of the Buchanan Administration. Lincoln's response ignited a firestorm of emotion. The people of both North and South demanded war, and young men rushed to their colors in the hundreds of thousands. Four more states (Virginia, North Carolina, Tennessee, and Arkansas) refused Lincoln's call for troops and declared secession, while Kentucky maintained an uneasy "neutrality".
Secessionists argued that the United States Constitution was a contract among sovereign states that could be abandoned at any time without consultation and that each state had a right to secede. After intense debates and statewide votes, seven Deep South cotton states passed secession ordinances by February 1861 (before Abraham Lincoln took office as president), while secession efforts failed in the other eight slave states. Delegates from those seven formed the CSA in February 1861, selecting Jefferson Davis as the provisional president. Unionist talk of reunion failed and Davis began raising a 100,000 man army.
Initially, some secessionists may have hoped for a peaceful departure. Moderates in the Confederate Constitutional Convention included a provision against importation of slaves from Africa to appeal to the Upper South. Non-slave states might join, but the radicals secured a two-thirds requirement in both houses of Congress to accept them.
Seven states declared their secession from the United States before Lincoln took office on March 4, 1861. After the Confederate attack on Fort Sumter on April 12, 1861, and Lincoln's subsequent call for troops on April 15, four more states declared their secession:
Kentucky declared neutrality but after Confederate troops moved in, the state government asked for Union troops to drive them out. The splinter Confederate state government relocated to accompany western Confederate armies and never controlled the state population. By the end of the war, 90,000 Kentuckians had fought on the side of the Union, compared to 35,000 for the Confederate States.
In Missouri, a constitutional convention was approved and delegates elected by voters. The convention rejected secession 89–1 on March 19, 1861. The governor maneuvered to take control of the St. Louis Arsenal and restrict Federal movements. This led to confrontation, and in June Federal forces drove him and the General Assembly from Jefferson City. The executive committee of the constitutional convention called the members together in July. The convention declared the state offices vacant, and appointed a Unionist interim state government. The exiled governor called a rump session of the former General Assembly together in Neosho and, on October 31, 1861, passed an ordinance of secession. It is still a matter of debate as to whether a quorum existed for this vote. The Confederate state government was never able to control much of Missouri's territory. It had its capital first at Neosho, then at Cassville, before being driven out of the state. For the remainder of the war, it operated as a government in exile at Marshall, Texas.
Neither Kentucky nor Missouri was declared in rebellion in Lincoln's Emancipation Proclamation. The Confederacy recognized the pro-Confederate claimants in both Kentucky (December 10, 1861) and Missouri (November 28, 1861) and laid claim to those states, granting them Congressional representation and adding two stars to the Confederate flag. Voting for the representatives was mostly done by Confederate soldiers from Kentucky and Missouri.
The order of secession resolutions and dates are:
In Virginia, the populous counties along the Ohio and Pennsylvania borders rejected the Confederacy. Unionists held a Convention in Wheeling in June 1861, establishing a "restored government" with a rump legislature, but sentiment in the region remained deeply divided. In the 50 counties that would make up the state of West Virginia, voters from 24 counties had voted for disunion in Virginia's May 23 referendum on the ordinance of secession. In the 1860 Presidential election "Constitutional Democrat" Breckinridge had outpolled "Constitutional Unionist" Bell in the 50 counties by 1,900 votes, 44% to 42%. Regardless of scholarly disputes over election procedures and results county by county, altogether they simultaneously supplied over 20,000 soldiers to each side of the conflict. Representatives for most of the counties were seated in both state legislatures at Wheeling and at Richmond for the duration of the war.
Attempts to secede from the Confederacy by some counties in East Tennessee were checked by martial law. Although slave-holding Delaware and Maryland did not secede, citizens from those states exhibited divided loyalties. Regiments of Marylanders fought in Lee's Army of Northern Virginia. But overall, 24,000 men from Maryland joined the Confederate armed forces, compared to 63,000 who joined Union forces.
Delaware never produced a full regiment for the Confederacy, but neither did it emancipate slaves as did Missouri and West Virginia. District of Columbia citizens made no attempts to secede and through the war years, referendums sponsored by President Lincoln approved systems of compensated emancipation and slave confiscation from "disloyal citizens".
Citizens at Mesilla and Tucson in the southern part of New Mexico Territory formed a secession convention, which voted to join the Confederacy on March 16, 1861, and appointed Dr. Lewis S. Owings as the new territorial governor. They won the Battle of Mesilla and established a territorial government with Mesilla serving as its capital. The Confederacy proclaimed the Confederate Arizona Territory on February 14, 1862, north to the 34th parallel. Marcus H. MacWillie served in both Confederate Congresses as Arizona's delegate. In 1862 the Confederate New Mexico Campaign to take the northern half of the U.S. territory failed and the Confederate territorial government in exile relocated to San Antonio, Texas.
Confederate supporters in the trans-Mississippi west also claimed portions of United States Indian Territory after the United States evacuated the federal forts and installations. Over half of the American Indian troops participating in the Civil War from the Indian Territory supported the Confederacy; troops and one general were enlisted from each tribe. On July 12, 1861, the Confederate government signed a treaty with both the Choctaw and Chickasaw Indian nations. After several battles Union armies took control of the territory.
The Indian Territory never formally joined the Confederacy, but it did receive representation in the Confederate Congress. Many Indians from the Territory were integrated into regular Confederate Army units. After 1863 the tribal governments sent representatives to the Confederate Congress: Elias Cornelius Boudinot representing the Cherokee and Samuel Benton Callahan representing the Seminole and Creek people. The Cherokee Nation aligned with the Confederacy. They practiced and supported slavery, opposed abolition, and feared their lands would be seized by the Union. After the war, the Indian territory was disestablished, their black slaves were freed, and the tribes lost some of their lands.
Montgomery, Alabama, served as the capital of the Confederate States of America from February 4 until May 29, 1861, in the Alabama State Capitol. Six states created the Confederate States of America there on February 8, 1861. The Texas delegation was seated at the time, so it is counted in the "original seven" states of the Confederacy; it had no roll call vote until after its referendum made secession "operative". Two sessions of the Provisional Congress were held in Montgomery, adjourning May 21. The Permanent Constitution was adopted there on March 12, 1861.
The permanent capital provided for in the Confederate Constitution called for a state cession of a ten-mile-square (100 square mile) district to the central government. Atlanta, which had not yet supplanted Milledgeville, Georgia, as its state capital, put in a bid noting its central location and rail connections, as did Opelika, Alabama, noting its strategically interior situation, rail connections and nearby deposits of coal and iron.
Richmond, Virginia, was chosen for the interim capital at the Virginia State Capitol. The move was used by Vice President Stephens and others to encourage other border states to follow Virginia into the Confederacy. In the political moment it was a show of "defiance and strength". The war for Southern independence was surely to be fought in Virginia, but it also had the largest Southern military-aged white population, with infrastructure, resources, and supplies required to sustain a war. The Davis Administration's policy was that, "It must be held at all hazards."
The naming of Richmond as the new capital took place on May 30, 1861, and the last two sessions of the Provisional Congress were held in the new capital. The Permanent Confederate Congress and President were elected in the states and army camps on November 6, 1861. The First Congress met in four sessions in Richmond from February 18, 1862, to February 17, 1864. The Second Congress met there in two sessions, from May 2, 1864, to March 18, 1865.
As war dragged on, Richmond became crowded with training and transfers, logistics and hospitals. Prices rose dramatically despite government efforts at price regulation. A movement in Congress led by Henry S. Foote of Tennessee argued for moving the capital from Richmond. At the approach of Federal armies in mid-1862, the government's archives were readied for removal. As the Wilderness Campaign progressed, Congress authorized Davis to remove the executive department and call Congress to session elsewhere in 1864 and again in 1865. Shortly before the end of the war, the Confederate government evacuated Richmond, planning to relocate farther south. Little came of these plans before Lee's surrender at Appomattox Court House, Virginia on April 9, 1865. Davis and most of his cabinet fled to Danville, Virginia, which served as their headquarters for about a week.
Unionism—opposition to the Confederacy—was widespread, especially in the mountain regions of Appalachia and the Ozarks. Unionists, led by Parson Brownlow and Senator Andrew Johnson, took control of eastern Tennessee in 1863. Unionists also attempted control over western Virginia but never effectively held more than half the counties that formed the new state of West Virginia. Union forces captured parts of coastal North Carolina, and at first were welcomed by local unionists. That changed as the occupiers became perceived as oppressive, callous, radical and favorable to the Freedmen. Occupiers engaged in pillaging, freeing of slaves, and eviction of those refusing to take or reneging on the loyalty oaths, as ex-Unionists began to support the Confederate cause.
Support for the Confederacy was perhaps weakest in Texas; Claude Elliott estimates that only a third of the population actively supported the Confederacy. Many Unionists supported the Confederacy after the war began, but many others clung to their Unionism throughout the war, especially in the northern counties, the German districts, and the Mexican areas. According to Ernest Wallace: "This account of a dissatisfied Unionist minority, although historically essential, must be kept in its proper perspective, for throughout the war the overwhelming majority of the people zealously supported the Confederacy ..." Randolph B. Campbell states, "In spite of terrible losses and hardships, most Texans continued throughout the war to support the Confederacy as they had supported secession". Dale Baum in his analysis of Texas politics in the era counters: "This idea of a Confederate Texas united politically against northern adversaries was shaped more by nostalgic fantasies than by wartime realities." He characterizes Texas Civil War history as "a morose story of intragovernmental rivalries coupled with wide-ranging disaffection that prevented effective implementation of state wartime policies".
In Texas, local officials harassed Unionists and engaged in large-scale massacres against Unionists and Germans. In Cooke County 150 suspected Unionists were arrested; 25 were lynched without trial and 40 more were hanged after a summary trial. Draft resistance was widespread especially among Texans of German or Mexican descent; many of the latter went to Mexico. Potential draftees went into hiding, Confederate officials hunted them down, and many were shot.
Civil liberties were of small concern in both the North and South. Lincoln and Davis both took a hard line against dissent. Neely explores how the Confederacy became a virtual police state with guards and patrols all about, and a domestic passport system whereby everyone needed official permission each time they wanted to travel. Over 4,000 suspected Unionists were imprisoned without trial.
During the four years of its existence under trial by war, the Confederate States of America asserted its independence and appointed dozens of diplomatic agents abroad. None were ever officially recognized by a foreign government. The United States government regarded the Southern states as being in rebellion or insurrection and so refused any formal recognition of their status.
Even before Fort Sumter, U.S. Secretary of State William H. Seward issued formal instructions to the American minister to Britain, Charles Francis Adams:
Seward instructed Adams that if the British government seemed inclined to recognize the Confederacy, or even waver in that regard, it was to receive a sharp warning, with a strong hint of war:
The United States government never declared war on those "kindred and countrymen" in the Confederacy, but conducted its military efforts beginning with a presidential proclamation issued April 15, 1861. It called for troops to recapture forts and suppress what Lincoln later called an "insurrection and rebellion".
Mid-war parleys between the two sides occurred without formal political recognition, though the laws of war predominantly governed military relationships on both sides of uniformed conflict.
On the part of the Confederacy, immediately following Fort Sumter the Confederate Congress proclaimed that "war exists between the Confederate States and the Government of the United States, and the States and Territories thereof". A state of war was not to formally exist between the Confederacy and those states and territories in the United States allowing slavery, although Confederate Rangers were compensated for destruction they could effect there throughout the war.
Concerning the international status and nationhood of the Confederate States of America, in 1869 the United States Supreme Court, in "Texas v. White", ruled that Texas' declaration of secession was legally null and void. Jefferson Davis, former President of the Confederacy, and Alexander H. Stephens, its former vice-president, both wrote postwar arguments in favor of secession's legality and the international legitimacy of the Government of the Confederate States of America, most notably Davis' "The Rise and Fall of the Confederate Government".
Once war with the United States began, the Confederacy pinned its hopes for survival on military intervention by Great Britain and France. The Confederates who had believed that "cotton is king", that is, that Britain had to support the Confederacy to obtain cotton, proved mistaken. The British had stocks to last over a year and had been developing alternative sources of cotton, most notably India and Egypt. They were not about to go to war with the U.S. to acquire more cotton at the risk of losing the large quantities of food imported from the North. The Confederate government repeatedly sent delegations to Europe, but historians give them low marks for their poor diplomacy. James M. Mason went to London and John Slidell traveled to Paris. They were unofficially interviewed, but neither secured official recognition for the Confederacy.
In the United Kingdom, which had abolished slavery in 1833, Confederate diplomats found little support for American slavery, cotton trade or no. A series of slave narratives about American slavery was being published in London. It was in London that the first World Anti-Slavery Convention had been held in 1840; it was followed by regular smaller conferences. A string of eloquent and sometimes well-educated Negro abolitionist speakers criss-crossed not just England but Scotland and Ireland as well. In addition to exposing the reality of America's shameful and sinful chattel slavery—some were fugitive slaves—they put the lie to the Confederate position that negroes were "unintellectual, timid, and dependant", and "not equal to the white man...the superior race," as it was put by Confederate Vice-President Alexander H. Stephens in his famous Cornerstone Speech. Frederick Douglass, Henry Highland Garnet, Sarah Parker Remond, her brother Charles Lenox Remond, James W. C. Pennington, Martin Delany, Samuel Ringgold Ward, and William G. Allen all spent years in the United Kingdom, where fugitive slaves were safe and, as Allen said, there was an "absence of prejudice against color. Here the colored man feels himself among friends, and not among enemies". One speaker alone, William Wells Brown, gave more than 1,000 lectures on the shame of American chattel slavery.
In late 1861, the seizure of two senior Confederate diplomats aboard a British ship by the U.S. Navy outraged Britain and led to a war scare in the Trent Affair. Queen Victoria insisted on giving the Americans an exit route and Lincoln took it, releasing the two diplomats. Tensions cooled, and the Confederacy gained no advantage. In recent years most historians argue that the risk of actual war over the Trent Affair was small, because it would have hurt both sides.
Throughout the early years of the war, British foreign secretary Lord John Russell, Emperor Napoleon III of France, and, to a lesser extent, British Prime Minister Lord Palmerston, showed interest in recognition of the Confederacy or at least mediation of the war. William Ewart Gladstone, the British Chancellor of the Exchequer (finance minister, in office 1859–1866), whose family wealth was based on slavery, was the key Minister calling for intervention to help the Confederacy achieve independence. He failed to convince prime minister Palmerston. By September 1862 the Union victory at the Battle of Antietam, Lincoln's preliminary Emancipation Proclamation and abolitionist opposition in Britain put an end to these possibilities. The cost to Britain of a war with the U.S. would have been high: the immediate loss of American grain-shipments, the end of British exports to the U.S., and the seizure of billions of pounds invested in American securities. War would have meant higher taxes in Britain, another invasion of Canada, and full-scale worldwide attacks on the British merchant fleet. Outright recognition would have meant certain war with the United States; in mid-1862 fears of race war (as had transpired in the Haitian Revolution of 1791–1804) led to the British considering intervention for humanitarian reasons. Lincoln's Emancipation Proclamation did not lead to interracial violence, let alone a bloodbath, but it did give the friends of the Union strong talking points in the arguments that raged across Britain.
John Slidell, the Confederate States emissary to France, did succeed in negotiating a loan of $15,000,000 from Erlanger and other French capitalists. The money went to buy ironclad warships, as well as military supplies that came in with blockade runners. The British government did allow the construction of blockade runners in Britain; they were owned and operated by British financiers and sailors; a few were owned and operated by the Confederacy. The British investors' goal was to get highly profitable cotton.
Several European nations maintained diplomats in place who had been appointed to the U.S., but no country appointed any diplomat to the Confederacy. Those nations recognized the Union and Confederate sides as belligerents. In 1863 the Confederacy expelled European diplomatic missions for advising their resident subjects to refuse to serve in the Confederate army. Both Confederate and Union agents were allowed to work openly in British territories. Some state governments in northern Mexico negotiated local agreements to cover trade on the Texas border. Pope Pius IX wrote a letter to Jefferson Davis in which he addressed Davis as the "Honorable President of the Confederate States of America". The Confederacy appointed Ambrose Dudley Mann as special agent to the Holy See on September 24, 1863. But the Holy See never released a formal statement supporting or recognizing the Confederacy. In November 1863, Mann met Pope Pius IX in person and received a letter supposedly addressed "to the Illustrious and Honorable Jefferson Davis, President of the Confederate States of America"; Mann had mistranslated the address. In his report to Richmond, Mann claimed a great diplomatic achievement for himself, asserting the letter was "a positive recognition of our Government". The letter was indeed used in propaganda, but Confederate Secretary of State Judah P. Benjamin told Mann it was "a mere inferential recognition, unconnected with political action or the regular establishment of diplomatic relations" and thus did not assign it the weight of formal recognition.
Nevertheless, the Confederacy was seen internationally as a serious attempt at nationhood, and European governments sent military observers, both official and unofficial, to assess whether there had been a "de facto" establishment of independence. These observers included Arthur Lyon Fremantle of the British Coldstream Guards, who entered the Confederacy via Mexico, Fitzgerald Ross of the Austrian Hussars, and Justus Scheibert of the Prussian Army. European travelers visited and wrote accounts for publication. Importantly in 1862, the Frenchman Charles Girard's "Seven months in the rebel states during the North American War" testified "this government ... is no longer a trial government ... but really a normal government, the expression of popular will".
Fremantle went on to write in his book "Three Months in the Southern States" that he had
French Emperor Napoleon III assured Confederate diplomat John Slidell that he would make "direct proposition" to Britain for joint recognition. The Emperor made the same assurance to British Members of Parliament John A. Roebuck and John A. Lindsay. Roebuck in turn publicly prepared a bill to submit to Parliament June 30 supporting joint Anglo-French recognition of the Confederacy. "Southerners had a right to be optimistic, or at least hopeful, that their revolution would prevail, or at least endure." Following the dual reverses at Vicksburg and Gettysburg in July 1863, the Confederates "suffered a severe loss of confidence in themselves", and withdrew into an interior defensive position. There would be no help from the Europeans.
By December 1864, Davis considered sacrificing slavery in order to enlist recognition and aid from Paris and London; he secretly sent Duncan F. Kenner to Europe with a message that the war was fought solely for "the vindication of our rights to self-government and independence" and that "no sacrifice is too great, save that of honor". The message stated that if the French or British governments made their recognition conditional on anything at all, the Confederacy would consent to such terms. Davis's message could not explicitly acknowledge that slavery was on the bargaining table due to still-strong domestic support for slavery among the wealthy and politically influential. European leaders all saw that the Confederacy was on the verge of total defeat.
The great majority of young white men voluntarily joined Confederate national or state military units. Perman (2010) says historians are of two minds on why millions of men seemed so eager to fight, suffer and die over four years:
Civil War historian E. Merton Coulter wrote that for those who would secure its independence, "The Confederacy was unfortunate in its failure to work out a general strategy for the whole war". Aggressive strategy called for offensive force concentration. Defensive strategy sought dispersal to meet demands of locally minded governors. The controlling philosophy evolved into a combination "dispersal with a defensive concentration around Richmond". The Davis administration considered the war purely defensive, a "simple demand that the people of the United States would cease to war upon us". Historian James M. McPherson is a critic of Lee's offensive strategy: "Lee pursued a faulty military strategy that ensured Confederate defeat".
As the Confederate government lost control of territory in campaign after campaign, it was said that "the vast size of the Confederacy would make its conquest impossible". The enemy would be struck down by the same elements which so often debilitated or destroyed visitors and transplants in the South. Heat exhaustion, sunstroke, endemic diseases such as malaria and typhoid would match the destructive effectiveness of the Moscow winter on the invading armies of Napoleon.
Early in the war both sides believed that one great battle would decide the conflict; the Confederates won a great victory at the First Battle of Bull Run, also known as First Manassas (the name used by Confederate forces). It drove the Confederate people "insane with joy"; the public demanded a forward movement to capture Washington, relocate the Confederate capital there, and admit Maryland to the Confederacy. A council of war by the victorious Confederate generals decided not to advance against larger numbers of fresh Federal troops in defensive positions. Davis did not countermand it. Following the Confederate incursion halted at the Battle of Antietam in September 1862, generals proposed concentrating forces from state commands to re-invade the north. Nothing came of it. Again in mid-1863 at his incursion into Pennsylvania, Lee requested of Davis that Beauregard simultaneously attack Washington with troops taken from the Carolinas. But the troops there remained in place during the Gettysburg Campaign.
The eleven states of the Confederacy were outnumbered by the North about four to one in white men of military age. It was overmatched far more in military equipment, industrial facilities, railroads for transport, and wagons supplying the front.
Confederate military policy innovated to slow the invaders, but at heavy cost to the Southern infrastructure. The Confederates burned bridges, laid land mines in the roads, and made harbors, inlets and inland waterways unusable with sunken mines (called "torpedoes" at the time). Coulter reports:
The Confederacy relied on external sources for war materials. The first came from trade with the enemy. "Vast amounts of war supplies" came through Kentucky, and thereafter, western armies were "to a very considerable extent" provisioned with illicit trade via Federal agents and northern private traders. But that trade was interrupted in the first year of war by Admiral Porter's river gunboats as they gained dominance along navigable rivers north–south and east–west. Overseas blockade running then came to be of "outstanding importance". On April 17, President Davis called on privateer raiders, the "militia of the sea", to make war on U.S. seaborne commerce. Despite noteworthy effort, over the course of the war the Confederacy was found unable to match the Union in ships and seamanship, materials and marine construction.
Perhaps the greatest obstacle to success in the 19th-century warfare of mass armies was the Confederacy's lack of manpower, and sufficient numbers of disciplined, equipped troops in the field at the point of contact with the enemy. During the winter of 1862–63, Lee observed that none of his famous victories had resulted in the destruction of the opposing army. He lacked reserve troops to exploit an advantage on the battlefield as Napoleon had done. Lee explained, "More than once have most promising opportunities been lost for want of men to take advantage of them, and victory itself had been made to put on the appearance of defeat, because our diminished and exhausted troops have been unable to renew a successful struggle against fresh numbers of the enemy."
The military armed forces of the Confederacy comprised three branches: Army, Navy and Marine Corps.
The Confederate military leadership included many veterans from the United States Army and United States Navy who had resigned their Federal commissions and had won appointment to senior positions in the Confederate armed forces. Many had served in the Mexican–American War (including Robert E. Lee and Jefferson Davis), but some such as Leonidas Polk (who graduated from West Point but did not serve in the Army) had little or no experience.
The Confederate officer corps consisted of men from both slave-owning and non-slave-owning families. The Confederacy appointed junior and field grade officers by election from the enlisted ranks. Although no Army service academy was established for the Confederacy, some colleges (such as The Citadel and Virginia Military Institute) maintained cadet corps that trained Confederate military leadership. A naval academy was established at Drewry's Bluff, Virginia in 1863, but no midshipmen graduated before the Confederacy's end.
The soldiers of the Confederate armed forces consisted mainly of white males aged between 16 and 28. The median year of birth was 1838, so half the soldiers were 23 or older by 1861. In early 1862, the Confederate Army was allowed to disintegrate for two months following expiration of short-term enlistments. A majority of those in uniform would not re-enlist following their one-year commitment, so on April 16, 1862, the Confederate Congress enacted the first mass conscription on the North American continent. (The U.S. Congress followed a year later on March 3, 1863, with the Enrollment Act.) Rather than a universal draft, the initial program was a selective service with physical, religious, professional and industrial exemptions. These were narrowed as the war progressed. Initially substitutes were permitted, but by December 1863 these were disallowed. In September 1862 the age limit was increased from 35 to 45 and by February 1864, all men under 18 and over 45 were conscripted to form a reserve for state defense inside state borders. By March 1864, the Superintendent of Conscription reported that all across the Confederacy, every officer in constituted authority, man and woman, "engaged in opposing the enrolling officer in the execution of his duties". Conscription was challenged in the state courts, but the Confederate state supreme courts routinely rejected those legal challenges.
Many thousands of slaves served as personal servants to their owner, or were hired as laborers, cooks, and pioneers. Some freed blacks and men of color served in local state militia units of the Confederacy, primarily in Louisiana and South Carolina, but their officers deployed them for "local defense, not combat". Depleted by casualties and desertions, the military suffered chronic manpower shortages. In early 1865, the Confederate Congress, influenced by the public support by General Lee, approved the recruitment of black infantry units. Contrary to Lee's and Davis's recommendations, the Congress refused "to guarantee the freedom of black volunteers". No more than two hundred black combat troops were ever raised.
The immediate onset of war meant that it was fought by the "Provisional" or "Volunteer Army". State governors resisted concentrating a national effort. Several wanted a strong state army for self-defense. Others feared large "Provisional" armies answering only to Davis. When filling the Confederate government's call for 100,000 men, another 200,000 were turned away by accepting only those enlisted "for the duration" or twelve-month volunteers who brought their own arms or horses.
It was important to raise troops; it was just as important to provide capable officers to command them. With few exceptions the Confederacy secured excellent general officers. Efficiency in the lower officers was "greater than could have been reasonably expected". As with the Federals, political appointees could be indifferent. Otherwise, the officer corps was governor-appointed or elected by unit enlisted. Promotion to fill vacancies was made internally regardless of merit, even if better officers were immediately available.
Anticipating the need for more "duration" men, in January 1862 Congress provided for company level recruiters to return home for two months, but their efforts met little success on the heels of Confederate battlefield defeats in February. Congress allowed for Davis to require numbers of recruits from each governor to supply the volunteer shortfall. States responded by passing their own draft laws.
The veteran Confederate army of early 1862 was mostly twelve-month volunteers with terms about to expire. Enlisted reorganization elections disintegrated the army for two months. Officers pleaded with the ranks to re-enlist, but a majority did not. Those remaining elected majors and colonels whose performance led to officer review boards in October. The boards caused a "rapid and widespread" thinning out of 1,700 incompetent officers. Troops thereafter would elect only second lieutenants.
In early 1862, the popular press suggested the Confederacy required a million men under arms. But veteran soldiers were not re-enlisting, and earlier secessionist volunteers did not reappear to serve in war. One Macon, Georgia, newspaper asked how two million brave fighting men of the South were about to be overcome by four million northerners who were said to be cowards.
The Confederacy passed the first American law of national conscription on April 16, 1862. The white males of the Confederate States from 18 to 35 were declared members of the Confederate army for three years, and all men then enlisted were extended to a three-year term. They would serve only in units and under officers of their state. Those under 18 and over 35 could substitute for conscripts; in September, those from 35 to 45 became conscripts. The cry of "rich man's war and a poor man's fight" led Congress to abolish the substitute system altogether in December 1863. All principals benefiting earlier were made eligible for service. By February 1864, the age bracket was made 17 to 50, those under eighteen and over forty-five to be limited to in-state duty.
Confederate conscription was not universal; it was a selective service. The First Conscription Act of April 1862 exempted occupations related to transportation, communication, industry, ministers, teaching and physical fitness. The Second Conscription Act of October 1862 expanded exemptions in industry, agriculture and conscientious objection. Exemption fraud proliferated in medical examinations, army furloughs, churches, schools, apothecaries and newspapers.
Rich men's sons were appointed to the socially outcast "overseer" occupation, but the measure was received in the country with "universal odium". The legislative vehicle was the controversial Twenty Negro Law that specifically exempted one white overseer or owner for every plantation with at least 20 slaves. Backpedalling six months later, Congress provided overseers under 45 could be exempted only if they held the occupation before the first Conscription Act. The number of officials under state exemptions appointed by state Governor patronage expanded significantly. By law, substitutes could not be subject to conscription, but instead of adding to Confederate manpower, unit officers in the field reported that over-50 and under-17-year-old substitutes made up to 90% of the desertions.
The Conscription Act of February 1864 "radically changed the whole system" of selection. It abolished industrial exemptions, placing detail authority in President Davis. As the shame of conscription was greater than a felony conviction, the system brought in "about as many volunteers as it did conscripts." Many men in otherwise "bombproof" positions were enlisted in one way or another, nearly 160,000 additional volunteers and conscripts in uniform. Still there was shirking. To administer the draft, a Bureau of Conscription was set up to use state officers, as state Governors would allow. It had a checkered career of "contention, opposition and futility". Armies appointed alternative military "recruiters" to bring in the out-of-uniform 17–50-year-old conscripts and deserters. Nearly 3,000 officers were tasked with the job. By late 1864, Lee was calling for more troops. "Our ranks are constantly diminishing by battle and disease, and few recruits are received; the consequences are inevitable." By March 1865 conscription was to be administered by generals of the state reserves calling out men over 45 and under 18 years old. All exemptions were abolished. These regiments were assigned to recruit conscripts ages 17–50, recover deserters, and repel enemy cavalry raids. The service retained men who had lost but one arm or a leg in home guards. Ultimately, conscription was a failure, and its main value was in goading men to volunteer.
The survival of the Confederacy depended on a strong base of civilians and soldiers devoted to victory. The soldiers performed well, though increasing numbers deserted in the last year of fighting, and the Confederacy never succeeded in replacing casualties as the Union could. The civilians, although enthusiastic in 1861–62, seem to have lost faith in the future of the Confederacy by 1864, and instead looked to protect their homes and communities. As Rable explains, "This contraction of civic vision was more than a crabbed libertarianism; it represented an increasingly widespread disillusionment with the Confederate experiment."
The American Civil War broke out in April 1861 with a Confederate victory at the Battle of Fort Sumter in Charleston.
In January, President James Buchanan had attempted to resupply the garrison with the steamship, "Star of the West", but Confederate artillery drove it away. In March, President Lincoln notified South Carolina Governor Pickens that without Confederate resistance to the resupply there would be no military reinforcement without further notice, but Lincoln prepared to force resupply if it were not allowed. Confederate President Davis, in cabinet, decided to seize Fort Sumter before the relief fleet arrived, and on April 12, 1861, General Beauregard forced its surrender.
Following Sumter, Lincoln directed states to provide 75,000 troops for three months to recapture the Charleston Harbor forts and all other federal property. This emboldened secessionists in Virginia, Arkansas, Tennessee and North Carolina to secede rather than provide troops to march into neighboring Southern states. In May, Federal troops crossed into Confederate territory along the entire border from the Chesapeake Bay to New Mexico. The first battles were Confederate victories at Big Bethel (Bethel Church, Virginia), First Bull Run (First Manassas) in Virginia in July, and Wilson's Creek (Oak Hills) in Missouri in August. At all three, Confederate forces could not follow up their victory due to inadequate supply and shortages of fresh troops to exploit their successes. Following each battle, Federals maintained a military presence and occupied Washington, DC; Fort Monroe, Virginia; and Springfield, Missouri. Both North and South began training up armies for major fighting the next year. Union General George B. McClellan's forces gained possession of much of northwestern Virginia in mid-1861, concentrating on towns and roads; the interior was too large to control and became the center of guerrilla activity. General Robert E. Lee was defeated at Cheat Mountain in September and no serious Confederate advance in western Virginia occurred until the next year.
Meanwhile, the Union Navy seized control of much of the Confederate coastline from Virginia to South Carolina. It took over plantations and the abandoned slaves. Federals there began a war-long policy of burning grain supplies up rivers into the interior wherever they could not occupy. The Union Navy began a blockade of the major southern ports and prepared an invasion of Louisiana to capture New Orleans in early 1862.
The victories of 1861 were followed by a series of defeats east and west in early 1862. To restore the Union by military force, the Federal strategy was to (1) secure the Mississippi River, (2) seize or close Confederate ports, and (3) march on Richmond. To secure independence, the Confederate intent was to (1) repel the invader on all fronts, costing him blood and treasure, and (2) carry the war into the North by two offensives in time to affect the mid-term elections.
Much of northwestern Virginia was under Federal control.
In February and March, most of Missouri and Kentucky were Union "occupied, consolidated, and used as staging areas for advances further South". Following the repulse of Confederate counter-attack at the Battle of Shiloh, Tennessee, permanent Federal occupation expanded west, south and east. Confederate forces repositioned south along the Mississippi River to Memphis, Tennessee, where at the naval Battle of Memphis, its River Defense Fleet was sunk. Confederates withdrew from northern Mississippi and northern Alabama. New Orleans was captured April 29 by a combined Army-Navy force under U.S. Admiral David Farragut, and the Confederacy lost control of the mouth of the Mississippi River. It had to concede extensive agricultural resources that had supported the Union's sea-supplied logistics base.
Although Confederates had suffered major reverses everywhere, as of the end of April the Confederacy still controlled territory holding 72% of its population. Federal forces disrupted Missouri and Arkansas; they had broken through in western Virginia, Kentucky, Tennessee and Louisiana. Along the Confederacy's shores, Union forces had closed ports and made garrisoned lodgments on every coastal Confederate state except Alabama and Texas. Although scholars sometimes assess the Union blockade as ineffectual under international law until the last few months of the war, from the first months it disrupted Confederate privateers, making it "almost impossible to bring their prizes into Confederate ports". British firms developed small fleets of blockade running companies, such as John Fraser and Company, and the Ordnance Department secured its own blockade runners for dedicated munitions cargoes.
During the Civil War fleets of armored warships were deployed for the first time in sustained blockades at sea. After some success against the Union blockade, in March the ironclad CSS "Virginia" was forced into port and burned by Confederates at their retreat. Despite several attempts mounted from their port cities, CSA naval forces were unable to break the Union blockade. Attempts were made by Commodore Josiah Tattnall's ironclads from Savannah in 1862 with the CSS "Atlanta". Secretary of the Navy Stephen Mallory placed his hopes in a European-built ironclad fleet, but they were never realized. On the other hand, four new English-built commerce raiders served the Confederacy, and several fast blockade runners were sold in Confederate ports. They were converted into commerce-raiding cruisers, and manned by their British crews.
In the east, Union forces could not close on Richmond. General McClellan landed his army on the Lower Peninsula of Virginia. Lee subsequently ended that threat from the east, then Union General John Pope attacked overland from the north only to be repulsed at Second Bull Run (Second Manassas). Lee's strike north was turned back at Antietam, Maryland, then Union Major General Ambrose Burnside's offensive was disastrously ended at Fredericksburg, Virginia, in December. Both armies then turned to winter quarters to recruit and train for the coming spring.
In an attempt to seize the initiative, reprovision, protect farms in mid-growing season and influence U.S. Congressional elections, two major Confederate incursions into Union territory had been launched in August and September 1862. Both Braxton Bragg's invasion of Kentucky and Lee's invasion of Maryland were decisively repulsed, leaving the Confederacy in control of but 63% of its population. Civil War scholar Allan Nevins argues that 1862 was the strategic high-water mark of the Confederacy. The failures of the two invasions were attributed to the same irrecoverable shortcomings: lack of manpower at the front, lack of supplies including serviceable shoes, and exhaustion after long marches without adequate food. Also in September Confederate General William W. Loring pushed Federal forces from Charleston, Virginia, and the Kanawha Valley in western Virginia, but lacking reinforcements, Loring abandoned his position and by November the region was back in Federal control.
The failed Middle Tennessee campaign was ended January 2, 1863, at the inconclusive Battle of Stones River (Murfreesboro), in which both sides suffered the highest percentage of casualties of any battle in the war. It was followed by another strategic withdrawal by Confederate forces. The Confederacy won a significant victory in April 1863, repulsing the Federal advance on Richmond at Chancellorsville, but the Union consolidated positions along the Virginia coast and the Chesapeake Bay.
Without an effective answer to Federal gunboats, river transport and supply, the Confederacy lost the Mississippi River following the capture of Vicksburg, Mississippi, and Port Hudson in July, ending Southern access to the trans-Mississippi West. July brought short-lived counters, Morgan's Raid into Ohio and the New York City draft riots. Robert E. Lee's strike into Pennsylvania was repulsed at Gettysburg, Pennsylvania despite Pickett's famous charge and other acts of valor. Southern newspapers assessed the campaign as "The Confederates did not gain a victory, neither did the enemy."
September and November left Confederates yielding Chattanooga, Tennessee, the gateway to the lower south. For the remainder of the war fighting was restricted inside the South, resulting in a slow but continuous loss of territory. In early 1864, the Confederacy still controlled 53% of its population, but it withdrew further to reestablish defensive positions. Union offensives continued with Sherman's March to the Sea to take Savannah and Grant's Wilderness Campaign to encircle Richmond and besiege Lee's army at Petersburg.
In April 1863, the C.S. Congress authorized a uniformed Volunteer Navy, many of whom were British. Wilmington and Charleston had more shipping while "blockaded" than before the beginning of hostilities. The Confederacy had altogether eighteen commerce-destroying cruisers, which seriously disrupted Federal commerce at sea and increased shipping insurance rates 900%. Commodore Tattnall again unsuccessfully attempted to break the Union blockade on the Savannah River in Georgia with an ironclad in 1863. Beginning in April 1864 the ironclad CSS "Albemarle" engaged Union gunboats and sank or cleared them for six months on the Roanoke River in North Carolina. The Federals closed Mobile Bay by sea-based amphibious assault in August, ending Gulf coast trade east of the Mississippi River. In December, the Battle of Nashville ended Confederate operations in the western theater.
Large numbers of families relocated to safer places, usually remote rural areas, bringing along household slaves if they had any. Mary Massey argues these elite exiles introduced an element of defeatism into the southern outlook.
The first three months of 1865 saw the Federal Carolinas Campaign, devastating a wide swath of the remaining Confederate heartland. The "breadbasket of the Confederacy" in the Great Valley of Virginia was occupied by Philip Sheridan. The Union Blockade captured Fort Fisher in North Carolina, and Sherman finally took Charleston, South Carolina, by land attack.
The Confederacy controlled no ports, harbors or navigable rivers. Railroads were captured or had ceased operating. Its major food-producing regions had been war-ravaged or occupied. Its administration survived in only three pockets of territory holding only one-third of its population. Its armies were defeated or disbanding. At the February 1865 Hampton Roads Conference with Lincoln, senior Confederate officials rejected his invitation to restore the Union with compensation for emancipated slaves. The three pockets of unoccupied Confederacy were southern Virginia – North Carolina, central Alabama – Florida, and Texas, the latter two remaining unoccupied less from any notion of resistance than from the Federal forces' lack of interest in occupying them. The Davis policy was independence or nothing, while Lee's army was wracked by disease and desertion, barely holding the trenches defending Jefferson Davis' capital.
The Confederacy's last remaining blockade-running port, Wilmington, North Carolina, was lost. When the Union broke through Lee's lines at Petersburg, Richmond fell immediately. Lee surrendered a remnant of 50,000 from the Army of Northern Virginia at Appomattox Court House, Virginia, on April 9, 1865. "The Surrender" marked the end of the Confederacy.
The CSS "Stonewall" sailed from Europe to break the Union blockade in March; on making Havana, Cuba, it surrendered. Some high officials escaped to Europe, but President Davis was captured May 10; all remaining Confederate land forces surrendered by June 1865. The U.S. Army took control of the Confederate areas without post-surrender insurgency or guerrilla warfare against them, but peace was subsequently marred by a great deal of local violence, feuding and revenge killings. The last confederate military unit, the commerce raider CSS "Shenandoah", surrendered on November 6, 1865 in Liverpool.
Historian Gary Gallagher concluded that the Confederacy capitulated in early 1865 because northern armies crushed "organized southern military resistance". The Confederacy's population, soldier and civilian, had suffered material hardship and social disruption. They had expended and extracted a profusion of blood and treasure until collapse; "the end had come". Jefferson Davis' assessment in 1890 determined, "With the capture of the capital, the dispersion of the civil authorities, the surrender of the armies in the field, and the arrest of the President, the Confederate States of America disappeared ... their history henceforth became a part of the history of the United States."
When the war ended over 14,000 Confederates petitioned President Johnson for a pardon; he was generous in giving them out. He issued a general amnesty to all Confederate participants in the "late Civil War" in 1868. Congress passed additional Amnesty Acts in May 1866 with restrictions on office holding, and the Amnesty Act in May 1872 lifting those restrictions. There was a great deal of discussion in 1865 about bringing treason trials, especially against Jefferson Davis. There was no consensus in President Johnson's cabinet and there were no treason trials against anyone. In the case of Davis there was a strong possibility of acquittal which would have been humiliating for the government.
Davis was indicted for treason but never tried; he was released from prison on bail in May 1867. The amnesty of December 25, 1868, by President Johnson eliminated any possibility of Jefferson Davis (or anyone else associated with the Confederacy) standing trial for treason. | https://en.wikipedia.org/wiki?curid=7023 |
Cranberry
Cranberries are a group of evergreen dwarf shrubs or trailing vines in the subgenus Oxycoccus of the genus "Vaccinium". In Britain, cranberry may refer to the native species "Vaccinium oxycoccos", while in North America, cranberry may refer to "Vaccinium macrocarpon". "Vaccinium oxycoccos" is cultivated in central and northern Europe, while "Vaccinium macrocarpon" is cultivated throughout the northern United States, Canada and Chile. In some methods of classification, "Oxycoccus" is regarded as a genus in its own right. They can be found in acidic bogs throughout the cooler regions of the Northern Hemisphere.
Cranberries are low, creeping shrubs or vines up to long and in height; they have slender, wiry stems that are not thickly woody and have small evergreen leaves. The flowers are dark pink, with very distinct "reflexed" petals, leaving the style and stamens fully exposed and pointing forward. They are pollinated by bees. The fruit is a berry that is larger than the leaves of the plant; it is initially light green, turning red when ripe. It is edible, but with an acidic taste that usually overwhelms its sweetness.
In 2017, the United States, Canada, and Chile accounted for 98% of the world production of cranberries. Most cranberries are processed into products such as juice, sauce, jam, and sweetened dried cranberries, with the remainder sold fresh to consumers. Cranberry sauce is a traditional accompaniment to turkey at Christmas dinner in the United Kingdom, and at Christmas and Thanksgiving dinners in the United States and Canada.
Cranberries are related to bilberries, blueberries, and huckleberries, all in "Vaccinium" subgenus "Vaccinium". These differ in having bell-shaped flowers, the petals not being reflexed, and woodier stems, forming taller shrubs. There are 3-4 species of cranberry, classified by "subgenus":
The name, "cranberry", derives from the German, "kraanbere" (English translation, "craneberry"), first named as "cranberry" in English by the missionary John Eliot in 1647. Around 1694, German and Dutch colonists in New England used the word, cranberry, to represent the expanding flower, stem, calyx, and petals resembling the neck, head, and bill of a crane. The traditional English name for the plant more common in Europe, "Vaccinium oxycoccos", , originated from plants with small red berries found growing in fen (marsh) lands of England.
In North America, the Narragansett people of the Algonquian nation in the regions of New England appeared to be using cranberries in pemmican for food and for dye. Calling the red berries, "sasemineash", the Narragansett people may have introduced cranberries to colonists in Massachusetts. In 1550, James White Norwood made reference to Native Americans using cranberries, the earliest known reference to American cranberries. In James Rosier's book "The Land of Virginia" there is an account of Europeans coming ashore and being met with Native Americans bearing bark cups full of cranberries. In Plymouth, Massachusetts, there is a 1633 account of the husband of Mary Ring auctioning her cranberry-dyed petticoat for 16 shillings. In 1643, Roger Williams's book "A Key Into the Language of America" described cranberries, referring to them as "bearberries" because bears ate them. In 1648, preacher John Eliot was quoted in Thomas Shepard's book "Clear Sunshine of the Gospel" with an account of the difficulties the Pilgrims were having in using the Indians to harvest cranberries as they preferred to hunt and fish. In 1663, the Pilgrim cookbook appears with a recipe for cranberry sauce. In 1667, New Englanders sent to King Charles ten barrels of cranberries, three barrels of codfish and some Indian corn as a means of appeasement for his anger over their local coining of the Pine-tree shilling. In 1669, Captain Richard Cobb had a banquet in his house (to celebrate both his marriage to Mary Gorham and his election to the Convention of Assistance), serving wild turkey with sauce made from wild cranberries. In the 1672 book "New England Rarities Discovered" author John Josselyn described cranberries, writing:
Sauce for the Pilgrims, cranberry or bearberry, is a small trayling ["sic"] plant that grows in salt marshes that are overgrown with moss. The berries are of a pale yellow color, afterwards red, as big as a cherry, some perfectly round, others oval, all of them hollow with sower ["sic"] astringent taste; they are ripe in August and September. They are excellent against the Scurvy. They are also good to allay the fervor of hoof diseases. The Indians and English use them mush, boyling ["sic"] them with sugar for sauce to eat with their meat; and it is a delicate sauce, especially with roasted mutton. Some make tarts with them as with gooseberries.
"The Compleat Cook's Guide", published in 1683, made reference to cranberry juice. In 1703, cranberries were served at the Harvard University commencement dinner. In 1787, James Madison wrote Thomas Jefferson in France for background information on constitutional government to use at the Constitutional Convention. Jefferson sent back a number of books on the subject and in return asked for a gift of apples, pecans and cranberries. William Aiton, a Scottish botanist, included an entry for the cranberry in volume II of his 1789 work "Hortus Kewensis". He notes that "Vaccinium macrocarpon" (American cranberry) was cultivated by James Gordon in 1760. In 1796, cranberries were served at the first celebration of the landing of the Pilgrims, and Amelia Simmons (an American orphan) wrote a book entitled "American Cookery" which contained a recipe for cranberry tarts.
American Revolutionary War veteran Henry Hall first cultivated cranberries in the Cape Cod town of Dennis around 1816. In the 1820s, Hall was shipping cranberries to New York City and Boston from which shipments were also sent to Europe. In 1843, Eli Howes planted his own crop of cranberries on Cape Cod, using the "Howes" variety. In 1847, Cyrus Cahoon planted a crop of "Early Black" variety near Pleasant Lake, Harwich, Massachusetts.
By 1900, were under cultivation in the New England region. In 2014, the total area of cranberries harvested in the United States was , with Massachusetts as the second largest producer after Wisconsin.
Historically, cranberry beds were constructed in wetlands. Today's cranberry beds are constructed in upland areas with a shallow water table. The topsoil is scraped off to form dykes around the bed perimeter. Clean sand is hauled in and spread to a depth of four to eight inches (10 to 20 centimeters). The surface is laser leveled flat to provide even drainage. Beds are frequently drained with socked tile in addition to the perimeter ditch. In addition to making it possible to hold water, the dykes allow equipment to service the beds without driving on the vines. Irrigation equipment is installed in the bed to provide irrigation for vine growth and for spring and autumn frost protection.
A common misconception about cranberry production is that the beds remain flooded throughout the year. During the growing season cranberry beds are not flooded, but are irrigated regularly to maintain soil moisture. Beds are flooded in the autumn to facilitate harvest and again during the winter to protect against low temperatures. In cold climates like Wisconsin, New England, and eastern Canada, the winter flood typically freezes into ice, while in warmer climates the water remains liquid. When ice forms on the beds, trucks can be driven onto the ice to spread a thin layer of sand to control pests and rejuvenate the vines. Sanding is done every three to five years. Additionally, climate change could prove to be an issue for the cultivation of cranberries in the future. It is possible that, given rising temperatures over the next 50 years, chilling temperatures for harvesting cranberries may be insufficient for the process.
Cranberry vines are propagated by moving vines from an established bed. The vines are spread on the surface of the sand of the new bed and pushed into the sand with a blunt disk. The vines are watered frequently during the first few weeks until roots form and new shoots grow. Beds are given frequent, light application of nitrogen fertilizer during the first year. The cost of renovating cranberry beds is estimated to be between .
Cranberries are harvested in the fall when the fruit takes on its distinctive deep red color. Berries that receive sun turn a deep red when fully ripe, while those that do not fully mature are a pale pink or white color. This is usually in September through the first part of November. To harvest cranberries, the beds are flooded with six to eight inches (15 to 20 centimeters) of water above the vines. A harvester is driven through the beds to remove the fruit from the vines. For the past 50 years, water reel type harvesters have been used. Harvested cranberries float in the water and can be corralled into a corner of the bed and conveyed or pumped from the bed. From the farm, cranberries are taken to receiving stations where they are cleaned, sorted, and stored prior to packaging or processing. While cranberries are harvested when they take on their deep red color, they can also be harvested beforehand when they are still white, which is how white cranberry juice is made. Yields are lower on beds harvested early and the early flooding tends to damage vines, but not severely. Vines can also be trained through dry picking to help avoid damage in subsequent harvests.
Although most cranberries are wet-picked as described above, 5–10% of the US crop is still dry-picked. This entails higher labor costs and lower yield, but dry-picked berries are less bruised and can be sold as fresh fruit instead of having to be immediately frozen or processed. Originally performed with two-handed comb scoops, dry picking is today accomplished by motorized, walk-behind harvesters which must be small enough to traverse beds without damaging the vines.
Cranberries for fresh market are stored in shallow bins or boxes with perforated or slatted bottoms, which deter decay by allowing air to circulate. Because harvest occurs in late autumn, cranberries for fresh market are frequently stored in thick walled barns without mechanical refrigeration. Temperatures are regulated by opening and closing vents in the barn as needed. Cranberries destined for processing are usually frozen in bulk containers shortly after arriving at a receiving station.
In 2017, world production of cranberry was 625,181 tonnes, mainly by the United States, Canada, and Chile, which collectively accounted for 97% of the global total (table). Wisconsin (65% of US production) and Quebec were the two largest regional producers of cranberries in North America. Cranberries are also a major commercial crop in Massachusetts (23% of US production), New Jersey, Oregon, and Washington, as well as in the Canadian provinces of British Columbia, New Brunswick, Ontario, Nova Scotia, Prince Edward Island, and Newfoundland.
As fresh cranberries are hard, sour, and bitter, about 95% of cranberries are processed and used to make cranberry juice and sauce. They are also sold dried and sweetened. Cranberry juice is usually sweetened or blended with other fruit juices to reduce its natural tartness. At one teaspoon of sugar per ounce, cranberry juice cocktail is more highly sweetened than even soda drinks that have been linked to obesity.
Usually cranberries as fruit are cooked into a compote or jelly, known as cranberry sauce. Such preparations are traditionally served with roast turkey, as a staple of English Christmas dinners, and Thanksgiving (both in Canada and in the United States). The berry is also used in baking (muffins, scones, cakes and breads). In baking it is often combined with orange or orange zest. Less commonly, cranberries are used to add tartness to savory dishes such as soups and stews.
Fresh cranberries can be frozen at home, and will keep up to nine months; they can be used directly in recipes without thawing.
There are several alcoholic cocktails, including the Cosmopolitan, that include cranberry juice.
Raw cranberries are 87% water, 12% carbohydrates, and contain negligible protein and fat (table). In a 100 gram reference amount, raw cranberries supply 46 calories and moderate levels of vitamin C, dietary fiber, and the essential dietary mineral, manganese, each with more than 10% of its Daily Value. Other micronutrients have low content (table).
A comprehensive review in 2012 of available research concluded there is no evidence that cranberry juice or cranberry extract as tablets or capsules are effective in preventing urinary tract infections (UTIs). The European Food Safety Authority reviewed the evidence for one brand of cranberry extract and concluded a cause and effect relationship had not been established between cranberry consumption and reduced risk of UTIs.
One 2017 systematic review showed that consuming cranberry products reduced the incidence of UTIs in women with "recurrent" infections. Another review of small clinical studies indicated that consuming cranberry products could reduce the risk of UTIs by 26% in otherwise healthy women, although the authors indicated that larger studies were needed to confirm such an effect.
When the quality of meta-analyses on the efficacy of consuming cranberry products for preventing or treating UTIs is examined, large variation and uncertainty of effect are seen, resulting from inconsistencies of clinical research design and inadequate numbers of subjects.
Raw cranberries, cranberry juice and cranberry extracts are a source of polyphenols – including proanthocyanidins, flavonols and quercetin. These phytochemical compounds are being studied in vivo and in vitro for possible effects on the cardiovascular system, immune system and cancer. However, there is no confirmation from human studies that consuming cranberry polyphenols provides anti-cancer, immune, or cardiovascular benefits. Potential is limited by poor absorption and rapid excretion.
Cranberry juice contains a high-molecular-weight non-dialyzable material that is under research for its potential to affect formation of plaque by "Streptococcus mutans" pathogens that cause tooth decay. Cranberry juice components are also being studied for possible effects on kidney stone formation.
Problems may arise from the lack of validated methods for quantifying A-type proanthocyanidins (PAC) extracted from cranberries. For instance, PAC extract quality and content can be assessed using different methods, including the European Pharmacopoeia method, liquid chromatography–mass spectrometry, or a modified 4-dimethylaminocinnamaldehyde colorimetric method. Variations in extract analysis can lead to difficulties in assessing the quality of PAC extracts from different cranberry starting material, such as by regional origin, ripeness at time of harvest and post-harvest processing. Assessments show that quality varies greatly from one commercial PAC extract product to another.
The anticoagulant effects of warfarin may be increased by consuming cranberry juice, resulting in adverse effects such as increased incidence of bleeding and bruising. Other safety concerns from consuming large quantities of cranberry juice or using cranberry supplements include potential for nausea, increasing stomach inflammation, sugar intake or kidney stone formation.
Cranberry sales in the United States have traditionally been associated with holidays of Thanksgiving and Christmas.
Large-scale cranberry cultivation has been developed in the U.S. to a greater extent than in other countries. American cranberry growers have a long history of cooperative marketing. As early as 1904, John Gaynor, a Wisconsin grower, and A.U. Chaney, a fruit broker from Des Moines, Iowa, organized Wisconsin growers into a cooperative called the Wisconsin Cranberry Sales Company to receive a uniform price from buyers. Growers in New Jersey and Massachusetts were also organized into cooperatives, creating the National Fruit Exchange that marketed fruit under the Eatmor brand. The success of cooperative marketing almost led to its failure. With consistent and high prices, area and production doubled between 1903 and 1917, and prices fell.
With surplus cranberries and changing American households, some enterprising growers began canning cranberries that were below-grade for fresh market. Competition between canners was fierce because profits were thin. The Ocean Spray cooperative was established in 1930 through a merger of three primary processing companies: Ocean Spray Preserving company, Makepeace Preserving Co, and Cranberry Products Co. The new company was called Cranberry Canners, Inc. and used the Ocean Spray label on their products. Since the new company represented over 90% of the market, it would have been illegal (cf. antitrust) had attorney John Quarles not found an exemption for agricultural cooperatives. Morris April Brothers were the producers of Eatmor brand cranberry sauce, in Tuckahoe, New Jersey; Morris April Brothers brought an action against Ocean Spray for violation of the Sherman Antitrust Act and won $200,000 in real damages plus triple damages, in 1958, just in time for the Great Cranberry Scare of 1959. About 65% of the North American industry belongs to the Ocean Spray cooperative. (The percentage may be slightly higher in Canada than in the U.S.)
A turning point for the industry occurred on 9 November 1959, when the secretary of the United States Department of Health, Education, and Welfare Arthur S. Flemming announced that some of the 1959 crop was tainted with traces of the herbicide aminotriazole. The market for cranberries collapsed and growers lost millions of dollars. However, the scare taught the industry that they could not be completely dependent on the holiday market for their products: they had to find year-round markets for their fruit. They also had to be exceedingly careful about their use of pesticides. After the aminotriazole scare, Ocean Spray reorganized and spent substantial sums on product development. New products such as cranberry/apple juice blends were introduced, followed by other juice blends.
A Federal Marketing Order that is authorized to synchronize supply and demand was approved in 1962. The order has been renewed and modified slightly in subsequent years, but it has allowed for more stable marketing. The market order has been invoked during six crop years: 1962 (12%), 1963 (5%), 1970 (10%), 1971 (12%), 2000 (15%), and 2001 (35%). Even though supply still exceeds demand, there is little will to invoke the Federal Marketing Order out of the realization that any pullback in supply by U.S. growers would easily be filled by Canadian production.
Prices and production increased steadily during the 1980s and 1990s. Prices peaked at about $65.00 per barrel in 1996, then fell to $18.00 per barrel in 2001. The cause of the precipitous drop was classic oversupply. Production had outpaced consumption, leading to substantial inventory in freezers or as concentrate.
Cranberry handlers (processors) include Ocean Spray, Cliffstar Corporation, Northland Cranberries Inc. (Sun Northland LLC), Clement Pappas & Co., and Decas Cranberry Products as well as a number of small handlers and processors.
The Cranberry Marketing Committee is an organization that represents United States cranberry growers in four marketing order districts. The committee was established in 1962 as a Federal Marketing Order to ensure a stable, orderly supply of good quality product. The Cranberry Marketing Committee, based in Wareham, Massachusetts, represents more than 1,100 cranberry growers and 60 cranberry handlers across Massachusetts, Rhode Island, Connecticut, New Jersey, Wisconsin, Michigan, Minnesota, Oregon, Washington and New York (Long Island). The authority for the actions taken by the Cranberry Marketing Committee is provided in Chapter IX, Title 7, Code of Federal Regulations which is called the Federal Cranberry Marketing Order. The Order is part of the Agricultural Marketing Agreement Act of 1937, identifying cranberries as a commodity good that can be regulated by Congress. The Federal Cranberry Marketing Order has been altered over the years to expand the Cranberry Marketing Committee's ability to develop projects in the United States and around the world. The Cranberry Marketing Committee currently runs promotional programs in the United States, China, India, Mexico, Pan-Europe, and South Korea.
The European Union was the largest importer of American cranberries, followed individually by Canada, China, Mexico, and South Korea. From 2013 to 2017, US cranberry exports to China grew exponentially, making China the second largest country importer, reaching $36 million in cranberry products. The China–United States trade war resulted in many Chinese businesses cutting off ties with their U.S. cranberry suppliers. | https://en.wikipedia.org/wiki?curid=7025 |
Code coverage
In computer science, test coverage is a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage. Many different metrics can be used to calculate test coverage; some of the most basic are the percentage of program subroutines and the percentage of program statements called during execution of the test suite.
Test coverage was among the first methods invented for systematic software testing. The first published reference was by Miller and Maloney in "Communications of the ACM" in 1963.
To measure what percentage of code has been exercised by a test suite, one or more "coverage criteria" are used. Coverage criteria are usually defined as rules or requirements, which a test suite needs to satisfy.
There are a number of coverage criteria, the main ones being function coverage, statement coverage, branch coverage, and condition coverage.
For example, consider the following C function:
int foo (int x, int y)
Assume this function is a part of some bigger program and this program was run with some test suite.
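For illustration, one possible body for such a function is sketched below; the body and the calls mentioned afterwards are assumptions chosen for demonstration, since only the signature is given above.
int foo (int x, int y)
{
    int z = 0;
    /* one decision composed of two conditions */
    if ((x > 0) && (y > 0))
    {
        z = x;
    }
    return z;
}
With this sketch, a single call such as foo(1, 1) achieves function coverage and statement coverage, since every statement executes, but not branch coverage, because the decision never evaluates to false; adding a second call such as foo(0, 1) exercises both branches.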
Condition coverage does not necessarily imply branch coverage. For example, consider the following fragment of code:
if a and b then
Condition coverage can be satisfied by two tests: one in which a is true and b is false, and one in which a is false and b is true.
However, this set of tests does not satisfy branch coverage, since neither case makes the if condition true, so the body of the if statement is never executed.
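The same point can be sketched in C; the function and the two calls below are illustrative assumptions, not part of the fragment above.
#include <stdbool.h>
#include <stdio.h>

/* A decision made of two conditions: a && b */
static const char *branch_taken(bool a, bool b)
{
    if (a && b)
        return "true branch";
    return "false branch";
}

int main(void)
{
    /* Across the two calls, each of a and b is supplied with both truth
       values (condition coverage in the abstract sense; note that C's &&
       short-circuits, so b is not even evaluated when a is false), yet
       the decision is never true, so the true branch is never executed
       and branch coverage is not achieved. */
    printf("%s\n", branch_taken(true, false));
    printf("%s\n", branch_taken(false, true));
    return 0;
}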
Fault injection may be necessary to ensure that all conditions and branches of exception handling code have adequate coverage during testing.
A combination of function coverage and branch coverage is sometimes also called decision coverage. This criterion requires that every point of entry and exit in the program has been invoked at least once, and that every decision in the program has taken on all possible outcomes at least once. In this context the decision is a boolean expression composed of conditions and zero or more boolean operators. This definition is not the same as branch coverage; however, some do use the term "decision coverage" as a synonym for "branch coverage".
Condition/decision coverage requires that both decision and condition coverage be satisfied. However, for safety-critical applications (e.g., for avionics software) it is often required that modified condition/decision coverage (MC/DC) be satisfied. This criterion extends the condition/decision criteria with the requirement that each condition must be shown to affect the decision outcome independently. For example, consider the following code:
if (a or b) and c then
The condition/decision criteria will be satisfied by the following set of tests: a=true, b=true, c=true; and a=false, b=false, c=false.
However, the above test set will not satisfy modified condition/decision coverage, since in the first test the value of 'b', and in the second test the value of 'c', would not influence the output. A test set such as the following is needed to satisfy MC/DC: a=false, b=true, c=false; a=false, b=true, c=true; a=false, b=false, c=true; and a=true, b=false, c=true.
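A short C sketch (an illustration added here, not taken from the fragment above) makes the independence pairs explicit:
#include <stdbool.h>
#include <stdio.h>

/* The decision under test: (a || b) && c */
static bool decision(bool a, bool b, bool c)
{
    return (a || b) && c;
}

int main(void)
{
    /* Candidate MC/DC set: each condition is toggled in exactly one pair
       of tests while the other two conditions are held fixed, and the
       toggle flips the decision outcome. */
    printf("%d\n", decision(false, true,  false)); /* 0; pairs with the next call to show c's independent effect */
    printf("%d\n", decision(false, true,  true));  /* 1 */
    printf("%d\n", decision(false, false, true));  /* 0; pairs with the call above for b, and with the next call for a */
    printf("%d\n", decision(true,  false, true));  /* 1 */
    return 0;
}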
Multiple condition coverage requires that all combinations of conditions inside each decision are tested. For example, the code fragment from the previous section will require eight tests, one for each combination of truth values of a, b and c.
Parameter value coverage (PVC) requires that in a method taking parameters, all the common values for such parameters be considered.
The idea is that all common possible values for a parameter are tested. For example, common values for a string are: 1) null, 2) empty, 3) whitespace (space, tabs, newline), 4) valid string, 5) invalid string, 6) single-byte string, 7) double-byte string. It may also be appropriate to use very long strings. Failure to test each possible parameter value may leave a bug undetected. Testing only one of these could result in 100% code coverage, as each line is covered, but as only one of the seven options is tested, there is only about 14.3% PVC.
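As a sketch of the idea, a test driver might enumerate one case per value class; the function under test and the chosen cases below are hypothetical, purely for illustration.
#include <stdio.h>
#include <string.h>

/* Hypothetical function under test: length of s ignoring leading and
   trailing whitespace, or -1 for a null pointer. */
static int trimmed_length(const char *s)
{
    if (s == NULL)
        return -1;
    size_t start = 0, end = strlen(s);
    while (start < end && strchr(" \t\n", s[start]) != NULL)
        start++;
    while (end > start && strchr(" \t\n", s[end - 1]) != NULL)
        end--;
    return (int)(end - start);
}

int main(void)
{
    /* One case per common parameter value class, rather than a single
       "happy path" call that would still yield 100% line coverage. */
    const char *cases[] = {
        NULL,            /* 1) null */
        "",              /* 2) empty */
        " \t\n",         /* 3) whitespace only */
        "valid input",   /* 4) ordinary valid string */
        "  padded  ",    /* 5) valid string with surrounding whitespace */
    };
    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++)
        printf("case %zu -> %d\n", i + 1, trimmed_length(cases[i]));
    return 0;
}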
There are further coverage criteria, such as path coverage and entry/exit coverage, which are used less often.
Safety-critical or dependable applications are often required to demonstrate 100% of some form of test coverage.
For example, the ECSS-E-ST-40C standard demands 100% statement and decision coverage for two out of four different criticality levels; for the other ones, target coverage values are subject to negotiation between supplier and customer.
However, setting specific target values - and, in particular, 100% - has been criticized by practitioners for various reasons.
Martin Fowler writes: "I would be suspicious of anything like 100% - it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing".
Some of the coverage criteria above are connected. For instance, path coverage implies decision, statement and entry/exit coverage. Decision coverage implies statement coverage, because every statement is part of a branch.
Full path coverage, of the type described above, is usually impractical or impossible. Any module with a succession of n decisions in it can have up to 2^n paths within it; loop constructs can result in an infinite number of paths. Many paths may also be infeasible, in that there is no input to the program under test that can cause that particular path to be executed. However, a general-purpose algorithm for identifying infeasible paths has been proven to be impossible (such an algorithm could be used to solve the halting problem). Basis path testing is for instance a method of achieving complete branch coverage without achieving complete path coverage.
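A small sketch (illustrative code, not drawn from the text above) shows how quickly paths multiply: three independent decisions in sequence already produce 2^3 = 8 distinct paths.
#include <stdio.h>

/* Three independent decisions in sequence give 2 * 2 * 2 = 8 possible
   execution paths through the function, one per return value 0..7,
   even though there are only three branch points. */
static int classify(int a, int b, int c)
{
    int score = 0;
    if (a > 0) score += 1;  /* decision 1 */
    if (b > 0) score += 2;  /* decision 2 */
    if (c > 0) score += 4;  /* decision 3 */
    return score;
}

int main(void)
{
    printf("%d\n", classify(1, -1, 1)); /* exercises just one of the eight paths */
    return 0;
}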
Methods for practical path coverage testing instead attempt to identify classes of code paths that differ only in the number of loop executions, and to achieve "basis path" coverage the tester must cover all the path classes.
The target software is built with special options or libraries and run under a controlled environment, to map every executed function to the function points in the source code. This allows testing parts of the target software that are rarely or never accessed under normal conditions, and helps reassure that the most important conditions (function points) have been tested. The resulting output is then analyzed to see what areas of code have not been exercised and the tests are updated to include these areas as necessary. Combined with other test coverage methods, the aim is to develop a rigorous, yet manageable, set of regression tests.
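For example, with the GNU toolchain a program can be built with instrumentation, run under its test inputs, and then analyzed; the file name below is a placeholder.
gcc --coverage -O0 -o demo demo.c   # compile and link with coverage instrumentation
./demo                              # run the instrumented program under the test suite
gcov demo.c                         # writes demo.c.gcov, annotating each line with its execution count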
In implementing test coverage policies within a software development environment, a number of factors must be considered.
Software authors can look at test coverage results to devise additional tests and input or configuration sets to increase the coverage over vital functions. Two common forms of test coverage are statement (or line) coverage and branch (or edge) coverage. Line coverage reports on the execution footprint of testing in terms of which lines of code were executed to complete the test. Edge coverage reports which branches or code decision points were executed to complete the test. They both report a coverage metric, measured as a percentage. The meaning of this depends on what form(s) of coverage have been used, as 67% branch coverage is more comprehensive than 67% statement coverage.
Generally, test coverage tools add computation and logging on top of the actual program, thereby slowing down the application, so this analysis is typically not done in production. As one might expect, there are classes of software that cannot feasibly be subjected to these coverage tests, though a degree of coverage mapping can be approximated through analysis rather than direct testing.
There are also some sorts of defects which are affected by such tools. In particular, some race conditions or similar real time sensitive operations can be masked when run under test environments; though conversely, some of these defects may become easier to find as a result of the additional overhead of the testing code.
Most professional software developers use C1 and C2 coverage. C1 stands for statement coverage and C2 for branch or condition coverage. With a combination of C1 and C2, it is possible to cover most statements in a code base. The other coverage criteria are C3 and C4. C3 stands for statement coverage within a block following a condition and C4 means path coverage which tests all possible paths in a program. With these methods, it is possible to achieve nearly 100% code coverage in most software projects.
Test coverage is one consideration in the safety certification of avionics equipment. The guidelines by which avionics gear is certified by the Federal Aviation Administration (FAA) are documented in DO-178B and DO-178C.
Test coverage is also a requirement in part 6 of the automotive safety standard ISO 26262 "Road Vehicles - Functional Safety". | https://en.wikipedia.org/wiki?curid=7030 |
Caitlin Clarke
Caitlin Clarke (May 3, 1952 – September 9, 2004) was an American theater and film actress best known for her role as Valerian in the 1981 fantasy film "Dragonslayer" and for her role as Charlotte Cardoza in the 1998–1999 Broadway musical "Titanic".
Clarke was born Katherine Anne Clarke in Pittsburgh, the oldest of five sisters, the youngest of whom is Victoria Clarke. Her family moved to Sewickley when she was ten.
Clarke received her B.A. in theater arts from Mount Holyoke College in 1974 and her M.F.A. from the Yale School of Drama in 1978. During her final year at Yale, Clarke performed with the Yale Repertory Theater in such plays as "Tales from the Vienna Woods".
The first few years of Clarke's professional career were largely theatrical, apart from her role in "Dragonslayer". After appearing in three Broadway plays in 1985, Clarke moved to Los Angeles for several years as a film and television actress. She appeared in the 1986 film "Crocodile Dundee" as Simone, a friendly prostitute. She returned to theater in the early 1990s, and to Broadway as Charlotte Cardoza in "Titanic".
Clarke was diagnosed with ovarian cancer in 2000. She returned to Pittsburgh to teach theater at the University of Pittsburgh and at the Pittsburgh Musical Theater's Rauh Conservatory as well as to perform in Pittsburgh theatre until her death on September 9, 2004.
Series: "Northern Exposure", "The Equalizer", "Once a Hero", "Moonlighting", "Sex And The City", "Law & Order" ("Menace", "Juvenile", "Stiff"), "Matlock" ("The Witness").
Movies: "Mayflower Madam" (1986), "Love, Lies and Murder" (1991), "The Stepford Husbands" (1996). | https://en.wikipedia.org/wiki?curid=7033 |
Cruiser
A cruiser is a type of warship. Modern cruisers are generally the largest ships in a fleet after aircraft carriers and amphibious assault ships, and can usually perform several roles.
The term has been in use for several hundred years, and has had different meanings throughout this period. During the Age of Sail, the term "cruising" referred to certain kinds of missions—independent scouting, commerce protection, or raiding—fulfilled by a frigate or sloop-of-war, which were the "cruising warships" of a fleet.
In the middle of the 19th century, "cruiser" came to be a classification for the ships intended for cruising distant waters, commerce raiding, and scouting for the battle fleet. Cruisers came in a wide variety of sizes, from the medium-sized protected cruiser to large armored cruisers that were nearly as big (although not as powerful or as well-armored) as a pre-dreadnought battleship. With the advent of the dreadnought battleship before World War I, the armored cruiser evolved into a vessel of similar scale known as the battlecruiser. The very large battlecruisers of the World War I era that succeeded armored cruisers were now classified, along with dreadnought battleships, as capital ships.
By the early 20th century after World War I, the direct successors to protected cruisers could be placed on a consistent scale of warship size, smaller than a battleship but larger than a destroyer. In 1922, the Washington Naval Treaty placed a formal limit on these cruisers, which were defined as warships of up to 10,000 tons displacement carrying guns no larger than 8 inches in calibre; heavy cruisers had 8-inch guns, while those with guns of 6.1 inches or less were light cruisers, which shaped cruiser design until the end of World War II. Some variations on the Treaty cruiser design included the German "pocket battleships" which had heavier armament at the expense of speed compared to standard heavy cruisers, and the American , which was a scaled-up heavy cruiser design designated as a "cruiser-killer".
In the later 20th century, the obsolescence of the battleship left the cruiser as the largest and most powerful surface combatant after the aircraft carrier. The role of the cruiser varied according to ship and navy, often including air defense and shore bombardment. During the Cold War, the Soviet Navy's cruisers had heavy anti-ship missile armament designed to sink NATO carrier task forces via saturation attack. The U.S. Navy built guided-missile cruisers upon destroyer-style hulls (some called "destroyer leaders" or "frigates" prior to the 1975 reclassification) primarily designed to provide air defense while often adding anti-submarine capabilities, being larger and having longer-range surface-to-air missiles (SAMs) than early "Charles F. Adams" guided-missile destroyers tasked with the short-range air defense role. By the end of the Cold War, the line between cruisers and destroyers had blurred, with the cruiser using the hull of the destroyer but receiving the cruiser designation due to their enhanced mission and combat systems.
Currently only two nations operate vessels formally classed as cruisers: the United States and Russia, and in both cases the vessels are primarily armed with guided missiles. was the last gun cruiser in service, serving with the Peruvian Navy until 2017.
Nevertheless, several navies operate destroyers that have many of the characteristics of vessels that are sometimes classified as cruisers. Notably, the US Navy's Zumwalt-class destroyers have been rated by the International Institute for Strategic Studies as cruisers. Other classes of destroyer, including the US Navy's Arleigh Burke-class destroyer, the Japanese Maritime Self Defence Force's Maya-class, Atago-class and Kongo-class destroyers, the South Korean Navy's Sejong the Great-class destroyer and the Chinese PLA Navy's Type 055-class destroyer, carry many of the attributes of cruisers.
The term "cruiser" or "cruizer" was first commonly used in the 17th century to refer to an independent warship. "Cruiser" meant the purpose or mission of a ship, rather than a category of vessel. However, the term was nonetheless used to mean a smaller, faster warship suitable for such a role. In the 17th century, the ship of the line was generally too large, inflexible, and expensive to be dispatched on long-range missions (for instance, to the Americas), and too strategically important to be put at risk of fouling and foundering by continual patrol duties.
The Dutch navy was noted for its cruisers in the 17th century, while the Royal Navy—and later French and Spanish navies—subsequently caught up in terms of their numbers and deployment. The British Cruiser and Convoy Acts were an attempt by mercantile interests in Parliament to focus the Navy on commerce defence and raiding with cruisers, rather than the more scarce and expensive ships of the line. During the 18th century the frigate became the preeminent type of cruiser. A frigate was a small, fast, long range, lightly armed (single gun-deck) ship used for scouting, carrying dispatches, and disrupting enemy trade. The other principal type of cruiser was the sloop, but many other miscellaneous types of ship were used as well.
During the 19th century, navies began to use steam power for their fleets. The 1840s saw the construction of experimental steam-powered frigates and sloops. By the middle of the 1850s, the British and U.S. Navies were both building steam frigates with very long hulls and a heavy gun armament, for instance or .
The 1860s saw the introduction of the ironclad. The first ironclads were frigates, in the sense of having one gun deck; however, they were also clearly the most powerful ships in the navy, and were principally to serve in the line of battle. In spite of their great speed, they would have been wasted in a cruising role.
The French constructed a number of smaller ironclads for overseas cruising duties, starting with the , commissioned 1865. These "station ironclads" were the beginning of the development of the armored cruisers, a type of ironclad specifically for the traditional cruiser missions of fast, independent raiding and patrol.
The first true armored cruiser was the Russian , completed in 1874, and followed by the British a few years later.
Until the 1890s armored cruisers were still built with masts for a full sailing rig, to enable them to operate far from friendly coaling stations.
Unarmored cruising warships, built out of wood, iron, steel or a combination of those materials, remained popular until towards the end of the 19th century. The ironclad's armor often meant that they were limited to short range under steam, and many ironclads were unsuited to long-range missions or for work in distant colonies. The unarmored cruiser – often a screw sloop or screw frigate – could continue in this role. Even though mid- to late-19th century cruisers typically carried up-to-date guns firing explosive shells, they were unable to face ironclads in combat. This was evidenced by the clash between , a modern British cruiser, and the Peruvian monitor "Huáscar". Even though the Peruvian vessel was obsolete by the time of the encounter, it stood up well to roughly 50 hits from British shells.
In the 1880s, naval engineers began to use steel as a material for construction and armament. A steel cruiser could be lighter and faster than one built of iron or wood. The "Jeune Ecole" school of naval doctrine suggested that a fleet of fast unprotected steel cruisers were ideal for commerce raiding, while the torpedo boat would be able to destroy an enemy battleship fleet.
Steel also offered the cruiser a way of acquiring the protection needed to survive in combat. Steel armor was considerably stronger, for the same weight, than iron. By putting a relatively thin layer of steel armor above the vital parts of the ship, and by placing the coal bunkers where they might stop shellfire, a useful degree of protection could be achieved without slowing the ship too much. Protected cruisers generally had an armored deck with sloped sides, providing similar protection to a light armored belt at less weight and expense.
The first protected cruiser was the Chilean ship "Esmeralda", launched in 1883. Produced by a shipyard at Elswick, in Britain, owned by Armstrong, she inspired a group of protected cruisers produced in the same yard and known as the "Elswick cruisers". Her forecastle, poop deck and the wooden board deck had been removed, replaced with an armored deck.
"Esmeralda"s armament consisted of fore and aft 10-inch (25.4 cm) guns and 6-inch (15.2 cm) guns in the midships positions. It could reach a speed of , and was propelled by steam alone. It also had a displacement of less than 3,000 tons. During the two following decades, this cruiser type came to be the inspiration for combining heavy artillery, high speed and low displacement.
The torpedo cruiser (known in the Royal Navy as the torpedo gunboat) was a smaller unarmored cruiser, which emerged in the 1880s–1890s. These ships could reach speeds up to and were armed with medium to small calibre guns as well as torpedoes. These ships were tasked with guard and reconnaissance duties, to repeat signals and all other fleet duties for which smaller vessels were suited. These ships could also function as flagships of torpedo boat flotillas. After the 1900s, these ships were usually traded for faster ships with better sea going qualities.
Steel also affected the construction and role of armored cruisers. Steel meant that new designs of battleship, later known as pre-dreadnought battleships, would be able to combine firepower and armor with better endurance and speed than ever before. The armored cruisers of the 1890s greatly resembled the battleships of the day; they tended to carry slightly smaller main armament ( rather than 12-inch) and have somewhat thinner armor in exchange for a faster speed (perhaps rather than 18). Because of their similarity, the lines between battleships and armored cruisers became blurred.
Shortly after the turn of the 20th century there were difficult questions about the design of future cruisers. Modern armored cruisers, almost as powerful as battleships, were also fast enough to outrun older protected and unarmored cruisers. In the Royal Navy, Jackie Fisher cut back hugely on older vessels, including many cruisers of different sorts, calling them "a miser's hoard of useless junk" that any modern cruiser would sweep from the seas. The scout cruiser also appeared in this era; this was a small, fast, lightly armed and armored type designed primarily for reconnaissance. The Royal Navy and the Italian Navy were the primary developers of this type.
The growing size and power of the armored cruiser resulted in the battlecruiser, with an armament and size similar to the revolutionary new dreadnought battleship; the brainchild of British admiral Jackie Fisher. He believed that to ensure British naval dominance in its overseas colonial possessions, a fleet of large, fast, powerfully armed vessels which would be able to hunt down and mop up enemy cruisers and armored cruisers with overwhelming fire superiority was needed. They were equipped with the same gun types as battleships, though usually with fewer guns, and were intended to engage enemy capital ships as well. This type of vessel came to be known as the "battlecruiser", and the first were commissioned into the Royal Navy in 1907. The British battlecruisers sacrificed protection for speed, as they were intended to "choose their range" (to the enemy) with superior speed and only engage the enemy at long range. When engaged at moderate ranges, the lack of protection combined with unsafe ammunition handling practices became tragic with the loss of three of them at the Battle of Jutland. Germany and eventually Japan followed suit to build these vessels, replacing armored cruisers in most frontline roles. German battlecruisers were generally better protected but slower than British battlecruisers. Battlecruisers were in many cases larger and more expensive than contemporary battleships, due to their much-larger propulsion plants.
At around the same time as the battlecruiser was developed, the distinction between the armored and the unarmored cruiser finally disappeared. By the British , the first of which was launched in 1909, it was possible for a small, fast cruiser to carry both belt and deck armor, particularly when turbine engines were adopted. These light armored cruisers began to occupy the traditional cruiser role once it became clear that the battlecruiser squadrons were required to operate with the battle fleet.
Some light cruisers were built specifically to act as the leaders of flotillas of destroyers.
These vessels were essentially large coastal patrol boats armed with multiple light guns. One such warship was "Grivița" of the Romanian Navy. She displaced 110 tons, measured 60 meters in length and was armed with four light guns.
The auxiliary cruiser was a merchant ship hastily armed with small guns on the outbreak of war. Auxiliary cruisers were used to fill gaps in their long-range lines or provide escort for other cargo ships, although they generally proved to be useless in this role because of their low speed, feeble firepower and lack of armor. In both world wars the Germans also used small merchant ships armed with cruiser guns to surprise Allied merchant ships.
Some large liners were armed in the same way. In British service these were known as Armed Merchant Cruisers (AMC). The Germans and French used them in World War I as raiders because of their high speed (around 30 knots (56 km/h)), and they were used again as raiders early in World War II by the Germans and Japanese. In both the First World War and in the early part of the Second, they were used as convoy escorts by the British.
Cruisers were one of the workhorse types of warship during World War I. By the time of World War I, cruisers had accelerated their development and improved their quality significantly, with displacements reaching 3,000–4,000 tons, speeds of 25–30 knots and main gun calibres of 127–152 mm.
Naval construction in the 1920s and 1930s was limited by international treaties designed to prevent the repetition of the Dreadnought arms race of the early 20th century. The Washington Naval Treaty of 1922 placed limits on the construction of ships with a standard displacement of more than 10,000 tons and an armament of guns larger than 8-inch (203 mm). A number of navies commissioned classes of cruisers at the top end of this limit, known as "treaty cruisers".
The London Naval Treaty in 1930 then formalised the distinction between these "heavy" cruisers and light cruisers: a "heavy" cruiser was one with guns of more than 6.1-inch (155 mm) calibre. The Second London Naval Treaty attempted to reduce the tonnage of new cruisers to 8,000 or less, but this had little effect; Japan and Germany were not signatories, and some navies had already begun to evade treaty limitations on warships. The first London treaty did touch off a period of the major powers building 6-inch or 6.1-inch gunned cruisers, nominally of 10,000 tons and with up to fifteen guns, the treaty limit. Thus, most light cruisers ordered after 1930 were the size of heavy cruisers but with more and smaller guns. The Imperial Japanese Navy began this new race with the , launched in 1934. After building smaller light cruisers with six or eight 6-inch guns launched 1931–35, the British Royal Navy followed with the 12-gun in 1936. To match foreign developments and potential treaty violations, in the 1930s the US developed a series of new guns firing "super-heavy" armor piercing ammunition; these included the 6-inch (152 mm)/47 caliber gun Mark 16 introduced with the 15-gun s in 1936, and the 8-inch (203 mm)/55 caliber gun Mark 12 introduced with in 1937.
The heavy cruiser was a type of cruiser designed for long range, high speed and an armament of naval guns around 203 mm (8 in) in calibre. The first heavy cruisers were built in 1915, although it only became a widespread classification following the London Naval Treaty in 1930. The heavy cruiser's immediate precursors were the light cruiser designs of the 1910s and 1920s; the US lightly armored 8-inch "treaty cruisers" of the 1920s (built under the Washington Naval Treaty) were originally classed as light cruisers until the London Treaty forced their redesignation.
Initially, all cruisers built under the Washington treaty had torpedo tubes, regardless of nationality. However, in 1930, results of war games caused the US Naval War College to conclude that only perhaps half of cruisers would use their torpedoes in action. In a surface engagement, long-range gunfire and destroyer torpedoes would decide the issue, and under air attack numerous cruisers would be lost before getting within torpedo range. Thus, beginning with launched in 1933, new cruisers were built without torpedoes, and torpedoes were removed from older heavy cruisers due to the perceived hazard of their being exploded by shell fire. The Japanese took exactly the opposite approach with cruiser torpedoes, and this proved crucial to their tactical victories in most of the numerous cruiser actions of 1942. Beginning with the launched in 1925, every Japanese heavy cruiser was armed with torpedoes, larger than any other cruisers'. By 1933 Japan had developed the Type 93 torpedo for these ships, eventually nicknamed "Long Lance" by the Allies. This type used compressed oxygen instead of compressed air, allowing it to achieve ranges and speeds unmatched by other torpedoes. It could achieve a range of at , compared with the US Mark 15 torpedo with at . The Mark 15 had a maximum range of at , still well below the "Long Lance". The Japanese were able to keep the Type 93's performance and oxygen power secret until the Allies recovered one in early 1943, thus the Allies faced a great threat they were not aware of in 1942. The Type 93 was also fitted to Japanese post-1930 light cruisers and the majority of their World War II destroyers.
Heavy cruisers continued in use until after World War II, with some converted to guided missile cruisers for air defense or strategic attack and some used for shore bombardment by the United States in the Korean War and the Vietnam War.
The German was a series of three "Panzerschiffe" ("armored ships"), a form of heavily armed cruiser, designed and built by the German Reichsmarine in nominal accordance with restrictions imposed by the Treaty of Versailles. All three ships were launched between 1931 and 1934, and served with Germany's Kriegsmarine during World War II. Within the Kriegsmarine, the Panzerschiffe had the propaganda value of capital ships: heavy cruisers with battleship guns, torpedoes, and scout aircraft. The similar Swedish "Panzerschiffe" were tactically used as centers of battlefleets and not as cruisers. The German ships were deployed by Nazi Germany in support of German interests in the Spanish Civil War. Panzerschiff "Admiral Graf Spee" represented Germany in the 1937 Coronation Fleet Review.
The British press referred to the vessels as pocket battleships, in reference to the heavy firepower contained in the relatively small vessels; they were considerably smaller than contemporary battleships, though at 28 knots were slower than battlecruisers. At up to 16,000 tons at full load, they were not treaty compliant 10,000 ton cruisers. And although their displacement and scale of armor protection were that of a heavy cruiser, their main armament was heavier than the guns of other nations' heavy cruisers, and the latter two members of the class also had tall conning towers resembling battleships. The Panzerschiffe were listed as Ersatz replacements for retiring Reichsmarine coastal defense battleships, which added to their propaganda status in the Kriegsmarine as Ersatz battleships; within the Royal Navy, only battlecruisers HMS "Hood", "Repulse" and "Renown" were capable of both outrunning and outgunning the Panzerschiffe. They were seen in the 1930s as a new and serious threat by both Britain and France. While the Kriegsmarine reclassified them as heavy cruisers in 1940, "Deutschland"-class ships continued to be called "pocket battleships" in the popular press.
The American represented the supersized cruiser design. Due to the German pocket battleships, the , and rumored Japanese "super cruisers", all of which carried guns larger than the standard heavy cruiser's 8-inch size dictated by naval treaty limitations, the "Alaska"s were intended to be "cruiser-killers". While superficially appearing similar to a battleship/battlecruiser and mounting three triple turrets of 12-inch guns, their actual protection scheme and design resembled a scaled-up heavy cruiser design. Their hull classification symbol of CB (cruiser, big) reflected this.
A precursor to the anti-aircraft cruiser was the Romanian British-built protected cruiser "Elisabeta". After the start of World War I, her four 120 mm main guns were landed and her four 75 mm (12-pounder) secondary guns were modified for anti-aircraft fire.
The development of the anti-aircraft cruiser began in 1935 when the Royal Navy re-armed and . Torpedo tubes and low-angle guns were removed from these World War I light cruisers and replaced with ten high-angle guns, with appropriate fire-control equipment to provide larger warships with protection against high-altitude bombers.
A tactical shortcoming was recognised after completing six additional conversions of s. Having sacrificed anti-ship weapons for anti-aircraft armament, the converted anti-aircraft cruisers might themselves need protection against surface units. New construction was undertaken to create cruisers of similar speed and displacement with dual-purpose guns, which offered good anti-aircraft protection with anti-surface capability for the traditional light cruiser role of defending capital ships from destroyers.
The first purpose built anti-aircraft cruiser was the British , completed in 1940–42. The US Navy's cruisers (CLAA: light cruiser with anti-aircraft capability) were designed to match the capabilities of the Royal Navy. Both "Dido" and "Atlanta" cruisers initially carried torpedo tubes; the "Atlanta" cruisers at least were originally designed as destroyer leaders, were originally designated CL (light cruiser), and did not receive the CLAA designation until 1949.
The concept of the quick-firing dual-purpose gun anti-aircraft cruiser was embraced in several designs completed too late to see combat, including: , completed in 1948; , completed in 1949; two s, completed in 1947; two s, completed in 1953; , completed in 1955; , completed in 1959; and , and , all completed between 1959 and 1961.
Most post-World War II cruisers were tasked with air defense roles. In the early 1950s, advances in aviation technology forced the move from anti-aircraft artillery to anti-aircraft missiles. Therefore, most modern cruisers are equipped with surface-to-air missiles as their main armament. Today's equivalent of the anti-aircraft cruiser is the guided missile cruiser (CAG/CLG/CG/CGN).
Cruisers participated in a number of surface engagements in the early part of World War II, along with escorting carrier and battleship groups throughout the war. In the later part of the war, Allied cruisers primarily provided anti-aircraft (AA) escort for carrier groups and performed shore bombardment. Japanese cruisers similarly escorted carrier and battleship groups in the later part of the war, notably in the disastrous Battle of the Philippine Sea and Battle of Leyte Gulf. In 1937–41 the Japanese, having withdrawn from all naval treaties, upgraded or completed the "Mogami" and es as heavy cruisers by replacing their triple turrets with twin turrets. Torpedo refits were also made to most heavy cruisers, resulting in up to sixteen tubes per ship, plus a set of reloads. In 1941 the 1920s light cruisers and were converted to torpedo cruisers with four guns and forty torpedo tubes. In 1944 "Kitakami" was further converted to carry up to eight "Kaiten" human torpedoes in place of ordinary torpedoes.
Before World War II, cruisers were mainly divided into three types: heavy cruisers, light cruisers and auxiliary cruisers. Heavy cruiser tonnage reached 20,000–30,000 tons, speed 32–34 knots, endurance of more than 10,000 nautical miles, and armor thickness of 127–203 mm. Heavy cruisers were equipped with eight or nine guns with a range of more than 20 nautical miles. They were mainly used to attack enemy surface ships and shore-based targets. In addition, there were 10–16 secondary guns with a caliber of less than . Also, dozens of automatic antiaircraft guns were installed to fight aircraft and small vessels such as torpedo boats. For example, in World War II, American Alaska-class cruisers were more than 30,000 tons, equipped with nine guns. Some cruisers could also carry three or four seaplanes to correct the accuracy of gunfire and perform reconnaissance. Together with battleships, these heavy cruisers formed powerful naval task forces, which dominated the world's oceans for more than a century.
After the signing of the Washington Treaty on Arms Limitation in 1922, the tonnage and quantity of battleships, aircraft carriers and cruisers were severely restricted. In order not to violate the treaty, countries began to develop light cruisers. Light cruisers of the 1920s had displacements of less than 10,000 tons and a speed of up to 35 knots. They were equipped with 6–12 main guns with a caliber of 127–133 mm (5–5.5 inches). In addition, they were equipped with 8–12 secondary guns under 127 mm (5 in) and dozens of small caliber cannons, as well as torpedoes and mines. Some ships also carried 2–4 seaplanes, mainly for reconnaissance. In 1930 the London Naval Treaty allowed large light cruisers to be built, with the same tonnage as heavy cruisers and armed with up to fifteen guns. The Japanese "Mogami" class were built to this treaty's limit; the Americans and British also built similar ships. However, in 1939 the "Mogami"s were refitted as heavy cruisers with ten guns.
In December 1939, three British cruisers engaged the German "pocket battleship" "Admiral Graf Spee" (which was on a commerce raiding mission) in the Battle of the River Plate; "Admiral Graf Spee" then took refuge in neutral Montevideo, Uruguay. By broadcasting messages indicating capital ships were in the area, the British caused "Admiral Graf Spee"s captain to think he faced a hopeless situation while low on ammunition and order his ship scuttled. On 8 June 1940 the German capital ships and , classed as battleships but with large cruiser armament, sank the aircraft carrier with gunfire. From October 1940 through March 1941 the German heavy cruiser (also known as "pocket battleship", see above) conducted a successful commerce-raiding voyage in the Atlantic and Indian Oceans.
On 27 May 1941, attempted to finish off the German battleship with torpedoes, probably causing the Germans to scuttle the ship. "Bismarck" (accompanied by the heavy cruiser ) previously sank the battlecruiser and damaged the battleship with gunfire in the Battle of the Denmark Strait.
On 19 November 1941 sank in a mutually fatal engagement with the German raider "Kormoran" in the Indian Ocean near Western Australia.
Twenty-three British cruisers were lost to enemy action, mostly to air attack and submarines, in operations in the Atlantic, Mediterranean, and Indian Ocean. Sixteen of these losses were in the Mediterranean. The British included cruisers and anti-aircraft cruisers among convoy escorts in the Mediterranean and to northern Russia due to the threat of surface and air attack. Almost all cruisers in World War II were vulnerable to submarine attack due to a lack of anti-submarine sonar and weapons. Also, until 1943–44 the light anti-aircraft armament of most cruisers was weak.
In July 1942 an attempt to intercept Convoy PQ 17 with surface ships, including the heavy cruiser "Admiral Scheer", failed due to multiple German warships grounding, but air and submarine attacks sank 2/3 of the convoy's ships. In August 1942 "Admiral Scheer" conducted Operation Wunderland, a solo raid into northern Russia's Kara Sea. She bombarded Dikson Island but otherwise had little success.
On 31 December 1942 the Battle of the Barents Sea was fought, a rare action for a Murmansk run because it involved cruisers on both sides. Four British destroyers and five other vessels were escorting Convoy JW 51B from the UK to the Murmansk area. Another British force of two cruisers ( and ) and two destroyers were in the area. Two heavy cruisers (one the "pocket battleship" "Lützow"), accompanied by six destroyers, attempted to intercept the convoy near North Cape after it was spotted by a U-boat. Although the Germans sank a British destroyer and a minesweeper (also damaging another destroyer), they failed to damage any of the convoy's merchant ships. A German destroyer was lost and a heavy cruiser damaged. Both sides withdrew from the action for fear of the other side's torpedoes.
On 26 December 1943 the German capital ship "Scharnhorst" was sunk while attempting to intercept a convoy in the Battle of the North Cape. The British force that sank her was led by Vice Admiral Bruce Fraser in the battleship , accompanied by four cruisers and nine destroyers. One of the cruisers was the preserved .
"Scharnhorst"s sister "Gneisenau", damaged by a mine and a submerged wreck in the Channel Dash of 13 February 1942 and repaired, was further damaged by a British air attack on 27 February 1942. She began a conversion process to mount six guns instead of nine guns, but in early 1943 Hitler (angered by the recent failure at the Battle of the Barents Sea) ordered her disarmed and her armament used as coast defence weapons. One 28 cm triple turret survives near Trondheim, Norway.
The attack on Pearl Harbor on 7 December 1941 brought the United States into the war, but with eight battleships sunk or damaged by air attack. On 10 December 1941 HMS "Prince of Wales" and the battlecruiser were sunk by land-based torpedo bombers northeast of Singapore. It was now clear that surface ships could not operate near enemy aircraft in daylight without air cover; most surface actions of 1942–43 were fought at night as a result. Generally, both sides avoided risking their battleships until the Japanese attack at Leyte Gulf in 1944.
Six of the battleships from Pearl Harbor were eventually returned to service, but no US battleships engaged Japanese surface units at sea until the Naval Battle of Guadalcanal in November 1942, and not thereafter until the Battle of Surigao Strait in October 1944. was on hand for the initial landings at Guadalcanal on 7 August 1942, and escorted carriers in the Battle of the Eastern Solomons later that month. However, on 15 September she was torpedoed while escorting a carrier group and had to return to the US for repairs.
Generally, the Japanese held their capital ships out of all surface actions in the 1941–42 campaigns or they failed to close with the enemy; the Naval Battle of Guadalcanal in November 1942 was the sole exception. The four ships performed shore bombardment in Malaya, Singapore, and Guadalcanal and escorted the raid on Ceylon and other carrier forces in 1941–42. Japanese capital ships also participated ineffectively (due to not being engaged) in the Battle of Midway and the simultaneous Aleutian diversion; in both cases they were in battleship groups well to the rear of the carrier groups. Sources state that sat out the entire Guadalcanal Campaign due to lack of high-explosive bombardment shells, poor nautical charts of the area, and high fuel consumption. It is likely that the poor charts affected other battleships as well. Except for the "Kongō" class, most Japanese battleships spent the critical year of 1942, in which most of the war's surface actions occurred, in home waters or at the fortified base of Truk, far from any risk of attacking or being attacked.
From 1942 through mid-1943, US and other Allied cruisers were the heavy units on their side of the numerous surface engagements of the Dutch East Indies campaign, the Guadalcanal Campaign, and subsequent Solomon Islands fighting; they were usually opposed by strong Japanese cruiser-led forces equipped with Long Lance torpedoes. Destroyers also participated heavily on both sides of these battles and provided essentially all the torpedoes on the Allied side, with some battles in these campaigns fought entirely between destroyers.
Along with lack of knowledge of the capabilities of the Long Lance torpedo, the US Navy was hampered by a deficiency it was initially unaware of – the unreliability of the Mark 15 torpedo used by destroyers. This weapon shared the Mark 6 exploder and other problems with the more famously unreliable Mark 14 torpedo; the most common results of firing either of these torpedoes were a dud or a miss. The problems with these weapons were not solved until mid-1943, after almost all of the surface actions in the Solomon Islands had taken place. Another factor that shaped the early surface actions was the pre-war training of both sides. The US Navy concentrated on long-range 8-inch gunfire as their primary offensive weapon, leading to rigid battle line tactics, while the Japanese trained extensively for nighttime torpedo attacks. Since all post-1930 Japanese cruisers had 8-inch guns by 1941, almost all of the US Navy's cruisers in the South Pacific in 1942 were the 8-inch-gunned (203 mm) "treaty cruisers"; most of the 6-inch-gunned (152 mm) cruisers were deployed in the Atlantic.
Although their battleships were held out of surface action, Japanese cruiser-destroyer forces rapidly isolated and mopped up the Allied naval forces in the Dutch East Indies campaign of February–March 1942. In three separate actions, they sank five Allied cruisers (two Dutch and one each British, Australian, and American) with torpedoes and gunfire, against one Japanese cruiser damaged. With one other Allied cruiser withdrawn for repairs, the only remaining Allied cruiser in the area was the damaged . Despite their rapid success, the Japanese proceeded methodically, never leaving their air cover and rapidly establishing new air bases as they advanced.
After the key carrier battles of the Coral Sea and Midway in mid-1942, Japan had lost four of the six fleet carriers that launched the Pearl Harbor raid and was on the strategic defensive. On 7 August 1942 US Marines were landed on Guadalcanal and other nearby islands, beginning the Guadalcanal Campaign. This campaign proved to be a severe test for the Navy as well as the Marines. Along with two carrier battles, several major surface actions occurred, almost all at night between cruiser-destroyer forces.
Battle of Savo Island
On the night of 8–9 August 1942 the Japanese counterattacked near Guadalcanal in the Battle of Savo Island with a cruiser-destroyer force. In a controversial move, the US carrier task forces were withdrawn from the area on the 8th due to heavy fighter losses and low fuel. The Allied force included six heavy cruisers (two Australian), two light cruisers (one Australian), and eight US destroyers. Of the cruisers, only the Australian ships had torpedoes. The Japanese force included five heavy cruisers, two light cruisers, and one destroyer. Numerous circumstances combined to reduce Allied readiness for the battle. The results of the battle were three American heavy cruisers sunk by torpedoes and gunfire, one Australian heavy cruiser disabled by gunfire and scuttled, one heavy cruiser damaged, and two US destroyers damaged. The Japanese had three cruisers lightly damaged. This was the most lopsided outcome of the surface actions in the Solomon Islands. Along with their superior torpedoes, the opening Japanese gunfire was accurate and very damaging. Subsequent analysis showed that some of the damage was due to poor housekeeping practices by US forces. Stowage of boats and aircraft in midships hangars with full gas tanks contributed to fires, along with full and unprotected ready-service ammunition lockers for the open-mount secondary armament. These practices were soon corrected, and US cruisers with similar damage sank less often thereafter. Savo was the first surface action of the war for almost all the US ships and personnel; few US cruisers and destroyers were targeted or hit at Coral Sea or Midway.
Battle of the Eastern Solomons
On 24–25 August 1942 the Battle of the Eastern Solomons, a major carrier action, was fought. Part of the action was a Japanese attempt to reinforce Guadalcanal with men and equipment on troop transports. The Japanese troop convoy was attacked by Allied aircraft, resulting in the Japanese subsequently reinforcing Guadalcanal with troops on fast warships at night. These convoys were called the "Tokyo Express" by the Allies. Although the Tokyo Express often ran unopposed, most surface actions in the Solomons revolved around Tokyo Express missions. Also, US air operations had commenced from Henderson Field, the airfield on Guadalcanal. Fear of air power on both sides resulted in all surface actions in the Solomons being fought at night.
Battle of Cape Esperance
The Battle of Cape Esperance occurred on the night of 11–12 October 1942. A Tokyo Express mission was underway for Guadalcanal at the same time as a separate cruiser-destroyer bombardment group loaded with high explosive shells for bombarding Henderson Field. A US cruiser-destroyer force was deployed in advance of a convoy of US Army troops for Guadalcanal that was due on 13 October. The Tokyo Express convoy was two seaplane tenders and six destroyers; the bombardment group was three heavy cruisers and two destroyers, and the US force was two heavy cruisers, two light cruisers, and five destroyers. The US force engaged the Japanese bombardment force; the Tokyo Express convoy was able to unload on Guadalcanal and evade action. The bombardment force was sighted at close range () and the US force opened fire. The Japanese were surprised because their admiral was anticipating sighting the Tokyo Express force, and withheld fire while attempting to confirm the US ships' identity. One Japanese cruiser and one destroyer were sunk and one cruiser damaged, against one US destroyer sunk with one light cruiser and one destroyer damaged. The bombardment force failed to bring its torpedoes into action, and turned back. The next day US aircraft from Henderson Field attacked several of the Japanese ships, sinking two destroyers and damaging a third. The US victory resulted in overconfidence in some later battles, reflected in the initial after-action report claiming two Japanese heavy cruisers, one light cruiser, and three destroyers sunk by the gunfire of alone. The battle had little effect on the overall situation, as the next night two Kongō-class battleships bombarded and severely damaged Henderson Field unopposed, and the following night another Tokyo Express convoy delivered 4,500 troops to Guadalcanal. The US convoy delivered the Army troops as scheduled on the 13th.
Battle of the Santa Cruz Islands
The Battle of the Santa Cruz Islands took place 25–27 October 1942. It was a pivotal battle, as it left the US and Japanese with only two large carriers each in the South Pacific (another large Japanese carrier was damaged and under repair until May 1943). Due to the high carrier attrition rate with no replacements for months, for the most part both sides stopped risking their remaining carriers until late 1943, and each side sent in a pair of battleships instead. The next major carrier operations for the US were the carrier raid on Rabaul and support for the invasion of Tarawa, both in November 1943.
Naval Battle of Guadalcanal
The Naval Battle of Guadalcanal occurred 12–15 November 1942 in two phases. A night surface action on 12–13 November was the first phase. The Japanese force consisted of two Kongō-class battleships with high explosive shells for bombarding Henderson Field, one small light cruiser, and 11 destroyers. Their plan was that the bombardment would neutralize Allied airpower and allow a force of 11 transport ships and 12 destroyers to reinforce Guadalcanal with a Japanese division the next day. However, US reconnaissance aircraft spotted the approaching Japanese on the 12th and the Americans made what preparations they could. The American force consisted of two heavy cruisers, one light cruiser, two anti-aircraft cruisers, and eight destroyers. The Americans were outgunned by the Japanese that night, and a lack of pre-battle orders by the US commander led to confusion. The destroyer closed with the battleship , firing all torpedoes (though apparently none hit or detonated) and raking the battleship's bridge with gunfire, wounding the Japanese admiral and killing his chief of staff. The Americans initially lost four destroyers including "Laffey", with both heavy cruisers, most of the remaining destroyers, and both anti-aircraft cruisers damaged. The Japanese initially had one battleship and four destroyers damaged, but at this point they withdrew, possibly unaware that the US force was unable to further oppose them. At dawn US aircraft from Henderson Field, , and Espiritu Santo found the damaged battleship and two destroyers in the area. The battleship ("Hiei") was sunk by aircraft (or possibly scuttled), one destroyer was sunk by the damaged , and the other destroyer was attacked by aircraft but was able to withdraw. Both of the damaged US anti-aircraft cruisers were lost on 13 November, one () torpedoed by a Japanese submarine, and the other sank on the way to repairs. "Juneau"s loss was especially tragic; the submarine's presence prevented immediate rescue, over 100 survivors of a crew of nearly 700 were adrift for eight days, and all but ten died. Among the dead were the five Sullivan brothers.
The Japanese transport force was rescheduled for the 14th and a new cruiser-destroyer force (belatedly joined by the surviving battleship ) was sent to bombard Henderson Field the night of 13 November. Only two cruisers actually bombarded the airfield, as "Kirishima" had not arrived yet and the remainder of the force was on guard for US warships. The bombardment caused little damage. The cruiser-destroyer force then withdrew, while the transport force continued towards Guadalcanal. Both forces were attacked by US aircraft on the 14th. The cruiser force lost one heavy cruiser sunk and one damaged. Although the transport force had fighter cover from the carrier , six transports were sunk and one heavily damaged. All but four of the destroyers accompanying the transport force picked up survivors and withdrew. The remaining four transports and four destroyers approached Guadalcanal at night, but stopped to await the results of the night's action.
On the night of 14–15 November a Japanese force of "Kirishima", two heavy and two light cruisers, and nine destroyers approached Guadalcanal. Two US battleships ( and ) were there to meet them, along with four destroyers. This was one of only two battleship-on-battleship encounters during the Pacific War; the other was the lopsided Battle of Surigao Strait in October 1944, part of the Battle of Leyte Gulf. The battleships had been escorting "Enterprise", but were detached due to the urgency of the situation. With nine 16-inch (406 mm) guns apiece against eight 14-inch (356 mm) guns on "Kirishima", the Americans had major gun and armor advantages. All four destroyers were sunk or severely damaged and withdrawn shortly after the Japanese attacked them with gunfire and torpedoes. Although her main battery remained in action for most of the battle, "South Dakota" spent much of the action dealing with major electrical failures that affected her radar, fire control, and radio systems. Although her armor was not penetrated, she was hit by 26 shells of various calibers and temporarily rendered, in a US admiral's words, "deaf, dumb, blind, and impotent". "Washington" went undetected by the Japanese for most of the battle, but withheld shooting to avoid "friendly fire" until "South Dakota" was illuminated by Japanese fire, then rapidly set "Kirishima" ablaze with a jammed rudder and other damage. "Washington", finally spotted by the Japanese, then headed for the Russell Islands to hopefully draw the Japanese away from Guadalcanal and "South Dakota", and was successful in evading several torpedo attacks. Unusually, only a few Japanese torpedoes scored hits in this engagement. "Kirishima" sank or was scuttled before the night was out, along with two Japanese destroyers. The remaining Japanese ships withdrew, except for the four transports, which beached themselves in the night and started unloading. However, dawn (and US aircraft, US artillery, and a US destroyer) found them still beached, and they were destroyed.
Battle of Tassafaronga
The Battle of Tassafaronga took place on the night of 30 November-1 December 1942. The US had four heavy cruisers, one light cruiser, and four destroyers. The Japanese had eight destroyers on a Tokyo Express run to deliver food and supplies in drums to Guadalcanal. The Americans achieved initial surprise, damaging one destroyer with gunfire which later sank, but the Japanese torpedo counterattack was devastating. One American heavy cruiser was sunk and three others heavily damaged, with the bows blown off of two of them. It was significant that these two were not lost to Long Lance hits as happened in previous battles; American battle readiness and damage control had improved. Despite defeating the Americans, the Japanese withdrew without delivering the crucial supplies to Guadalcanal. Another attempt on 3 December dropped 1,500 drums of supplies near Guadalcanal, but Allied strafing aircraft sank all but 300 before the Japanese Army could recover them. On 7 December PT boats interrupted a Tokyo Express run, and the following night sank a Japanese supply submarine. The next day the Japanese Navy proposed stopping all destroyer runs to Guadalcanal, but agreed to do just one more. This was on 11 December and was also intercepted by PT boats, which sank a destroyer; only 200 of 1,200 drums dropped off the island were recovered. The next day the Japanese Navy proposed abandoning Guadalcanal; this was approved by the Imperial General Headquarters on 31 December and the Japanese left the island in early February 1943.
After the Japanese abandoned Guadalcanal in February 1943, Allied operations in the Pacific shifted to the New Guinea campaign and isolating Rabaul. The Battle of Kula Gulf was fought on the night of 5–6 July. The US had three light cruisers and four destroyers; the Japanese had ten destroyers loaded with 2,600 troops destined for Vila to oppose a recent US landing on Rendova. Although the Japanese sank a cruiser, they lost two destroyers and were able to deliver only 850 troops. On the night of 12–13 July, the Battle of Kolombangara occurred. The Allies had three light cruisers (one New Zealand) and ten destroyers; the Japanese had one small light cruiser and five destroyers, a Tokyo Express run for Vila. All three Allied cruisers were heavily damaged, with the New Zealand cruiser put out of action for 25 months by a Long Lance hit. The Allies sank only the Japanese light cruiser, and the Japanese landed 1,200 troops at Vila. Despite their tactical victory, this battle caused the Japanese to use a different route in the future, where they were more vulnerable to destroyer and PT boat attacks.
The Battle of Empress Augusta Bay was fought on the night of 1–2 November 1943, immediately after US Marines invaded Bougainville in the Solomon Islands. A Japanese heavy cruiser was damaged by a nighttime air attack shortly before the battle; it is likely that Allied airborne radar had progressed far enough to allow night operations. The Americans had four of the new cruisers and eight destroyers. The Japanese had two heavy cruisers, two small light cruisers, and six destroyers. Both sides were plagued by collisions, shells that failed to explode, and mutual skill in dodging torpedoes. The Americans suffered significant damage to three destroyers and light damage to a cruiser, but no losses. The Japanese lost one light cruiser and a destroyer, with four other ships damaged. The Japanese withdrew; the Americans pursued them until dawn, then returned to the landing area to provide anti-aircraft cover.
After the Battle of the Santa Cruz Islands in October 1942, both sides were short of large aircraft carriers. The US suspended major carrier operations until sufficient carriers could be completed to destroy the entire Japanese fleet at once should it appear. The Central Pacific carrier raids and amphibious operations commenced in November 1943 with a carrier raid on Rabaul (preceded and followed by Fifth Air Force attacks) and the bloody but successful invasion of Tarawa. The air attacks on Rabaul crippled the Japanese cruiser force, with four heavy and two light cruisers damaged; they were withdrawn to Truk. The US had built up a force in the Central Pacific of six large, five light, and six escort carriers prior to commencing these operations.
From this point on, US cruisers primarily served as anti-aircraft escorts for carriers and in shore bombardment. The only major Japanese carrier operation after Guadalcanal was the disastrous (for Japan) Battle of the Philippine Sea in June 1944, nicknamed the "Marianas Turkey Shoot" by the US Navy.
The Imperial Japanese Navy's last major operation was the Battle of Leyte Gulf, an attempt to dislodge the American invasion of the Philippines in October 1944. The two actions at this battle in which cruisers played a significant role were the Battle off Samar and the Battle of Surigao Strait.
Battle of Surigao Strait
The Battle of Surigao Strait was fought on the night of 24–25 October, a few hours before the Battle off Samar. The Japanese had a small battleship group composed of and , one heavy cruiser, and four destroyers. They were followed at a considerable distance by another small force of two heavy cruisers, a small light cruiser, and four destroyers. Their goal was to head north through Surigao Strait and attack the invasion fleet off Leyte. The Allied force, known as the 7th Fleet Support Force, guarding the strait was overwhelming. It included six battleships (all but one previously damaged in 1941 at Pearl Harbor), four heavy cruisers (one Australian), four light cruisers, and 28 destroyers, plus a force of 39 PT boats. The only advantage to the Japanese was that most of the battleships and cruisers were loaded mainly with high explosive shells, although a significant number of armor-piercing shells were also loaded. The lead Japanese force evaded the PT boats' torpedoes, but were hit hard by the destroyers' torpedoes, losing a battleship. Then they encountered the battleship and cruiser guns. Only one destroyer survived. The engagement is notable for being one of only two occasions in which battleships fired on battleships in the Pacific Theater, the other being the Naval Battle of Guadalcanal. Due to the starting arrangement of the opposing forces, the Allied force was in a "crossing the T" position, so this was the last battle in which this occurred, but it was not a planned maneuver. The following Japanese cruiser force had several problems, including a light cruiser damaged by a PT boat and two heavy cruisers colliding, one of which fell behind and was sunk by air attack the next day. An American veteran of Surigao Strait, , was transferred to Argentina in 1951 as , becoming most famous for being sunk by in the Falklands War on 2 May 1982. She was the first ship sunk by a nuclear submarine outside of accidents, and only the second ship sunk by a submarine since World War II.
Battle off Samar
At the Battle off Samar, a Japanese battleship group moving towards the invasion fleet off Leyte engaged a minuscule American force known as "Taffy 3" (formally Task Unit 77.4.3), composed of six escort carriers with about 28 aircraft each, three destroyers, and four destroyer escorts. The biggest guns in the American force were /38 caliber guns, while the Japanese had , , and guns. Aircraft from six additional escort carriers also participated for a total of around 330 US aircraft, a mix of F6F Hellcat fighters and TBF Avenger torpedo bombers. The Japanese had four battleships including "Yamato", six heavy cruisers, two small light cruisers, and 11 destroyers. The Japanese force had earlier been driven off by air attack, losing "Yamato"s sister . Admiral Halsey then decided to use his Third Fleet carrier force to attack the Japanese carrier group, located well to the north of Samar, which was actually a decoy group with few aircraft. The Japanese were desperately short of aircraft and pilots at this point in the war, and Leyte Gulf was the first battle in which "kamikaze" attacks were used. Due to a tragedy of errors, Halsey took the American battleship force with him, leaving San Bernardino Strait guarded only by the small Seventh Fleet escort carrier force. The battle commenced at dawn on 25 October 1944, shortly after the Battle of Surigao Strait. In the engagement that followed, the Americans exhibited uncanny torpedo accuracy, blowing the bows off several Japanese heavy cruisers. The escort carriers' aircraft also performed very well, attacking with machine guns after their carriers ran out of bombs and torpedoes. The unexpected level of damage, and maneuvering to avoid the torpedoes and air attacks, disorganized the Japanese and caused them to think they faced at least part of the Third Fleet's main force. They had also learned of the defeat a few hours before at Surigao Strait, and did not hear that Halsey's force was busy destroying the decoy fleet. Convinced that the rest of the Third Fleet would arrive soon if it hadn't already, the Japanese withdrew, eventually losing three heavy cruisers sunk with three damaged to air and torpedo attacks. The Americans lost two escort carriers, two destroyers, and one destroyer escort sunk, with three escort carriers, one destroyer, and two destroyer escorts damaged, thus losing over one-third of their engaged force sunk with nearly all the remainder damaged.
The US built cruisers in quantity through the end of the war, notably 14 heavy cruisers and 27 "Cleveland"-class light cruisers, along with eight "Atlanta"-class anti-aircraft cruisers. The "Cleveland" class was the largest cruiser class ever built in number of ships completed, with nine additional "Cleveland"s completed as light aircraft carriers. The large number of cruisers built was probably due to the significant cruiser losses of 1942 in the Pacific theater (seven American and five other Allied) and the perceived need for several cruisers to escort each of the numerous s being built. Losing four heavy and two small light cruisers in 1942, the Japanese built only five light cruisers during the war; these were small ships with six guns each. Losing 20 cruisers in 1940–42, the British completed no heavy cruisers during the war, but did complete thirteen light cruisers ( and classes) and sixteen anti-aircraft cruisers ("Dido" class).
The rise of air power during World War II dramatically changed the nature of naval combat. Even the fastest cruisers could not maneuver quickly enough to evade aerial attack, and aircraft now had torpedoes, allowing moderate-range standoff capabilities. This change led to the end of independent operations by single ships or very small task groups, and for the second half of the 20th century naval operations were based on very large fleets believed able to fend off all but the largest air attacks, though this was not tested by any war in that period. The US Navy became centered around carrier groups, with cruisers and battleships primarily providing anti-aircraft defense and shore bombardment. Until the Harpoon missile entered service in the late 1970s, the US Navy was almost entirely dependent on carrier-based aircraft and submarines for conventionally attacking enemy warships. Lacking aircraft carriers, the Soviet Navy depended on anti-ship cruise missiles; in the 1950s these were primarily delivered from heavy land-based bombers. Soviet submarine-launched cruise missiles at the time were primarily for land attack; but by 1964 anti-ship missiles were deployed in quantity on cruisers, destroyers, and submarines.
The US Navy was aware of the potential missile threat as soon as World War II ended, and had considerable related experience due to Japanese "kamikaze" attacks in that war. The initial response was to upgrade the light AA armament of new cruisers from 40 mm and 20 mm weapons to twin 3-inch (76 mm)/50 caliber gun mounts. For the longer term, it was thought that gun systems would be inadequate to deal with the missile threat, and by the mid-1950s three naval SAM systems were developed: Talos (long range), Terrier (medium range), and Tartar (short range). Talos and Terrier were nuclear-capable and this allowed their use in anti-ship or shore bombardment roles in the event of nuclear war. Chief of Naval Operations Admiral Arleigh Burke is credited with speeding the development of these systems.
Terrier was initially deployed on two converted "Baltimore"-class cruisers (CAG), with conversions completed in 1955–56. Further conversions of six "Cleveland"-class cruisers (CLG) ( and classes), redesign of the as guided missile "frigates" (DLG), and development of the DDGs resulted in the completion of numerous additional guided missile ships deploying all three systems in 1959–1962. Also completed during this period was the nuclear-powered , with two Terrier and one Talos launchers, plus an ASROC anti-submarine launcher the World War II conversions lacked. The converted World War II cruisers up to this point retained one or two main battery turrets for shore bombardment. However, in 1962–1964 three additional "Baltimore" and cruisers were more extensively converted as the . These had two Talos and two Tartar launchers plus ASROC and two 5-inch (127 mm) guns for self-defense, and were primarily built to get greater numbers of Talos launchers deployed. Of all these types, only the "Farragut" DLGs were selected as the design basis for further production, although their successors were significantly larger (5,670 tons standard versus 4,150 tons standard) due to a second Terrier launcher and greater endurance. An economical crew size compared with World War II conversions was probably a factor, as the "Leahy"s required a crew of only 377 versus 1,200 for the "Cleveland"-class conversions. Through 1980, the ten "Farragut"s were joined by four additional classes and two one-off ships for a total of 36 guided missile frigates, eight of them nuclear-powered (DLGN). In 1975 the "Farragut"s were reclassified as guided missile destroyers (DDG) due to their small size, and the remaining DLG/DLGN ships became guided missile cruisers (CG/CGN). The World War II conversions were gradually retired between 1970 and 1980; the Talos missile was withdrawn in 1980 as a cost-saving measure and the "Albany"s were decommissioned. "Long Beach" had her Talos launcher removed in a refit shortly thereafter; the deck space was used for Harpoon missiles. Around this time the Terrier ships were upgraded with the RIM-67 Standard ER missile. The guided missile frigates and cruisers served in the Cold War and the Vietnam War; off Vietnam they performed shore bombardment and shot down enemy aircraft or, as Positive Identification Radar Advisory Zone (PIRAZ) ships, guided fighters to intercept enemy aircraft. By 1995 the former guided missile frigates were replaced by the s and s.
The U.S. Navy's guided-missile cruisers were built upon destroyer-style hulls (some called "destroyer leaders" or "frigates" prior to the 1975 reclassification). As the U.S. Navy's strike role was centered around aircraft carriers, cruisers were primarily designed to provide air defense while often adding anti-submarine capabilities. The U.S. cruisers built in the 1960s and 1970s were larger, often nuclear-powered for extended endurance in escorting nuclear-powered fleet carriers, and carried longer-range surface-to-air missiles (SAMs) than the early "Charles F. Adams" guided-missile destroyers that were tasked with the short-range air defense role. These U.S. cruisers contrasted sharply with their Soviet contemporaries, "rocket cruisers" armed with large numbers of anti-ship cruise missiles (ASCMs) as part of the combat doctrine of saturation attack, though in the early 1980s the U.S. Navy retrofitted some of its existing cruisers to carry a small number of Harpoon anti-ship missiles and Tomahawk cruise missiles.
The line between U.S. Navy cruisers and destroyers blurred with the "Spruance" class. While originally designed for anti-submarine warfare, a "Spruance" destroyer was comparable in size to existing U.S. cruisers, while having the advantage of an enclosed hangar (with space for up to two medium-lift helicopters), which was a considerable improvement over the basic aviation facilities of earlier cruisers. The "Spruance" hull design was used as the basis for two classes: the , which had comparable anti-air capabilities to cruisers at the time, and then the DDG-47-class destroyers, which were redesignated as the "Ticonderoga"-class guided missile cruisers to emphasize the additional capability provided by the ships' Aegis combat systems and their flag facilities suitable for an admiral and his staff. In addition, 24 members of the "Spruance" class were upgraded with the vertical launch system (VLS) for Tomahawk cruise missiles due to the design's modular hull; along with the similarly VLS-equipped "Ticonderoga" class, these ships had anti-surface strike capabilities beyond those of the 1960s–1970s cruisers that received Tomahawk armored-box launchers as part of the New Threat Upgrade. Like the "Ticonderoga" ships with VLS, the "Arleigh Burke" and , despite being classified as destroyers, actually have much heavier anti-surface armament than previous U.S. ships classified as cruisers.
Prior to the introduction of the "Ticonderoga"s, the US Navy used odd naming conventions that left its fleet seemingly without many cruisers, although a number of their ships were cruisers in all but name. From the 1950s to the 1970s, US Navy cruisers were large vessels equipped with heavy offensive missiles (mostly surface-to-air, but for several years including the Regulus nuclear cruise missile) for wide-ranging combat against land-based and sea-based targets. All save one— USS "Long Beach"—were converted from World War II cruisers of the "Oregon City", "Baltimore" and "Cleveland" classes. "Long Beach" was also the last cruiser built with a World War II-era cruiser style hull (characterized by a long lean hull); later new-build cruisers were actually converted frigates (DLG/CG , , and the "Leahy", , , and classes) or uprated destroyers (the DDG/CG "Ticonderoga" class was built on a "Spruance"-class destroyer hull).
Frigates under this scheme were almost as large as the cruisers and optimized for anti-aircraft warfare, although they were capable anti-surface warfare combatants as well. In the late 1960s, the US government perceived a "cruiser gap"—at the time, the US Navy possessed six ships designated as cruisers, compared to 19 for the Soviet Union, even though the USN had 21 ships designated as frigates with equal or superior capabilities to the Soviet cruisers at the time. Because of this, in 1975 the Navy performed a massive redesignation of its forces:
Also, a series of Patrol Frigates of the , originally designated PFG, were redesignated into the FFG line. The cruiser-destroyer-frigate realignment and the deletion of the Ocean Escort type brought the US Navy's ship designations into line with the rest of the world's, eliminating confusion with foreign navies. In 1980, the Navy's then-building DDG-47-class destroyers were redesignated as cruisers ("Ticonderoga" guided missile cruisers) to emphasize the additional capability provided by the ships' Aegis combat systems, and their flag facilities suitable for an admiral and his staff.
In the Soviet Navy, cruisers formed the basis of combat groups. In the immediate post-war era it built a fleet of gun-armed light cruisers, but replaced these beginning in the early 1960s with large ships called "rocket cruisers", carrying large numbers of anti-ship cruise missiles (ASCMs) and anti-aircraft missiles. The Soviet combat doctrine of saturation attack meant that their cruisers (as well as destroyers and even missile boats) mounted multiple missiles in large container/launch tube housings and carried far more ASCMs than their NATO counterparts, while NATO combatants instead used individually smaller and lighter missiles (while appearing under-armed when compared to Soviet ships).
In 1962–1965 the four s entered service; these had launchers for eight long-range SS-N-3 Shaddock ASCMs with a full set of reloads; these had a range of up to with mid-course guidance. The four more modest s, with launchers for four SS-N-3 ASCMs and no reloads, entered service in 1967–69. In 1969–79 Soviet cruiser numbers more than tripled with ten s and seven s entering service. These had launchers for eight large-diameter missiles whose purpose was initially unclear to NATO. This was the SS-N-14 Silex, an over/under rocket-delivered heavyweight torpedo primarily for the anti-submarine role, but capable of anti-surface action with a range of up to . Soviet doctrine had shifted; powerful anti-submarine vessels (these were designated "Large Anti-Submarine Ships", but were listed as cruisers in most references) were needed to destroy NATO submarines to allow Soviet ballistic missile submarines to get within range of the United States in the event of nuclear war. By this time Long Range Aviation and the Soviet submarine force could deploy numerous ASCMs. Doctrine later shifted back to overwhelming carrier group defenses with ASCMs, with the "Slava" and "Kirov" classes.
The most recent Soviet/Russian rocket cruisers, the four "Kirov"-class ships, were built in the 1970s and 1980s. Two of the "Kirov" class are in refit until 2020, and one was scheduled to leave refit in 2018, with the "Pyotr Velikiy" in active service. Russia also operates three s and one "Admiral Kuznetsov"-class carrier which is officially designated as a cruiser.
Currently, the "Kirov"-class heavy missile cruisers are used for command purposes, as "Pyotr Velikiy" is the flagship of the Northern Fleet. However, their air defense capabilities are still powerful, as shown by the array of point defense missiles they carry, from 44 OSA-MA missiles to 196 9K311 Tor missiles. For longer range targets, the S-300 is used. For closer range targets, AK-630 or Kashtan CIWSs are used. Aside from that, "Kirov"s have 20 P-700 Granit missiles for anti-ship warfare. For target acquisition beyond the radar horizon, three helicopters can be used. Besides a vast array of armament, "Kirov"-class cruisers are also outfitted with many sensors and communications equipment, allowing them to lead the fleet.
The United States Navy has centered on the aircraft carrier since World War II. The "Ticonderoga"-class cruisers, built in the 1980s, were originally designed and designated as a class of destroyer, intended to provide a very powerful air-defense in these carrier-centered fleets.
Outside the US and Soviet navies, new cruisers were rare following World War II. Most navies use guided missile destroyers for fleet air defense, and destroyers and frigates for cruise missiles. The need to operate in task forces has led most navies to change to fleets designed around ships dedicated to a single role, anti-submarine or anti-aircraft typically, and the large "generalist" ship has disappeared from most forces. The United States Navy and the Russian Navy are the only remaining navies which operate cruisers. Italy used until 2003; France operated a single helicopter cruiser until May 2010, , for training purposes only. While Type 055 of the Chinese Navy is classified as a cruiser by the U.S. Department of Defense, the Chinese consider it a guided missile destroyer.
In the years since the launch of in 1981, the class has received a number of upgrades that have dramatically improved its members' capabilities for anti-submarine and land attack (using the Tomahawk missile). Like their Soviet counterparts, the modern "Ticonderoga"s can also be used as the basis for an entire battle group. Their cruiser designation was almost certainly deserved when first built, as their sensors and combat management systems enable them to act as flagships for a surface warship flotilla if no carrier is present, but newer ships rated as destroyers and also equipped with Aegis approach them very closely in capability, and once more blur the line between the two classes.
From time to time, some navies have experimented with aircraft-carrying cruisers. One example is the Swedish . Another was the Japanese "Mogami", which was converted to carry a large floatplane group in 1942. Another variant is the "helicopter cruiser". The last example in service was the Soviet Navy's , whose last unit was converted to a pure aircraft carrier and sold to India as . The Russian Navy's is nominally designated as an aviation cruiser but otherwise resembles a standard medium aircraft carrier, albeit with a surface-to-surface missile battery. The Royal Navy's aircraft-carrying and the Italian Navy's aircraft-carrying vessels were originally designated 'through-deck cruisers', but have since been designated as small aircraft carriers. Similarly, the Japan Maritime Self-Defense Force's and "helicopter destroyers" are really more along the lines of helicopter cruisers in function and aircraft complement, but due to the Treaty of San Francisco, must be designated as destroyers.
One cruiser alternative studied in the late 1980s by the United States was variously titled a Mission Essential Unit (MEU) or CG V/STOL. In a return to the thoughts of the independent operations cruiser-carriers of the 1930s and the Soviet "Kiev" class, the ship was to be fitted with a hangar, elevators, and a flight deck. The mission systems were Aegis, SQS-53 sonar, 12 SV-22 ASW aircraft and 200 VLS cells. The resulting ship would have had a waterline length of 700 feet, a waterline beam of 97 feet, and a displacement of about 25,000 tons. Other features included an integrated electric drive and advanced computer systems, both stand-alone and networked. It was part of the U.S. Navy's "Revolution at Sea" effort. The project was curtailed by the sudden end of the Cold War and its aftermath; otherwise, the first of class would likely have been ordered in the early 1990s.
Few cruisers are still operational in the world navies. Those that remain in service today are:
The following is under construction/in layup:
The following are classified as destroyers by their respective operators, but, due to their size, are considered to be cruisers by some:
As of 2019, several decommissioned cruisers have been saved from scrapping and exist worldwide as museum ships. They are: | https://en.wikipedia.org/wiki?curid=7034 |
Chlamydia
Chlamydia, or more specifically a chlamydia infection, is a sexually transmitted infection caused by the bacterium "Chlamydia trachomatis". Most people who are infected have no symptoms. When symptoms do appear, it can be several weeks after infection. Symptoms in women may include vaginal discharge or burning with urination. Symptoms in men may include discharge from the penis, burning with urination, or pain and swelling of one or both testicles. The infection can spread to the upper genital tract in women, causing pelvic inflammatory disease, which may result in future infertility or ectopic pregnancy. Repeated infections of the eyes that go without treatment can result in trachoma, a common cause of blindness in the developing world.
Chlamydia can be spread during vaginal, anal, or oral sex, and can be passed from an infected mother to her baby during childbirth. The eye infections may also be spread by personal contact, flies, and contaminated towels in areas with poor sanitation. "Chlamydia trachomatis" only occurs in humans. Diagnosis is often by screening which is recommended yearly in sexually active women under the age of twenty-five, others at higher risk, and at the first prenatal visit. Testing can be done on the urine or a swab of the cervix, vagina, or urethra. Rectal or mouth swabs are required to diagnose infections in those areas.
Prevention is by not having sex, the use of condoms, or having sex with only one other person, who is not infected. Chlamydia can be cured by antibiotics with typically either azithromycin or doxycycline being used. Erythromycin or azithromycin is recommended in babies and during pregnancy. Sexual partners should also be treated and the infected people advised not to have sex for seven days and until symptom free. Gonorrhea, syphilis, and HIV should be tested for in those who have been infected. Following treatment people should be tested again after three months.
Chlamydia is one of the most common sexually transmitted infections, affecting about 4.2% of women and 2.7% of men worldwide. In 2015 about 61 million new cases occurred globally. In the United States about 1.4 million cases were reported in 2014. Infections are most common among those between the ages of 15 and 25 and are more common in women than men. In 2015 infections resulted in about 200 deaths. The word "chlamydia" is from the Greek χλαμύδα, meaning "cloak".
Chlamydial infection of the cervix (neck of the womb) is a sexually transmitted infection which has no symptoms for 50–70% of women infected. The infection can be passed through vaginal, anal, or oral sex. Of those who have an asymptomatic infection that is not detected by their doctor, approximately half will develop pelvic inflammatory disease (PID), a generic term for infection of the uterus, fallopian tubes, and/or ovaries. PID can cause scarring inside the reproductive organs, which can later cause serious complications, including chronic pelvic pain, difficulty becoming pregnant, ectopic (tubal) pregnancy, and other dangerous complications of pregnancy.
Chlamydia is known as the "silent epidemic", as in women it may not cause any symptoms in 70–80% of cases, and can linger for months or years before being discovered. Signs and symptoms may include abnormal vaginal bleeding or discharge, abdominal pain, painful sexual intercourse, fever, painful urination or the urge to urinate more often than usual (urinary urgency).
For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. Guidelines recommend that all women attending for emergency contraception be offered chlamydia testing, with studies showing that up to 9% of women aged under 25 years had chlamydia.
In men, those with a chlamydial infection show symptoms of infectious inflammation of the urethra in about 50% of cases. Symptoms that may occur include: a painful or burning sensation when urinating, an unusual discharge from the penis, testicular pain or swelling, or fever. If left untreated, chlamydia in men can spread to the testicles causing epididymitis, which in rare cases can lead to sterility if not treated. Chlamydia is also a potential cause of prostatic inflammation in men, although the exact relevance in prostatitis is difficult to ascertain due to possible contamination from urethritis.
Trachoma is a chronic conjunctivitis caused by "Chlamydia trachomatis". It was once the most important cause of blindness worldwide, but its role diminished from causing 15% of blindness cases in 1995 to 3.6% in 2002. The infection can be spread from eye to eye by fingers, shared towels or cloths, coughing and sneezing, and eye-seeking flies. Symptoms include mucopurulent ocular discharge, irritation, redness, and lid swelling. Newborns can also develop chlamydia eye infection through childbirth (see below). Using the SAFE strategy (acronym for surgery for in-growing or in-turned lashes, antibiotics, facial cleanliness, and environmental improvements), the World Health Organization aims for the global elimination of trachoma by 2020 (GET 2020 initiative).
Chlamydia may also cause reactive arthritis—the triad of arthritis, conjunctivitis and urethral inflammation—especially in young men. About 15,000 men develop reactive arthritis due to chlamydia infection each year in the U.S., and about 5,000 are permanently affected by it. It can occur in both sexes, though is more common in men.
As many as half of all infants born to mothers with chlamydia will be born with the disease. Chlamydia can affect infants by causing spontaneous abortion; premature birth; conjunctivitis, which may lead to blindness; and pneumonia. Conjunctivitis due to chlamydia typically occurs one week after birth (compared with chemical causes (within hours) or gonorrhea (2–5 days)).
A different serovar of Chlamydia trachomatis is also the cause of lymphogranuloma venereum, an infection of the lymph nodes and lymphatics. It usually presents with genital ulceration and swollen lymph nodes in the groin, but it may also manifest as rectal inflammation, fever or swollen lymph nodes in other regions of the body.
Chlamydia can be transmitted during vaginal, anal, or oral sex or direct contact with infected tissue such as conjunctiva. Chlamydia can also be passed from an infected mother to her baby during vaginal childbirth.
"Chlamydiae" have the ability to establish long-term associations with host cells. When an infected host cell is starved for various nutrients such as amino acids (for example, tryptophan), iron, or vitamins, this has a negative consequence for "Chlamydiae" since the organism is dependent on the host cell for these nutrients. Long-term cohort studies indicate that approximately 50% of those infected clear within a year, 80% within two years, and 90% within three years.
The starved chlamydiae enter a persistent growth state wherein they stop cell division and become morphologically aberrant by increasing in size. Persistent organisms remain viable as they are capable of returning to a normal growth state once conditions in the host cell improve.
There is debate as to whether persistence has relevance. Some believe that persistent chlamydiae are the cause of chronic chlamydial diseases. Some antibiotics such as β-lactams have been found to induce a persistent-like growth state.
The diagnosis of genital chlamydial infections evolved rapidly from the 1990s through 2006. Nucleic acid amplification tests (NAAT), such as polymerase chain reaction (PCR), transcription mediated amplification (TMA), and DNA strand displacement amplification (SDA), are now the mainstays. NAAT for chlamydia may be performed on swab specimens sampled from the cervix (women) or urethra (men), on self-collected vaginal swabs, or on voided urine. NAAT has been estimated to have a sensitivity of approximately 90% and a specificity of approximately 99%, regardless of whether sampling is done by cervical swab or urine specimen. In women attending an STI clinic whose urine test is negative, a subsequent cervical swab has been estimated to be positive approximately 2% of the time.
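As a rough worked illustration of what these accuracy figures imply (the 5% prevalence used below is a hypothetical assumption chosen only for illustration, not a figure from this article), the positive predictive value of a test with sensitivity $Se$, specificity $Sp$, and disease prevalence $p$ is

$$\mathrm{PPV} = \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)} = \frac{0.90 \times 0.05}{0.90 \times 0.05 + 0.01 \times 0.95} \approx 0.83,$$

so in such a population roughly 83% of positive NAAT results would correspond to true infections, while a negative result would be correct well over 99% of the time.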
At present, the NAATs have regulatory approval only for testing urogenital specimens, although rapidly evolving research indicates that they may give reliable results on rectal specimens.
Because of improved test accuracy, ease and convenience of specimen management, and ease of screening sexually active men and women, the NAATs have largely replaced culture, the historic gold standard for chlamydia diagnosis, and the non-amplified probe tests. The latter test is relatively insensitive, successfully detecting only 60–80% of infections in asymptomatic women, and often giving false-positive results. Culture remains useful in selected circumstances and is currently the only assay approved for testing non-genital specimens. Other methods also exist, including ligase chain reaction (LCR), direct fluorescent antibody testing, enzyme immunoassay, and cell culture.
Rapid point-of-care tests are, as of 2020, not thought to be effective for diagnosing chlamydia in men of reproductive age and nonpregnant women because of high false-negative rates.
Prevention is by not having sex, the use of condoms, or having sex only with partners who are not infected.
For sexually active women who are not pregnant, screening is recommended in those under 25 and others at risk of infection. Risk factors include a history of chlamydial or other sexually transmitted infection, new or multiple sexual partners, and inconsistent condom use. For pregnant women, guidelines vary: screening women with age or other risk factors is recommended by the U.S. Preventive Services Task Force (USPSTF) (which recommends screening women under 25) and the American Academy of Family Physicians (which recommends screening women aged 25 or younger). The American College of Obstetricians and Gynecologists recommends screening all at risk, while the Centers for Disease Control and Prevention recommend universal screening of pregnant women. The USPSTF acknowledges that in some communities there may be other risk factors for infection, such as ethnicity. Evidence-based recommendations for screening initiation, intervals and termination are currently not possible. For men, the USPSTF concludes evidence is currently insufficient to determine if regular screening of men for chlamydia is beneficial. They recommend regular screening of men who are at increased risk for HIV or syphilis infection. A Cochrane review found that the effects of screening are uncertain in terms of chlamydia transmission but that screening probably reduces the risk of pelvic inflammatory disease in women.
In the United Kingdom the National Health Service (NHS) aims to:
"C. trachomatis" infection can be effectively cured with antibiotics. Guidelines recommend azithromycin, doxycycline, erythromycin, levofloxacin or ofloxacin. In men, doxycycline (100 mg twice a day for 7 days) is probably more effective than azithromycin (1 g single dose) but evidence for the relative effectiveness of antibiotics in women is very uncertain. Agents recommended during pregnancy include erythromycin or amoxicillin.
An option for treating sexual partners of those with chlamydia or gonorrhea includes patient-delivered partner therapy (PDT or PDPT), which is the practice of treating the sex partners of index cases by providing prescriptions or medications to the patient to take to his/her partner without the health care provider first examining the partner.
Following treatment people should be tested again after three months to check for reinfection.
Globally, as of 2015, sexually transmitted chlamydia affects approximately 61 million people. It is more common in women (3.8%) than men (2.5%). In 2015 it resulted in about 200 deaths.
In the United States about 1.6 million cases were reported in 2016. The CDC estimates that if one includes unreported cases there are about 2.9 million each year. It affects around 2% of young people. Chlamydial infection is the most common bacterial sexually transmitted infection in the UK.
Chlamydia causes more than 250,000 cases of epididymitis in the U.S. each year. Chlamydia causes 250,000 to 500,000 cases of PID every year in the United States. Women infected with chlamydia are up to five times more likely to become infected with HIV, if exposed. | https://en.wikipedia.org/wiki?curid=7037 |
Candidiasis
Candidiasis is a fungal infection due to any type of "Candida" (a type of yeast). When it affects the mouth, in some countries it is commonly called thrush. Signs and symptoms include white patches on the tongue or other areas of the mouth and throat. Other symptoms may include soreness and problems swallowing. When it affects the vagina, it may be referred to as a yeast infection or thrush. Signs and symptoms include genital itching, burning, and sometimes a white "cottage cheese-like" discharge from the vagina. Yeast infections of the penis are less common and typically present with an itchy rash. Very rarely, yeast infections may become invasive, spreading to other parts of the body. This may result in fevers along with other symptoms depending on the parts involved.
More than 20 types of "Candida" can cause infection, with "Candida albicans" being the most common. Infections of the mouth are most common among children less than one month old, the elderly, and those with weak immune systems. Conditions that result in a weak immune system include HIV/AIDS, the medications used after organ transplantation, diabetes, and the use of corticosteroids. Other risks include dentures, following antibiotic therapy, and breastfeeding. Vaginal infections occur more commonly during pregnancy, in those with weak immune systems, and following antibiotic use. Individuals at risk for invasive candidiasis include low birth weight babies, people recovering from surgery, people admitted to intensive care units, and those with an otherwise compromised immune system.
Efforts to prevent infections of the mouth include the use of chlorhexidine mouth wash in those with poor immune function and washing out the mouth following the use of inhaled steroids. Little evidence supports probiotics for either prevention or treatment even among those with frequent vaginal infections. For infections of the mouth, treatment with topical clotrimazole or nystatin is usually effective. By mouth or intravenous fluconazole, itraconazole, or amphotericin B may be used if these do not work. A number of topical antifungal medications may be used for vaginal infections including clotrimazole. In those with widespread disease, an echinocandin such as caspofungin or micafungin is used. A number of weeks of intravenous amphotericin B may be used as an alternative. In certain groups at very high risk, antifungal medications may be used preventatively.
Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease. About three-quarters of women have at least one yeast infection at some time during their lives. Widespread disease is rare except in those who have risk factors.
Signs and symptoms of candidiasis vary depending on the area affected. Most candidal infections result in minimal complications such as redness, itching, and discomfort, though complications may be severe or even fatal if left untreated in certain populations. In healthy (immunocompetent) persons, candidiasis is usually a localized infection of the skin, fingernails or toenails (onychomycosis), or mucosal membranes, including the oral cavity and pharynx (thrush), esophagus, and the genitalia (vagina, penis, etc.); less commonly in healthy individuals, the gastrointestinal tract, urinary tract, and respiratory tract are sites of candida infection.
In immunocompromised individuals, "Candida" infections in the esophagus occur more frequently than in healthy individuals and have a higher potential of becoming systemic, causing a much more serious condition, a fungemia called candidemia. Symptoms of esophageal candidiasis include difficulty swallowing, painful swallowing, abdominal pain, nausea, and vomiting.
Infection in the mouth is characterized by white discolorations in the tongue, around the mouth, and throat. Irritation may also occur, causing discomfort when swallowing.
Thrush is commonly seen in infants. It is not considered abnormal in infants unless it lasts longer than a few weeks.
Infection of the vagina or vulva may cause severe itching, burning, soreness, irritation, and a whitish or whitish-gray cottage cheese-like discharge. Symptoms of infection of the male genitalia (balanitis thrush) include red skin around the head of the penis, swelling, irritation, itchiness and soreness of the head of the penis, thick, lumpy discharge under the foreskin, unpleasant odour, difficulty retracting the foreskin (phimosis), and pain when passing urine or during sex.
Signs and symptoms of candidiasis in the skin include itching, irritation, and chafing or broken skin.
Common symptoms of gastrointestinal candidiasis in healthy individuals are anal itching, belching, bloating, indigestion, nausea, diarrhea, gas, intestinal cramps, vomiting, and gastric ulcers. Perianal candidiasis can cause anal itching; the lesion can be red, papular, or ulcerative in appearance, and it is not considered to be a sexually transmissible disease. Abnormal proliferation of the candida in the gut may lead to dysbiosis. While it is not yet clear, this alteration may be the source of symptoms generally described as the irritable bowel syndrome, and other gastrointestinal diseases.
"Candida" yeasts are generally present in healthy humans, frequently part of the human body's normal oral and intestinal flora, and particularly on the skin; however, their growth is normally limited by the human immune system and by competition of other microorganisms, such as bacteria occupying the same locations in the human body.
"Candida" requires moisture for growth, notably on the skin. For example, wearing wet swimwear for long periods of time is believed to be a risk factor. Additionally, candida can also cause diaper rashes in babies. In extreme cases, superficial infections of the skin or mucous membranes may enter into the bloodstream and cause systemic "Candida" infections.
Factors that increase the risk of candidiasis include HIV/AIDS, mononucleosis, cancer treatments, steroids, stress, antibiotic usage, diabetes, and nutrient deficiency. Hormone replacement therapy and infertility treatments may also be predisposing factors. Use of inhaled corticosteroids increases risk of candidiasis of the mouth. Inhaled corticosteroids combined with other risk factors, such as antibiotics, oral glucocorticoids, high doses of inhaled corticosteroids, or not rinsing the mouth after use, put people at even higher risk. Treatment with antibiotics can eliminate the yeast's natural competitors for resources in the oral and intestinal flora, thereby increasing the severity of the condition. A weakened or undeveloped immune system or metabolic illnesses are significant predisposing factors of candidiasis. Almost 15% of people with weakened immune systems develop a systemic illness caused by "Candida" species. Diets high in simple carbohydrates have been found to affect rates of oral candidiasis.
"C. albicans" was isolated from the vaginas of 19% of apparently healthy women, i.e., those who experienced few or no symptoms of infection. External use of detergents or douches or internal disturbances (hormonal or physiological) can perturb the normal vaginal flora, consisting of lactic acid bacteria, such as lactobacilli, and result in an overgrowth of "Candida" cells, causing symptoms of infection, such as local inflammation. Pregnancy and the use of oral contraceptives have been reported as risk factors. Diabetes mellitus and the use of antibiotics are also linked to increased rates of yeast infections.
In penile candidiasis, the causes include sexual intercourse with an infected individual, low immunity, antibiotics, and diabetes. Male genital yeast infections are less common, but a yeast infection on the penis caused from direct contact via sexual intercourse with an infected partner is not uncommon.
Breast-feeding mothers may also develop candidiasis on and around the nipple as a result of moisture created by excessive milk-production.
Vaginal candidiasis can cause congenital candidiasis in newborns.
In oral candidiasis, simply inspecting the person's mouth for white patches and irritation may make the diagnosis. They may also take a sample of the infected area to determine what organism is causing the infection.
Symptoms of vaginal candidiasis are also present in the more common bacterial vaginosis; aerobic vaginitis is distinct and should be excluded in the differential diagnosis. In a 2002 study, only 33% of women who were self-treating for a yeast infection actually had such an infection, while most had either bacterial vaginosis or a mixed-type infection.
Diagnosis of a yeast infection is done either via microscopic examination or culturing. For identification by light microscopy, a scraping or swab of the affected area is placed on a microscope slide. A single drop of 10% potassium hydroxide (KOH) solution is then added to the specimen. The KOH dissolves the skin cells, but leaves the "Candida" cells intact, permitting visualization of pseudohyphae and budding yeast cells typical of many "Candida" species.
For the culturing method, a sterile swab is rubbed on the infected skin surface. The swab is then streaked on a culture medium. The culture is incubated at 37 °C (98.6 °F) for several days, to allow development of yeast or bacterial colonies. The characteristics (such as morphology and colour) of the colonies may allow initial diagnosis of the organism causing disease symptoms.
Respiratory, gastrointestinal, and esophageal candidiasis require an endoscopy to diagnose. For gastrointestinal candidiasis, it is necessary to obtain a 3–5 milliliter sample of fluid from the duodenum for fungal culture. The diagnosis of gastrointestinal candidiasis is based upon the culture containing in excess of 1,000 colony-forming units per milliliter.
Candidiasis may be divided into these types:
A diet that supports the immune system and is not high in simple carbohydrates contributes to a healthy balance of the oral and intestinal flora. While yeast infections are associated with diabetes, the level of blood sugar control may not affect the risk. Wearing cotton underwear may help to reduce the risk of developing skin and vaginal yeast infections, along with not wearing wet clothes for long periods of time. For women who experience recurrent yeast infections, there is limited evidence that oral or intravaginal probiotics, taken either as pills or as yogurt, help to prevent future infections.
Oral hygiene can help prevent oral candidiasis when people have a weakened immune system. For people undergoing cancer treatment, chlorhexidine mouthwash can prevent or reduce thrush. People who use inhaled corticosteroids can reduce the risk of developing oral candidiasis by rinsing the mouth with water or mouthwash after using the inhaler. People with dentures should also disinfect their dentures regularly to prevent oral candidiasis.
Candidiasis is treated with antifungal medications; these include clotrimazole, nystatin, fluconazole, voriconazole, amphotericin B, and echinocandins. Intravenous fluconazole or an intravenous echinocandin such as caspofungin are commonly used to treat immunocompromised or critically ill individuals.
The 2016 revision of the clinical practice guideline for the management of candidiasis lists a large number of specific treatment regimens for "Candida" infections that involve different "Candida" species, forms of antifungal drug resistance, immune statuses, and infection localization and severity. Gastrointestinal candidiasis in immunocompetent individuals is treated with 100–200 mg fluconazole per day for 2–3 weeks.
Mouth and throat candidiasis are treated with antifungal medication. Oral candidiasis usually responds to topical treatments; otherwise, systemic antifungal medication may be needed for oral infections. Candidal skin infections in the skin folds (candidal intertrigo) typically respond well to topical antifungal treatments (e.g., nystatin or miconazole). For breastfeeding mothers topical miconazole is the most effective treatment for treating candidiasis on the breasts. Gentian violet can be used for thrush in breastfeeding babies. Systemic treatment with antifungals by mouth is reserved for severe cases or if treatment with topical therapy is unsuccessful. Candida esophagitis may be treated orally or intravenously; for severe or azole-resistant esophageal candidiasis, treatment with amphotericin B may be necessary.
Vaginal yeast infections are typically treated with topical antifungal agents. A one-time dose of fluconazole by mouth is 90% effective in treating a vaginal yeast infection. For severe nonrecurring cases, several doses of fluconazole is recommended. Local treatment may include vaginal suppositories or medicated douches. Other types of yeast infections require different dosing. "C. albicans" can develop resistance to fluconazole, this being more of an issue in those with HIV/AIDS who are often treated with multiple courses of fluconazole for recurrent oral infections.
For vaginal yeast infection in pregnancy, topical imidazole or triazole antifungals are considered the therapy of choice owing to available safety data. Systemic absorption of these topical formulations is minimal, posing little risk of transplacental transfer. In vaginal yeast infection in pregnancy, treatment with topical azole antifungals is recommended for 7 days instead of a shorter duration.
For vaginal yeast infections, many complementary treatments are proposed; however, a number have side effects. No benefit from probiotics has been found for active infections.
Treatment typically consists of oral or intravenous antifungal medications. In candidal infections of the blood, intravenous fluconazole or an echinocandin such as caspofungin may be used. Amphotericin B is another option.
Among individuals being treated in intensive care units, the mortality rate is about 30–50% when systemic candidiasis develops.
Oral candidiasis is the most common fungal infection of the mouth, and it also represents the most common opportunistic oral infection in humans. Infections of the mouth occur in about 6% of babies less than a month old. About 20% of those receiving chemotherapy for cancer and 20% of those with AIDS also develop the disease.
It is estimated that 20% of women may be asymptomatically colonized by vaginal yeast. In the United States there are approximately 1.4 million doctor office visits every year for candidiasis. About three-quarters of women have at least one yeast infection at some time during their lives.
Esophageal candidiasis is the most common esophageal infection in persons with AIDS and accounts for about 50% of all esophageal infections, often coexisting with other esophageal diseases. About two-thirds of people with AIDS and esophageal candidiasis also have oral candidiasis.
Candidal sepsis is rare. Candida is the fourth most common cause of bloodstream infections among hospital patients in the United States.
Descriptions of what sounds like oral thrush go back to the time of Hippocrates "circa" 460–370 BCE.
Vulvovaginal candidiasis was first described in 1849 by Wilkinson. In 1875, Haussmann demonstrated the causative organism in both vulvovaginal and oral candidiasis is the same.
With the advent of antibiotics following World War II, the rates of candidiasis increased. The rates then decreased in the 1950s following the development of nystatin.
The colloquial term "thrush" refers to the resemblance of the white flecks present in some forms of candidiasis ("e.g." pseudomembranous candidiasis) with the breast of the bird of the same name. The term candidosis is largely used in British English, and candidiasis in American English. "Candida" is also pronounced differently; in American English, the stress is on the "i", whereas in British English the stress is on the first syllable.
The genus "Candida" and species "C. albicans" were described by botanist Christine Marie Berkhout in her doctoral thesis at the University of Utrecht in 1923. Over the years, the classification of the genera and species has evolved. Obsolete names for this genus include "Mycotorula" and "Torulopsis". The species has also been known in the past as "Monilia albicans" and "Oidium albicans". The current classification is "nomen conservandum", which means the name is authorized for use by the International Botanical Congress (IBC).
The genus "Candida" includes about 150 different species; however, only a few are known to cause human infections. "C. albicans" is the most significant pathogenic species. Other species pathogenic in humans include "C. auris", "C. tropicalis", "C. glabrata", "C. krusei", "C. parapsilosis", "C. dubliniensis", and "C. lusitaniae".
The name "Candida" was proposed by Berkhout. It is from the Latin word "toga candida", referring to the white toga (robe) worn by candidates for the Senate of the ancient Roman republic. The specific epithet "albicans" also comes from Latin, "albicare" meaning "to whiten". These names refer to the generally white appearance of "Candida" species when cultured.
A 2005 publication noted that "a large pseudoscientific cult" has developed around the topic of "Candida", with claims stating that up to one in three people are affected by yeast-related illness, particularly a condition called "Candidiasis hypersensitivity". Some practitioners of alternative medicine have promoted these purported conditions and sold dietary supplements as supposed cures; a number of them have been prosecuted. In 1990, alternative health vendor Nature's Way signed an FTC consent agreement not to misrepresent in advertising any self-diagnostic test concerning yeast conditions or to make any unsubstantiated representation concerning any food or supplement's ability to control yeast conditions, with a fine of $30,000 payable to the National Institutes of Health for research in genuine candidiasis.
High level "Candida" colonization is linked to several diseases of the gastrointestinal tract including Crohn's disease.
There has been an increase in resistance to antifungals worldwide over the past 30–40 years. | https://en.wikipedia.org/wiki?curid=7038 |
Control theory
Control theory deals with the control of continuously operating dynamical systems in engineered processes and machines. The objective is to develop a control model for controlling such systems using a control action in an optimum manner without "delay or overshoot" and ensuring control stability. Control theory may be considered a branch of control engineering, computer engineering, mathematics, cybernetics and operations research, since it relies on the theoretical and practical application of the related disciplines.
To do this, a "controller" with the requisite corrective behavior is required. This controller monitors the controlled process variable (PV), and compares it with the reference or set point (SP). The difference between actual and desired value of the process variable, called the "error" signal, or SP-PV error, is applied as feedback to generate a control action to bring the controlled process variable to the same value as the set point. Other aspects which are also studied are controllability and observability. This is the basis for the advanced type of automation that revolutionized manufacturing, aircraft, communications and other industries. This is "feedback control", which is usually "continuous" and involves taking measurements using a sensor and making calculated adjustments to keep the measured variable within a set range by means of a "final control element", such as a control valve.
Extensive use is usually made of a diagrammatic style known as the block diagram. In it the transfer function, also known as the system function or network function, is a mathematical model of the relation between the input and output based on the differential equations describing the system.
Control theory dates from the 19th century, when the theoretical basis for the operation of governors was first described by James Clerk Maxwell. Control theory was further advanced by Edward Routh in 1874, Charles Sturm and, in 1895, Adolf Hurwitz, all of whom contributed to the establishment of control stability criteria, and, from 1922 onwards, by Nicolas Minorsky's development of PID control theory.
Although a major application of control theory is in control systems engineering, which deals with the design of process control systems for industry, other applications range far beyond this. As the general theory of feedback systems, control theory is useful wherever feedback occurs.
Although control systems of various types date back to antiquity, a more formal analysis of the field began with a dynamics analysis of the centrifugal governor, conducted by the physicist James Clerk Maxwell in 1868, entitled "On Governors". A centrifugal governor was already used to regulate the velocity of windmills. Maxwell described and analyzed the phenomenon of self-oscillation, in which lags in the system may lead to overcompensation and unstable behavior. This generated a flurry of interest in the topic, during which Maxwell's classmate, Edward John Routh, abstracted Maxwell's results for the general class of linear systems. Independently, Adolf Hurwitz analyzed system stability using differential equations in 1895, resulting in what is now known as the Routh–Hurwitz theorem.
A notable application of dynamic control was in the area of manned flight. The Wright brothers made their first successful test flights on December 17, 1903 and were distinguished by their ability to control their flights for substantial periods (more so than the ability to produce lift from an airfoil, which was known). Continuous, reliable control of the airplane was necessary for flights lasting longer than a few seconds.
By World War II, control theory was becoming an important area of research. Irmgard Flügge-Lotz developed the theory of discontinuous automatic control systems, and applied the bang-bang principle to the development of automatic flight control equipment for aircraft. Other areas of application for discontinuous controls included fire-control systems, guidance systems and electronics.
Sometimes, mechanical methods are used to improve the stability of systems. For example, ship stabilizers are fins mounted beneath the waterline and emerging laterally. In contemporary vessels, they may be gyroscopically controlled active fins, which have the capacity to change their angle of attack to counteract roll caused by wind or waves acting on the ship.
The Space Race also depended on accurate spacecraft control, and control theory has also seen an increasing use in fields such as economics and artificial intelligence. Here, one might say that the goal is to find an internal model that obeys the good regulator theorem. So, for example, in economics, the more accurately a (stock or commodities) trading model represents the actions of the market, the more easily it can control that market (and extract "useful work" (profits) from it). In AI, an example might be a chatbot modelling the discourse state of humans: the more accurately it can model the human state (e.g. on a telephone voice-support hotline), the better it can manipulate the human (e.g. into performing the corrective actions to resolve the problem that caused the phone call to the help-line). These last two examples take the narrow historical interpretation of control theory as a set of differential equations modeling and regulating kinetic motion, and broaden it into a vast generalization of a regulator interacting with a plant.
Fundamentally, there are two types of control loops: open loop control and closed loop (feedback) control.
In open loop control, the control action from the controller is independent of the "process output" (or "controlled process variable" - PV). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the timed switching on/off of the boiler; the process variable is the building temperature, but the two are not linked.
In closed loop control, the control action from the controller is dependent on feedback from the process in the form of the value of the process variable (PV). In the case of the boiler analogy, a closed loop would include a thermostat to compare the building temperature (PV) with the temperature set on the thermostat (the set point - SP). This generates a controller output to maintain the building at the desired temperature by switching the boiler on and off. A closed loop controller, therefore, has a feedback loop which ensures the controller exerts a control action to manipulate the process variable to be the same as the "Reference input" or "set point". For this reason, closed loop controllers are also called feedback controllers.
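As a concrete illustration of the two loop types, the following Python sketch simulates the boiler example with a toy building model; all numbers (heating rate, loss rate, set point) are illustrative assumptions rather than values from any real system.

```python
# Toy simulation of the boiler example: open-loop timer vs closed-loop thermostat.
# The thermal model and all constants are illustrative assumptions.

def simulate(minutes, controller):
    temp = 10.0                                # initial building temperature (degC)
    for t in range(minutes):
        boiler_on = controller(t, temp)        # the control action
        heat_in = 0.5 if boiler_on else 0.0    # degC gained per minute when on
        heat_loss = 0.05 * (temp - 10.0)       # losses grow with indoor temperature
        temp += heat_in - heat_loss
    return temp

# Open loop: the timer runs the boiler for the first 60 minutes, ignoring temperature.
open_loop = lambda t, temp: t < 60

# Closed loop: a thermostat compares the PV (temperature) with the SP (20 degC).
SET_POINT = 20.0
closed_loop = lambda t, temp: temp < SET_POINT

print("open-loop final temperature:  ", round(simulate(180, open_loop), 1))
print("closed-loop final temperature:", round(simulate(180, closed_loop), 1))
```

With the timer alone the temperature drifts back toward the ambient value once the heating interval ends, whereas the thermostat keeps switching the boiler so the temperature stays near the set point.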
The definition of a closed loop control system according to the British Standards Institution is "a control system possessing monitoring feedback, the deviation signal formed as a result of this feedback being used to control the action of a final control element in such a way as to tend to reduce the deviation to zero."
Likewise; "A "Feedback Control System" is a system which tends to maintain a prescribed relationship of one system variable to another by comparing functions of these variables and using the difference as a means of control."
An example of a control system is a car's cruise control, which is a device designed to maintain vehicle speed at a constant "desired" or "reference" speed provided by the driver. The "controller" is the cruise control, the "plant" is the car, and the "system" is the car and the cruise control. The system output is the car's speed, and the control itself is the engine's throttle position which determines how much power the engine delivers.
A primitive way to implement cruise control is simply to lock the throttle position when the driver engages cruise control. However, if the cruise control is engaged on a stretch of non-flat road, then the car will travel slower going uphill and faster when going downhill. This type of controller is called an "open-loop controller" because there is no feedback; no measurement of the system output (the car's speed) is used to alter the control (the throttle position). As a result, the controller cannot compensate for changes acting on the car, like a change in the slope of the road.
In a "closed-loop control system", data from a sensor monitoring the car's speed (the system output) enters a controller which continuously compares the quantity representing the speed with the reference quantity representing the desired speed. The difference, called the error, determines the throttle position (the control). The result is to match the car's speed to the reference speed (maintain the desired system output). Now, when the car goes uphill, the difference between the input (the sensed speed) and the reference continuously determines the throttle position. As the sensed speed drops below the reference, the difference increases, the throttle opens, and engine power increases, speeding up the vehicle. In this way, the controller dynamically counteracts changes to the car's speed. The central idea of these control systems is the "feedback loop", the controller affects the system output, which in turn is measured and fed back to the controller.
To overcome the limitations of the open-loop controller, control theory introduces feedback.
A closed-loop controller uses feedback to control states or outputs of a dynamical system. Its name comes from the information path in the system: process inputs (e.g., voltage applied to an electric motor) have an effect on the process outputs (e.g., speed or torque of the motor), which is measured with sensors and processed by the controller; the result (the control signal) is "fed back" as input to the process, closing the loop.
Closed-loop controllers have the following advantages over open-loop controllers:
In some systems, closed-loop and open-loop control are used simultaneously. In such systems, the open-loop control is termed feedforward and serves to further improve reference tracking performance.
A common closed-loop controller architecture is the PID controller.
The output of the system "y(t)" is fed back through a sensor measurement "F" to a comparison with the reference value "r(t)". The controller "C" then takes the error "e" (difference) between the reference and the output to change the inputs "u" to the system under control "P". This is shown in the figure. This kind of controller is a closed-loop controller or feedback controller.
This is called a single-input-single-output ("SISO") control system; "MIMO" (i.e., Multi-Input-Multi-Output) systems, with more than one input/output, are common. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional (typically functions).
If we assume the controller "C", the plant "P", and the sensor "F" are linear and time-invariant (i.e., elements of their transfer function "C(s)", "P(s)", and "F(s)" do not depend on time), the systems above can be analysed using the Laplace transform on the variables. This gives the following relations:
Solving for "Y"("s") in terms of "R"("s") gives
The expression formula_5 is referred to as the "closed-loop transfer function" of the system. The numerator is the forward (open-loop) gain from "r" to "y", and the denominator is one plus the gain in going around the feedback loop, the so-called loop gain. If formula_6, i.e., it has a large norm for each value of "s", and if formula_7, then "Y(s)" is approximately equal to "R(s)" and the output closely tracks the reference input.
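Since the formulas above are given only as placeholders, here is a short SymPy sketch that re-derives the closed-loop transfer function under the standard negative-feedback assumptions, treating C, P and F as generic blocks with loop equations e = r - F*y, u = C*e, y = P*u:

```python
# Symbolic derivation of the closed-loop transfer function Y(s)/R(s) for the
# standard negative-feedback loop: y = P*C*(r - F*y).
import sympy as sp

C, P, F, R, Y = sp.symbols('C P F R Y')   # generic transfer-function blocks

solution = sp.solve(sp.Eq(Y, P * C * (R - F * Y)), Y)[0]
closed_loop = sp.simplify(solution / R)
print(closed_loop)          # -> C*P/(C*F*P + 1), i.e. forward gain over (1 + loop gain)
```

The result, CP/(1 + CPF), is exactly the forward gain divided by one plus the loop gain described above.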
A proportional–integral–derivative controller (PID controller) is a control-loop feedback technique widely used in control systems.
A PID controller continuously calculates an "error value" formula_8 as the difference between a desired setpoint and a measured process variable and applies a correction based on proportional, integral, and derivative terms. "PID" is an initialism for "Proportional-Integral-Derivative", referring to the three terms operating on the error signal to produce a control signal.
The theoretical understanding and application of PID control date from the 1920s, and PID controllers are implemented in nearly all analogue control systems; originally in mechanical controllers, then using discrete electronics, and later in industrial process computers.
The PID controller is probably the most-used feedback control design.
If "u(t)" is the control signal sent to the system, "y(t)" is the measured output and "r(t)" is the desired output, and formula_9 is the tracking error, a PID controller has the general form
The desired closed loop dynamics is obtained by adjusting the three parameters formula_11, formula_12 and formula_13, often iteratively by "tuning" and without specific knowledge of a plant model. Stability can often be ensured using only the proportional term. The integral term permits the rejection of a step disturbance (often a striking specification in process control). The derivative term is used to provide damping or shaping of the response. PID controllers are the most well-established class of control systems: however, they cannot be used in several more complicated cases, especially if MIMO systems are considered.
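A minimal discrete-time sketch of such a controller, with illustrative gains and a toy first-order process (a real implementation would typically also clamp the output, add anti-windup, and filter the derivative, as noted further below):

```python
# Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt.
# Gains, sample time and the test process are illustrative assumptions only.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a first-order process y' = (u - y)/tau toward a set point of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y, tau, dt = 0.0, 0.5, 0.01
for _ in range(2000):                      # 20 s of simulated time
    u = pid.update(1.0, y)
    y += (u - y) / tau * dt
print(f"output after 20 s: {y:.3f}")       # converges to the set point
```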
Applying Laplace transformation results in the transformed PID controller equation
with the PID controller transfer function
As an example of tuning a PID controller in the closed-loop system formula_17, consider a 1st order plant given by
where formula_19 and formula_20 are some constants. The plant output is fed back through
where formula_22 is also a constant. Now if we set formula_23, formula_24, and formula_25, we can express the PID controller transfer function in series form as
Plugging formula_27, formula_28, and formula_29 into the closed-loop transfer function formula_17, we find that by setting
formula_32. With this tuning in this example, the system output follows the reference input exactly.
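The formula placeholders above are assumed here to correspond to the usual first-order example: plant P(s) = A/(1 + sT_P), sensor F(s) = 1/(1 + sT_F), series-form PID C(s) = K(1 + 1/(sT_I))(1 + sT_D), and the tuning K = 1/A, T_I = T_F, T_D = T_P. Under those assumptions, the claim that the output follows the reference exactly can be checked symbolically:

```python
# Symbolic check of the first-order-plant tuning example (assumed forms:
# P = A/(1+s*Tp), F = 1/(1+s*Tf), series PID C = K*(1 + 1/(s*Ti))*(1 + s*Td)).
import sympy as sp

s, A, Tp, Tf = sp.symbols('s A T_P T_F', positive=True)

P = A / (1 + s * Tp)                      # first-order plant
F = 1 / (1 + s * Tf)                      # sensor dynamics
K, Ti, Td = 1 / A, Tf, Tp                 # the tuning quoted in the text
C = K * (1 + 1 / (s * Ti)) * (1 + s * Td) # PID in series form

H = sp.simplify(C * P / (1 + C * P * F))  # closed-loop transfer function
print(H)                                  # -> 1: the output tracks the reference exactly
```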
However, in practice, a pure differentiator is neither physically realizable nor desirable due to amplification of noise and resonant modes in the system. Therefore, a phase-lead compensator type approach or a differentiator with low-pass roll-off are used instead.
The field of control theory can be divided into two branches:
Mathematical techniques for analyzing and designing control systems fall into two different categories:
In contrast to the frequency domain analysis of the classical control theory, modern control theory utilizes the time-domain state space representation, a mathematical model of a physical system as a set of input, output and state variables related by first-order differential equations. To abstract from the number of inputs, outputs, and states, the variables are expressed as vectors and the differential and algebraic equations are written in matrix form (the latter only being possible when the dynamical system is linear). The state space representation (also known as the "time-domain approach") provides a convenient and compact way to model and analyze systems with multiple inputs and outputs. With inputs and outputs, we would otherwise have to write down Laplace transforms to encode all the information about a system. Unlike the frequency domain approach, the use of the state-space representation is not limited to systems with linear components and zero initial conditions. "State space" refers to the space whose axes are the state variables. The state of the system can be represented as a point within that space.
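As a small illustration of the state-space form, the sketch below writes an assumed mass-spring-damper as x' = Ax + Bu, y = Cx and simulates its step response with SciPy; the parameter values are arbitrary.

```python
# State-space sketch: a mass-spring-damper written as x' = A x + B u, y = C x,
# then simulated with SciPy. Parameter values are illustrative assumptions.
import numpy as np
from scipy import signal

m, c, k = 1.0, 0.5, 2.0                   # mass, damping, stiffness (assumed)
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])          # states: position and velocity
B = np.array([[0.0], [1.0 / m]])          # input: force on the mass
C = np.array([[1.0, 0.0]])                # output: position
D = np.array([[0.0]])

system = signal.StateSpace(A, B, C, D)
t = np.linspace(0, 20, 500)
u = np.ones_like(t)                       # unit step force
t_out, y, x = signal.lsim(system, U=u, T=t)
print(f"position after 20 s: {y[-1]:.3f}  (steady state 1/k = {1/k:.3f})")
```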
Control systems can be divided into different categories depending on the number of inputs and outputs.
The "stability" of a general dynamical system with no input can be described with Lyapunov stability criteria.
For simplicity, the following descriptions focus on continuous-time and discrete-time linear systems.
Mathematically, this means that for a causal linear system to be stable all of the poles of its transfer function must have negative-real values, i.e. the real part of each pole must be less than zero. Practically speaking, stability requires that the transfer function's complex poles reside in the open left half of the complex plane (for continuous time) or inside the unit circle (for discrete time).
The difference between the two cases is simply due to the traditional method of plotting continuous time versus discrete time transfer functions. The continuous Laplace transform is in Cartesian coordinates where the formula_33 axis is the real axis and the discrete Z-transform is in circular coordinates where the formula_34 axis is the real axis.
When the appropriate conditions above are satisfied a system is said to be asymptotically stable; the variables of an asymptotically stable control system always decrease from their initial value and do not show permanent oscillations. Permanent oscillations occur when a pole has a real part exactly equal to zero (in the continuous time case) or a modulus equal to one (in the discrete time case). If a simply stable system response neither decays nor grows over time, and has no oscillations, it is marginally stable; in this case the system transfer function has non-repeated poles at the complex plane origin (i.e. their real and imaginary components are zero in the continuous time case). Oscillations are present when poles with real part equal to zero have an imaginary part not equal to zero.
If a system in question has an impulse response of
then the Z-transform (see this example), is given by
which has a pole in formula_37 (zero imaginary part). This system is BIBO (asymptotically) stable since the pole is "inside" the unit circle.
However, if the impulse response was
then the Z-transform is
which has a pole at formula_40 and is not BIBO stable since the pole has a modulus strictly greater than one.
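The placeholder poles above are not reproduced here, but the principle is easy to check numerically for assumed pole values: a discrete-time pole of modulus less than one gives a decaying, summable impulse response, while a pole of modulus greater than one grows without bound.

```python
# Discrete-time stability check: a single real pole a gives impulse response a**n.
# |a| < 1  -> decays (BIBO stable);   |a| > 1 -> grows without bound.
def impulse_response(pole, n_samples=50):
    return [pole ** n for n in range(n_samples)]

for pole in (0.5, 1.5):                      # assumed example poles
    h = impulse_response(pole)
    print(f"pole {pole}: |h[49]| = {abs(h[-1]):.3e}, "
          f"sum |h| = {sum(abs(v) for v in h):.3e}")
```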
Numerous tools exist for the analysis of the poles of a system. These include graphical systems like the root locus, Bode plots or the Nyquist plots.
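For example, the frequency-response data underlying a Bode plot can be computed with SciPy for an assumed transfer function (here G(s) = 1/(s^2 + s + 1), chosen only for illustration):

```python
# Frequency response of an assumed transfer function G(s) = 1/(s^2 + s + 1);
# the same data is what gain- and phase-margin reasoning is based on.
import numpy as np
from scipy import signal

G = signal.TransferFunction([1.0], [1.0, 1.0, 1.0])
w, mag_db, phase_deg = signal.bode(G, w=np.logspace(-2, 2, 200))

print(f"|G| at w=0.01 rad/s: {mag_db[0]:.1f} dB")
print(f"|G| at w=100 rad/s:  {mag_db[-1]:.1f} dB")
print(f"phase near w=1 rad/s: {phase_deg[np.argmin(np.abs(w - 1.0))]:.0f} deg")
```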
Mechanical changes can make equipment (and control systems) more stable. Sailors add ballast to improve the stability of ships. Cruise ships use antiroll fins that extend transversely from the side of the ship for perhaps 30 feet (10 m) and are continuously rotated about their axes to develop forces that oppose the roll.
Controllability and observability are main issues in the analysis of a system before deciding the best control strategy to be applied, or whether it is even possible to control or stabilize the system. Controllability is related to the possibility of forcing the system into a particular state by using an appropriate control signal. If a state is not controllable, then no signal will ever be able to control the state. If a state is not controllable, but its dynamics are stable, then the state is termed "stabilizable". Observability instead is related to the possibility of "observing", through output measurements, the state of a system. If a state is not observable, the controller will never be able to determine the behavior of an unobservable state and hence cannot use it to stabilize the system. However, similar to the stabilizability condition above, if a state cannot be observed it might still be detectable.
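For linear time-invariant systems these properties can be checked by rank tests on the controllability and observability matrices; the sketch below does so for an assumed two-state example.

```python
# Controllability and observability checks for x' = A x + B u, y = C x:
# rank [B, AB, ..., A^(n-1)B] = n  and  rank [C; CA; ...; CA^(n-1)] = n.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # assumed example system
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

print("controllable:", np.linalg.matrix_rank(ctrb) == n)
print("observable:  ", np.linalg.matrix_rank(obsv) == n)
```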
From a geometrical point of view, looking at the states of each variable of the system to be controlled, every "bad" state of these variables must be controllable and observable to ensure a good behavior in the closed-loop system. That is, if one of the eigenvalues of the system is not both controllable and observable, this part of the dynamics will remain untouched in the closed-loop system. If such an eigenvalue is not stable, the dynamics of this eigenvalue will be present in the closed-loop system which therefore will be unstable. Unobservable poles are not present in the transfer function realization of a state-space representation, which is why sometimes the latter is preferred in dynamical systems analysis.
Solutions to problems of an uncontrollable or unobservable system include adding actuators and sensors.
Several different control strategies have been devised over the years. These vary from extremely general ones (such as the PID controller) to others devoted to very particular classes of systems (especially robotics or aircraft cruise control).
A control problem can have several specifications. Stability, of course, is always present. The controller must ensure that the closed-loop system is stable, regardless of the open-loop stability. A poor choice of controller can even worsen the stability of the open-loop system; this must normally be avoided. Sometimes it is desirable to obtain particular dynamics in the closed loop: i.e. that the poles have formula_41, where formula_42 is a fixed value strictly greater than zero, instead of simply asking that formula_43.
Another typical specification is the rejection of a step disturbance; including an integrator in the open-loop chain (i.e. directly before the system under control) easily achieves this. Other classes of disturbances need different types of sub-systems to be included.
Other "classical" control theory specifications regard the time-response of the closed-loop system. These include the rise time (the time needed by the control system to reach the desired value after a perturbation), peak overshoot (the highest value reached by the response before reaching the desired value) and others (settling time, quarter-decay). Frequency domain specifications are usually related to robustness (see after).
Modern performance assessments use some variation of integrated tracking error (IAE, ISA, CQI).
A control system must always have some robustness property. A robust controller is such that its properties do not change much if applied to a system slightly different from the mathematical one used for its synthesis. This requirement is important, as no real physical system truly behaves like the series of differential equations used to represent it mathematically. Typically a simpler mathematical model is chosen in order to simplify calculations, otherwise, the true system dynamics can be so complicated that a complete model is impossible.
The process of determining the equations that govern the model's dynamics is called system identification. This can be done off-line: for example, executing a series of measures from which to calculate an approximated mathematical model, typically its transfer function or matrix. Such identification from the output, however, cannot take account of unobservable dynamics. Sometimes the model is built directly starting from known physical equations, for example, in the case of a system we know that formula_44. Even assuming that a "complete" model is used in designing the controller, all the parameters included in these equations (called "nominal parameters") are never known with absolute precision; the control system will have to behave correctly even when connected to a physical system with true parameter values away from nominal.
Some advanced control techniques include an "on-line" identification process (see later). The parameters of the model are calculated ("identified") while the controller itself is running. In this way, if a drastic variation of the parameters ensues, for example, if the robot's arm releases a weight, the controller will adjust itself consequently in order to ensure the correct performance.
Analysis of the robustness of a SISO (single input single output) control system can be performed in the frequency domain, considering the system's transfer function and using Nyquist and Bode diagrams. Topics include gain margin and phase margin. For MIMO (multi-input multi-output) and, in general, more complicated control systems, one must consider the theoretical results devised for each control technique (see next section). That is, if particular robustness qualities are needed, the engineer must turn to a control technique that includes them in its properties.
A particular robustness issue is the requirement for a control system to perform properly in the presence of input and state constraints. In the physical world every signal is limited. It could happen that a controller will send control signals that cannot be followed by the physical system, for example, trying to rotate a valve at excessive speed. This can produce undesired behavior of the closed-loop system, or even damage or break actuators or other subsystems. Specific control techniques are available to solve the problem: model predictive control (see later), and anti-windup systems. The latter consists of an additional control block that ensures that the control signal never exceeds a given threshold.
For MIMO systems, pole placement can be performed mathematically using a state space representation of the open-loop system and calculating a feedback matrix assigning poles in the desired positions. In complicated systems this can require computer-assisted calculation capabilities, and cannot always ensure robustness. Furthermore, all system states are not in general measured and so observers must be included and incorporated in pole placement design.
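A brief sketch of state-feedback pole placement using SciPy, for an assumed double-integrator plant (the desired pole locations are arbitrary illustrative choices):

```python
# State-feedback pole placement: choose K so that the eigenvalues of (A - B K)
# sit at desired locations. Uses SciPy's place_poles; plant values are assumed.
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [0.0, 0.0]])     # double integrator (assumed plant)
B = np.array([[0.0], [1.0]])
desired = np.array([-2.0, -3.0])           # desired closed-loop poles

result = signal.place_poles(A, B, desired)
K = result.gain_matrix
print("K =", K)
print("closed-loop poles:", np.linalg.eigvals(A - B @ K))
```

In practice the same idea is combined with an observer when not all states are measured, as noted above.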
Processes in industries like robotics and the aerospace industry typically have strong nonlinear dynamics. In control theory it is sometimes possible to linearize such classes of systems and apply linear techniques, but in many cases it can be necessary to devise from scratch theories permitting control of nonlinear systems. These techniques, e.g. feedback linearization, backstepping, sliding mode control, and trajectory linearization control, normally take advantage of results based on Lyapunov's theory. Differential geometry has been widely used as a tool for generalizing well-known linear control concepts to the non-linear case, as well as showing the subtleties that make it a more challenging problem. Control theory has also been used to decipher the neural mechanism that directs cognitive states.
When the system is controlled by multiple controllers, the problem is one of decentralized control. Decentralization is helpful in many ways, for instance, it helps control systems to operate over a larger geographical area. The agents in decentralized control systems can interact using communication channels and coordinate their actions.
A stochastic control problem is one in which the evolution of the state variables is subjected to random shocks from outside the system. A deterministic control problem is not subject to external random shocks.
Every control system must guarantee first the stability of the closed-loop behavior. For linear systems, this can be obtained by directly placing the poles. Non-linear control systems use specific theories (normally based on Aleksandr Lyapunov's Theory) to ensure stability without regard to the inner dynamics of the system. The possibility to fulfill different specifications varies from the model considered and the control strategy chosen.
Many active and historical figures made significant contributions to control theory, including | https://en.wikipedia.org/wiki?curid=7039 |
Chemical formula
The connectivity of a molecule often has a strong influence on its physical and chemical properties and behavior. Two molecules composed of the same numbers of the same types of atoms (i.e. a pair of isomers) might have completely different chemical and/or physical properties if the atoms are connected differently or in different positions. In such cases, a structural formula is useful, as it illustrates which atoms are bonded to which other ones. From the connectivity, it is often possible to deduce the approximate shape of the molecule.
A condensed chemical formula may represent the types and spatial arrangement of bonds in a simple chemical substance, though it does not necessarily specify isomers or complex structures. For example, ethane consists of two carbon atoms single-bonded to each other, with each carbon atom having three hydrogen atoms bonded to it. Its chemical formula can be rendered as CH3CH3. In ethylene there is a double bond between the carbon atoms (and thus each carbon only has two hydrogens); therefore the chemical formula may be written CH2CH2, and the fact that there is a double bond between the carbons is implicit because carbon has a valence of four. However, a more explicit method is to write H2C=CH2 or less commonly H2C::CH2. The two lines (or two pairs of dots) indicate that a double bond connects the atoms on either side of them.
A triple bond may be expressed with three lines (HC≡CH) or three pairs of dots (HC:::CH), and if there may be ambiguity, a single line or pair of dots may be used to indicate a single bond.
Molecules with multiple functional groups that are the same may be expressed by enclosing the repeated group in round brackets. For example, isobutane may be written (CH3)3CH. This condensed structural formula implies a different connectivity from other molecules that can be formed using the same atoms in the same proportions (isomers). The formula (CH3)3CH implies a central carbon atom connected to one hydrogen atom and three CH3 groups. The same number of atoms of each element (10 hydrogens and 4 carbons, or C4H10) may be used to make a straight chain molecule, "n"-butane: CH3CH2CH2CH3.
In any given chemical compound, the elements always combine in the same proportion with each other. This is the law of constant composition.
The law of constant composition says that, in any particular chemical compound, all samples of that compound will be made up of the same elements in the same proportion or ratio. For example, any water molecule is always made up of two hydrogen atoms and one oxygen atom in a 2:1 ratio. If we look at the relative masses of oxygen and hydrogen in a water molecule, we see that about 89% of the mass of a water molecule is accounted for by oxygen and the remaining 11% is the mass of hydrogen. This mass proportion will be the same for any water molecule.
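A quick way to check such mass proportions is to compute them from standard atomic masses; the short sketch below does this for water.

```python
# Mass fractions of hydrogen and oxygen in water (H2O), from standard atomic masses.
ATOMIC_MASS = {"H": 1.008, "O": 15.999}         # g/mol

composition = {"H": 2, "O": 1}                  # atoms per water molecule
molar_mass = sum(ATOMIC_MASS[el] * n for el, n in composition.items())

for element, count in composition.items():
    fraction = ATOMIC_MASS[element] * count / molar_mass
    print(f"{element}: {fraction:.1%}")         # H: ~11.2%, O: ~88.8%
```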
The alkene called but-2-ene has two isomers, which the chemical formula CH3CH=CHCH3 does not identify. The relative position of the two methyl groups must be indicated by additional notation denoting whether the methyl groups are on the same side of the double bond ("cis" or "Z") or on the opposite sides from each other ("trans" or "E").
As noted above, in order to represent the full structural formulae of many complex organic and inorganic compounds, chemical nomenclature may be needed which goes well beyond the available resources used above in simple condensed formulae. See IUPAC nomenclature of organic chemistry and IUPAC nomenclature of inorganic chemistry 2005 for examples. In addition, linear naming systems such as International Chemical Identifier (InChI) allow a computer to construct a structural formula, and simplified molecular-input line-entry system (SMILES) allows a more human-readable ASCII input. However, all these nomenclature systems go beyond the standards of chemical formulae, and technically are chemical naming systems, not formula systems.
For polymers in condensed chemical formulae, parentheses are placed around the repeating unit. For example, a hydrocarbon molecule that is described as CH3(CH2)50CH3, is a molecule with fifty repeating units. If the number of repeating units is unknown or variable, the letter "n" may be used to indicate this formula: CH3(CH2)"n"CH3.
For ions, the charge on a particular atom may be denoted with a right-hand superscript. For example, Na+, or Cu2+. The total charge on a charged molecule or a polyatomic ion may also be shown in this way. For example: H3O+ or SO42−. Note that + and - are used in place of +1 and -1, respectively.
For more complex ions, brackets [ ] are often used to enclose the ionic formula, as in [B12H12]2−, which is found in compounds such as Cs2[B12H12]. Parentheses ( ) can be nested inside brackets to indicate a repeating unit, as in [Co(NH3)6]3+Cl3−. Here, (NH3)6 indicates that the ion contains six NH3 groups bonded to cobalt, and [ ] encloses the entire formula of the ion with charge +3.
This is strictly optional; a chemical formula is valid with or without ionization information, and hexamminecobalt(III) chloride may be written as [Co(NH3)6]3+Cl3− or [Co(NH3)6]Cl3. Brackets, like parentheses, behave in chemistry as they do in mathematics, grouping terms together; they are not specifically employed only for ionization states. In the latter case here, the parentheses indicate 6 groups all of the same shape, bonded to another group of size 1 (the cobalt atom), and then the entire bundle, as a group, is bonded to 3 chlorine atoms. In the former case, it is clearer that the bond connecting the chlorines is ionic, rather than covalent.
Although isotopes are more relevant to nuclear chemistry or stable isotope chemistry than to conventional chemistry, different isotopes may be indicated with a prefixed superscript in a chemical formula. For example, the phosphate ion containing radioactive phosphorus-32 is [32PO4]3−. Also a study involving stable isotope ratios might include the molecule 18O16O.
A left-hand subscript is sometimes used redundantly to indicate the atomic number. For example, 8O2 for dioxygen; combined with the isotope superscript described above, the same notation gives the most abundant isotopic species of dioxygen. This is convenient when writing equations for nuclear reactions, in order to show the balance of charge more clearly.
The @ symbol (at sign) indicates an atom or molecule trapped inside a cage but not chemically bound to it. For example, a buckminsterfullerene (C60) with an atom (M) would simply be represented as MC60 regardless of whether M was inside the fullerene without chemical bonding or outside, bound to one of the carbon atoms. Using the @ symbol, this would be denoted M@C60 if M was inside the carbon network. A non-fullerene example is [As@Ni12As20]3−, an ion in which one As atom is trapped in a cage formed by the other 32 atoms.
This notation was proposed in 1991 with the discovery of fullerene cages (endohedral fullerenes), which can trap atoms such as La to form, for example, La@C60 or La@C82. The choice of the symbol has been explained by the authors as being concise, readily printed and transmitted electronically (the at sign is included in ASCII, which most modern character encoding schemes are based on), and the visual aspects suggesting the structure of an endohedral fullerene.
Chemical formulae most often use integers for each element. However, there is a class of compounds, called non-stoichiometric compounds, that cannot be represented by small integers. Such a formula might be written using decimal fractions, as in Fe0.95O, or it might include a variable part represented by a letter, as in Fe1–xO, where x is normally much less than 1.
A chemical formula used for a series of compounds that differ from each other by a constant unit is called a "general formula". It generates a homologous series of chemical formulae. For example, alcohols may be represented by the formula C"n"H(2n + 1)OH ("n" ≥ 1), giving the homologs methanol, ethanol, propanol for "n"=1–3.
The Hill system (or Hill notation) is a system of writing empirical chemical formulae, molecular chemical formulae and components of a condensed formula such that the number of carbon atoms in a molecule is indicated first, the number of hydrogen atoms next, and then the number of all other chemical elements subsequently, in alphabetical order of the chemical symbols. When the formula contains no carbon, all the elements, including hydrogen, are listed alphabetically.
By sorting formulae according to the number of atoms of each element present in the formula according to these rules, with differences in earlier elements or numbers being treated as more significant than differences in any later element or number—like sorting text strings into lexicographical order—it is possible to collate chemical formulae into what is known as Hill system order.
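A small sketch of this collation rule, representing each formula as an element-count mapping to sidestep formula parsing (the example formulae are arbitrary):

```python
# Sorting chemical formulae into Hill system order. Formulae are given here as
# element-count dictionaries to avoid writing a full formula parser.
def hill_key(composition):
    """Return the (element, count) sequence used for Hill ordering of one formula."""
    elements = dict(composition)
    key = []
    if "C" in elements:                       # carbon first, then hydrogen, then the rest
        key.append(("C", elements.pop("C")))
        if "H" in elements:
            key.append(("H", elements.pop("H")))
    key.extend(sorted(elements.items()))      # remaining elements alphabetically
    return key

formulas = [
    {"C": 2, "H": 6, "O": 1},    # C2H6O
    {"C": 1, "H": 4},            # CH4
    {"H": 2, "O": 1},            # H2O  (no carbon: purely alphabetical)
    {"B": 1, "H": 3},            # BH3
]
for f in sorted(formulas, key=hill_key):
    print("".join(f"{el}{n if n > 1 else ''}" for el, n in hill_key(f)))
```

Because earlier elements and their counts are compared before later ones, CH4 sorts before C2H6O, and carbon-free formulae such as H2O are collated purely alphabetically.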
The Hill system was first published by Edwin A. Hill of the United States Patent and Trademark Office in 1900. It is the most commonly used system in chemical databases and printed indexes to sort lists of compounds.
A list of formulae in Hill system order is arranged alphabetically, as above, with single-letter elements coming before two-letter symbols when the symbols begin with the same letter (so "B" comes before "Be", which comes before "Br").
The following example formulae are written using the Hill system, and listed in Hill order: | https://en.wikipedia.org/wiki?curid=7043 |
Beetle
Beetles are a group of insects that form the order Coleoptera, in the superorder Endopterygota. Their front pair of wings are hardened into wing-cases, elytra, distinguishing them from most other insects. The Coleoptera, with about 400,000 species, is the largest of all orders, constituting almost 40% of described insects and 25% of all known animal life-forms; new species are discovered frequently. The largest of all families, the Curculionidae (weevils), with some 83,000 member species,
belongs to this order. Found in almost every habitat except the sea and the polar regions, they interact with their ecosystems in several ways: beetles often feed on plants and fungi, break down animal and plant debris, and eat other invertebrates. Some species are serious agricultural pests, such as the Colorado potato beetle, while others such as Coccinellidae (ladybirds or ladybugs) eat aphids, scale insects, thrips, and other plant-sucking insects that damage crops.
Beetles typically have a particularly hard exoskeleton including the elytra, though some such as the rove beetles have very short elytra while blister beetles have softer elytra. The general anatomy of a beetle is quite uniform and typical of insects, although there are several examples of novelty, such as adaptations in water beetles which trap air bubbles under the elytra for use while diving. Beetles are endopterygotes, which means that they undergo complete metamorphosis, with a series of conspicuous and relatively abrupt changes in body structure between hatching and becoming adult after a relatively immobile pupal stage. Some, such as stag beetles, have a marked sexual dimorphism, the males possessing enormously enlarged mandibles which they use to fight other males. Many beetles are aposematic, with bright colours and patterns warning of their toxicity, while others are harmless Batesian mimics of such insects. Many beetles, including those that live in sandy places, have effective camouflage.
Beetles are prominent in human culture, from the sacred scarabs of ancient Egypt to beetlewing art and use as pets or fighting insects for entertainment and gambling. Many beetle groups are brightly and attractively coloured making them objects of collection and decorative displays. Over 300 species are used as food, mostly as larvae; species widely consumed include mealworms and rhinoceros beetle larvae. However, the major impact of beetles on human life is as agricultural, forestry, and horticultural pests. Serious pests include the boll weevil of cotton, the Colorado potato beetle, the coconut hispine beetle, and the mountain pine beetle. Most beetles, however, do not cause economic damage and many, such as the lady beetles and dung beetles are beneficial by helping to control insect pests.
The name of the taxonomic order, Coleoptera, comes from the Greek "koleopteros" (κολεόπτερος), given to the group by Aristotle for their elytra, hardened shield-like forewings, from "koleos", sheath, and "pteron", wing. The English name beetle comes from the Old English word "bitela", little biter, related to "bītan" (to bite), leading to Middle English "betylle". Another Old English name for beetle is "ċeafor", chafer, used in names such as cockchafer, from the Proto-Germanic *"kebrô" ("beetle"; compare German "Käfer", Dutch "kever").
Beetles are by far the largest order of insects: the roughly 400,000 species make up about 40% of all insect species so far described, and about 25% of all animals. A 2015 study provided four independent estimates of the total number of beetle species, giving a mean estimate of some 1.5 million with a "surprisingly narrow range" spanning all four estimates from a minimum of 0.9 to a maximum of 2.1 million beetle species. The four estimates made use of host-specificity relationships (1.5 to 1.9 million), ratios with other taxa (0.9 to 1.2 million), plant:beetle ratios (1.2 to 1.3), and extrapolations based on body size by year of description (1.7 to 2.1 million).
Beetles are found in nearly all habitats, including freshwater and coastal habitats, wherever vegetative foliage is found, from trees and their bark to flowers, leaves, and underground near roots - even inside plants in galls, in every plant tissue, including dead or decaying ones. Tropical forest canopies have a large and diverse fauna of beetles, including Carabidae, Chrysomelidae, and Scarabaeidae.
The heaviest beetle, indeed the heaviest insect stage, is the larva of the goliath beetle, "Goliathus goliatus", which can attain a mass of at least and a length of . Adult male goliath beetles are the heaviest beetle in its adult stage, weighing and measuring up to . Adult elephant beetles, "Megasoma elephas" and "Megasoma actaeon" often reach and .
The longest beetle is the Hercules beetle "Dynastes hercules", with a maximum overall length of at least 16.7 cm (6.6 in) including the very long pronotal horn. The smallest recorded beetle and the smallest free-living insect is the featherwing beetle "Scydosella musawasensis", which may measure as little as 325 µm in length.
The oldest known fossil insect that unequivocally resembles a Coleopteran is from the Lower Permian Period about (mya), though these members of the family Tshekardocoleidae have 13-segmented antennae, elytra with more fully developed venation and more irregular longitudinal ribbing, and abdomen and ovipositor extending beyond the apex of the elytra. In the Permian–Triassic extinction event at the end of the Permian, some 30% of all insect species became extinct, so the fossil record of insects only includes beetles from the Lower Triassic . Around this time, during the Late Triassic, fungus-feeding species such as Cupedidae appear in the fossil record. In the stages of the Upper Triassic, alga-feeding insects such as Triaplidae and Hydrophilidae begin to appear, alongside predatory water beetles. The first weevils, including the Obrienidae, appear alongside the first rove beetles (Staphylinidae), which closely resemble recent species. Some entomologists are sceptical that such early insects are so closely related to present-day species, arguing that this is extremely unlikely; for example, the structure of the metepisternum suggests that the Obrienidae could be Archostemata, not weevils at all, despite fossils with weevil-like snouts.
In 2009, a fossil beetle was described from the Pennsylvanian of Mazon Creek, Illinois, pushing the origin of the beetles to an earlier date, . Fossils from this time have been found in Asia and Europe, for instance in the red slate fossil beds of Niedermoschel near Mainz, Germany. Further fossils have been found in Obora, Czech Republic and Tshekarda in the Ural mountains, Russia. However, there are only a few fossils from North America before the middle Permian, although both Asia and North America had been united to Euramerica. The first discoveries from North America made in the Wellington formation of Oklahoma were published in 2005 and 2008.
As a consequence of the Permian–Triassic extinction event, the fossil record of insects is scant, including beetles from the Lower Triassic. However, there are a few exceptions, such as in Eastern Europe. At the Babiy Kamen site in the Kuznetsk Basin, numerous beetle fossils were discovered, including entire specimens of the infraorders Archostemata (e.g. Ademosynidae, Schizocoleidae), Adephaga (e.g., Triaplidae, Trachypachidae) and Polyphaga (e.g. Hydrophilidae, Byrrhidae, Elateroidea). However, species from the families Cupedidae and Schizophoroidae are not present at this site, whereas they dominate at other fossil sites from the Lower Triassic such as Khey-Yaga, Russia, in the Korotaikha Basin.
During the Jurassic (), there was a dramatic increase in the diversity of beetle families, including the development and growth of carnivorous and herbivorous species. The Chrysomeloidea diversified around the same time, feeding on a wide array of plant hosts from cycads and conifers to angiosperms. Close to the Upper Jurassic, the Cupedidae decreased, but the diversity of the early plant-eating species increased. Most recent plant-eating beetles feed on flowering plants or angiosperms, whose success contributed to a doubling of plant-eating species during the Middle Jurassic. However, the increase of the number of beetle families during the Cretaceous does not correlate with the increase of the number of angiosperm species. Around the same time, numerous primitive weevils (e.g. Curculionoidea) and click beetles (e.g. Elateroidea) appeared. The first jewel beetles (e.g. Buprestidae) are present, but they remained rare until the Cretaceous. The first scarab beetles were not coprophagous but presumably fed on rotting wood with the help of fungus; they are an early example of a mutualistic relationship.
There are more than 150 important fossil sites from the Jurassic, the majority in Eastern Europe and North Asia. Outstanding sites include Solnhofen in Upper Bavaria, Germany, Karatau in South Kazakhstan, the Yixian formation in Liaoning, North China, as well as the Jiulongshan formation and further fossil sites in Mongolia. In North America there are only a few sites with fossil records of insects from the Jurassic, namely the shell limestone deposits in the Hartford basin, the Deerfield basin and the Newark basin.
The Cretaceous saw the fragmenting of the southern landmass, with the opening of the southern Atlantic Ocean and the isolation of New Zealand, while South America, Antarctica, and Australia grew more distant. The diversity of Cupedidae and Archostemata decreased considerably. Predatory ground beetles (Carabidae) and rove beetles (Staphylinidae) began to distribute into different patterns; the Carabidae predominantly occurred in the warm regions, while the Staphylinidae and click beetles (Elateridae) preferred temperate climates. Likewise, predatory species of Cleroidea and Cucujoidea hunted their prey under the bark of trees together with the jewel beetles (Buprestidae). The diversity of jewel beetles increased rapidly, as they were the primary consumers of wood, while longhorn beetles (Cerambycidae) were rather rare: their diversity increased only towards the end of the Upper Cretaceous. The first coprophagous beetles are from the Upper Cretaceous and may have lived on the excrement of herbivorous dinosaurs. The first species where both larvae and adults are adapted to an aquatic lifestyle are found. Whirligig beetles (Gyrinidae) were moderately diverse, although other early beetles (e.g. Dytiscidae) were less, with the most widespread being the species of Coptoclavidae, which preyed on aquatic fly larvae.
Many fossil sites worldwide contain beetles from the Cretaceous. Most are in Europe and Asia and belong to the temperate climate zone during the Cretaceous. Lower Cretaceous sites include the Crato fossil beds in the Araripe basin in the Ceará, North Brazil, as well as overlying Santana formation; the latter was near the equator at that time. In Spain, important sites are near Montsec and Las Hoyas. In Australia, the Koonwarra fossil beds of the Korumburra group, South Gippsland, Victoria, are noteworthy. Major sites from the Upper Cretaceous include Kzyl-Dzhar in South Kazakhstan and Arkagala in Russia.
Beetle fossils are abundant in the Cenozoic; by the Quaternary (up to 1.6 mya), fossil species are identical to living ones, while from the Late Miocene (5.7 mya) the fossils are still so close to modern forms that they are most likely the ancestors of living species. The large oscillations in climate during the Quaternary caused beetles to change their geographic distributions so much that current location gives little clue to the biogeographical history of a species. It is evident that geographic isolation of populations must often have been broken as insects moved under the influence of changing climate, causing mixing of gene pools, rapid evolution, and extinctions, especially in middle latitudes.
The very large number of beetle species poses special problems for classification. Some families contain tens of thousands of species, and need to be divided into subfamilies and tribes. This immense number led the evolutionary biologist J. B. S. Haldane to quip, when some theologians asked him what could be inferred about the mind of the Creator from the works of His Creation, "An inordinate fondness for beetles".
Polyphaga is the largest suborder, containing more than 300,000 described species in more than 170 families, including rove beetles (Staphylinidae), scarab beetles (Scarabaeidae), blister beetles (Meloidae), stag beetles (Lucanidae) and true weevils (Curculionidae). These polyphagan beetle groups can be identified by the presence of cervical sclerites (hardened parts of the head used as points of attachment for muscles) absent in the other suborders.
Adephaga contains about 10 families of largely predatory beetles, includes ground beetles (Carabidae), water beetles (Dytiscidae) and whirligig beetles (Gyrinidae). In these insects, the testes are tubular and the first abdominal sternum (a plate of the exoskeleton) is divided by the hind coxae (the basal joints of the beetle's legs).
Archostemata contains four families of mainly wood-eating beetles, including reticulated beetles (Cupedidae) and the telephone-pole beetle.
The Archostemata have an exposed plate called the metatrochantin in front of the basal segment or coxa of the hind leg. Myxophaga contains about 65 described species in four families, mostly very small, including Hydroscaphidae and the genus "Sphaerius". The myxophagan beetles are small and mostly alga-feeders. Their mouthparts are characteristic in lacking galeae and having a mobile tooth on their left mandible.
The consistency of beetle morphology, in particular their possession of elytra, has long suggested that Coleoptera is monophyletic, though there have been doubts about the arrangement of the suborders, namely the Adephaga, Archostemata, Myxophaga and Polyphaga within that clade. The twisted-wing parasites, Strepsiptera, are thought to be a sister group to the beetles, having split from them in the Early Permian.
Molecular phylogenetic analysis confirms that the Coleoptera are monophyletic. Duane McKenna et al. (2015) used eight nuclear genes for 367 species from 172 of 183 Coleopteran families. They split the Adephaga into 2 clades, Hydradephaga and Geadephaga, broke up the Cucujoidea into 3 clades, and placed the Lymexyloidea within the Tenebrionoidea. The Polyphaga appear to date from the Triassic. Most extant beetle families appear to have arisen in the Cretaceous. The cladogram is based on McKenna (2015). The number of species in each group (mainly superfamilies) is shown in parentheses, and boldface if over 10,000. English common names are given where possible. Dates of origin of major groups are shown in italics in millions of years ago (mya).
Beetles are generally characterized by a particularly hard exoskeleton and hard forewings (elytra) not usable for flying. Almost all beetles have mandibles that move in a horizontal plane. The mouthparts are rarely suctorial, though they are sometimes reduced; the maxillae always bear palps. The antennae usually have 11 or fewer segments, except in some groups like the Cerambycidae (longhorn beetles) and the Rhipiceridae (cicada parasite beetles). The coxae of the legs are usually located recessed within a coxal cavity. The genitalic structures are telescoped into the last abdominal segment in all extant beetles. Beetle larvae can often be confused with those of other endopterygote groups. The beetle's exoskeleton is made up of numerous plates, called sclerites, separated by thin sutures. This design provides armored defenses while maintaining flexibility. The general anatomy of a beetle is quite uniform, although specific organs and appendages vary greatly in appearance and function between the many families in the order. Like all insects, beetles' bodies are divided into three sections: the head, the thorax, and the abdomen. Because there are so many species, identification is quite difficult, and relies on attributes including the shape of the antennae, the tarsal formulae and shapes of these small segments on the legs, the mouthparts, and the ventral plates (sterna, pleura, coxae). In many species accurate identification can only be made by examination of the unique male genitalic structures.
The head, having mouthparts projecting forward or sometimes downturned, is usually heavily sclerotized and is sometimes very large. The eyes are compound and may display remarkable adaptability, as in the case of the aquatic whirligig beetles (Gyrinidae), where they are split to allow a view both above and below the waterline. A few Longhorn beetles (Cerambycidae) and weevils as well as some fireflies (Rhagophthalmidae) have divided eyes, while many have eyes that are notched, and a few have ocelli, small, simple eyes usually farther back on the head (on the vertex); these are more common in larvae than in adults. The anatomical organization of the compound eyes may be modified and depends on whether a species is primarily crepuscular, or diurnally or nocturnally active. Ocelli are found in the adult carpet beetle (Dermestidae), some rove beetles (Omaliinae), and the Derodontidae.
Beetle antennae are primarily organs of sensory perception and can detect motion, odour and chemical substances, but may also be used to physically feel a beetle's environment. Beetle families may use antennae in different ways. For example, when moving quickly, tiger beetles may not be able to see very well and instead hold their antennae rigidly in front of them in order to avoid obstacles.
Certain Cerambycidae use antennae to balance, and blister beetles may use them for grasping. Some aquatic beetle species may use antennae for gathering air and passing it under the body whilst submerged. Equally, some families use antennae during mating, and a few species use them for defence. In the cerambycid "Onychocerus albitarsis", the antennae have venom-injecting structures used in defence, which is unique among arthropods. Antennae vary greatly in form, sometimes between the sexes, but are often similar within any given family; they may take a variety of distinct shapes, including pectinate (branched on one side or, in bipectinate antennae, on both). The physical variation of antennae is important for the identification of many beetle groups. The Curculionidae have elbowed or geniculate antennae. Feather-like (flabellate) antennae are a restricted form found in the Rhipiceridae and a few other families. The Silphidae have capitate antennae with a spherical head at the tip. The Scarabaeidae typically have lamellate antennae with the terminal segments extended into long flat structures stacked together. The Carabidae typically have thread-like antennae. The antennae arise between the eye and the mandibles; in the Tenebrionidae, they arise in front of a notch that breaks the usually circular outline of the compound eye. They are segmented and usually consist of 11 parts; the first part is called the scape and the second the pedicel. The other segments are jointly called the flagellum.
Beetles have mouthparts like those of grasshoppers. The mandibles appear as large pincers on the front of some beetles. The mandibles are a pair of hard, often tooth-like structures that move horizontally to grasp, crush, or cut food or enemies (see defence, below). Two pairs of finger-like appendages, the maxillary and labial palpi, are found around the mouth in most beetles, serving to move food into the mouth. In many species, the mandibles are sexually dimorphic, with those of the males enlarged enormously compared with those of females of the same species.
The thorax is segmented into the two discernible parts, the pro- and pterothorax. The pterothorax is the fused meso- and metathorax, which are commonly separated in other insect species, although flexibly articulate from the prothorax. When viewed from below, the thorax is that part from which all three pairs of legs and both pairs of wings arise. The abdomen is everything posterior to the thorax. When viewed from above, most beetles appear to have three clear sections, but this is deceptive: on the beetle's upper surface, the middle section is a hard plate called the pronotum, which is only the front part of the thorax; the back part of the thorax is concealed by the beetle's wings. This further segmentation is usually best seen on the abdomen.
The multisegmented legs end in two to five small segments called tarsi. Like many other insect orders, beetles have claws, usually one pair, on the end of the last tarsal segment of each leg. While most beetles use their legs for walking, legs have been variously adapted for other uses. In aquatic beetles, including the Dytiscidae (diving beetles), Haliplidae, and many species of Hydrophilidae, the legs (often the last pair) are modified for swimming, typically with rows of long hairs. Male diving beetles have suctorial cups on their forelegs that they use to grasp females. Other beetles have fossorial legs widened and often spined for digging. Species with such adaptations are found among the scarabs, ground beetles, and clown beetles (Histeridae). The hind legs of some beetles, such as flea beetles (within Chrysomelidae) and flea weevils (within Curculionidae), have enlarged femurs that help them leap.
The forewings of beetles are not used for flight, but form elytra which cover the hind part of the body and protect the hindwings. The elytra are usually hard shell-like structures which must be raised to allow the hind wings to move for flight. However, in the soldier beetles (Cantharidae), the elytra are soft, earning this family the name of leatherwings. Other soft wing beetles include the net-winged beetle "Calopteron discrepans", which has brittle wings that rupture easily in order to release chemicals for defence.
Beetles' flight wings are crossed with veins and are folded after landing, often along these veins, and stored below the elytra. A fold ("jugum") of the membrane at the base of each wing is characteristic. Some beetles have lost the ability to fly. These include some ground beetles (Carabidae) and some true weevils (Curculionidae), as well as desert- and cave-dwelling species of other families. Many have the two elytra fused together, forming a solid shield over the abdomen. In a few families, both the ability to fly and the elytra have been lost, as in the glow-worms (Phengodidae), where the females resemble larvae throughout their lives. The presence of elytra and wings does not always indicate that the beetle will fly. For example, the tansy beetle walks between habitats despite being physically capable of flight.
The abdomen is the section behind the metathorax, made up of a series of rings, each with a breathing hole, or spiracle, and each composed of three sclerites: the tergum, the pleura, and the sternum. The tergum in almost all species is membranous, or usually soft, and is concealed by the wings and elytra when not in flight. The pleura are usually small or hidden in some species, with each pleuron having a single spiracle. The sternum is the most widely visible part of the abdomen, being a more or less sclerotized segment. The abdomen itself does not have any appendages, but some (for example, Mordellidae) have articulating sternal lobes.
The digestive system of beetles is primarily adapted for a herbivorous diet. Digestion takes place mostly in the anterior midgut, although in predatory groups like the Carabidae, most digestion occurs in the crop by means of midgut enzymes. In the Elateridae, the larvae are liquid feeders that extraorally digest their food by secreting enzymes. The alimentary canal basically consists of a short, narrow pharynx, a widened expansion, the crop, and a poorly developed gizzard. This is followed by the midgut, which varies in dimensions between species and bears numerous ceca, and the hindgut, of varying length. There are typically four to six Malpighian tubules.
The nervous system in beetles shows all the degrees of ganglion fusion found in insects, varying between species: in some, three thoracic and seven or eight abdominal ganglia can be distinguished, while in others all the thoracic and abdominal ganglia are fused to form a single composite structure.
Like most insects, beetles inhale air, for the oxygen it contains, and exhale carbon dioxide, via a tracheal system. Air enters the body through spiracles, and circulates within the haemocoel in a system of tracheae and tracheoles, through whose walls the gases can diffuse.
Diving beetles, such as the Dytiscidae, carry a bubble of air with them when they dive. Such a bubble may be contained under the elytra or against the body by specialized hydrophobic hairs. The bubble covers at least some of the spiracles, permitting air to enter the tracheae. The function of the bubble is not only to contain a store of air but to act as a physical gill. The air that it traps is in contact with oxygenated water, so as the animal's consumption depletes the oxygen in the bubble, more oxygen can diffuse in to replenish it. Carbon dioxide is more soluble in water than either oxygen or nitrogen, so it readily diffuses out of the bubble into the water rather than accumulating. Nitrogen is the most plentiful gas in the bubble, and the least soluble, so it constitutes a relatively static component of the bubble and acts as a stable medium for respiratory gases to accumulate in and pass through. Occasional visits to the surface are sufficient for the beetle to re-establish the constitution of the bubble.
Like other insects, beetles have open circulatory systems, based on hemolymph rather than blood. As in other insects, a segmented tube-like heart is attached to the dorsal wall of the haemocoel. It has paired inlets or "ostia" at intervals down its length, and circulates the hemolymph from the main cavity of the haemocoel out through the anterior cavity in the head.
Different glands are specialized for different pheromones to attract mates. Pheromones from species of Rutelinae are produced from epithelial cells lining the inner surface of the apical abdominal segments; amino acid-based pheromones of Melolonthinae are produced from eversible glands on the abdominal apex. Other species produce different types of pheromones. Dermestids produce esters, and species of Elateridae produce fatty acid-derived aldehydes and acetates. To attract a mate, fireflies (Lampyridae) use modified fat body cells with transparent surfaces backed with reflective uric acid crystals to produce light by bioluminescence. Light production is highly efficient, by oxidation of luciferin catalyzed by enzymes (luciferases) in the presence of adenosine triphosphate (ATP) and oxygen, producing oxyluciferin, carbon dioxide, and light.
Tympanal organs, or hearing organs, each consisting of a membrane (tympanum) stretched across a frame backed by an air sac with associated sensory neurons, are found in two families. Several species of the genus "Cicindela" (Carabidae) have hearing organs on the dorsal surfaces of their first abdominal segments beneath the wings; two tribes in the Dynastinae (within the Scarabaeidae) have hearing organs just beneath their pronotal shields or neck membranes. Both families are sensitive to ultrasonic frequencies, with strong evidence indicating they function to detect the presence of bats by their ultrasonic echolocation.
Beetles are members of the superorder Endopterygota, and accordingly most of them undergo complete metamorphosis. The typical form of metamorphosis in beetles passes through four main stages: the egg, the larva, the pupa, and the imago or adult. The larvae are commonly called grubs and the pupa sometimes is called the chrysalis. In some species, the pupa may be enclosed in a cocoon constructed by the larva towards the end of its final instar. Some beetles, such as typical members of the families Meloidae and Rhipiphoridae, go further, undergoing hypermetamorphosis in which the first instar takes the form of a triungulin.
Some beetles have intricate mating behaviour. Pheromone communication is often important in locating a mate.
Different species use different pheromones. Scarab beetles such as the Rutelinae use pheromones derived from fatty acid synthesis, while other scarabs such as the Melolonthinae use amino acids and terpenoids. Another way beetles find mates is seen in the fireflies (Lampyridae) which are bioluminescent, with abdominal light-producing organs. The males and females engage in a complex dialogue before mating; each species has a unique combination of flight patterns, duration, composition, and intensity of the light produced.
Before mating, males and females may stridulate, or vibrate the objects they are on. In the Meloidae, the male climbs onto the dorsum of the female and strokes his antennae on her head, palps, and antennae. In "Eupompha", the male draws his antennae along his longitudinal vertex. They may not mate at all if they do not perform the precopulatory ritual. This mating behaviour may differ amongst dispersed populations of the same species. For example, the mating of a Russian population of tansy beetle ("Chrysolina graminis") is preceded by an elaborate ritual involving the male tapping the female's eyes, pronotum and antennae with its antennae, which is not evident in the population of this species in the United Kingdom.
Competition can play a part in the mating rituals of species such as burying beetles ("Nicrophorus"), the insects fighting to determine which can mate. Many male beetles are territorial and fiercely defend their territories from intruding males. In such species, the male often has horns on the head or thorax, making its body length greater than that of a female. Copulation is generally quick, but in some cases lasts for several hours. During copulation, sperm cells are transferred to the female to fertilize the egg.
Essentially all beetles lay eggs, though some myrmecophilous Aleocharinae and some Chrysomelinae which live in mountains or the subarctic are ovoviviparous, laying eggs which hatch almost immediately. Beetle eggs generally have smooth surfaces and are soft, though the Cupedidae have hard eggs. Eggs vary widely between species: the eggs tend to be small in species with many instars (larval stages), and in those that lay large numbers of eggs.
A female may lay from several dozen to several thousand eggs during her lifetime, depending on the extent of parental care. This ranges from the simple laying of eggs under a leaf, to the parental care provided by scarab beetles, which house, feed and protect their young. The Attelabidae roll leaves and lay their eggs inside the roll for protection.
The larva is usually the principal feeding stage of the beetle life cycle. Larvae tend to feed voraciously once they emerge from their eggs. Some feed externally on plants, such as those of certain leaf beetles, while others feed within their food sources. Examples of internal feeders are most Buprestidae and longhorn beetles. The larvae of many beetle families are predatory like the adults (ground beetles, ladybirds, rove beetles). The larval period varies between species, but can be as long as several years. The larvae of skin beetles undergo a degree of reversed development when starved, and later grow back to the previously attained level of maturity. The cycle can be repeated many times (see Biological immortality). Larval morphology is highly varied amongst species, with well-developed and sclerotized heads and distinguishable thoracic and abdominal segments (usually ten abdominal segments, though sometimes eight or nine).
Beetle larvae can be differentiated from other insect larvae by their hardened, often darkened heads, the presence of chewing mouthparts, and spiracles along the sides of their bodies. Like adult beetles, the larvae are varied in appearance, particularly between beetle families. Beetles with somewhat flattened, highly mobile larvae include the ground beetles and rove beetles; their larvae are described as campodeiform. Some beetle larvae resemble hardened worms with dark head capsules and minute legs. These are elateriform larvae, and are found in the click beetle (Elateridae) and darkling beetle (Tenebrionidae) families. Some elateriform larvae of click beetles are known as wireworms. Beetles in the Scarabaeoidea have short, thick larvae described as scarabaeiform, more commonly known as grubs.
All beetle larvae go through several instars, which are the developmental stages between each moult. In many species, the larvae simply increase in size with each successive instar as more food is consumed. In some cases, however, more dramatic changes occur. Among certain beetle families or genera, particularly those that exhibit parasitic lifestyles, the first instar (the planidium) is highly mobile to search out a host, while the following instars are more sedentary and remain on or within their host. This is known as hypermetamorphosis; it occurs in the Meloidae, Micromalthidae, and Ripiphoridae. The blister beetle "Epicauta vittata" (Meloidae), for example, has three distinct larval stages. Its first stage, the triungulin, has longer legs to go in search of the eggs of grasshoppers. After feeding for a week it moults to the second stage, called the caraboid stage, which resembles the larva of a carabid beetle. In another week it moults and assumes the appearance of a scarabaeid larva – the scarabaeidoid stage. Its penultimate larval stage is the pseudo-pupa or the coarcate larva, which will overwinter and pupate until the next spring.
The larval period can vary widely. The fungus-feeding staphylinid "Phanerota fasciata" undergoes three moults in 3.2 days at room temperature, while "Anisotoma" sp. (Leiodidae) completes its larval stage in the fruiting body of slime mold in 2 days, possibly representing the fastest-growing beetles. The dermestid beetle "Trogoderma inclusum" can remain in an extended larval state under unfavourable conditions, even reducing its size between moults. A larva is reported to have survived for 3.5 years in an enclosed container.
As with all endopterygotes, beetle larvae pupate, and from these pupae emerge fully formed, sexually mature adult beetles, or imagos. Pupae never have mandibles (they are adecticous). In most pupae, the appendages are not attached to the body and are said to be exarate; in a few beetles (Staphylinidae, Ptiliidae etc.) the appendages are fused with the body (termed as obtect pupae).
Adults have extremely variable lifespans, from weeks to years, depending on the species. Some wood-boring beetles can have extremely long life-cycles. It is believed that when furniture or house timbers are infested by beetle larvae, the timber already contained the larvae when it was first sawn up. A birch bookcase 40 years old released adult "Eburia quadrigeminata" (Cerambycidae), while "Buprestis aurulenta" and other Buprestidae have been documented as emerging as much as 51 years after manufacture of wooden items.
The elytra allow beetles to both fly and move through confined spaces, doing so by folding the delicate wings under the elytra while not flying, and folding their wings out just before takeoff. The unfolding and folding of the wings is operated by muscles attached to the wing base; as long as the tension on the radial and cubital veins remains, the wings remain straight. In some day-flying species (for example, Buprestidae, Scarabaeidae), flight does not include large amounts of lifting of the elytra, the metathoracic wings being extended under the lateral elytra margins. The altitude reached by beetles in flight varies. One study investigating the flight altitude of the ladybird species "Coccinella septempunctata" and "Harmonia axyridis" using radar showed that, whilst the majority in flight over a single location were at 150–195 m above ground level, some reached altitudes of over 1100 m.
Many rove beetles have greatly reduced elytra, and while they are capable of flight, they most often move on the ground: their soft bodies and strong abdominal muscles make them flexible, easily able to wriggle into small cracks.
Aquatic beetles use several techniques for retaining air beneath the water's surface. Diving beetles (Dytiscidae) hold air between the abdomen and the elytra when diving. Hydrophilidae have hairs on their under surface that retain a layer of air against their bodies. Adult crawling water beetles use both their elytra and their hind coxae (the basal segment of the back legs) in air retention, while whirligig beetles simply carry an air bubble down with them whenever they dive.
Beetles have a variety of ways to communicate, including the use of pheromones. The mountain pine beetle emits a pheromone to attract other beetles to a tree. The mass of beetles is thereby able to overcome the chemical defenses of the tree. After the tree's defenses have been exhausted, the beetles emit an anti-aggregation pheromone. This species can stridulate to communicate, but others may use sound to defend themselves when attacked.
Parental care is found in a few families of beetle, perhaps for protection against adverse conditions and predators. The rove beetle "Bledius spectabilis" lives in salt marshes, so the eggs and larvae are endangered by the rising tide. The maternal beetle patrols the eggs and larvae, burrowing to keep them from flooding and asphyxiating, and protects them from the predatory carabid beetle "Dicheirotrichus gustavi" and from the parasitoidal wasp "Barycnemis blediator", which kills some 15% of the larvae.
Burying beetles are attentive parents, and participate in cooperative care and feeding of their offspring. Both parents work to bury a small animal carcass to serve as a food resource for their young and build a brood chamber around it. The parents prepare the carcass and protect it from competitors and from early decomposition. After their eggs hatch, the parents keep the larvae clean of fungus and bacteria and help the larvae feed by regurgitating food for them.
Some dung beetles provide parental care, collecting herbivore dung and laying eggs within that food supply, an instance of mass provisioning. Some species do not leave after this stage, but remain to safeguard their offspring.
Most species of beetles do not display parental care behaviors after the eggs have been laid.
Subsociality, where females guard their offspring, is well-documented in two families of Chrysomelidae, Cassidinae and Chrysomelinae.
Eusociality involves cooperative brood care (including brood care of offspring from other individuals), overlapping generations within a colony of adults, and a division of labour into reproductive and non-reproductive groups. Few organisms outside Hymenoptera exhibit this behavior; the only beetle to do so is the weevil "Austroplatypus incompertus". This Australian species lives in horizontal networks of tunnels, in the heartwood of "Eucalyptus" trees. It is one of more than 300 species of wood-boring Ambrosia beetles which distribute the spores of ambrosia fungi. The fungi grow in the beetles' tunnels, providing food for the beetles and their larvae; female offspring remain in the tunnels and maintain the fungal growth, probably never reproducing. Cooperative brood care is also found in the bess beetles (Passalidae) where the larvae feed on the semi-digested faeces of the adults.
Beetles are able to exploit a wide diversity of food sources available in their many habitats. Some are omnivores, eating both plants and animals. Other beetles are highly specialized in their diet. Many species of leaf beetles, longhorn beetles, and weevils are very host-specific, feeding on only a single species of plant. Ground beetles and rove beetles (Staphylinidae), among others, are primarily carnivorous and catch and consume many other arthropods and small prey, such as earthworms and snails. While most predatory beetles are generalists, a few species have more specific prey requirements or preferences.
Decaying organic matter is a primary diet for many species. This can range from dung, which is consumed by coprophagous species (such as certain scarab beetles in the Scarabaeidae), to dead animals, which are eaten by necrophagous species (such as the carrion beetles, Silphidae). Some beetles found in dung and carrion are in fact predatory. These include members of the Histeridae and Silphidae, preying on the larvae of coprophagous and necrophagous insects. Many beetles feed under bark, some feed on wood while others feed on fungi growing on wood or leaf-litter. Some beetles have special mycangia, structures for the transport of fungal spores.
Beetles, both adults and larvae, are the prey of many animal predators including mammals from bats to rodents, birds, lizards, amphibians, fishes, dragonflies, robberflies, reduviid bugs, ants, other beetles, and spiders. Beetles use a variety of anti-predator adaptations to defend themselves. These include camouflage and mimicry against predators that hunt by sight, toxicity, and defensive behaviour.
Camouflage is common and widespread among beetle families, especially those that feed on wood or vegetation, such as leaf beetles (Chrysomelidae, which are often green) and weevils. In some species, sculpturing or various coloured scales or hairs cause beetles such as the avocado weevil "Heilipus apiatus" to resemble bird dung or other inedible objects. Many beetles that live in sandy environments blend in with the coloration of that substrate.
Some longhorn beetles (Cerambycidae) are effective Batesian mimics of wasps. Beetles may combine coloration with behavioural mimicry, acting like the wasps they already closely resemble. Many other beetles, including ladybirds, blister beetles, and lycid beetles secrete distasteful or toxic substances to make them unpalatable or poisonous, and are often aposematic, where bright or contrasting coloration warn off predators; many beetles and other insects mimic these chemically protected species.
Chemical defense is important in some species, usually being advertised by bright aposematic colours. Some Tenebrionidae use their posture for releasing noxious chemicals to warn off predators. Chemical defences may serve purposes other than just protection from vertebrates, such as protection from a wide range of microbes. Some species sequester chemicals from the plants they feed on, incorporating them into their own defenses.
Other species have special glands to produce deterrent chemicals. The defensive glands of carabid ground beetles produce a variety of hydrocarbons, aldehydes, phenols, quinones, esters, and acids released from an opening at the end of the abdomen. African carabid beetles (for example, "Anthia" and "Thermophilum" – "Thermophilum" is sometimes included within "Anthia") employ the same chemicals as ants: formic acid. Bombardier beetles have well-developed pygidial glands that empty from the sides of the intersegment membranes between the seventh and eighth abdominal segments. The gland is made of two chambers, one holding hydroquinones and hydrogen peroxide, the other holding catalase and peroxidase enzymes. When the contents mix, the result is an explosive ejection reaching a temperature of around 100 °C, as the hydrogen peroxide is broken down to water and oxygen and the hydroquinones are oxidized to quinones. The oxygen propels the noxious chemical spray as a jet that can be aimed accurately at predators.
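The reactions usually cited for this discharge can be summarised as follows; this is a hedged illustration rather than a quotation of the text above, since the exact stoichiometry is not given there:

\[ 2\,\mathrm{H_2O_2} \;\xrightarrow{\text{catalase}}\; 2\,\mathrm{H_2O} + \mathrm{O_2} \]
\[ \mathrm{C_6H_4(OH)_2} + \mathrm{H_2O_2} \;\xrightarrow{\text{peroxidase}}\; \mathrm{C_6H_4O_2} + 2\,\mathrm{H_2O} \]

The first reaction supplies the oxygen that pressurizes and propels the spray, while the second converts the hydroquinones into the irritant quinones; both are strongly exothermic, which accounts for the near-boiling temperature of the ejected fluid.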
Large ground-dwelling beetles such as Carabidae, the rhinoceros beetle and the longhorn beetles defend themselves using strong mandibles, or heavily sclerotised (armored) spines or horns to deter or fight off predators. Many species of weevil that feed out in the open on leaves of plants react to attack by employing a drop-off reflex. Some combine it with thanatosis, in which they close up their appendages and "play dead". The click beetles (Elateridae) can suddenly catapult themselves out of danger by releasing the energy stored by a click mechanism, which consists of a stout spine on the prosternum and a matching groove in the mesosternum. Some species startle an attacker by producing sounds through a process known as stridulation.
A few species of beetles are ectoparasitic on mammals. One such species, "Platypsyllus castoris", parasitises beavers ("Castor" spp.). This beetle lives as a parasite both as a larva and as an adult, feeding on epidermal tissue and possibly on skin secretions and wound exudates. They are strikingly flattened dorsoventrally, no doubt as an adaptation for slipping between the beavers' hairs. They are wingless and eyeless, as are many other ectoparasites. Others are kleptoparasites of other invertebrates, such as the small hive beetle ("Aethina tumida") that infests honey bee nests, while many species are parasitic inquilines or commensal in the nests of ants. A few groups of beetles are primary parasitoids of other insects, feeding off, and eventually killing, their hosts.
Beetle-pollinated flowers are usually large, greenish or off-white in color, and heavily scented. Scents may be spicy, fruity, or similar to decaying organic material. Beetles were most likely the first insects to pollinate flowers. Most beetle-pollinated flowers are flattened or dish-shaped, with pollen easily accessible, although they may include traps to keep the beetle longer. The plants' ovaries are usually well protected from the biting mouthparts of their pollinators. The beetle families that habitually pollinate flowers are the Buprestidae, Cantharidae, Cerambycidae, Cleridae, Dermestidae, Lycidae, Melyridae, Mordellidae, Nitidulidae and Scarabaeidae. Beetles may be particularly important in some parts of the world such as semiarid areas of southern Africa and southern California and the montane grasslands of KwaZulu-Natal in South Africa.
Mutualism is well known in a few beetles, such as the ambrosia beetle, which partners with fungi to digest the wood of dead trees. The beetles excavate tunnels in dead trees in which they cultivate fungal gardens, their sole source of nutrition. After landing on a suitable tree, an ambrosia beetle excavates a tunnel in which it releases spores of its fungal symbiont. The fungus penetrates the plant's xylem tissue, digests it, and concentrates the nutrients on and near the surface of the beetle gallery, so the beetles and the fungus both benefit. The beetles cannot eat the wood themselves because of its toxins, and instead use their relationship with the fungi to overcome the defences of the host tree and provide nutrition for their larvae. Chemically mediated by a bacterially produced polyunsaturated peroxide, this mutualistic relationship between the beetle and the fungus is coevolved.
About 90% of beetle species enter a period of adult diapause, a quiet phase with reduced metabolism used to tide over unfavourable environmental conditions. Adult diapause is the most common form of diapause in Coleoptera. To endure the period without food (often lasting many months), adults prepare by accumulating reserves of lipids, glycogen, proteins and other substances needed for resistance to future hazardous changes of environmental conditions. This diapause is induced by signals heralding the arrival of the unfavourable season; usually the cue is photoperiodic. Short (decreasing) day length serves as a signal of approaching winter and induces winter diapause (hibernation). A study of hibernation in the Arctic beetle "Pterostichus brevicorni" showed that the body fat levels of adults were highest in autumn, with the alimentary canal filled with food, but empty by the end of January. This loss of body fat was a gradual process, occurring in combination with dehydration.
All insects are poikilothermic, so the ability of a few beetles to live in extreme environments depends on their resilience to unusually high or low temperatures. The bark beetle "Pityogenes chalcographus" can survive sub-zero temperatures whilst overwintering beneath tree bark; the Alaskan beetle "Cucujus clavipes puniceus" withstands even lower temperatures, and its larvae lower still. At these low temperatures, the formation of ice crystals in internal fluids is the biggest threat to survival, but this is prevented through the production of antifreeze proteins that stop water molecules from grouping together. The low temperatures experienced by "Cucujus clavipes" can be survived through deliberate dehydration in conjunction with the antifreeze proteins, which concentrates the antifreezes several fold. The hemolymph of the mealworm beetle "Tenebrio molitor" contains several antifreeze proteins. The Alaskan beetle "Upis ceramboides" can survive −60 °C: its cryoprotectants are xylomannan, a molecule consisting of a sugar bound to a fatty acid, and the sugar-alcohol threitol.
Conversely, desert-dwelling beetles are adapted to tolerate high temperatures. For example, the tenebrionid beetle "Onymacris rugatipennis" can withstand very high body temperatures. Tiger beetles in hot, sandy areas are often whitish (for example, "Habroscelimorpha dorsalis"), to reflect more heat than a darker colour would. These beetles also exhibit behavioural adaptations to tolerate the heat: they are able to stand erect on their tarsi to hold their bodies away from the hot ground, seek shade, and turn to face the sun so that only the front parts of their heads are directly exposed.
The fogstand beetle of the Namib Desert, "Stenocara gracilipes", is able to collect water from fog, as its elytra have a textured surface combining hydrophilic (water-loving) bumps and waxy, hydrophobic troughs. The beetle faces the early morning breeze, holding up its abdomen; droplets condense on the elytra and run along ridges towards their mouthparts. Similar adaptations are found in several other Namib desert beetles such as "Onymacris unguicularis".
Some terrestrial beetles that exploit shoreline and floodplain habitats have physiological adaptations for surviving floods. In the event of flooding, adult beetles may be mobile enough to move away from flooding, but larvae and pupae often cannot. Adults of "Cicindela togata" are unable to survive immersion in water, but larvae are able to survive a prolonged period, up to 6 days, of anoxia during floods. Anoxia tolerance in the larvae may be sustained by switching to anaerobic metabolic pathways or by reducing the metabolic rate. Anoxia tolerance in the adult carabid beetle "Pelophilia borealis" was tested in laboratory conditions and it was found that it could survive a continuous period of up to 127 days in an atmosphere of 99.9% nitrogen at 0 °C.
Many beetle species undertake annual mass movements which are termed migrations. These include the pollen beetle "Meligethes aeneus" and many species of coccinellids. These mass movements may also be opportunistic, in search of food, rather than seasonal. A 2008 study of an unusually large outbreak of mountain pine beetle ("Dendroctonus ponderosae") in British Columbia found that beetles were capable of flying 30–110 km per day in densities of up to 18,600 beetles per hectare.
Several species of dung beetle, especially the sacred scarab, "Scarabaeus sacer", were revered in Ancient Egypt. The hieroglyphic image of the beetle may have had existential, fictional, or ontologic significance. Images of the scarab in bone, ivory, stone, Egyptian faience, and precious metals are known from the Sixth Dynasty and up to the period of Roman rule. The scarab was of prime significance in the funerary cult of ancient Egypt. The scarab was linked to Khepri, the god of the rising sun, from the supposed resemblance of the rolling of the dung ball by the beetle to the rolling of the sun by the god. Some of ancient Egypt's neighbors adopted the scarab motif for seals of varying types. The best-known of these are the Judean LMLK seals, where eight of 21 designs contained scarab beetles, which were used exclusively to stamp impressions on storage jars during the reign of Hezekiah. Beetles are mentioned as a symbol of the sun, as in ancient Egypt, in Plutarch's 1st century "Moralia". The Greek Magical Papyri of the 2nd century BC to the 5th century AD describe scarabs as an ingredient in a spell.
Pliny the Elder discusses beetles in his "Natural History", describing the stag beetle: "Some insects, for the preservation of their wings, are covered with (elytra) – the beetle, for instance, the wing of which is peculiarly fine and frail. To these insects a sting has been denied by Nature; but in one large kind we find horns of a remarkable length, two-pronged at the extremities, and forming pincers, which the animal closes when it is its intention to bite." The stag beetle is recorded in a Greek myth by Nicander and recalled by Antoninus Liberalis in which Cerambus is turned into a beetle: "He can be seen on trunks and has hook-teeth, ever moving his jaws together. He is black, long and has hard wings like a great dung beetle". The story concludes with the comment that the beetles were used as toys by young boys, and that the head was removed and worn as a pendant.
About 75% of beetle species are phytophagous in both the larval and adult stages. Many feed on economically important plants and stored plant products, including trees, cereals, tobacco, and dried fruits. Some, such as the boll weevil, which feeds on cotton buds and flowers, can cause extremely serious damage to agriculture. The boll weevil crossed the Rio Grande near Brownsville, Texas, to enter the United States from Mexico around 1892, and had reached southeastern Alabama by 1915. By the mid-1920s, advancing steadily year by year, it had entered all cotton-growing regions in the US. It remains the most destructive cotton pest in North America. Mississippi State University has estimated that, since the boll weevil entered the United States, it has cost cotton producers about $13 billion, and in recent times about $300 million per year.
The bark beetle, elm leaf beetle and the Asian longhorned beetle ("Anoplophora glabripennis") are among the species that attack elm trees. Bark beetles (Scolytidae) carry Dutch elm disease as they move from infected breeding sites to healthy trees. The disease has devastated elm trees across Europe and North America.
Some species of beetle have evolved immunity to insecticides. For example, the Colorado potato beetle, "Leptinotarsa decemlineata", is a destructive pest of potato plants. Its hosts include other members of the Solanaceae, such as nightshade, tomato, eggplant and capsicum, as well as the potato. Different populations have between them developed resistance to all major classes of insecticide. The Colorado potato beetle was evaluated as a tool of entomological warfare during World War II, the idea being to use the beetle and its larvae to damage the crops of enemy nations. Germany tested its Colorado potato beetle weaponisation program south of Frankfurt, releasing 54,000 beetles.
The death watch beetle, "Xestobium rufovillosum" (Ptinidae), is a serious pest of older wooden buildings in Europe. It attacks hardwoods such as oak and chestnut, always where some fungal decay has taken or is taking place. The actual introduction of the pest into buildings is thought to take place at the time of construction.
Other pests include the coconut hispine beetle, "Brontispa longissima", which feeds on young leaves, seedlings and mature coconut trees, causing serious economic damage in the Philippines. The mountain pine beetle is a destructive pest of mature or weakened lodgepole pine, sometimes affecting large areas of Canada.
Beetles can be beneficial to human economics by controlling the populations of pests. The larvae and adults of some species of lady beetles (Coccinellidae) feed on aphids that are pests. Other lady beetles feed on scale insects, whitefly and mealybugs. If normal food sources are scarce, they may feed on small caterpillars, young plant bugs, or honeydew and nectar. Ground beetles (Carabidae) are common predators of many insect pests, including fly eggs, caterpillars, and wireworms. Ground beetles can help to control weeds by eating their seeds in the soil, reducing the need for herbicides to protect crops. The effectiveness of some species in reducing certain plant populations has resulted in the deliberate introduction of beetles in order to control weeds. For example, the genus "Zygogramma" is native to North America but has been used to control "Parthenium hysterophorus" in India and "Ambrosia artemisiifolia" in Russia.
Dung beetles (Scarabaeidae) have been successfully used to reduce the populations of pestilent flies, such as "Musca vetustissima" and "Haematobia exigua", which are serious pests of cattle in Australia. The beetles make the dung unavailable to breeding pests by quickly rolling and burying it in the soil, with the added effect of improving soil fertility, tilth, and nutrient cycling. The Australian Dung Beetle Project (1965–1985) introduced species of dung beetle to Australia from South Africa and Europe to reduce populations of "Musca vetustissima", following successful trials of this technique in Hawaii. The American Institute of Biological Sciences reports that dung beetles save the United States cattle industry an estimated US$380 million annually through burying above-ground livestock feces.
The Dermestidae are often used in taxidermy and in the preparation of scientific specimens, to clean soft tissue from bones. Larvae feed on and remove cartilage along with other soft tissue.
Beetles are the most widely eaten insects, with about 344 species used as food, usually at the larval stage. The mealworm (the larva of the darkling beetle) and the rhinoceros beetle are among the species commonly eaten. A wide range of species is also used in folk medicine to treat those suffering from a variety of disorders and illnesses, though this is done without clinical studies supporting the efficacy of such treatments.
Due to their habitat specificity, many species of beetles have been suggested as suitable indicators, their presence, numbers, or absence providing a measure of habitat quality. Predatory beetles such as the tiger beetles (Cicindelidae) have found scientific use as an indicator taxon for measuring regional patterns of biodiversity. They are suitable for this as their taxonomy is stable; their life history is well described; they are large and simple to observe when visiting a site; they occur around the world in many habitats, with species specialised to particular habitats; and their occurrence by species accurately indicates other species, both vertebrate and invertebrate. Depending on the habitat, many other groups, such as rove beetles in human-modified habitats, dung beetles in savannas and saproxylic beetles in forests, have been suggested as potential indicator species.
Many beetles have beautiful and durable elytra that have been used as material in arts, with beetlewing the best example. Sometimes, they are incorporated into ritual objects for their religious significance. Whole beetles, either as-is or encased in clear plastic, are made into objects ranging from cheap souvenirs such as key chains to expensive fine-art jewellery. In parts of Mexico, beetles of the genus "Zopherus" are made into living brooches by attaching costume jewelry and golden chains, which is made possible by the incredibly hard elytra and sedentary habits of the genus.
Fighting beetles are used for entertainment and gambling. This sport exploits the territorial behavior and mating competition of certain species of large beetles. In the Chiang Mai district of northern Thailand, male "Xylotrupes" rhinoceros beetles are caught in the wild and trained for fighting. Females are held inside a log to stimulate the fighting males with their pheromones. These fights may be competitive and involve gambling both money and property. In South Korea the Dytiscidae species "Cybister tripunctatus" is used in a roulette-like game.
Beetles are sometimes used as instruments: the Onabasulu of Papua New Guinea historically used the weevil "Rhynchophorus ferrugineus" as a musical instrument by letting the human mouth serve as a variable resonance chamber for the wing vibrations of the live adult beetle.
Some species of beetle are kept as pets, for example diving beetles (Dytiscidae) may be kept in a domestic fresh water tank.
In Japan the practice of keeping horned rhinoceros beetles (Dynastinae) and stag beetles (Lucanidae) is particularly popular amongst young boys. Such is the popularity in Japan that vending machines dispensing live beetles were developed in 1999, each holding up to 100 stag beetles.
Beetle collecting became extremely popular in the Victorian era. The naturalist Alfred Russel Wallace collected (by his own count) a total of 83,200 beetles during the eight years described in his 1869 book "The Malay Archipelago", including 2,000 species new to science.
Several coleopteran adaptations have attracted interest in biomimetics with possible commercial applications. The bombardier beetle's powerful repellent spray has inspired the development of a fine mist spray technology, claimed to have a low carbon impact compared to aerosol sprays. Moisture harvesting behavior by the Namib desert beetle ("Stenocara gracilipes") has inspired a self-filling water bottle which utilises hydrophilic and hydrophobic materials to benefit people living in dry regions with no regular rainfall.
Living beetles have been used as cyborgs. A Defense Advanced Research Projects Agency funded project implanted electrodes into "Mecynorhina torquata" beetles, allowing them to be remotely controlled via radio receivers mounted on their backs, as a proof of concept for surveillance work. Similar technology has been applied to enable a human operator to control the free-flight steering and walking gaits of "Mecynorhina torquata" as well as graded turning and backward walking of "Zophobas morio".
Since beetles form such a large part of the world's biodiversity, their conservation is important, and equally, loss of habitat and biodiversity is essentially certain to impact on beetles. Many species of beetles have very specific habitats and long life cycles that make them vulnerable. Some species are highly threatened while others are already feared extinct. Island species tend to be more susceptible as in the case of "Helictopleurus undatus" of Madagascar which is thought to have gone extinct during the late 20th century. Conservationists have attempted to arouse a liking for beetles with flagship species like the stag beetle, "Lucanus cervus", and tiger beetles (Cicindelidae). In Japan the Genji firefly, "Luciola cruciata", is extremely popular, and in South Africa the Addo elephant dung beetle offers promise for broadening ecotourism beyond the big five tourist mammal species. Popular dislike of pest beetles, too, can be turned into public interest in insects, as can unusual ecological adaptations of species like the fairy shrimp hunting beetle, "Cicinis bruchi". | https://en.wikipedia.org/wiki?curid=7044 |
Concorde
The Aérospatiale/BAC Concorde is a British–French turbojet-powered supersonic passenger airliner that was operated until 2003. It had a maximum speed over twice the speed of sound, at Mach 2.04, with seating for 92 to 128 passengers. First flown in 1969, Concorde entered service in 1976 and continued flying for the next 27 years. It is one of only two supersonic transports to have been operated commercially; the other is the Soviet-built Tupolev Tu-144, which operated in the late 1970s.
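The ground speed implied by Mach 2.04 can be sketched with a quick calculation. The figure of roughly 295 m/s for the speed of sound near cruise altitude is an assumption (a standard-atmosphere value at around 18 km), not a value taken from the text above; the script is illustrative only.

# Rough illustration (assumed values): converting Concorde's cruise Mach number
# to an approximate true airspeed, using an assumed speed of sound of ~295 m/s
# at roughly 18 km altitude.
mach_cruise = 2.04
speed_of_sound_m_per_s = 295.0  # assumed standard-atmosphere value near cruise altitude

speed_m_per_s = mach_cruise * speed_of_sound_m_per_s
speed_km_per_h = speed_m_per_s * 3.6
speed_mph = speed_km_per_h / 1.609344

print(f"~{speed_km_per_h:.0f} km/h (~{speed_mph:.0f} mph)")
# Prints roughly 2166 km/h (~1346 mph), in line with the figures commonly
# quoted for Concorde at cruise.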
Concorde was jointly developed and manufactured by Sud Aviation (later Aérospatiale) and the British Aircraft Corporation (BAC) under an Anglo-French treaty. Twenty aircraft were built, including six prototypes and development aircraft. Air France and British Airways were the only airlines to purchase and fly Concorde. The aircraft was used mainly by wealthy passengers who could afford to pay a high price in exchange for the aircraft's speed and luxury service. For example, in 1997, the round-trip ticket price from New York to London was $7,995, more than 30 times the cost of the cheapest option to fly this route.
The original programme cost estimate of £70 million met huge overruns and delays, with the programme eventually costing £1.3 billion. This extreme cost became the main factor in the production run being much smaller than anticipated. Later, another factor, which affected the viability of all supersonic transport programmes, was that supersonic flight could only be used on ocean-crossing routes, to prevent sonic boom disturbance over populated areas. With only seven airframes each being operated by the British and French, the per-unit cost was impossible to recoup, so the French and British governments absorbed the development costs. British Airways and Air France were able to operate Concorde at a profit, in spite of very high maintenance costs, because the aircraft was able to sustain a high ticket price.
Among other destinations, Concorde flew regular transatlantic flights from London's Heathrow Airport and Paris's Charles de Gaulle Airport to John F. Kennedy International Airport in New York, Washington Dulles International Airport in Virginia and Grantley Adams International Airport in Barbados; it flew these routes in less than half the time of other airliners.
Concorde won the 2006 Great British Design Quest, organised by the BBC and the Design Museum of London, beating other well-known designs such as the BMC Mini, the miniskirt, the Jaguar E-Type, the London Tube map and the Supermarine Spitfire. The type was retired in 2003, three years after the crash of Air France Flight 4590, in which all passengers and crew were killed. The general downturn in the commercial aviation industry after the September 11 attacks in 2001 and the end of maintenance support for Concorde by Airbus (the successor company of both Aérospatiale and BAC) also contributed to the retirement.
The origins of the Concorde project date to the early 1950s, when Arnold Hall, director of the Royal Aircraft Establishment (RAE), asked Morien Morgan to form a committee to study the supersonic transport (SST) concept. The group met for the first time in February 1954 and delivered their first report in April 1955.
At the time it was known that the drag at supersonic speeds was strongly related to the span of the wing. This led to the use of very short-span, very thin trapezoidal wings such as those seen on the control surfaces of many missiles, or in aircraft like the Lockheed F-104 Starfighter or the Avro 730 that the team studied. The team outlined a baseline configuration that looked like an enlarged Avro 730.
This same short span produced very little lift at low speed, which resulted in extremely long take-off runs and frighteningly high landing speeds. In an SST design, this would have required enormous engine power to lift off from existing runways, and to provide the fuel needed, "some horribly large aeroplanes" resulted. Based on this, the group considered the concept of an SST infeasible, and instead suggested continued low-level studies into supersonic aerodynamics.
Soon after, Johanna Weber and Dietrich Küchemann at the RAE published a series of reports on a new wing planform, known in the UK as the "slender delta" concept. The team, including Eric Maskell whose report "Flow Separation in Three Dimensions" contributed to an understanding of the physical nature of separated flow, worked with the fact that delta wings can produce strong vortices on their upper surfaces at high angles of attack. The vortex will lower the air pressure and cause lift to be greatly increased. This effect had been noticed earlier, notably by Chuck Yeager in the Convair XF-92, but its qualities had not been fully appreciated. Weber suggested that this was no mere curiosity, and the effect could be deliberately used to improve low speed performance.
Küchemann's and Weber's papers changed the entire nature of supersonic design almost overnight. Although the delta had already been used on aircraft prior to this point, these designs used planforms that were not much different from a swept wing of the same span. Weber noted that the lift from the vortex was increased by the length of the wing it had to operate over, which suggested that the effect would be maximised by extending the wing along the fuselage as far as possible. Such a layout would still have good supersonic performance inherent to the short span, while also offering reasonable take-off and landing speeds using vortex generation. The only downside to such a design is that the aircraft would have to take off and land very "nose high" to generate the required vortex lift, which led to questions about the low speed handling qualities of such a design. It would also need to have long landing gear to produce the required angle of attack while still on the runway.
Küchemann presented the idea at a meeting where Morgan was also present. Test pilot Eric Brown recalls Morgan's reaction to the presentation, saying that he immediately seized on it as the solution to the SST problem. Brown considers this moment as being the true birth of the Concorde project.
On 1 October 1956 the Ministry of Supply asked Morgan to form a new study group, the "Supersonic Transport Aircraft Committee" ("STAC") (sometimes referred to as the "Supersonic Transport Advisory Committee"), with the explicit goal of developing a practical SST design and finding industry partners to build it. At the very first meeting, on 5 November 1956, the decision was made to fund the development of a test bed aircraft to examine the low-speed performance of the slender delta, a contract that eventually produced the Handley Page HP.115. This aircraft would ultimately demonstrate safe control at remarkably low speeds, far below the landing speed of the F-104 Starfighter.
STAC stated that an SST would have economic performance similar to existing subsonic types. Although they would burn more fuel in cruise, they would be able to fly more sorties in a given period of time, so fewer aircraft would be needed to service a particular route. This would remain economically advantageous as long as fuel represented a small percentage of operational costs, as it did at the time.
STAC suggested that two designs naturally fell out of their work, a transatlantic model flying at about Mach 2, and a shorter-range version flying at perhaps Mach 1.2. Morgan suggested that a 150-passenger transatlantic SST would cost about £75 to £90 million to develop, and be in service in 1970. The smaller 100 passenger short-range version would cost perhaps £50 to £80 million, and be ready for service in 1968. To meet this schedule, development would need to begin in 1960, with production contracts let in 1962. Morgan strongly suggested that the US was already involved in a similar project, and that if the UK failed to respond it would be locked out of an airliner market that he believed would be dominated by SST aircraft.
In 1959, a study contract was awarded to Hawker Siddeley and Bristol for preliminary designs based on the slender delta concept, which developed as the HSA.1000 and Bristol 198. Armstrong Whitworth also responded with an internal design, the M-Wing, for the lower-speed shorter-range category. Even at this early time, both the STAC group and the government were looking for partners to develop the designs. In September 1959, Hawker approached Lockheed, and after the creation of British Aircraft Corporation in 1960, the former Bristol team immediately started talks with Boeing, General Dynamics, Douglas Aircraft and Sud Aviation.
Küchemann and others at the RAE continued their work on the slender delta throughout this period, considering three basic shapes; the classic straight-edge delta, the "gothic delta" that was rounded outwards to appear like a gothic arch, and the "ogival wing" that was compound-rounded into the shape of an ogee. Each of these planforms had its own advantages and disadvantages in terms of aerodynamics. As they worked with these shapes, a practical concern grew to become so important that it forced selection of one of these designs.
Generally one wants to have the wing's centre of pressure (CP, or "lift point") close to the aircraft's centre of gravity (CG, or "balance point") to reduce the amount of control force required to pitch the aircraft. As the aircraft layout changes during the design phase, it is common for the CG to move fore or aft. With a normal wing design this can be addressed by moving the wing slightly fore or aft to account for this. With a delta wing running most of the length of the fuselage, this was no longer easy; moving the wing would leave it in front of the nose or behind the tail. Studying the various layouts in terms of CG changes, both during design and changes due to fuel use during flight, the ogee planform immediately came to the fore.
While the wing planform was evolving, so was the basic SST concept. Bristol's original Type 198 was a small design with an almost pure slender delta wing, but evolved into the larger Type 223.
To test the new wing, NASA privately assisted the team by modifying a Douglas F5D Skylancer with temporary wing modifications to mimic the wing selection. In 1965 the NASA test aircraft successfully tested the wing, and found that it reduced landing speeds noticeably over the standard delta wing. NASA Ames test center also ran simulations which showed that the aircraft would suffer a sudden change in pitch when entering ground effect. Ames test pilots later participated in a joint cooperative test with the French and British test pilots and found that the simulations had been correct, and this information was added to pilot training.
By this time similar political and economic concerns in France had led to their own SST plans. In the late 1950s the government requested designs from both the government-owned Sud Aviation and Nord Aviation, as well as Dassault. All three returned designs based on Küchemann and Weber's slender delta; Nord suggested a ramjet powered design flying at Mach 3, the other two were jet powered Mach 2 designs that were similar to each other. Of the three, the Sud Aviation Super-Caravelle won the design contest with a medium-range design deliberately sized to avoid competition with transatlantic US designs they assumed were already on the drawing board.
As soon as the design was complete, in April 1960, Pierre Satre, the company's technical director, was sent to Bristol to discuss a partnership. Bristol was surprised to find that the Sud team had designed a very similar aircraft after considering the SST problem and coming to the very same conclusions as the Bristol and STAC teams in terms of economics. It was later revealed that the original STAC report, marked "For UK Eyes Only", had secretly been passed to the French to win political favour. Sud made minor changes to the paper, and presented it as their own work.
Unsurprisingly, the two teams found much to agree on. The French had no modern large jet engines, and had already concluded they would buy a British design anyway (as they had on the earlier subsonic Caravelle). As neither company had experience in the use of high-heat metals for airframes, a maximum speed of around Mach 2 was selected so aluminium could be used – above this speed the friction with the air warms the metal so much that aluminium begins to soften. This lower speed would also speed development and allow their design to fly before the Americans. Finally, everyone involved agreed that Küchemann's ogee shaped wing was the right one.
The only disagreements were over size and range. The UK team was still focused on a 150-passenger design serving transatlantic routes, while the French were deliberately avoiding these. However, this proved not to be the barrier it might seem; common components could be used in both designs, with the shorter-range version using a clipped fuselage and four engines, the longer one a stretched fuselage and six engines, leaving only the wing to be extensively re-designed. The teams continued to meet through 1961, and by this time it was clear that the two designs had converged considerably in spite of their different range and seating requirements. A single design emerged that differed mainly in fuel load. More powerful Bristol Siddeley Olympus engines, being developed for the TSR-2, allowed either design to be powered by only four engines.
While the development teams met, French Minister of Public Works and Transport Robert Buron was meeting with the UK Minister of Aviation Peter Thorneycroft, and Thorneycroft soon revealed to the cabinet that the French were much more serious about a partnership than any of the US companies. The various US companies had proved uninterested in such a venture, likely due to the belief that the government would be funding development and would frown on any partnership with a European company, and the risk of "giving away" US technological leadership to a European partner.
When the STAC plans were presented to the UK cabinet, a very negative reaction resulted. The economic considerations were considered highly questionable, especially as these were based on development costs, now estimated to be £150 million, of a kind that had repeatedly been overrun in the industry. The Treasury Ministry in particular presented a very negative view, suggesting that there was no way the project would have any positive financial returns for the government, especially given that "the industry's past record of over-optimistic estimating (including the recent history of the TSR.2) suggests that it would be prudent to consider the £150 million [cost] to turn out much too low."
This concern led to an independent review of the project by the Committee on Civil Scientific Research and Development, which met on topic between July and September 1962. The Committee ultimately rejected the economic arguments, including considerations of supporting the industry made by Thorneycroft. Their report in October stated that it was unlikely there would be any direct positive economic outcome, but that the project should still be considered for the simple reason that everyone else was going supersonic, and they were concerned they would be locked out of future markets. Conversely, it appeared the project would not be likely to significantly impact other, more important, research efforts.
After considerable argument, the decision to proceed ultimately fell to an unlikely political expediency. At the time, the UK was pressing for admission to the European Common Market, which was being controlled by Charles de Gaulle who felt the UK's Special Relationship with the US made them unacceptable in a pan-European group. Cabinet felt that signing a deal with Sud would pave the way for Common Market entry, and this became the main deciding reason for moving ahead with the deal. It was this belief that had led the original STAC documents being leaked to the French. However, De Gaulle spoke of the European origin of the design, and continued to block the UK's entry into the Common Market.
The development project was negotiated as an international treaty between the two countries rather than a commercial agreement between companies and included a clause, originally asked for by the UK, imposing heavy penalties for cancellation. A draft treaty was signed on 29 November 1962.
Reflecting the treaty between the British and French governments that led to Concorde's construction, the name "Concorde" is from the French word "concorde", which has an English equivalent, "concord". Both words mean "agreement", "harmony" or "union". The name was officially changed to "Concord" by Harold Macmillan in response to a perceived slight by Charles de Gaulle. At the French roll-out in Toulouse in late 1967, the British Government Minister of Technology, Tony Benn, announced that he would change the spelling back to "Concorde". This created a nationalist uproar that died down when Benn stated that the suffixed "e" represented "Excellence, England, Europe and Entente (Cordiale)". In his memoirs, he recounts a tale of a letter from an irate Scotsman claiming: "[Y]ou talk about 'E' for England, but part of it is made in Scotland." Given Scotland's contribution of providing the nose cone for the aircraft, Benn replied, "[I]t was also 'E' for 'Écosse' (the French name for Scotland) – and I might have added 'e' for extravagance and 'e' for escalation as well!"
Concorde also acquired an unusual nomenclature for an aircraft. In common usage in the United Kingdom, the type is known as "Concorde" without an article, rather than "the Concorde" or "a Concorde".
Described by "Flight International" as an "aviation icon" and "one of aerospace's most ambitious but commercially flawed projects", Concorde failed to meet its original sales targets, despite initial interest from several airlines.
At first, the new consortium intended to produce one long-range and one short-range version. However, prospective customers showed no interest in the short-range version and it was dropped.
An advertisement covering two full pages, promoting Concorde, ran in the 29 May 1967 issue of "Aviation Week & Space Technology". The advertisement predicted a market for 350 aircraft by 1980 and boasted of Concorde's head start over the United States' SST project.
Concorde had considerable difficulties that led to its dismal sales performance. Costs had spiralled during development to more than six times the original projections, arriving at a unit cost of £23 million in 1977 (equivalent to £ million in ). Its sonic boom made travelling supersonically over land impossible without causing complaints from citizens. World events had also dampened Concorde's sales prospects: the 1973–74 stock market crash and the 1973 oil crisis had made many airlines cautious about aircraft with high fuel consumption rates, and new wide-body aircraft, such as the Boeing 747, had recently made subsonic aircraft significantly more efficient and presented a low-risk option for airlines. While carrying a full load, Concorde achieved 15.8 passenger miles per gallon of fuel, while the Boeing 707 reached 33.3 pm/g, the Boeing 747 46.4 pm/g, and the McDonnell Douglas DC-10 53.6 pm/g. An emerging trend in the industry in favour of cheaper airline tickets had also caused airlines such as Qantas to question Concorde's market suitability.
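To put those efficiency figures on a per-passenger basis, the following back-of-the-envelope sketch divides an assumed 3,000-mile transatlantic sector by each aircraft's quoted passenger miles per gallon. The sector length is an illustrative assumption, not a figure from this article; only the efficiency values come from the text above.

```python
# Rough per-passenger fuel comparison for a transatlantic sector, using the
# passenger-miles-per-gallon figures quoted above. The 3,000-mile sector
# length is an assumed illustrative value, not a figure from the article.
pmpg = {
    "Concorde": 15.8,
    "Boeing 707": 33.3,
    "Boeing 747": 46.4,
    "McDonnell Douglas DC-10": 53.6,
}

sector_miles = 3000  # assumed illustrative transatlantic distance

for aircraft, efficiency in pmpg.items():
    gallons_per_passenger = sector_miles / efficiency
    print(f"{aircraft}: ~{gallons_per_passenger:.0f} gallons per passenger")
```

On these assumptions a Concorde passenger accounts for roughly three times the fuel of a Boeing 747 passenger over the same sector, which is consistent with the caution airlines showed after the 1973 oil crisis.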
The consortium received orders, i.e., non-binding options, for over 100 of the long-range version from the major airlines of the day: Pan Am, BOAC, and Air France were the launch customers, with six Concordes each. Other airlines in the order book included Panair do Brasil, Continental Airlines, Japan Airlines, Lufthansa, American Airlines, United Airlines, Air India, Air Canada, Braniff, Singapore Airlines, Iran Air, Olympic Airways, Qantas, CAAC Airlines, Middle East Airlines, and TWA. At the time of the first flight the options list contained 74 options from 16 airlines.
The design work was supported by a preceding research programme studying the flight characteristics of low-aspect-ratio delta wings. A supersonic Fairey Delta 2 was modified to carry the ogee planform and, renamed the BAC 221, was used for flight tests of the high-speed flight envelope, while the Handley Page HP.115 provided valuable information on low-speed performance.
Construction of two prototypes began in February 1965: 001, built by Aérospatiale at Toulouse, and 002, by BAC at Filton, Bristol. Concorde 001 made its first test flight from Toulouse on 2 March 1969, piloted by André Turcat, and first went supersonic on 1 October. The first UK-built Concorde flew from Filton to RAF Fairford on 9 April 1969, piloted by Brian Trubshaw. Both prototypes were presented to the public for the first time on 7–8 June 1969 at the Paris Air Show. As the flight programme progressed, 001 embarked on a sales and demonstration tour on 4 September 1971, which was also the first transatlantic crossing of Concorde. Concorde 002 followed suit on 2 June 1972 with a tour of the Middle and Far East. Concorde 002 made the first visit to the United States in 1973, landing at the new Dallas/Fort Worth Regional Airport to mark that airport's opening.
While Concorde had initially held a great deal of customer interest, the project was hit by a large number of order cancellations. The Paris Le Bourget air show crash of the competing Soviet Tupolev Tu-144 had shocked potential buyers, and public concern over the environmental issues presented by a supersonic aircraft—the sonic boom, take-off noise and pollution—had produced a shift in public opinion of SSTs. By 1976 four nations remained as prospective buyers: Britain, France, China, and Iran. Only Air France and British Airways (the successor to BOAC) took up their orders, with the two governments taking a cut of any profits made.
The United States government cut federal funding for the Boeing 2707, its rival supersonic transport programme, in 1971; Boeing did not complete its two 2707 prototypes. The US, India, and Malaysia all ruled out Concorde supersonic flights over the noise concern, although some of these restrictions were later relaxed. Professor Douglas Ross characterised restrictions placed upon Concorde operations by President Jimmy Carter's administration as having been an act of protectionism of American aircraft manufacturers.
Concorde is an ogival delta winged aircraft with four Olympus engines based on those employed in the RAF's Avro Vulcan strategic bomber. It is one of the few commercial aircraft to employ a tailless design (the Tupolev Tu-144 being another). Concorde was the first airliner to have a (in this case, analogue) fly-by-wire flight-control system; the avionics system Concorde used was unique because it was the first commercial aircraft to employ hybrid circuits. The principal designer for the project was Pierre Satre, with Sir Archibald Russell as his deputy.
Concorde pioneered the following technologies:
For high speed and optimisation of flight:
For weight-saving and enhanced performance:
A symposium titled "Supersonic-Transport Implications" was hosted by the Royal Aeronautical Society on 8 December 1960. Various views were put forward on the likely type of powerplant for a supersonic transport, such as podded or buried installation and turbojet or ducted-fan engines. Boundary layer management in the podded installation was put forward as simpler with only an inlet cone but Dr. Seddon of the RAE saw "a future in a more sophisticated integration of shapes" in a buried installation. Another concern highlighted the case with two or more engines situated behind a single intake. An intake failure could lead to a double or triple engine failure. The advantage of the ducted fan over the turbojet was reduced airport noise but with considerable economic penalties with its larger cross-section producing excessive drag. At that time it was considered that the noise from a turbojet optimised for supersonic cruise could be reduced to an acceptable level using noise suppressors as used on subsonic jets.
The powerplant configuration selected for Concorde, and its development to a certificated design, can be seen in light of the above symposium topics (which highlighted airfield noise, boundary layer management and interactions between adjacent engines) and the requirement that the powerplant, at Mach 2, tolerate combinations of pushovers, sideslips, pull-ups and throttle slamming without surging. Extensive development testing, with design changes and changes to intake and engine control laws, addressed most of the issues except airfield noise and the interaction between adjacent powerplants at speeds above Mach 1.6, which meant Concorde "had to be certified aerodynamically as a twin-engined aircraft above Mach 1.6".
Rolls-Royce had a design proposal, the RB.169, for the aircraft at the time of Concorde's initial design but "to develop a brand-new engine for Concorde would have been prohibitively expensive" so an existing engine, already flying in the TSR-2 prototype, was chosen. It was the Olympus 320 turbojet, a development of the Bristol engine first used for the Avro Vulcan bomber.
Great confidence was placed in being able to reduce the noise of a turbojet and massive strides by SNECMA in silencer design were reported during the programme. However, by 1974 the spade silencers which projected into the exhaust were reported to be ineffective. The Olympus Mk.622 with reduced jet velocity was proposed to reduce the noise but it was not developed.
Situated behind the leading edge of the wing, the engine intake had the wing boundary layer ahead of it. Two-thirds of this boundary layer was diverted, and the remaining third, which entered the intake, did not adversely affect intake efficiency except during pushovers, when the boundary layer thickened ahead of the intake and caused surging. Extensive wind tunnel testing helped define leading-edge modifications ahead of the intakes which solved the problem.
Each engine had its own intake, and the engine nacelles were paired with a splitter plate between them to minimise adverse behaviour of one powerplant influencing the other. Only above Mach 1.6 was an engine surge likely to affect the adjacent engine.
Concorde needed to fly long distances to be economically viable; this required high efficiency from the powerplant. Turbofan engines were rejected due to their larger cross-section producing excessive drag. Olympus turbojet technology was available to be developed to meet the design requirements of the aircraft, although turbofans would be studied for any future SST.
The aircraft used reheat (afterburners) at take-off and to pass through the upper transonic regime and on to supersonic speeds, between Mach 0.95 and 1.7. The afterburners were switched off at all other times. Because jet engines are highly inefficient at low speeds, Concorde burned of fuel (almost 2% of the maximum fuel load) taxiing to the runway. The fuel used was Jet A-1. Due to the high thrust produced even with the engines at idle, only the two outer engines were run after landing for easier taxiing and less brake pad wear – at low weights after landing, the aircraft would not remain stationary with all four engines idling, requiring the brakes to be continuously applied to prevent the aircraft from rolling.
The intake design for Concorde's engines was especially critical. The intakes had to provide low distortion levels (to prevent engine surge) and high efficiency for all likely ambient temperatures to be met in cruise. They had to provide adequate subsonic performance for diversion cruise and low engine-face distortion at take-off. They also had to provide an alternative path for excess intake air during engine throttling or shutdowns. The variable intake features required to meet all these requirements consisted of front and rear ramps, a dump door, an auxiliary inlet and a ramp bleed to the exhaust nozzle.
As well as supplying air to the engine, the intake also supplied air through the ramp bleed to the propelling nozzle. The nozzle ejector (or aerodynamic) design, with variable exit area and secondary flow from the intake, contributed to good expansion efficiency from take-off to cruise.
Engine failure causes problems on conventional subsonic aircraft; not only does the aircraft lose thrust on that side but the engine creates drag, causing the aircraft to yaw and bank in the direction of the failed engine. If this had happened to Concorde at supersonic speeds, it theoretically could have caused a catastrophic failure of the airframe. Although computer simulations predicted considerable problems, in practice Concorde could shut down both engines on the same side of the aircraft at Mach 2 without the predicted difficulties. During an engine failure the required air intake is virtually zero. So, on Concorde, engine failure was countered by the opening of the auxiliary spill door and the full extension of the ramps, which deflected the air downwards past the engine, gaining lift and minimising drag. Concorde pilots were routinely trained to handle double engine failure.
Concorde's Air Intake Control Units (AICUs) made use of a digital processor to provide the necessary accuracy for intake control. It was the world's first use of a digital processor to be given full authority control of an essential system in a passenger aircraft. It was developed by the Electronics and Space Systems (ESS) division of the British Aircraft Corporation after it became clear that the analogue AICUs fitted to the prototype aircraft and developed by Ultra Electronics were found to be insufficiently accurate for the tasks in hand.
Concorde's thrust-by-wire engine control system was developed by Ultra Electronics.
Air compression on the outer surfaces caused the cabin to heat up during flight. Every surface, such as windows and panels, was warm to the touch by the end of the flight. Besides engines, the hottest part of the structure of any supersonic aircraft is the nose, due to aerodynamic heating. The engineers used Hiduminium R.R. 58, an aluminium alloy, throughout the aircraft because of its familiarity, cost and ease of construction. The highest temperature that aluminium could sustain over the life of the aircraft was , which limited the top speed to Mach 2.02. Concorde went through two cycles of heating and cooling during a flight, first cooling down as it gained altitude, then heating up after going supersonic. The reverse happened when descending and slowing down. This had to be factored into the metallurgical and fatigue modelling. A test rig was built that repeatedly heated up a full-size section of the wing, and then cooled it, and periodically samples of metal were taken for testing. The Concorde airframe was designed for a life of 45,000 flying hours.
Owing to air compression in front of the plane as it travelled at supersonic speed, the fuselage heated up and expanded by as much as . The most obvious manifestation of this was a gap that opened up on the flight deck between the flight engineer's console and the bulkhead. On some aircraft that conducted a retiring supersonic flight, the flight engineers placed their caps in this expanded gap, wedging the caps in place when the gap shrank again. To keep the cabin cool, Concorde used the fuel as a heat sink for the heat from the air conditioning. The same method also cooled the hydraulics. During supersonic flight the surfaces forward of the cockpit became heated, and a visor was used to deflect much of this heat from directly reaching the cockpit.
Concorde had livery restrictions; the majority of the surface had to be covered with a highly reflective white paint to avoid overheating the aluminium structure due to heating effects from supersonic flight at Mach 2. The white finish reduced the skin temperature by . In 1996, Air France briefly painted F-BTSD in a predominantly blue livery, with the exception of the wings, in a promotional deal with Pepsi. In this paint scheme, Air France was advised to remain at for no more than 20 minutes at a time, but there was no restriction at speeds under Mach 1.7. F-BTSD was used because it was not scheduled for any long flights that required extended Mach 2 operations.
Due to its high speeds, large forces were applied to the aircraft during banks and turns, causing twisting and distortion of the aircraft's structure. In addition there were concerns over maintaining precise control at supersonic speeds. Both of these issues were resolved by actively changing the ratio between inboard and outboard elevon deflections as speed varied, including at supersonic speeds. Only the innermost elevons, which are attached to the stiffest area of the wings, were active at high speed. Additionally, the narrow fuselage meant that the aircraft flexed; this was visible from the viewpoint of the rear passengers.
When any aircraft passes the critical Mach number of its particular airframe, the centre of pressure shifts rearwards. This causes a pitch-down moment on the aircraft if the centre of gravity remains where it was. The engineers designed the wings in a specific manner to reduce this shift, but there was still a shift of about . This could have been countered by the use of trim controls, but at such high speeds this would have dramatically increased drag. Instead, the distribution of fuel along the aircraft was shifted during acceleration and deceleration to move the centre of gravity, effectively acting as an auxiliary trim control.
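The fuel-transfer trim described above is, at heart, a moment-balance calculation: pumping fuel rearwards moves the centre of gravity aft in proportion to the mass moved and the distance between the tanks. The sketch below illustrates only the principle; the aircraft mass, fuel quantity and tank separation are hypothetical placeholders, not Concorde data.

```python
# Minimal sketch of trimming by fuel transfer: moving a mass of fuel rearwards
# shifts the centre of gravity (CG) aft without deflecting any control surface.
# All numbers here are hypothetical placeholders, not Concorde figures.

def cg_shift(total_mass_kg, fuel_moved_kg, transfer_distance_m):
    """CG movement produced by shifting fuel_moved_kg through transfer_distance_m."""
    return fuel_moved_kg * transfer_distance_m / total_mass_kg

aircraft_mass = 180_000   # kg, hypothetical all-up mass
fuel_moved = 10_000       # kg pumped from forward to rear trim tanks
tank_separation = 30.0    # m between forward and rear tank centroids

print(f"CG moves aft by ~{cg_shift(aircraft_mass, fuel_moved, tank_separation):.2f} m")
```

The appeal of this scheme over trim surfaces, as the paragraph above notes, is that it produces no additional drag at supersonic speed.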
To fly non-stop across the Atlantic Ocean, Concorde required the greatest supersonic range of any aircraft. This was achieved by a combination of engines which were highly efficient at supersonic speeds, a slender fuselage with high fineness ratio, and a complex wing shape for a high lift-to-drag ratio. This also required carrying only a modest payload and a high fuel capacity, and the aircraft was trimmed with precision to avoid unnecessary drag.
Nevertheless, soon after Concorde began flying, a Concorde "B" model was designed with slightly larger fuel capacity and slightly larger wings with leading edge slats to improve aerodynamic performance at all speeds, with the objective of expanding the range to reach markets in new regions. It featured more powerful engines with sound deadening and without the fuel-hungry and noisy afterburner. It was speculated that it was reasonably possible to create an engine with up to 25% gain in efficiency over the Rolls-Royce/Snecma Olympus 593. This would have given additional range and a greater payload, making new commercial routes possible. This was cancelled due in part to poor sales of Concorde, but also to the rising cost of aviation fuel in the 1970s.
Concorde's high cruising altitude meant passengers received almost twice the flux of extraterrestrial ionising radiation as those travelling on a conventional long-haul flight. Upon Concorde's introduction, it was speculated that this exposure during supersonic travels would increase the likelihood of skin cancer. Due to the proportionally reduced flight time, the overall equivalent dose would normally be less than a conventional flight over the same distance. Unusual solar activity might lead to an increase in incident radiation. To prevent incidents of excessive radiation exposure, the flight deck had a radiometer and an instrument to measure the rate of decrease of radiation. If the radiation level became too high, Concorde would descend below .
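The "proportionally reduced flight time" argument is simple arithmetic: total dose is dose rate multiplied by exposure time, so roughly doubling the rate while more than halving the time still gives a slightly lower total. In the sketch below, the subsonic dose rate is an assumed round number; only the factor of two and the New York–Paris flight times given elsewhere in the article are taken from the text.

```python
# Why the total dose can still be lower: dose = rate x exposure time.
# The subsonic dose rate is an assumed illustrative value; the article states
# the rate at Concorde's altitude was roughly double, and gives the flight times.
subsonic_rate = 5.0                  # microsieverts/hour, assumed typical long-haul value
concorde_rate = 2 * subsonic_rate    # "almost twice the flux"

subsonic_hours = 8.0                 # New York-Paris, subsonic (per the article)
concorde_hours = 3.5                 # New York-Paris, Concorde (per the article)

print(f"Subsonic total dose: {subsonic_rate * subsonic_hours:.0f} uSv")
print(f"Concorde total dose: {concorde_rate * concorde_hours:.0f} uSv")
```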
Airliner cabins were usually maintained at a pressure equivalent to elevation. Concorde's pressurisation was set to an altitude at the lower end of this range, . Concorde's maximum cruising altitude was ; subsonic airliners typically cruise below .
A sudden reduction in cabin pressure is hazardous to all passengers and crew. Above , a sudden cabin depressurisation would leave a "time of useful consciousness" of up to 10–15 seconds for a conditioned athlete. At Concorde's altitude, the air density is very low; a breach of cabin integrity would result in a loss of pressure severe enough that the plastic emergency oxygen masks installed on other passenger jets would not be effective and passengers would soon suffer from hypoxia despite quickly donning them. Concorde was therefore equipped with smaller windows to reduce the rate of pressure loss in the event of a breach, a reserve air supply system to augment cabin air pressure, and a rapid descent procedure to bring the aircraft to a safe altitude. The FAA enforces minimum emergency descent rates for aircraft and, noting Concorde's higher operating altitude, concluded that the best response to pressure loss would be a rapid descent. Continuous positive airway pressure would have delivered pressurised oxygen directly to the pilots through masks.
While subsonic commercial jets took eight hours to fly from New York to Paris, the average supersonic flight time on the transatlantic routes was just under 3.5 hours. Concorde had a maximum cruise altitude of and an average cruise speed of , more than twice the speed of conventional aircraft.
With no other civil traffic operating at its cruising altitude of about , Concorde had exclusive use of dedicated oceanic airways, or "tracks", separate from the North Atlantic Tracks, the routes used by other aircraft to cross the Atlantic. Due to the significantly less variable nature of high altitude winds compared to those at standard cruising altitudes, these dedicated SST tracks had fixed co-ordinates, unlike the standard routes at lower altitudes, whose co-ordinates are replotted twice daily based on forecast weather patterns (jetstreams). Concorde would also be cleared in a block, allowing for a slow climb from during the oceanic crossing as the fuel load gradually decreased. In regular service, Concorde employed an efficient "cruise-climb" flight profile following take-off.
The delta-shaped wings required Concorde to adopt a higher angle of attack at low speeds than conventional aircraft, but it allowed the formation of large low pressure vortices over the entire upper wing surface, maintaining lift. The normal landing speed was . Because of this high angle, during a landing approach Concorde was on the "back side" of the drag force curve, where raising the nose would increase the rate of descent; the aircraft was thus largely flown on the throttle and was fitted with an autothrottle to reduce the pilot's workload.
Because of the way Concorde's delta-wing generated lift, the undercarriage had to be unusually strong and tall to allow for the angle of attack at low speed. At rotation, Concorde would rise to a high angle of attack, about 18 degrees. Prior to rotation the wing generated almost no lift, unlike typical aircraft wings. Combined with the high airspeed at rotation ( indicated airspeed), this increased the stresses on the main undercarriage in a way that was initially unexpected during the development and required a major redesign. Due to the high angle needed at rotation, a small set of wheels was added aft to prevent tailstrikes. The main undercarriage units swing towards each other to be stowed but due to their great height also need to contract in length telescopically before swinging to clear each other when stowed. The four main wheel tyres on each bogie unit are inflated to . The twin-wheel nose undercarriage retracts forwards and its tyres are inflated to a pressure of , and the wheel assembly carries a spray deflector to prevent standing water being thrown up into the engine intakes. The tyres are rated to a maximum speed on the runway of . The starboard nose wheel carries a single disc brake to halt wheel rotation during retraction of the undercarriage. The port nose wheel carries speed generators for the anti-skid braking system which prevents brake activation until nose and main wheels rotate at the same rate.
Additionally, due to the high average take-off speed of , Concorde needed upgraded brakes. Like most airliners, Concorde had anti-skid braking – a system which prevents the tyres from losing traction when the brakes are applied, giving greater control during roll-out. The brakes, developed by Dunlop, were the first carbon-based brakes used on an airliner. The use of carbon over equivalent steel brakes provided a weight saving of . Each wheel had multiple discs which were cooled by electric fans. Wheel sensors included brake overload, brake temperature, and tyre deflation. After a typical landing at Heathrow, brake temperatures were around . Landing Concorde required a minimum of runway length, this in fact being considerably less than the shortest runway Concorde ever actually landed on, that of Cardiff Airport.
Concorde's drooping nose, developed by Marshall's of Cambridge at Cambridge Airport, enabled the aircraft to switch between being streamlined, to reduce drag and achieve optimal aerodynamic efficiency in flight, and giving the pilot an unobstructed view during taxi, take-off, and landing operations. Because of the high angle of attack at low speed, the long pointed nose would otherwise have obstructed the view, necessitating the ability to droop. The droop nose was accompanied by a moving visor that retracted into the nose prior to it being lowered. When the nose was raised to horizontal, the visor rose in front of the cockpit windscreen for aerodynamic streamlining.
A controller in the cockpit allowed the visor to be retracted and the nose to be lowered to 5° below the standard horizontal position for taxiing and take-off. Following take-off and after clearing the airport, the nose and visor were raised. Prior to landing, the visor was again retracted and the nose lowered to 12.5° below horizontal for maximal visibility. Upon landing the nose was raised to the 5° position to avoid the possibility of damage.
The US Federal Aviation Administration had objected to the restrictive visibility of the visor used on the first two prototype Concordes, which had been designed before a suitable high-temperature window glass had become available; the visor therefore required alteration before the FAA would permit Concorde to serve US airports. This led to the redesigned visor used on the production and the four pre-production aircraft (101, 102, 201, and 202). The nose window and visor glass, which needed to endure temperatures in excess of during supersonic flight, were developed by Triplex.
"Concorde 001" was modified with rooftop portholes for use on the 1973 Solar Eclipse mission and equipped with observation instruments. It performed the longest observation of a solar eclipse to date, about 74 minutes.
Scheduled flights began on 21 January 1976 on the London–Bahrain and Paris–Rio de Janeiro (via Dakar) routes, with BA flights using the "Speedbird Concorde" call sign to notify air traffic control of the aircraft's unique abilities and restrictions, while Air France used its normal call signs. The Paris–Caracas route (via the Azores) began on 10 April. The US Congress had just banned Concorde landings in the US, mainly due to citizen protest over sonic booms, preventing launch on the coveted North Atlantic routes. The US Secretary of Transportation, William Coleman, gave permission for Concorde service to Washington Dulles International Airport, and Air France and British Airways simultaneously began a thrice-weekly service to Dulles on 24 May 1976. Due to low demand, Air France cancelled its Washington service in October 1982, while British Airways cancelled it in November 1994.
When the US ban on JFK Concorde operations was lifted in February 1977, New York banned Concorde locally. The ban came to an end on 17 October 1977 when the Supreme Court of the United States declined to overturn a lower court's ruling rejecting efforts by the Port Authority of New York and New Jersey and a grass-roots campaign led by Carol Berman to continue the ban. In spite of complaints about noise, the noise report noted that Air Force One, at the time a Boeing VC-137, was louder than Concorde at subsonic speeds and during take-off and landing. Scheduled service from Paris and London to New York's John F. Kennedy Airport began on 22 November 1977.
In 1977, British Airways and Singapore Airlines shared a Concorde for flights between London and Singapore International Airport at Paya Lebar via Bahrain. The aircraft, BA's Concorde G-BOAD, was painted in Singapore Airlines livery on the port side and British Airways livery on the starboard side. The service was discontinued after three return flights because of noise complaints from the Malaysian government; it could only be reinstated on a new route bypassing Malaysian airspace in 1979. A dispute with India prevented Concorde from reaching supersonic speeds in Indian airspace, so the route was eventually declared not viable and discontinued in 1980.
During the Mexican oil boom, Air France flew Concorde twice weekly to Mexico City's Benito Juárez International Airport via Washington, DC, or New York City, from September 1978 to November 1982. The worldwide economic crisis during that period resulted in this route's cancellation; the last flights were almost empty. The routing between Washington or New York and Mexico City included a deceleration, from Mach 2.02 to Mach 0.95, to cross Florida subsonically and avoid creating a sonic boom over the state; Concorde then re-accelerated back to high speed while crossing the Gulf of Mexico. On 1 April 1989, on an around-the-world luxury tour charter, British Airways implemented changes to this routing that allowed G-BOAF to maintain Mach 2.02 by passing around Florida to the east and south. Periodically Concorde visited the region on similar chartered flights to Mexico City and Acapulco.
From December 1978 to May 1980, Braniff International Airways leased 11 Concordes, five from Air France and six from British Airways. These were used on subsonic flights between Dallas-Fort Worth and Washington Dulles International Airport, flown by Braniff flight crews. Air France and British Airways crews then took over for the continuing supersonic flights to London and Paris. The aircraft were registered in both the United States and their home countries; the European registration was covered while being operated by Braniff, retaining full AF/BA liveries. The flights were not profitable and typically less than 50% booked, forcing Braniff to end its tenure as the only US Concorde operator in May 1980.
In its early years, the British Airways Concorde service had a greater number of "no shows" (passengers who booked a flight and then failed to appear at the gate for boarding) than any other aircraft in the fleet.
Following the launch of British Airways Concorde services, Britain's other major airline, British Caledonian (BCal), set up a task force headed by Gordon Davidson, BA's former Concorde director, to investigate the possibility of their own Concorde operations. This was seen as particularly viable for the airline's long-haul network as there were two unsold aircraft then available for purchase.
One important reason for BCal's interest in Concorde was that the British Government's 1976 aviation policy review had opened the possibility of BA setting up supersonic services in competition with BCal's established sphere of influence. To counteract this potential threat, BCal considered their own independent Concorde plans, as well as a partnership with BA. BCal were considered most likely to have set up a Concorde service on the Gatwick–Lagos route, a major source of revenue and profits within BCal's scheduled route network; BCal's Concorde task force did assess the viability of a daily supersonic service complementing the existing subsonic widebody service on this route.
BCal entered into a bid to acquire at least one Concorde. However, BCal eventually arranged for two aircraft to be leased from BA and Aérospatiale respectively, to be maintained by either BA or Air France. BCal's envisaged two-Concorde fleet would have required a high level of aircraft usage to be cost-effective; therefore, BCal had decided to operate the second aircraft on a supersonic service between Gatwick and Atlanta, with a stopover at either Gander or Halifax. Consideration was given to services to Houston and various points on its South American network at a later stage. Both supersonic services were to be launched at some point during 1980; however, steeply rising oil prices caused by the 1979 energy crisis led to BCal shelving their supersonic ambitions.
By around 1981 in the UK, the future for Concorde looked bleak. The British government had lost money operating Concorde every year, and moves were afoot to cancel the service entirely. A cost projection came back with greatly reduced metallurgical testing costs because the test rig for the wings had built up enough data to last for 30 years and could be shut down. Despite this, the government was not keen to continue. In 1983, BA's managing director, Sir John King, convinced the government to sell the aircraft outright to the then state-owned British Airways for £16.5 million plus the first year's profits. In 2003, Lord Heseltine, who was the Minister responsible at the time, revealed to Alan Robb on BBC Radio 5 Live, that the aircraft had been sold for "next to nothing". Asked by Robb if it was the worst deal ever negotiated by a government minister, he replied "That is probably right. But if you have your hands tied behind your back and no cards and a very skillful negotiator on the other side of the table... I defy you to do any [better]." British Airways was subsequently privatised in 1987.
In 1983, Pan American accused the British Government of subsidising British Airways Concorde air fares, on which a return London–New York was £2,399 (£ in prices), compared to £1,986 (£) with a subsonic first class return, and London–Washington return was £2,426 (£) instead of £2,258 (£) subsonic.
Research revealed that passengers thought that the fare was higher than it actually was, so the airline raised ticket prices to match these perceptions. It is reported that British Airways then ran Concorde at a profit.
Its estimated operating costs were $3,800 per block hour in 1972, compared to actual 1971 operating costs of $1,835 for a 707 and $3,500 for a 747; for a 3,050 nmi London–New York sector, a 707 cost $13,750 or 3.04c per seat/nmi, a 747 $26,200 or 2.4c per seat/nmi and the Concorde $14,250 or 4.5c per seat/nmi.
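As a cross-check on those 1972 figures, dividing each quoted sector cost by the cost per seat-nautical-mile times the 3,050 nmi sector length gives the seat count each comparison implicitly assumes. The derived seat counts below are back-calculations, not numbers from the source.

```python
# Cross-check of the quoted 1972 cost comparison: dividing the sector cost by
# (cost per seat-nmi x sector length) gives the implied seat count each figure
# assumes. The derived seat counts are back-calculations, not sourced numbers.
sector_nmi = 3050
figures = {
    "Boeing 707": (13_750, 0.0304),   # (sector cost in $, $ per seat-nmi)
    "Boeing 747": (26_200, 0.0240),
    "Concorde":   (14_250, 0.0450),
}

for aircraft, (sector_cost, per_seat_nmi) in figures.items():
    seats = sector_cost / (per_seat_nmi * sector_nmi)
    print(f"{aircraft}: implied ~{seats:.0f} seats")
```

The roughly 100-seat figure implied for Concorde matches its actual capacity, suggesting the quoted per-seat costs assume a full cabin.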
Between March 1984 and March 1991, British Airways flew a thrice-weekly Concorde service between London and Miami, stopping at Washington Dulles International Airport. Until 2003, Air France and British Airways continued to operate the New York services daily. From 1987 to 2003 British Airways flew a Saturday morning Concorde service to Grantley Adams International Airport, Barbados, during the summer and winter holiday season.
Prior to the Air France Paris crash, several UK and French tour operators operated charter flights to European destinations on a regular basis; the charter business was viewed as lucrative by British Airways and Air France.
In 1997, British Airways held a promotional contest to mark the 10th anniversary of the airline's move into the private sector. The promotion was a lottery for 190 tickets to fly to New York, each valued at £5,400 but offered at £10. Contestants had to call a special hotline, competing against up to 20 million other entrants.
On 10 April 2003, Air France and British Airways simultaneously announced they would retire Concorde later that year. They cited low passenger numbers following the 25 July 2000 crash, the slump in air travel following the September 11 attacks, and rising maintenance costs: Airbus (the company that acquired Aerospatiale in 2000) had made a decision in 2003 to no longer supply replacement parts for the aircraft. Although Concorde was technologically advanced when introduced in the 1970s, 30 years later, its analogue cockpit was outdated. There had been little commercial pressure to upgrade Concorde due to a lack of competing aircraft, unlike other airliners of the same era such as the Boeing 747. By its retirement, it was the last aircraft in the British Airways fleet that had a flight engineer; other aircraft, such as the modernised 747-400, had eliminated the role.
On 11 April 2003, Virgin Atlantic founder Sir Richard Branson announced that the company was interested in purchasing British Airways' Concorde fleet "for the same price that they were given them for – one pound". British Airways dismissed the idea, prompting Virgin to increase their offer to £1 million each. Branson claimed that when BA was privatised, a clause in the agreement required them to allow another British airline to operate Concorde if BA ceased to do so, but the Government denied the existence of such a clause. In October 2003, Branson wrote in "The Economist" that his final offer was "over £5 million" and that he had intended to operate the fleet "for many years to come". The chances for keeping Concorde in service were stifled by Airbus's lack of support for continued maintenance.
It has been suggested that Concorde was not withdrawn for the reasons usually given but that it became apparent during the grounding of Concorde that the airlines could make more profit carrying first-class passengers subsonically. A lack of commitment to Concorde from Director of Engineering Alan MacDonald was cited as having undermined BA's resolve to continue operating Concorde.
Other reasons why the attempted revival of Concorde never happened relate to the fact that the narrow fuselage did not allow for "luxury" features of subsonic air travel such as moving space, reclining seats and overall comfort. In the words of "The Guardian"'s Dave Hall, "Concorde was an outdated notion of prestige that left sheer speed the only luxury of supersonic travel."
Air France made its final commercial Concorde landing in the United States on 30 May 2003, arriving in New York City from Paris. Air France's final Concorde flight took place on 27 June 2003 when F-BVFC retired to Toulouse.
An auction of Concorde parts and memorabilia for Air France was held at Christie's in Paris on 15 November 2003; 1,300 people attended, and several lots exceeded their predicted values. French Concorde F-BVFC was retired to Toulouse and kept functional for a short time after the end of service, in case taxi runs were required in support of the French judicial enquiry into the 2000 crash. The aircraft is now fully retired and no longer functional.
French Concorde F-BTSD has been retired to the "Musée de l'Air" at Paris–Le Bourget Airport near Paris; unlike the other museum Concordes, a few of its systems are being kept functional. For instance, the famous "droop nose" can still be lowered and raised. This has led to rumours that the aircraft could be prepared for future flights on special occasions.
French Concorde F-BVFB is at the Auto & Technik Museum Sinsheim at Sinsheim, Germany, after its last flight from Paris to Baden-Baden, followed by a spectacular transport to Sinsheim via barge and road. The museum also has a Tupolev Tu-144 on display – this is the only place where both supersonic airliners can be seen together.
In 1989, Air France signed a letter of agreement to donate a Concorde to the National Air and Space Museum in Washington D.C. upon the aircraft's retirement. On 12 June 2003, Air France honoured that agreement, donating Concorde F-BVFA (serial 205) to the Museum upon the completion of its last flight. This aircraft was the first Air France Concorde to open service to Rio de Janeiro, Washington, D.C., and New York and had flown 17,824 hours. It is on display at the Smithsonian's Steven F. Udvar-Hazy Center at Dulles Airport.
British Airways conducted a North American farewell tour in October 2003. G-BOAG visited Toronto Pearson International Airport on 1 October, after which it flew to New York's John F. Kennedy International Airport. G-BOAD visited Boston's Logan International Airport on 8 October, and G-BOAG visited Washington Dulles International Airport on 14 October.
In a week of farewell flights around the United Kingdom, Concorde visited Birmingham on 20 October, Belfast on 21 October, Manchester on 22 October, Cardiff on 23 October, and Edinburgh on 24 October. Each day the aircraft flew out from Heathrow to one of these cities and back, often overflying the city at low altitude. On 22 October, both Concorde flight BA9021C, a special from Manchester, and BA002 from New York landed simultaneously on Heathrow's two runways. On 23 October 2003, the Queen consented to the illumination of Windsor Castle, an honour reserved for state events and visiting dignitaries, as Concorde's last west-bound commercial flight departed London.
British Airways retired its Concorde fleet on 24 October 2003. G-BOAG left New York to a fanfare similar to that given for Air France's F-BTSD, while two more made round trips, G-BOAF over the Bay of Biscay, carrying VIP guests including former Concorde pilots, and G-BOAE to Edinburgh. The three aircraft then circled over London, having received special permission to fly at low altitude, before landing in sequence at Heathrow. The captain of the New York to London flight was Mike Bannister. The final flight of a Concorde in the US occurred on 5 November 2003 when G-BOAG flew from New York's JFK Airport to Seattle's Boeing Field to join the Museum of Flight's permanent collection. The plane was piloted by Mike Bannister and Les Broadie, who claimed a flight time of three hours, 55 minutes and 12 seconds, a record between the two cities. The museum had been pursuing a Concorde for their collection since 1984. The final flight of a Concorde worldwide took place on 26 November 2003 with a landing at Filton, Bristol, UK.
All of BA's Concorde fleet have been grounded, drained of hydraulic fluid and had their airworthiness certificates withdrawn. Jock Lowe, ex-chief Concorde pilot and manager of the fleet, estimated in 2004 that it would cost £10–15 million to make G-BOAF airworthy again. BA maintain ownership and have stated that the aircraft will not fly again due to a lack of support from Airbus. On 1 December 2003, Bonhams held an auction of British Airways Concorde artefacts, including a nose cone, at Kensington Olympia in London. Proceeds of around £750,000 were raised, with the majority going to charity. G-BOAD is currently on display at the Intrepid Sea, Air & Space Museum in New York. In 2007, BA announced that the advertising spot at Heathrow where a 40% scale model of Concorde was located would not be retained; the model is now on display at the Brooklands Museum, in Surrey, England.
Concorde G-BBDG was used for test flying and trials work. It was retired in 1981 and then only used for spares. It was dismantled and transported by road from Filton to the Brooklands Museum in Surrey where it was restored from essentially a shell. It remains open to visitors to the museum.
Concorde "G-BOAB", nicknamed "Alpha Bravo", was never modified and returned to service with the rest of British Airways' fleet, and has remained at London Heathrow Airport since its final flight, a ferry flight from JFK in 2000. Although the aircraft was effectively retired, G-BOAB was used as a test aircraft for the Project Rocket interiors that were in the process of being added to the rest of BA's fleet.
One of the youngest Concordes (F-BTSD) is on display at Le Bourget Air and Space Museum in Paris. In February 2010, it was announced that the museum and a group of volunteer Air France technicians intend to restore F-BTSD so it can taxi under its own power. In May 2010, it was reported that the British Save Concorde Group and French Olympus 593 groups had begun inspecting the engines of a Concorde at the French museum; their intent was to restore the airliner to a condition where it could fly in demonstrations.
G-BOAF forms the centrepiece of the Aerospace Bristol museum at Filton, which opened to the public in 2017.
On 15 September 2015, Club Concorde announced it had secured over £160 million to return an aircraft to service. Club Concorde President Paul James said:
The organisation aims to buy the Concorde currently on display at Le Bourget airport. A tentative date of 2019 had been put forward for the return to flight—50 years after its maiden journey. However, due to regulatory and technical hurdles, some of the aviation community are highly skeptical of the plan, including former Concorde captain and Club Concorde co-founder William "Jock" Lowe, who was quoted in June 2016 saying:
On 25 July 2000, Air France Flight 4590, registration F-BTSC, crashed in Gonesse, France, after departing from Charles de Gaulle Airport en route to John F. Kennedy International Airport in New York City, killing all 100 passengers and nine crew members on board as well as four people on the ground. It was the only fatal accident involving Concorde.
According to the official investigation conducted by the "Bureau d'Enquêtes et d'Analyses pour la Sécurité de l'Aviation Civile" (BEA), the crash was caused by a metallic strip that had fallen from a Continental Airlines DC-10 that had taken off minutes earlier. This fragment punctured a tyre on Concorde's left main wheel bogie during take-off. The tyre exploded, and a piece of rubber hit the fuel tank, which caused a fuel leak and led to a fire. The crew shut down engine number 2 in response to a fire warning, and with engine number 1 surging and producing little power, the aircraft was unable to gain altitude or speed. The aircraft entered a rapid pitch-up then a sudden descent, rolling left and crashing tail-low into the Hôtelissimo Les Relais Bleus Hotel in Gonesse.
The claim that a metallic strip caused the crash was disputed during the trial both by witnesses (including the pilot of then French President Jacques Chirac's aircraft that had just landed on an adjacent runway when Flight 4590 caught fire) and by an independent French TV investigation that found a wheel spacer had not been installed in the left-side main gear and that the plane caught fire some 1,000 feet from where the metallic strip lay. British investigators and former French Concorde pilots looked at several other possibilities that the BEA report ignored, including an unbalanced weight distribution in the fuel tanks and loose landing gear. They came to the conclusion that the Concorde veered off course on the runway, which reduced takeoff speed below the crucial minimum. John Hutchinson, who had served as a Concorde captain for 15 years with British Airways, said "the fire on its own should have been 'eminently survivable; the pilot should have been able to fly his way out of trouble'", had it not been for a "lethal combination of operational error and 'negligence' by the maintenance department of Air France" that "nobody wants to talk about".
On 6 December 2010, Continental Airlines and John Taylor, a mechanic who installed the metal strip, were found guilty of involuntary manslaughter; however, on 30 November 2012, a French court overturned the conviction, saying mistakes by Continental and Taylor did not make them criminally responsible.
Before the accident, Concorde had been arguably the safest operational passenger airliner in the world, with zero passenger deaths per kilometre travelled; but there had been two prior non-fatal accidents and a rate of tyre damage some 30 times higher than that of subsonic airliners from 1995 to 2000. Safety improvements were made in the wake of the crash, including more secure electrical controls, Kevlar lining on the fuel tanks and specially developed burst-resistant tyres. The first flight with the modifications departed from London Heathrow on 17 July 2001, piloted by BA Chief Concorde Pilot Mike Bannister. During the 3-hour 20-minute flight over the mid-Atlantic towards Iceland, Bannister attained Mach 2.02 before returning to RAF Brize Norton. The test flight, intended to resemble the London–New York route, was declared a success and was watched on live TV, and by crowds on the ground at both locations.
The first flight with passengers after the accident took place on 11 September 2001, landing shortly before the World Trade Center attacks in the US. This was not a commercial flight: all the passengers were BA employees. Normal commercial operations resumed on 7 November 2001 by BA and AF (aircraft G-BOAE and F-BTSD), with service to New York JFK, where Mayor Rudy Giuliani greeted the passengers.
Concorde had suffered two previous non-fatal accidents that were similar to each other.
Of the 20 aircraft built, 18 remain in good condition. Many are on display at museums in the United Kingdom, France, the United States, Germany, and Barbados.
The only supersonic airliner in direct competition with Concorde was the Soviet Tupolev Tu-144, nicknamed "Concordski" by Western European journalists for its outward similarity to Concorde. It had been alleged that Soviet espionage efforts had resulted in the theft of Concorde blueprints, supposedly to assist in the design of the Tu-144. As a result of a rushed development programme, the first Tu-144 prototype was substantially different from the preproduction machines, but both were cruder than Concorde. The Tu-144S had a significantly shorter range than Concorde. Jean Rech of Sud Aviation attributed this to two things: a very heavy powerplant with an intake twice as long as that on Concorde, and low-bypass turbofan engines with too high a bypass ratio, which needed afterburning for cruise. The aircraft had poor control at low speeds because of a simpler supersonic wing design; in addition the Tu-144 required braking parachutes to land while Concorde used anti-lock brakes. The Tu-144 had two crashes, one at the 1973 Paris Air Show, and another during a pre-delivery test flight in May 1978.
Later production Tu-144 versions were more refined and competitive. They had retractable canards for better low-speed control, turbojet engines providing nearly the fuel efficiency and range of Concorde and a top speed of Mach 2.35. Passenger service commenced in November 1977, but after the 1978 crash the aircraft was taken out of passenger service after only 55 flights, which carried an average of 58 passengers. The aircraft had an inherently unsafe structural design as a consequence of an automated production method chosen to simplify and speed up manufacturing.
The American designs, the "SST" project (for Supersonic Transport) were the Boeing 2707 and the Lockheed L-2000. These were to have been larger, with seating for up to 300 people. Running a few years behind Concorde, the Boeing 2707 was redesigned to a cropped delta layout; the extra cost of these changes helped to kill the project. The operation of US military aircraft such as the Mach 3+ North American XB-70 Valkyrie prototypes and Convair B-58 Hustler strategic nuclear bomber had shown that sonic booms were quite capable of reaching the ground, and the experience from the Oklahoma City sonic boom tests led to the same environmental concerns that hindered the commercial success of Concorde. The American government cancelled its SST project in 1971, after having spent more than $1 billion.
The only other large supersonic aircraft comparable to Concorde are strategic bombers, principally the Russian Tu-22, Tu-22M, M-50 (experimental), T-4 (experimental), Tu-160 and the American XB-70 (experimental) and B-1.
Before Concorde's flight trials, developments in the civil aviation industry were largely accepted by governments and their respective electorates. Opposition to Concorde's noise, particularly on the east coast of the United States, forged a new political agenda on both sides of the Atlantic, with scientists and technology experts across a multitude of industries beginning to take the environmental and social impact more seriously. Although Concorde led directly to the introduction of a general noise abatement programme for aircraft flying out of John F. Kennedy Airport, many found that Concorde was quieter than expected, partly due to the pilots temporarily throttling back their engines to reduce noise during overflight of residential areas. Even before commercial flights started, it had been claimed that Concorde was quieter than many other aircraft. In 1971, BAC's technical director was quoted as saying, "It is certain on present evidence and calculations that in the airport context, production Concordes will be no worse than aircraft now in service and will in fact be better than many of them."
Concorde produced nitrogen oxides in its exhaust, which, despite complicated interactions with other ozone-depleting chemicals, are understood to result in degradation to the ozone layer at the stratospheric altitudes it cruised. It has been pointed out that other, lower-flying, airliners produce ozone during their flights in the troposphere, but vertical transit of gases between the layers is restricted. The small fleet meant overall ozone-layer degradation caused by Concorde was negligible. In 1995, David Fahey, of the National Oceanic and Atmospheric Administration in the United States, warned that a fleet of 500 supersonic aircraft with exhausts similar to Concorde might produce a 2 percent drop in global ozone levels, much higher than previously thought. Each 1 percent drop in ozone is estimated to increase the incidence of non-melanoma skin cancer worldwide by 2 percent. Dr Fahey said if these particles are produced by highly oxidised sulphur in the fuel, as he believed, then removing sulphur in the fuel will reduce the ozone-destroying impact of supersonic transport.
Concorde's technical leap forward boosted the public's understanding of conflicts between technology and the environment as well as awareness of the complex decision analysis processes that surround such conflicts. In France, the use of acoustic fencing alongside TGV tracks might not have been achieved without the 1970s controversy over aircraft noise. In the UK, the CPRE has issued tranquillity maps since 1990.
Concorde was normally perceived as a privilege of the rich, but special circular or one-way (with return by other flight or ship) charter flights were arranged to bring a trip within the means of moderately well-off enthusiasts.
The aircraft was usually referred to by the British as simply "Concorde". In France it was known as "le Concorde", with the definite article "le" used in French grammar to introduce the name of a ship or aircraft, and the capital letter distinguishing the proper name from the common noun of the same spelling. In French, the common noun "concorde" means "agreement", "harmony" or "peace". Concorde's pilots, and British Airways in official publications, often refer to Concorde in both the singular and the plural as "she" or "her".
As a symbol of national pride, an example from the BA fleet made occasional flypasts at selected Royal events, major air shows and other special occasions, sometimes in formation with the Red Arrows. On the final day of commercial service, public interest was so great that grandstands were erected at Heathrow Airport. Significant numbers of people attended the final landings; the event received widespread media coverage.
In 2006, 37 years after its first test flight, Concorde was announced the winner of the Great British Design Quest organised by the BBC and the Design Museum. A total of 212,000 votes were cast with Concorde beating other British design icons such as the Mini, mini skirt, Jaguar E-Type, Tube map, the World Wide Web, K2 telephone box and the Supermarine Spitfire.
The French and British heads of state and government flew on Concorde many times. Presidents Georges Pompidou, Valéry Giscard d'Estaing and François Mitterrand regularly used Concorde as the French flagship aircraft on foreign visits. Queen Elizabeth II and Prime Ministers Edward Heath, Jim Callaghan, Margaret Thatcher, John Major and Tony Blair flew on Concorde on charter flights, such as the Queen's trips to Barbados for her Silver Jubilee in 1977, in 1987 and in 2003, to the Middle East in 1984 and to the United States in 1991. Pope John Paul II flew on Concorde in May 1989.
Concorde sometimes made special flights for demonstrations, air shows (such as the Farnborough, Paris-Le Bourget, Oshkosh AirVenture and MAKS air shows) as well as parades and celebrations (for example, of Zurich Airport's anniversary in 1998). The aircraft were also used for private charters (including by the President of Zaire, Mobutu Sese Seko, on multiple occasions), for advertising companies (including for the firm OKI), for Olympic torch relays (1992 Winter Olympics in Albertville) and for observing solar eclipses, including the solar eclipse of 30 June 1973 and again the total solar eclipse of 11 August 1999.
The fastest transatlantic airliner flight was from New York JFK to London Heathrow on 7 February 1996 by the British Airways G-BOAD in 2 hours, 52 minutes, 59 seconds from take-off to touchdown aided by a 175 mph (282 km/h) tailwind. On 13 February 1985, a Concorde charter flight flew from London Heathrow to Sydney—on the opposite side of the world—in a time of 17 hours, 3 minutes and 45 seconds, including refuelling stops.
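For a sense of what that record time means as an average speed, the short calculation below divides an assumed great-circle distance of roughly 3,000 nautical miles between JFK and Heathrow by the elapsed time quoted above. The distance is an approximation introduced for illustration, not a sourced figure; only the elapsed time and the tailwind context come from the text.

```python
# Average block speed of the record JFK-Heathrow crossing. The great-circle
# distance of roughly 3,000 nautical miles is an assumed approximation; the
# elapsed time of 2 h 52 min 59 s is taken from the text.
distance_nmi = 3000                      # assumed approximate great-circle distance
elapsed_hours = 2 + 52/60 + 59/3600      # 2:52:59 from take-off to touchdown

avg_knots = distance_nmi / elapsed_hours
print(f"Average block speed: ~{avg_knots:.0f} kn (~{avg_knots * 1.852:.0f} km/h)")
```

On these assumptions the average speed over the whole flight, including the subsonic climb and descent, works out at roughly 1,900 km/h, helped by the strong tailwind noted above.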
Concorde also set other records, including the official FAI "Westbound Around the World" and "Eastbound Around the World" world air speed records. On 12–13 October 1992, in commemoration of the 500th anniversary of Columbus' first New World landing, Concorde Spirit Tours (US) chartered Air France Concorde F-BTSD and circumnavigated the world in 32 hours 49 minutes and 3 seconds, from Lisbon, Portugal, including six refuelling stops at Santo Domingo, Acapulco, Honolulu, Guam, Bangkok, and Bahrain.
The eastbound record was set by the same Air France Concorde (F-BTSD) under charter to Concorde Spirit Tours in the US on 15–16 August 1995. This promotional flight circumnavigated the world from New York/JFK International Airport in 31 hours 27 minutes 49 seconds, including six refuelling stops at Toulouse, Dubai, Bangkok, Andersen AFB in Guam, Honolulu, and Acapulco. By its 30th flight anniversary on 2 March 1999 Concorde had clocked up 920,000 flight hours, with more than 600,000 supersonic, many more than all of the other supersonic aircraft in the Western world combined.
On its way to the Museum of Flight in November 2003, G-BOAG set a New York City-to-Seattle speed record of 3 hours, 55 minutes, and 12 seconds. Because of the restrictions on supersonic overflight within the US, the Canadian authorities granted permission for the majority of the journey to be flown supersonically over sparsely populated Canadian territory. | https://en.wikipedia.org/wiki?curid=7045 |
Cannon
A cannon is a type of gun classified as artillery that launches a projectile using propellant. In the past, gunpowder was the primary propellant before the invention of smokeless powder during the late 19th century. Cannon vary in caliber, range, mobility, rate of fire, angle of fire, and firepower; different forms of cannon combine and balance these attributes in varying degrees, depending on their intended use on the battlefield. The word "cannon" is derived from several languages, in which the original definition can usually be translated as "tube", "cane", or "reed". In the modern era, the term "cannon" has fallen into decline, replaced by "guns" or "artillery" if not a more specific term such as howitzer or mortar, except for high calibre automatic weapons firing bigger rounds than machine guns, called autocannons.
The earliest known depiction of cannon appeared in Song dynasty China as early as the 12th century; however, solid archaeological and documentary evidence of cannon do not appear until the 13th century. In 1288 Yuan dynasty troops are recorded to have used hand cannon in combat, and the earliest extant cannon bearing a date of production comes from the same period. By the early 14th century, depictions of cannon had appeared in the Middle East and Europe, and recorded usage of cannon began appearing almost immediately. By the end of the 14th century cannon were widespread throughout Eurasia. Cannon were used primarily as anti-infantry weapons until around 1374, when cannon were recorded to have breached walls for the first time in Europe. Cannon featured prominently as siege weapons, and ever larger pieces appeared. In 1464 a 16,000 kg (35,000 lb) cannon known as the Great Turkish Bombard was created in the Ottoman Empire. Cannon as field artillery became more important after 1453 with the introduction of the limber, which greatly improved cannon maneuverability and mobility. European cannon reached their longer, lighter, more accurate, and more efficient "classic form" around 1480. This classic European cannon design stayed relatively consistent in form, with minor changes, until the 1750s.
"Cannon" is derived from the Old Italian word "cannone", meaning "large tube", which came from Latin "canna", in turn originating from the Greek κάννα ("kanna"), "reed", and then generalised to mean any hollow tube-like object; cognate with Akkadian "qanu(m)" and Hebrew "qāneh", "tube, reed". The word has been used to refer to a gun since 1326 in Italy, and 1418 in England. Both "cannons" and "cannon" are correct and in common usage, with one or the other having preference in different parts of the English-speaking world. "Cannons" is more common in North America and Australia, while "cannon" as plural is more common in the United Kingdom.
The cannon may have appeared as early as the 12th century in China, and was probably a parallel development or evolution of the fire-lance, a short ranged anti-personnel weapon combining a gunpowder-filled tube and a polearm of some sort. Co-viative projectiles such as iron scraps or porcelain shards were placed in fire lance barrels at some point, and eventually, the paper and bamboo materials of fire lance barrels were replaced by metal.
The earliest known depiction of a cannon is a sculpture from the Dazu Rock Carvings in Sichuan dated to 1128; however, the earliest archaeological samples and textual accounts do not appear until the 13th century. The primary extant specimens of cannon from the 13th century are the Wuwei Bronze Cannon dated to 1227, the Heilongjiang hand cannon dated to 1288, and the Xanadu Gun dated to 1298. However, only the Xanadu Gun contains an inscription bearing a date of production, so it is considered the earliest confirmed extant cannon. The Xanadu Gun is 34.7 cm in length and weighs 6.2 kg. The other cannon are dated using contextual evidence. The Heilongjiang hand cannon is often considered the oldest firearm, since it was unearthed near the area where the History of Yuan reports a battle took place involving hand cannon. According to the History of Yuan, in 1288, a Jurchen commander by the name of Li Ting led troops armed with hand cannon into battle against the rebel prince Nayan.
Chen Bingying argues there were no guns before 1259 while Dang Shoushan believes the Wuwei gun and other Western Xia era samples point to the appearance of guns by 1220, and Stephen Haw goes even further by stating that guns were developed as early as 1200. Sinologist Joseph Needham and renaissance siege expert Thomas Arnold provide a more conservative estimate of around 1280 for the appearance of the "true" cannon. Whether or not any of these are correct, it seems likely that the gun was born sometime during the 13th century.
References to cannon proliferated throughout China in the following centuries. Cannon featured in literary pieces. In 1341 Xian Zhang wrote a poem called "The Iron Cannon Affair" describing a cannonball fired from an eruptor which could "pierce the heart or belly when striking a man or horse, and even transfix several persons at once."
The Mongol invasion of Java in 1293 brought gunpowder technology to the Nusantara archipelago in the form of cannon (Chinese: "Pao"). By the 1350s the cannon was used extensively in Chinese warfare. In 1358 the Ming army failed to take a city due to its garrisons' usage of cannon; however, they themselves would use cannon in the thousands later on, during the siege of Suzhou in 1366. The Korean kingdom of Joseon started producing gunpowder in 1374 and cannon by 1377. Cannons appeared in Đại Việt by 1390 at the latest.
During the Ming dynasty cannon were used in riverine warfare at the Battle of Lake Poyang. One shipwreck in Shandong had a cannon dated to 1377 and an anchor dated to 1372. From the 13th to 15th centuries cannon-armed Chinese ships also travelled throughout Southeast Asia.
The first western cannon to be introduced were breech-loaders, in the early 16th century, which the Chinese began producing themselves by 1523 and improved on by incorporating composite metal construction.
Japan did not acquire a cannon until 1510 when a monk brought one back from China, and did not produce any in appreciable numbers. During the 1593 Siege of Pyongyang, 40,000 Ming troops deployed a variety of cannon against Japanese troops. Despite their defensive advantage and the use of arquebus by Japanese soldiers, the Japanese were at a severe disadvantage due to their lack of cannon. Throughout the Japanese invasions of Korea (1592–98), the Ming-Joseon coalition used artillery widely in land and naval battles, including on the turtle ships of Yi Sun-sin.
According to Ivan Petlin, the first Russian envoy to Beijing, in September 1619, the city was armed with large cannon with cannonballs weighing more than . His general observation was that the Chinese were militarily capable and had firearms:
The Javanese Majapahit Empire was arguably able to encompass much of modern-day Indonesia due to its unique mastery of bronze-smithing and use of a central arsenal fed by a large number of cottage industries within the immediate region. Documentary and archeological evidence indicate that Arab traders introduced gunpowder, gonnes, muskets, blunderbusses, and cannons to the Javanese, Acehnese, and Batak via long established commercial trade routes around the early to mid 14th century. The resurgent Singhasari Empire overtook Sriwijaya and later emerged as the Majapahit, whose warfare featured the use of fire-arms and cannonade. Cannons were introduced to Majapahit when Kublai Khan's Chinese army under the leadership of Ike Mese sought to invade Java in 1293; the History of Yuan mentions that the Mongols used cannons (Chinese: "Pao") against Daha forces. Javanese bronze breech-loaded swivel-guns, known as cetbang or lantaka, were used widely by the Majapahit navy as well as by pirates and rival lords. One of the earliest references to cannon and artillerymen in Java is from the year 1346. The demise of the Majapahit empire and the dispersal of disaffected skilled bronze cannon-smiths to Brunei, modern Sumatra, Malaysia and the Philippines led to widespread use, especially in the Makassar Strait. This event led to near universal use of the swivel-gun and cannon in the Nusantara archipelago. When the Portuguese first came to Malacca, they found a large colony of Javanese merchants under their own headmen; the Javanese were manufacturing their own cannon, which then, and for long after, were as necessary to merchant ships as sails.
Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Circa 1540, the Javanese, always alert for new weapons, found the newly arrived Portuguese weaponry superior to that of the locally made variants. Majapahit-era cetbang cannons were further improved and used in the Demak Sultanate period during the Demak invasion of Portuguese Malacca. During this period, the iron for manufacturing Javanese cannons was imported from Khorasan in northern Persia; the material was known by the Javanese as "wesi kurasani" (Khorasan iron). When the Portuguese came to the archipelago, they referred to it as "Berço", a term also used for any breech-loading swivel gun, while the Spaniards called it "Verso".
Writing circa 1510, Duarte Barbosa said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannons (cetbang or rentaka), long muskets, "spingarde" (arquebuses), "schioppi" (hand cannon), Greek fire, guns (cannons), and other fire-works, and were everywhere considered excellent in casting artillery and in the knowledge of using it. In 1513, the Javanese fleet led by Patih Yunus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese were already producing large guns locally, some of which survive to the present day and are dubbed "sacred cannon" or "holy cannon". These cannons varied between 180- and 260-pounders, weighing anywhere between 3 and 8 tons, with lengths between 3 and 6 m.
Cannons were used by the Ayutthaya Kingdom in 1352 during its invasion of the Khmer Empire. Within a decade large quantities of gunpowder could be found in the Khmer Empire. By the end of the century firearms were also used by the Trần dynasty.
Saltpeter harvesting was recorded by Dutch and German travelers as being common in even the smallest villages and was collected from the decomposition process of large dung hills specifically piled for the purpose. The Dutch punishment for possession of non-permitted gunpowder appears to have been amputation. Ownership and manufacture of gunpowder were later prohibited by the colonial Dutch occupiers. According to Colonel McKenzie, quoted in Sir Thomas Stamford Raffles's "The History of Java" (1817), the purest sulfur was supplied from a crater in a mountain near the straits of Bali.
There is no clear consensus of when the cannon first appeared in the Islamic world, with dates ranging from 1260 to the mid-14th century. The cannon may have appeared in the Islamic world in the late 13th century, with Ibn Khaldun in the 14th century stating that cannons were used in the Maghreb region of North Africa in 1274, and other Arabic military treatises in the 14th century referring to the use of cannon by Mamluk forces in 1260 and 1303, and by Muslim forces at the 1324 Siege of Huesca in Spain. However, some scholars do not accept these early dates. While the date of its first appearance is not entirely clear, the general consensus among most historians is that there is no doubt the Mamluk forces were using cannon by 1342.
According to historian Ahmad Y. al-Hassan, during the Battle of Ain Jalut in 1260, the Mamluks used cannon against the Mongols. He claims that this was "the first cannon in history" and used a gunpowder formula almost identical to the ideal composition for explosive gunpowder. He also argues that this was not known in China or Europe until much later. Hassan further claims that the earliest textual evidence of cannon is from the Middle East, based on earlier originals which report hand-held cannon being used by the Mamluks at the Battle of Ain Jalut in 1260. Such an early date is not accepted by some historians, including David Ayalon, Iqtidar Alam Khan, Joseph Needham and Tonio Andrade. Khan argues that it was the Mongols who introduced gunpowder to the Islamic world, and believes cannon only reached Mamluk Egypt in the 1370s. Needham argued that the term "midfa", dated to textual sources from 1342 to 1352, did not refer to true hand-guns or bombards, and that contemporary accounts of a metal-barrel cannon in the Islamic world did not occur until 1365. Similarly, Andrade dates the textual appearance of cannon in middle eastern sources to the 1360s. Gabor Ágoston and David Ayalon note that the Mamluks had certainly used siege cannon by 1342 or the 1360s, respectively, but earlier uses of cannon in the Islamic World are vague with a possible appearance in the Emirate of Granada by the 1320s and 1330s, though evidence is inconclusive.
Ibn Khaldun reported the use of cannon as siege machines by the Marinid sultan Abu Yaqub Yusuf at the siege of Sijilmasa in 1274. The passage by Ibn Khaldun on the Marinid siege of Sijilmasa in 1274 occurs as follows: "[The Sultan] installed siege engines … and gunpowder engines …, which project small balls of iron. These balls are ejected from a chamber … placed in front of a kindling fire of gunpowder; this happens by a strange property which attributes all actions to the power of the Creator." The source is not contemporary, having been written about a century later, around 1382. Its interpretation has been rejected as anachronistic by some historians, including Ágoston and Peter Purton, who urge caution regarding claims of Islamic firearms use in the 1204–1324 period because late medieval Arabic texts used the same word for gunpowder, "naft", as they did for the earlier incendiary naphtha. Needham believes Ibn Khaldun was speaking of fire lances rather than hand cannon.
The Ottoman Empire made good use of cannon as siege artillery. Sixty-eight super-sized bombards were used by Mehmed the Conqueror to capture Constantinople in 1453. Jim Bradbury argues that Urban, a Hungarian cannon engineer, introduced this cannon from Central Europe to the Ottoman realm; according to Paul Hammer, however, it could have been introduced from other Islamic countries which had earlier used cannon. These cannon could fire heavy stone balls a mile, and the sound of their blast could reportedly be heard from a distance of . Shkodëran historian Marin Barleti discusses Turkish bombards at length in his book "De obsidione Scodrensi" (1504), describing the 1478–79 siege of Shkodra in which eleven bombards and two mortars were employed. The Ottomans also used cannon to control passage of ships through the Bosphorus strait. Ottoman cannon also proved effective at stopping crusaders at Varna in 1444 and Kosovo in 1448 despite the presence of European cannon in the former case.
The similar Dardanelles Guns (for the location) were created by Munir Ali in 1464 and were still in use during the Anglo-Turkish War (1807–09). These were cast in bronze into two parts, the chase (the barrel) and the breech, which combined weighed 18.4 tonnes. The two parts were screwed together using levers to facilitate moving it.
Fathullah Shirazi, a Persian inhabitant of India who worked for Akbar in the Mughal Empire, developed a volley gun in the 16th century.
While there is evidence of cannon in Iran as early as 1405, they were not widespread. This changed following the increased use of firearms by Shah Isma'il I, and the Iranian army used 500 cannon by the 1620s, probably captured from the Ottomans or acquired from allies in Europe. By 1443 Iranians were also making some of their own cannon, as Mir Khawand wrote of a 1200 kg metal piece being made by an Iranian "rekhtagar", which was most likely a cannon. Due to the difficulties of transporting cannon in mountainous terrain, their use was less common compared to their use in Europe.
Outside of China, the earliest texts to mention gunpowder are Roger Bacon's "Opus Majus" (1267) and "Opus Tertium" in what has been interpreted as references to firecrackers. In the early 20th century, a British artillery officer proposed that another work tentatively attributed to Bacon, "Epistola de Secretis Operibus Artis et Naturae, et de Nullitate Magiae", also known as "Opus Minor", dated to 1247, contained an encrypted formula for gunpowder hidden in the text. These claims have been disputed by science historians. In any case, the formula itself is not useful for firearms or even firecrackers, burning slowly and producing mostly smoke.
There is a record of a gun in Europe dating to 1322 being discovered in the nineteenth century, but the artifact has since been lost. The earliest known European depiction of a gun appeared in 1326 in a manuscript by Walter de Milemete, although not necessarily drawn by him, known as "De Nobilitatibus, sapientii et prudentiis regum" (Concerning the Majesty, Wisdom, and Prudence of Kings), which displays a gun with a large arrow emerging from it and its user lowering a long stick to ignite the gun through the touch hole. In the same year, another similar illustration showed a darker gun being set off by a group of knights, which also featured in another work of de Milemete's, "De secretis secretorum Aristotelis". On 11 February of that same year, the Signoria of Florence appointed two officers to obtain "canones de mettallo" and ammunition for the town's defense. In the following year a document from the Turin area recorded that a certain amount was paid "for the making of a certain instrument or device made by Friar Marcello for the projection of pellets of lead." A reference from 1331 describes an attack mounted by two Germanic knights on Cividale del Friuli, using gunpowder weapons of some sort. The 1320s seem to have been the takeoff point for guns in Europe according to most modern military historians. Scholars suggest that the lack of gunpowder weapons in a well-traveled Venetian's catalogue for a new crusade in 1321 implies that guns were unknown in Europe up until this point, further solidifying the 1320 mark; however, more evidence in this area may be forthcoming in the future.
The oldest extant cannon in Europe is a small bronze example unearthed in Loshult, Scania in southern Sweden. It dates from the early-mid 14th century, and is currently in the Swedish History Museum in Stockholm.
Early cannon in Europe often shot arrows and were known by an assortment of names such as "pot-de-fer", "tonnoire", "ribaldis", and "büszenpyle". The "ribaldis", which shot large arrows and simplistic grapeshot, were first mentioned in the English Privy Wardrobe accounts during preparations for the Battle of Crécy, between 1345 and 1346. The Florentine Giovanni Villani recounts their destructiveness, indicating that by the end of the battle, "the whole plain was covered by men struck down by arrows and cannon balls." Similar cannon were also used at the Siege of Calais (1346–47), although it was not until the 1380s that the "ribaudekin" clearly became mounted on wheels.
The Battle of Crécy, which pitted the English against the French in 1346, featured the early use of cannon, which helped the longbowmen repulse a large force of Genoese crossbowmen deployed by the French. The English originally intended to use the cannon against cavalry sent to attack their archers, thinking that the loud noises produced by the cannon would panic the advancing horses and kill the knights atop them.
Early cannon could also be used for more than simply killing men and scaring horses. English cannon were used defensively during the siege of the castle of Breteuil to launch fire onto an advancing belfry; in this way cannon could be used to burn down siege equipment before it reached the fortifications. Shooting fire from cannon could also be done offensively, as another battle involved the setting of a castle ablaze with similar methods. The particular incendiary used in these cannon was most likely a gunpowder mixture. This is one area where early Chinese and European cannon share a similarity, as both were possibly used to shoot fire.
Another aspect of early European cannon is that they were rather small, dwarfed by the bombards that would come later. In fact, it is possible that the cannon used at Crécy were capable of being moved rather quickly, as an anonymous chronicle notes the guns being used to attack the French camp, indicating that they would have been mobile enough to press the attack. These smaller cannon would eventually give way to larger, wall-breaching guns by the end of the 1300s.
Documentary evidence of cannon in Russia does not appear until 1382 and they were used only in sieges, often by the defenders. It was not until 1475 when Ivan III established the first Russian cannon foundry in Moscow that they began to produce cannon natively.
Later, large cannon known as bombards, ranging from three to five feet in length, were used by Dubrovnik and Kotor in defence during the later 14th century. The first bombards were made of iron, but bronze became more prevalent as it was recognized as more stable and capable of propelling stones weighing as much as . Around the same period, the Byzantine Empire began to accumulate its own cannon to face the Ottoman Empire, starting with medium-sized cannon long and of 10 in calibre. The earliest reliable recorded use of artillery in the region was against the Ottoman siege of Constantinople in 1396, forcing the Ottomans to withdraw. The Ottomans acquired their own cannon and laid siege to the Byzantine capital again in 1422. By 1453, the Ottomans used 68 Hungarian-made cannon for the 55-day bombardment of the walls of Constantinople, "hurling the pieces everywhere and killing those who happened to be nearby." The largest of their cannon was the Great Turkish Bombard, which required an operating crew of 200 men and 70 oxen, and 10,000 men to transport it. Gunpowder made the formerly devastating Greek fire obsolete, and with the final fall of Constantinople—which was protected by what were once the strongest walls in Europe—on 29 May 1453, "it was the end of an era in more ways than one."
While previous smaller guns could burn down structures with fire, larger cannon were so effective that engineers were forced to develop stronger castle walls to prevent their keeps from falling. This is not to say that cannon were only used to batter down walls, as fortifications began using cannon as defensive instruments; for example, the fort of Raicher in India had gun ports built into its walls to accommodate the use of defensive cannon. In "Art of War", Niccolò Machiavelli opined that field artillery forced an army to take up a defensive posture, and that this opposed a more ideal offensive stance. Machiavelli's concerns can be seen in the criticisms of Portuguese mortars being used in India during the sixteenth century, as lack of mobility was one of the key problems with the design. In Russia the early cannon were again placed in forts as a defensive tool. Cannon were also difficult to move around in certain types of terrain, with mountains providing a great obstacle; for these reasons, offensives conducted with cannon would be difficult to carry out in places such as Iran.
By the 16th century, cannon were made in a great variety of lengths and bore diameters, but the general rule was that the longer the barrel, the longer the range. Some cannon made during this time had barrels exceeding in length, and could weigh up to . Consequently, large amounts of gunpowder were needed to allow them to fire stone balls several hundred yards. By mid-century, European monarchs began to classify cannon to reduce the confusion. Henry II of France opted for six sizes of cannon, but others settled for more; the Spanish used twelve sizes, and the English sixteen. Better powder had been developed by this time as well. Instead of the finely ground powder used by the first bombards, powder was replaced by a "corned" variety of coarse grains. This coarse powder had pockets of air between grains, allowing fire to travel through and ignite the entire charge quickly and uniformly.
The end of the Middle Ages saw the construction of larger, more powerful cannon, as well as their spread throughout the world. As they were not effective at breaching the newer fortifications resulting from the development of cannon, siege engines—such as siege towers and trebuchets—became less widely used. However, wooden "battery-towers" took on a similar role as siege towers in the gunpowder age—such as that used at Siege of Kazan in 1552, which could hold ten large-calibre cannon, in addition to 50 lighter pieces. Another notable effect of cannon on warfare during this period was the change in conventional fortifications. Niccolò Machiavelli wrote, "There is no wall, whatever its thickness that artillery will not destroy in only a few days." Although castles were not immediately made obsolete by cannon, their use and importance on the battlefield rapidly declined. Instead of majestic towers and merlons, the walls of new fortresses were thick, angled, and sloped, while towers became low and stout; increasing use was also made of earth and brick in breastworks and redoubts. These new defences became known as bastion forts, after their characteristic shape which attempted to force any advance towards it directly into the firing line of the guns. A few of these featured cannon batteries, such as the House of Tudor's Device Forts, in England. Bastion forts soon replaced castles in Europe, and, eventually, those in the Americas, as well.
By the end of the 15th century, several technological advancements made cannon more mobile. Wheeled gun carriages and trunnions became common, and the invention of the limber further facilitated transportation. As a result, field artillery became more viable, and began to see more widespread use, often alongside the larger cannon intended for sieges. Better gunpowder, cast-iron projectiles (replacing stone), and the standardisation of calibres meant that even relatively light cannon could be deadly. In "The Art of War", Niccolò Machiavelli observed that "It is true that the arquebuses and the small artillery do much more harm than the heavy artillery." This was the case at the Battle of Flodden, in 1513: the English field guns outfired the Scottish siege artillery, firing two or three times as many rounds. Despite the increased maneuverability, however, cannon were still the slowest component of the army: a heavy English cannon required 23 horses to transport, while a culverin needed nine. Even with this many animals pulling, they still moved at a walking pace. Due to their relatively slow speed, lack of organisation, and undeveloped tactics, the combination of pike and shot still dominated the battlefields of Europe.
Innovations continued, notably the German invention of the mortar, a thick-walled, short-barrelled gun that blasted shot upward at a steep angle. Mortars were useful for sieges, as they could hit targets behind walls or other defences. This cannon found more use with the Dutch, who learnt to shoot bombs filled with powder from it. Setting the bomb fuse was a problem. "Single firing" was first used to ignite the fuse, where the bomb was placed with the fuse down against the cannon's propellant. This often resulted in the fuse being blown into the bomb, causing it to blow up as it left the mortar. Because of this, "double firing" was tried, where the gunner lit the fuse and then the touch hole. This, however, required considerable skill and timing, and was especially dangerous if the gun misfired, leaving a lighted bomb in the barrel. Not until 1650 was it accidentally discovered that double-lighting was superfluous, as the heat of firing would light the fuse.
Gustavus Adolphus of Sweden emphasised the use of light cannon and mobility in his army, and created new formations and tactics that revolutionised artillery. He discontinued using all 12 pounder—or heavier—cannon as field artillery, preferring, instead, to use cannon that could be handled by only a few men. One obsolete type of gun, the "leatheren" was replaced by 4 pounder and 9 pounder demi-culverins. These could be operated by three men, and pulled by only two horses. Gustavus Adolphus's army was also the first to use a cartridge that contained both powder and shot which sped up reloading, increasing the rate of fire. Finally, against infantry he pioneered the use of canister shot—essentially a tin can filled with musket balls. Until then there was no more than one cannon for every thousand infantrymen on the battlefield but Gustavus Adolphus increased the number of cannon sixfold. Each regiment was assigned two pieces, though he often arranged them into batteries instead of distributing them piecemeal. He used these batteries to break his opponent's infantry line, while his cavalry would outflank their heavy guns.
At the Battle of Breitenfeld, in 1631, Adolphus proved the effectiveness of the changes made to his army by defeating Johann Tserclaes, Count of Tilly. Although severely outnumbered, the Swedes were able to fire between three and five times as many volleys of artillery, and their infantry's linear formations helped ensure they did not lose any ground. Battered by cannon fire, and low on morale, Tilly's men broke ranks and fled.
In England cannon were being used to besiege various fortified buildings during the English Civil War. Nathaniel Nye is recorded as testing a Birmingham cannon in 1643 and experimenting with a saker in 1645. From 1645 he was the master gunner to the Parliamentarian garrison at Evesham, and in 1646 he successfully directed the artillery at the Siege of Worcester, detailing his experiences in his 1647 book "The Art of Gunnery". Believing that war was as much a science as an art, his explanations focused on triangulation, arithmetic, theoretical mathematics, and cartography, as well as practical considerations such as the ideal specification for gunpowder or slow matches. His book acknowledged mathematicians such as Robert Recorde and Marcus Jordanus as well as earlier military writers on artillery such as Niccolò Fontana Tartaglia and Thomas (or Francis) Malthus (author of "A Treatise on Artificial Fire-Works").
Around this time also came the idea of aiming the cannon to hit a target. Gunners controlled the range of their cannon by measuring the angle of elevation, using a "gunner's quadrant". Cannon did not have sights; therefore, even with measuring tools, aiming was still largely guesswork.
In the latter half of the 17th century, the French engineer Sébastien Le Prestre de Vauban introduced a more systematic and scientific approach to attacking gunpowder fortresses, in a time when many field commanders "were notorious dunces in siegecraft." Careful sapping forward, supported by enfilading ricochets, was a key feature of this system, and it even allowed Vauban to calculate the length of time a siege would take. He was also a prolific builder of bastion forts, and did much to popularize the idea of "depth in defence" in the face of cannon. These principles were followed into the mid-19th century, when changes in armaments necessitated greater depth defence than Vauban had provided for. It was only in the years prior to World War I that new works began to break radically away from his designs.
The lower tier of 17th-century English ships of the line were usually equipped with demi-cannon, guns that fired a solid shot, and could weigh up to . Demi-cannon were capable of firing these heavy metal balls with such force that they could penetrate more than a metre of solid oak, from a distance of , and could dismast even the largest ships at close range. Full cannon fired a shot, but were discontinued by the 18th century, as they were too unwieldy. By the end of the 18th century, principles long adopted in Europe specified the characteristics of the Royal Navy's cannon, as well as the acceptable defects, and their severity. The United States Navy tested guns by measuring them, firing them two or three times—termed "proof by powder"—and using pressurized water to detect leaks.
The carronade was adopted by the Royal Navy in 1779; the lower muzzle velocity of the round shot when fired from this cannon was intended to create more wooden splinters when hitting the structure of an enemy vessel, as they were believed to be more deadly than the ball by itself. The carronade was much shorter, and weighed between a third to a quarter of the equivalent long gun; for example, a 32-pounder carronade weighed less than a ton, compared with a 32-pounder long gun, which weighed over 3 tons. The guns were, therefore, easier to handle, and also required less than half as much gunpowder, allowing fewer men to crew them. Carronades were manufactured in the usual naval gun calibres, but were not counted in a ship of the line's rated number of guns. As a result, the classification of Royal Navy vessels in this period can be misleading, as they often carried more cannon than were listed.
Cannon were crucial in Napoleon's rise to power, and continued to play an important role in his army in later years. During the French Revolution, the unpopularity of the Directory led to riots and rebellions. When over 25,000 royalists led by General Danican assaulted Paris, Paul Barras was appointed to defend the capital; outnumbered five to one and disorganised, the Republicans were desperate. When Napoleon arrived, he reorganised the defences but realised that without cannon the city could not be held. He ordered Joachim Murat to bring the guns from the Sablons artillery park; the Major and his cavalry fought their way to the recently captured cannon, and brought them back to Napoleon. When Danican's poorly trained men attacked, on 13 Vendémiaire of the French Republican calendar then in use (5 October 1795), Napoleon ordered his cannon to fire grapeshot into the mob, an act that became known as the "whiff of grapeshot". The slaughter effectively ended the threat to the new government, while at the same time making Bonaparte a famous, and popular, public figure. Among the first generals to recognise that artillery was not being used to its full potential, Napoleon often massed his cannon into batteries and introduced several changes into the French artillery, improving it significantly and making it among the finest in Europe. Such tactics were successfully used by the French, for example, at the Battle of Friedland, when sixty-six guns fired a total of 3,000 roundshot and 500 rounds of grapeshot, inflicting severe casualties on the Russian forces, whose losses numbered over 20,000 killed and wounded in total. At the Battle of Waterloo, Napoleon's final battle, the French army had many more artillery pieces than either the British or Prussians. As the battlefield was muddy, recoil caused cannon to bury themselves into the ground after firing, resulting in slow rates of fire, as more effort was required to move them back into an adequate firing position; also, roundshot did not ricochet with as much force from the wet earth. Despite the drawbacks, sustained artillery fire proved deadly during the engagement, especially during the French cavalry attack. The British infantry, having formed infantry squares, took heavy losses from the French guns, while their own cannon fired at the cuirassiers and lancers when they fell back to regroup. Eventually, the French ceased their assault, after taking heavy losses from the British cannon and musket fire.
In the 1810s and 1820s, greater emphasis was placed on the accuracy of long-range gunfire, and less on the weight of a broadside. The carronade, although initially very successful and widely adopted, disappeared from the Royal Navy in the 1850s after the development of wrought-iron-jacketed steel cannon by William Armstrong and Joseph Whitworth. Nevertheless, carronades were used in the American Civil War.
Western cannon during the 19th century became larger, more destructive, more accurate, and could fire at longer range. One example is the American wrought-iron, muzzle-loading rifle, or Griffen gun (usually called the 3-inch Ordnance Rifle), used during the American Civil War, which had an effective range of over . Another is the smoothbore 12-pounder Napoleon, which originated in France in 1853 and was widely used by both sides in the American Civil War. This cannon was renowned for its sturdiness, reliability, firepower, flexibility, relatively lightweight, and range of .
The practice of rifling—casting spiralling lines inside the cannon's barrel—was applied to artillery more frequently by 1855, as it gave cannon projectiles gyroscopic stability, which improved their accuracy. One of the earliest rifled cannon was the breech-loading Armstrong Gun—also invented by William Armstrong—which boasted significantly improved range, accuracy, and power compared with earlier weapons. The projectile fired from the Armstrong gun could reportedly pierce through a ship's side and explode inside the enemy vessel, causing increased damage and casualties. The British military adopted the Armstrong gun, and was impressed; the Duke of Cambridge even declared that it "could do everything but speak." Despite being significantly more advanced than its predecessors, the Armstrong gun was rejected soon after its integration, in favour of the muzzle-loading pieces that had been in use before. While both types of gun were effective against wooden ships, neither had the capability to pierce the armour of ironclads; due to reports of slight problems with the breeches of the Armstrong gun, and their higher cost, the older muzzle-loaders were selected to remain in service instead. Realising that iron was more difficult to pierce with breech-loaded cannon, Armstrong designed rifled muzzle-loading guns, which proved successful; "The Times" reported: "even the fondest believers in the invulnerability of our present ironclads were obliged to confess that against such artillery, at such ranges, their plates and sides were almost as penetrable as wooden ships."
The superior cannon of the Western world brought them tremendous advantages in warfare. For example, in the First Opium War in China, during the 19th century, British battleships bombarded the coastal areas and fortifications from afar, safe from the reach of the Chinese cannon. Similarly, the shortest war in recorded history, the Anglo-Zanzibar War of 1896, was brought to a swift conclusion by shelling from British cruisers. The cynical attitude towards recruited infantry in the face of ever more powerful field artillery is the source of the term "cannon fodder", first used by François-René de Chateaubriand, in 1814; however, the concept of regarding soldiers as nothing more than "food for powder" was mentioned by William Shakespeare as early as 1598, in Henry IV, Part 1.
Cannon in the 20th and 21st centuries are usually divided into sub-categories and given separate names. Some of the most widely used types of modern cannon are howitzers, mortars, guns, and autocannon, although a few very large-calibre cannon, custom-designed, have also been constructed. Nuclear artillery was experimented with, but was abandoned as impractical. Modern artillery is used in a variety of roles, depending on its type. According to NATO, the general role of artillery is to provide fire support, which is defined as "the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize, or suppress the enemy."
When referring to cannon, the term "gun" is often used incorrectly. In military usage, a gun is a cannon with a high muzzle velocity and a flat trajectory, useful for hitting the sides of targets such as walls, as opposed to howitzers or mortars, which have lower muzzle velocities, and fire indirectly, lobbing shells up and over obstacles to hit the target from above.
By the early 20th century, infantry weapons had become more powerful, forcing most artillery away from the front lines. Despite the change to indirect fire, cannon proved highly effective during World War I, directly or indirectly causing over 75% of casualties. The onset of trench warfare after the first few months of World War I greatly increased the demand for howitzers, as they were better suited to hitting targets in trenches. Furthermore, their shells carried more explosives than those of guns, and caused considerably less barrel wear. The German army had the advantage here, as they began the war with many more howitzers than the French. World War I also saw the use of the Paris Gun, the longest-ranged gun ever fired. This calibre gun was used by the Germans against Paris and could hit targets more than away.
The Second World War sparked new developments in cannon technology. Among them were sabot rounds, hollow-charge projectiles, and proximity fuses, all of which increased the effectiveness of cannon against specific targets. The proximity fuse emerged on the battlefields of Europe in late December 1944. Used to great effect in anti-aircraft projectiles, proximity fuses were fielded in both the European and Pacific Theatres of Operations; they were particularly useful against V-1 flying bombs and kamikaze planes. Although widely used in naval warfare and in anti-air guns, both the British and Americans feared that unexploded proximity fuses would be reverse-engineered, leading them to limit their use in continental battles. During the Battle of the Bulge, however, the fuses became known as the American artillery's "Christmas present" for the German army because of their effectiveness against German personnel in the open, where they frequently dispersed attacks. Anti-tank guns were also tremendously improved during the war: in 1939, the British used primarily 2 pounder and 6 pounder guns. By the end of the war, 17 pounders had proven much more effective against German tanks, and 32 pounders had entered development. Meanwhile, German tanks were continuously upgraded with better main guns, in addition to other improvements. For example, the Panzer III was originally designed with a 37 mm gun, but was mass-produced with a 50 mm cannon. To counter the threat of the Russian T-34s, another, more powerful 50 mm gun was introduced, only to give way to a larger 75 mm cannon, which was in a fixed mount as the StuG III, the most-produced German World War II armoured fighting vehicle of any type. Despite the improved guns, production of the Panzer III was ended in 1943, as the tank still could not match the T-34, and was replaced by the Panzer IV and Panther tanks. In 1944, the 8.8 cm KwK 43, and many variations of it, entered service with the Wehrmacht, and was used as both a tank main gun and as the PaK 43 anti-tank gun. One of the most powerful guns to see service in World War II, it was capable of destroying any Allied tank at very long ranges.
Despite being designed to fire at trajectories with a steep angle of descent, howitzers can be fired directly, as was done by the 11th Marine Regiment at the Battle of Chosin Reservoir, during the Korean War. Two field batteries fired directly upon a battalion of Chinese infantry; the Marines were forced to brace themselves against their howitzers, as they had no time to dig them in. The Chinese infantry took heavy casualties, and were forced to retreat.
The tendency to create larger calibre cannon during the World Wars has since reversed. The United States Army, for example, sought a lighter, more versatile howitzer to replace their ageing pieces. As it could be towed, the M198 was selected to be the successor to the World War II–era cannon used at the time, and entered service in 1979. Still in use today, the M198 is, in turn, being slowly replaced by the M777 Ultralightweight howitzer, which weighs nearly half as much and can be more easily moved. Although land-based artillery such as the M198 are powerful, long-ranged, and accurate, naval guns have not been neglected, despite being much smaller than in the past and, in some cases, having been replaced by cruise missiles. However, the "Zumwalt"-class destroyer's planned armament includes the Advanced Gun System (AGS), a pair of 155 mm guns, which fire the Long Range Land-Attack Projectile. The warhead, which weighs , has a circular error of probability of , and will be mounted on a rocket, to increase the effective range to , further than that of the Paris Gun. The AGS's barrels will be water cooled, and will fire 10 rounds per minute, per gun. The combined firepower from both turrets will give a "Zumwalt"-class destroyer the firepower equivalent to 18 conventional M198 howitzers. The reason for the re-integration of cannon as a main armament in United States Navy ships is that satellite-guided munitions fired from a gun are less expensive than a cruise missile but have a similar guidance capability.
Autocannons have an automatic firing mode, similar to that of a machine gun. They have mechanisms to automatically load their ammunition, and therefore have a higher rate of fire than artillery, often approaching, or, in the case of rotary autocannons, even surpassing the firing rate of a machine gun. While there is no minimum bore for autocannons, they are generally larger than machine guns, typically 20 mm or greater since World War II, and are usually capable of using explosive ammunition, even if it is not always used. Machine guns, in contrast, are usually too small to use explosive ammunition.
Most nations use rapid-fire cannon on light vehicles, replacing a more powerful, but heavier, tank gun. A typical autocannon is the 25 mm "Bushmaster" chain gun, mounted on the LAV-25 and M2 Bradley armoured vehicles. Autocannons may be capable of a very high rate of fire, but ammunition is heavy and bulky, limiting the amount carried. For this reason, both the 25 mm Bushmaster and the 30 mm RARDEN are deliberately designed with relatively low rates of fire. The typical rate of fire for a modern autocannon ranges from 90 to 1,800 rounds per minute. Systems with multiple barrels, such as a rotary autocannon, can have rates of fire of more than several thousand rounds per minute. The fastest of these is the GSh-6-23, which has a rate of fire of over 10,000 rounds per minute.
Autocannons are often found in aircraft, where they replaced machine guns, and as shipboard anti-aircraft weapons, as they provide greater destructive power than machine guns.
The first documented installation of a cannon on an aircraft was on the Voisin Canon in 1911, displayed at the Paris Exposition that year.
By World War I, all of the major powers were experimenting with aircraft mounted cannon; however their low rate of fire and great size and weight precluded any of them from being anything other than experimental. The most successful (or least unsuccessful) was the SPAD 12 Ca.1 with a single 37mm Puteaux mounted to fire between the cylinder banks and through the propeller boss of the aircraft's Hispano-Suiza 8C. The pilot (by necessity an ace) had to manually reload each round.
The first autocannon were developed during World War I as anti-aircraft guns, and one of these, the Coventry Ordnance Works "COW 37 mm gun", was installed in an aircraft, but the war ended before it could be given a field trial, and it never became standard equipment in a production aircraft. Later trials had it fixed at a steep upward angle in both the Vickers Type 161 and the Westland C.O.W. Gun Fighter, an idea that would return later.
During this period autocannons became available and several fighters of the German "Luftwaffe" and the Imperial Japanese Navy Air Service were fitted with 20 mm cannon. They continued to be installed as an adjunct to machine guns rather than as a replacement, as the rate of fire was still too low and the complete installation too heavy. There was some debate in the RAF as to whether the greater number of possible rounds fired from a machine gun, or the smaller number of explosive rounds from a cannon, was preferable. Improvements during the war in regard to rate of fire allowed the cannon to displace the machine gun almost entirely. The cannon was more effective against armour, so they were increasingly used during the course of World War II, and newer fighters such as the Hawker Tempest usually carried two or four, versus the six .50 Browning machine guns for US aircraft or eight to twelve M1919 Browning machine guns on earlier British aircraft. The Hispano-Suiza HS.404, Oerlikon 20 mm cannon, MG FF, and their numerous variants became among the most widely used autocannon in the war. Cannon, as with machine guns, were generally fixed to fire forwards (mounted in the wings, in the nose or fuselage, or in a pannier under either), or were mounted in gun turrets on heavier aircraft. Both the Germans and Japanese mounted cannon to fire upwards and forwards for use against heavy bombers, with the Germans calling guns so installed "Schräge Musik". "Schräge Musik" derives from the German colloquialism for jazz music (the German word "schräg" means slanted or oblique).
In the years preceding the Vietnam War, the high speeds aircraft were attaining led to a move to remove the cannon, owing to the mistaken belief that it would be useless in a dogfight, but combat experience during the Vietnam War showed conclusively that, despite advances in missiles, there was still a need for cannon. Nearly all modern fighter aircraft are armed with an autocannon, and they are also commonly found on ground-attack aircraft. One of the most powerful examples is the 30 mm GAU-8/A Avenger Gatling-type rotary cannon, mounted exclusively on the Fairchild Republic A-10 Thunderbolt II. The Lockheed AC-130 gunship (a converted transport) can carry a 105 mm howitzer as well as a variety of autocannon ranging up to 40 mm. Both are used in the close air support role.
Cannon in general have the form of a truncated cone with an internal cylindrical bore for holding an explosive charge and a projectile. The thickest, strongest, and closed part of the cone is located near the explosive charge. As any explosive charge will dissipate in all directions equally, the thickest portion of the cannon is useful for containing and directing this force. The backward motion of the cannon as its projectile leaves the bore is termed its recoil and the effectiveness of the cannon can be measured in terms of how much this response can be diminished, though obviously diminishing recoil through increasing the overall mass of the cannon means decreased mobility.
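The recoil trade-off described in the preceding paragraph can be made concrete with elementary conservation of momentum. The relation below is a generic physics sketch added for illustration, not a formula from the article; the masses and muzzle velocity in the worked example are assumed values.

```latex
% Free recoil of a cannon, from conservation of momentum (illustrative sketch).
% m_p: projectile mass, v_p: muzzle velocity, M_c: cannon mass, V_r: recoil velocity.
M_c V_r = m_p v_p
\quad\Longrightarrow\quad
V_r = \frac{m_p\, v_p}{M_c}
% Assumed example: a 12 kg ball leaving a 1200 kg gun at 400 m/s gives
% V_r = (12 \cdot 400)/1200 = 4 \text{ m/s}; doubling the cannon's mass halves V_r,
% which is the mobility-versus-recoil trade-off noted above.
```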
Field artillery cannon in Europe and the Americas were initially made most often of bronze, though later forms were constructed of cast iron and eventually steel. Bronze has several characteristics that made it preferable as a construction material: although it is relatively expensive, does not always alloy well, and can result in a final product that is "spongy about the bore", bronze is more flexible than iron and therefore less prone to bursting when exposed to high pressure; cast iron cannon are less expensive and more durable generally than bronze and withstand being fired more times without deteriorating. However, cast iron cannon have a tendency to burst without having shown any previous weakness or wear, and this makes them more dangerous to operate.
The older and more-stable forms of cannon were muzzle-loading as opposed to breech-loading—in order to be used they had to have their ordnance packed down the bore through the muzzle rather than inserted through the breech.
The following terms refer to the components or aspects of a classical western cannon (c. 1850) as illustrated here. In what follows, the words "near", "close", and "behind" will refer to those parts towards the thick, closed end of the piece, and "far", "front", "in front of", and "before" to the thinner, open end.
The main body of a cannon consists of three basic extensions: the foremost and the longest is called the "chase", the middle portion is the "reinforce", and the closest and briefest portion is the "cascabel" or "cascable".
To pack a muzzle-loading cannon, first gunpowder is poured down the bore. This is followed by a layer of wadding (often nothing more than paper), and then the cannonball itself. A certain amount of windage allows the ball to fit down the bore, though the greater the windage the less efficient the propulsion of the ball when the gunpowder is ignited. To fire the cannon, the fuse located in the vent is lit, quickly burning down to the gunpowder, which then explodes violently, propelling wadding and ball down the bore and out of the muzzle. A small portion of exploding gas also escapes through the vent, but this does not dramatically affect the total force exerted on the ball.
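For clarity, "windage" in the paragraph above is simply the clearance between the bore and the ball. The definition below is a standard one added for illustration, with symbol names chosen for this sketch rather than taken from the article.

```latex
% Windage as the clearance between bore and ball (illustrative definition).
w = d_{\text{bore}} - d_{\text{ball}}
% A larger w lets more of the propellant gas escape around the ball before it leaves
% the muzzle, which is why greater windage means less efficient propulsion.
```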
Any large, smoothbore, muzzle-loading gun—used before the advent of breech-loading, rifled guns—may be referred to as a cannon, though once standardised names were assigned to different-sized cannon, the term specifically referred to a gun designed to fire a shot, as distinct from a demi-cannon – , culverin – , or demi-culverin – . "Gun" specifically refers to a type of cannon that fires projectiles at high speeds, and usually at relatively low angles; they have been used in warships, and as field artillery. The term "cannon" is also used for autocannon, a modern repeating weapon firing explosive projectiles. Cannon have been used extensively in fighter aircraft since World War II.
In the 1770s, cannon operation worked as follows: each cannon would be manned by two gunners, six soldiers, and four officers of artillery. The right gunner was to prime the piece and load it with powder, and the left gunner would fetch the powder from the magazine and be ready to fire the cannon at the officer's command. On each side of the cannon, three soldiers stood, to ram and sponge the cannon, and hold the ladle. The second soldier on the left was tasked with providing 50 bullets.
Before loading, the cannon would be cleaned with a wet sponge to extinguish any smouldering material from the last shot. Fresh powder could be set off prematurely by lingering ignition sources. The powder was added, followed by wadding of paper or hay, and the ball was placed in and rammed down. After ramming, the cannon would be aimed with the elevation set using a quadrant and a plummet. At 45 degrees, the ball had the utmost range: about ten times the gun's level range. Any angle above a horizontal line was called random-shot. Wet sponges were used to cool the pieces every ten or twelve rounds.
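The claim that 45 degrees gave the utmost range agrees with the idealised, drag-free trajectory of elementary ballistics. The formula below is a textbook sketch added for illustration (level ground, no air resistance); the "about ten times the gun's level range" ratio is the article's own figure and in practice also depends on muzzle height and velocity.

```latex
% Idealised range of a ball fired at speed v and elevation angle \theta (no drag, level ground).
R(\theta) = \frac{v^{2} \sin 2\theta}{g}
% \sin 2\theta reaches its maximum of 1 at \theta = 45^{\circ}, so R_{\max} = v^{2}/g,
% which is why elevating the piece to 45 degrees gave the greatest reach.
```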
During the Napoleonic Wars, a British gun team consisted of five gunners: one to aim the piece, one to clean the bore with a damp sponge to quench any remaining embers before a fresh charge was introduced, and another to load the gun with a bag of powder and then the projectile. The fourth gunner pressed his thumb on the vent hole to prevent a draught that might fan a flame. The charge loaded, the fourth would prick the bagged charge through the vent hole and fill the vent with powder. On command, the fifth gunner would fire the piece with a slow match. Friction primers replaced slow match ignition by the mid-19th century.
When a cannon had to be abandoned such as in a retreat or surrender, the touch hole of the cannon would be plugged flush with an iron spike, disabling the cannon (at least until metal boring tools could be used to remove the plug). This was called "spiking the cannon".
A gun was said to be "honeycombed" when the surface of the bore had cavities, or holes in it, caused either by corrosion or casting defects.
Historically, logs or poles have been used as decoys to mislead the enemy as to the strength of an emplacement. The "Quaker Gun trick" was used by Colonel William Washington's Continental Army during the American Revolutionary War; in 1780, approximately 100 Loyalists surrendered to them, rather than face bombardment. During the American Civil War, Quaker guns were also used by the Confederates, to compensate for their shortage of artillery. The decoy cannon were painted black at the "muzzle", and positioned behind fortifications to delay Union attacks on those positions. On occasion, real gun carriages were used to complete the deception.
Cannon sounds have sometimes been used in classical pieces with a military theme. One of the best known examples of such a piece is Pyotr Ilyich Tchaikovsky's "1812 Overture". The overture is to be performed using an artillery section together with the orchestra, resulting in noise levels high enough that musicians are required to wear ear protection. The cannon fire simulates Russian artillery bombardments of the Battle of Borodino, a critical battle in Napoleon's invasion of Russia, whose defeat the piece celebrates. When the overture was first performed, the cannon were fired by an electric current triggered by the conductor. However, the overture was not recorded with real cannon fire until Mercury Records and conductor Antal Doráti's 1958 recording of the Minnesota Orchestra. Cannon fire is also frequently used annually in presentations of the "1812" on the American Independence Day, a tradition started by Arthur Fiedler of the Boston Pops in 1974.
The hard rock band AC/DC also used cannon in their song "For Those About to Rock (We Salute You)", and in live shows replica Napoleonic cannon and pyrotechnics were used to perform the piece.
Cannons recovered from the sea are often extensively damaged from exposure to salt water; because of this, electrolytic reduction treatment is required to forestall the corrosion process. The cannon is then washed in deionized water to remove the electrolyte, and is treated in tannic acid, which prevents further rust and gives the metal a bluish-black colour. After this process, cannon on display may be protected from oxygen and moisture by a wax sealant. A coat of polyurethane may also be painted over the wax sealant, to prevent the wax-coated cannon from attracting dust in outdoor displays. In 2011, archaeologists reported that six cannon recovered from a river in Panama, which may have belonged to the legendary pirate Henry Morgan, were being studied and could eventually be displayed after undergoing restoration. | https://en.wikipedia.org/wiki?curid=7053 |
Computer mouse
A computer mouse (plural mice or mouses) is a hand-held pointing device that detects two-dimensional motion relative to a surface. This motion is typically translated into the motion of a pointer on a display, which allows a smooth control of the graphical user interface of a computer.
The first public demonstration of a mouse controlling a computer system was in 1968. Mice originally used a ball rolling on a surface to detect motion, but modern mice often have optical sensors that have no moving parts. Originally wired to a computer, many modern mice are cordless, relying on short-range radio communication with the connected system.
In addition to moving a cursor, computer mice have one or more buttons to allow operations such as selection of a menu item on a display. Mice often also feature other elements, such as touch surfaces and scroll wheels, which enable additional control and dimensional input.
The earliest known publication of the term "mouse" as referring to a computer pointing device is in Bill English's July 1965 publication "Computer-Aided Display Control", the name likely originating from the device's resemblance in shape and size to a mouse, a rodent, with the cord resembling its tail.
The plural for the small rodent is always "mice" in modern usage. The plural of a computer mouse is either "mouses" or "mice" according to most dictionaries, with "mice" being more common. The first recorded plural usage is "mice"; the online "Oxford Dictionaries" cites a 1984 use, and earlier uses include J. C. R. Licklider's "The Computer as a Communication Device" of 1968. The term computer mouses may be used informally in some cases. Although the plural of a mouse (small rodent) is mice, the two words have undergone a differentiation through usage.
The trackball, a related pointing device, was invented in 1946 by Ralph Benjamin as part of a post-World War II-era fire-control radar plotting system called Comprehensive Display System (CDS). Benjamin was then working for the British Royal Navy Scientific Service. Benjamin's project used analog computers to calculate the future position of target aircraft based on several initial input points provided by a user with a joystick. Benjamin felt that a more elegant input device was needed and invented what he called a "roller ball" for this purpose.
The device was patented in 1947, but only a prototype using a metal ball rolling on two rubber-coated wheels was ever built, and the device was kept as a military secret.
Another early trackball was built by Kenyon Taylor, a British electrical engineer working in collaboration with Tom Cranston and Fred Longstaff. Taylor was part of the original Ferranti Canada, working on the Royal Canadian Navy's DATAR (Digital Automated Tracking and Resolving) system in 1952.
DATAR was similar in concept to Benjamin's display. The trackball used four disks to pick up motion, two each for the X and Y directions. Several rollers provided mechanical support. When the ball was rolled, the pickup discs spun and contacts on their outer rim made periodic contact with wires, producing pulses of output with each movement of the ball. By counting the pulses, the physical movement of the ball could be determined. A digital computer calculated the tracks and sent the resulting data to other ships in a task force using pulse-code modulation radio signals. This trackball used a standard Canadian five-pin bowling ball. It was not patented, since it was a secret military project.
Douglas Engelbart of the Stanford Research Institute (now SRI International) has been credited in published books by Thierry Bardini, Paul Ceruzzi, Howard Rheingold, and several others as the inventor of the computer mouse. Engelbart was also recognized as such in various obituary titles after his death in July 2013.
By 1963, Engelbart had already established a research lab at SRI, the Augmentation Research Center (ARC), to pursue his objective of developing both hardware and software computer technology to "augment" human intelligence. That November, while attending a conference on computer graphics in Reno, Nevada, Engelbart began to ponder how to adapt the underlying principles of the planimeter to inputting X- and Y-coordinate data. On November 14, 1963, he first recorded his thoughts in his personal notebook about something he initially called a "bug," which in a "3-point" form could have a "drop point and 2 orthogonal wheels." He wrote that the "bug" would be "easier" and "more natural" to use, and unlike a stylus, it would stay still when let go, which meant it would be "much better for coordination with the keyboard."
In 1964, Bill English joined ARC, where he helped Engelbart build the first mouse prototype. They christened the device the "mouse" as early models had a cord attached to the rear part of the device which looked like a tail, and in turn resembled the common mouse. As noted above, this "mouse" was first mentioned in print in a July 1965 report, on which English was the lead author. On 9 December 1968, Engelbart publicly demonstrated the mouse at what would come to be known as The Mother of All Demos. Engelbart never received any royalties for it, as his employer SRI held the patent, which expired before the mouse became widely used in personal computers. In any event, the invention of the mouse was just a small part of Engelbart's much larger project of augmenting human intellect.
Several other experimental pointing-devices developed for Engelbart's oN-Line System (NLS) exploited different body movements – for example, head-mounted devices attached to the chin or nose – but ultimately the mouse won out because of its speed and convenience. The first mouse, a bulky device (pictured) used two potentiometers perpendicular to each other and connected to wheels: the rotation of each wheel translated into motion along one axis. At the time of the "Mother of All Demos", Engelbart's group had been using their second generation, 3-button mouse for about a year.
On October 2, 1968, the German company Telefunken described a mouse device named "Rollkugel" (German for "rolling ball") as an optional device for its SIG-100 terminal. As the name suggests, and unlike Engelbart's mouse, the Telefunken model already had a ball. It was based on an earlier trackball-like device (also named "Rollkugel") that was embedded into radar flight control desks. This trackball had been developed by a team led by Rainer Mallebrein at Telefunken for the German Bundesanstalt für Flugsicherung (Federal Air Traffic Control) as part of their TR 86 process computer system with its SIG 100-86 vector graphics terminal.
When development of the Telefunken main frame began in 1965, Mallebrein and his team came up with the idea of "reversing" the existing Rollkugel trackball into a moveable mouse-like device, so that customers did not have to be bothered with mounting holes for the earlier device. Together with light pens and trackballs, it was offered as an optional input device for their system from 1968. Some Rollkugel mouses installed in Munich in 1972 are well preserved in a museum. Telefunken considered the invention too unimportant to apply for a patent on it.
The Xerox Alto was one of the first computers designed for individual use in 1973 and is regarded as the first modern computer to utilize a mouse. Inspired by PARC's Alto, the Lilith, a computer developed by a team led by Niklaus Wirth at ETH Zürich between 1978 and 1980, provided a mouse as well. The third marketed version of an integrated mouse shipped as a part of a computer and intended for personal computer navigation came with the Xerox 8010 Star in 1981.
By 1982, the Xerox 8010 was probably the best-known computer with a mouse. The Sun-1 also came with a mouse, and the forthcoming Apple Lisa was rumored to use one, but the peripheral remained obscure; Jack Hawley of The Mouse House reported that one buyer for a large organization believed at first that his company sold lab mice. Hawley, who manufactured mice for Xerox, stated that "Practically, I have the market all to myself right now"; a Hawley mouse cost $415. In 1982, Logitech introduced the P4 Mouse at the Comdex trade show in Las Vegas, its first hardware mouse. That same year Microsoft made the decision to make the MS-DOS program Microsoft Word mouse-compatible, and developed the first PC-compatible mouse. Microsoft's mouse shipped in 1983, thus beginning the Microsoft Hardware division of the company. However, the mouse remained relatively obscure until the appearance of the Macintosh 128K (which included an updated version of the single-button Lisa Mouse) in 1984, and of the Amiga 1000 and the Atari ST in 1985.
A mouse typically controls the motion of a pointer in two dimensions in a graphical user interface (GUI). The mouse turns movements of the hand backward and forward, left and right into equivalent electronic signals that in turn are used to move the pointer.
The relative movements of the mouse on the surface are applied to the position of the pointer on the screen, which signals the point where actions of the user take place, so hand movements are replicated by the pointer. Clicking or hovering (stopping movement while the cursor is within the bounds of an area) can select files, programs or actions from a list of names, or (in graphical interfaces) through small images called "icons" and other elements. For example, a text file might be represented by a picture of a paper notebook, and clicking while the cursor hovers over this icon might cause a text editing program to open the file in a window.
Different ways of operating the mouse cause specific things to happen in the GUI:
Users can also employ mice "gesturally"; meaning that a stylized motion of the mouse cursor itself, called a "gesture", can issue a command or map to a specific action. For example, in a drawing program, moving the mouse in a rapid "x" motion over a shape might delete the shape.
Gestural interfaces occur more rarely than plain pointing-and-clicking; and people often find them more difficult to use, because they require finer motor control from the user. However, a few gestural conventions have become widespread, including the drag and drop gesture, in which:
For example, a user might drag-and-drop a picture representing a file onto a picture of a trash can, thus instructing the system to delete the file.
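As an illustration of how such a gesture might be interpreted in software, the sketch below (not modelled on any particular toolkit; the threshold value and handler names are invented for the example) tracks a press-move-release sequence and distinguishes a plain click from a drag and drop.

```python
# A minimal sketch of drag-and-drop detection in a GUI event loop.
# All names and the DRAG_THRESHOLD value are illustrative assumptions.

DRAG_THRESHOLD = 4  # pixels of movement before a press is treated as a drag

class DragDropTracker:
    def __init__(self):
        self.pressed_at = None   # where the primary button went down
        self.dragging = False
        self.payload = None      # the object "picked up" at the press position

    def on_button_down(self, x, y, obj_under_cursor):
        self.pressed_at = (x, y)
        self.payload = obj_under_cursor

    def on_move(self, x, y):
        if self.pressed_at and not self.dragging:
            dx = abs(x - self.pressed_at[0])
            dy = abs(y - self.pressed_at[1])
            if max(dx, dy) >= DRAG_THRESHOLD:
                self.dragging = True     # the gesture is now a drag, not a click

    def on_button_up(self, x, y, obj_under_cursor):
        if self.dragging:
            print(f"dropped {self.payload} onto {obj_under_cursor}")
        else:
            print(f"clicked {self.payload}")
        self.pressed_at, self.dragging, self.payload = None, False, None
```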
Standard semantic gestures include:
Other uses of the mouse's input occur commonly in special application-domains. In interactive three-dimensional graphics, the mouse's motion often translates directly into changes in the virtual objects' or camera's orientation. For example, in the first-person shooter genre of games (see below), players usually employ the mouse to control the direction in which the virtual player's "head" faces: moving the mouse up will cause the player to look up, revealing the view above the player's head. A related function makes an image of an object rotate, so that all sides can be examined. 3D design and animation software often modally chords many different combinations to allow objects and cameras to be rotated and moved through space with the few axes of movement mice can detect.
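For illustration, the following sketch converts relative mouse motion into the camera yaw and pitch of a first-person view, as described above; the sensitivity value, the sign convention and the invert option are assumptions rather than the behaviour of any specific engine.

```python
# Illustrative only: mapping relative mouse motion (dx, dy in counts) onto
# first-person camera angles. SENSITIVITY and the convention that dy > 0 means
# "mouse moved forward/up" are assumptions, not taken from any real engine.

SENSITIVITY = 0.1           # degrees of rotation per mouse count (assumed)

yaw, pitch = 0.0, 0.0       # camera angles in degrees

def mouse_look(dx, dy, invert_y=False):
    global yaw, pitch
    yaw = (yaw + dx * SENSITIVITY) % 360.0
    delta_pitch = dy * SENSITIVITY
    if invert_y:
        delta_pitch = -delta_pitch        # "invert mouse": pushing forward looks down
    pitch = max(-89.0, min(89.0, pitch + delta_pitch))   # clamp so the view cannot flip over
    return yaw, pitch
```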
When mice have more than one button, the software may assign different functions to each button. Often, the primary (leftmost in a right-handed configuration) button on the mouse will select items, and the secondary (rightmost in a right-handed) button will bring up a menu of alternative actions applicable to that item. For example, on platforms with more than one button, the Mozilla web browser will follow a link in response to a primary button click, will bring up a contextual menu of alternative actions for that link in response to a secondary-button click, and will often open the link in a new tab or window in response to a click with the tertiary (middle) mouse button.
The German company Telefunken published details of their early ball mouse on 2 October 1968. Telefunken's mouse was sold as optional equipment for their computer systems. Bill English, builder of Engelbart's original mouse, created a ball mouse in 1972 while working for Xerox PARC.
The ball mouse replaced the external wheels with a single ball that could rotate in any direction. It came as part of the hardware package of the Xerox Alto computer. Perpendicular chopper wheels housed inside the mouse's body chopped beams of light on the way to light sensors, thus detecting in their turn the motion of the ball. This variant of the mouse resembled an inverted trackball and became the predominant form used with personal computers throughout the 1980s and 1990s. The Xerox PARC group also settled on the modern technique of using both hands to type on a full-size keyboard and grabbing the mouse when required.
The ball mouse has two freely rotating rollers, located 90 degrees apart. One roller detects the forward–backward motion of the mouse and the other the left–right motion. Opposite the two rollers is a third one, mounted at 45 degrees, that is spring-loaded to push the ball against the other two rollers. Each roller is on the same shaft as an encoder wheel that has slotted edges; the slots interrupt infrared light beams to generate electrical pulses that represent wheel movement. Each wheel's disc has a pair of light beams, located so that a given beam becomes interrupted or again starts to pass light freely when the other beam of the pair is about halfway between changes.
Simple logic circuits interpret the relative timing to indicate which direction the wheel is rotating. This incremental rotary encoder scheme is sometimes called quadrature encoding of the wheel rotation, as the two optical sensors produce signals that are in approximately quadrature phase. The mouse sends these signals to the computer system via the mouse cable, directly as logic signals in very old mice such as the Xerox mice, and via a data-formatting IC in modern mice. The driver software in the system converts the signals into motion of the mouse cursor along X and Y axes on the computer screen.
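The quadrature scheme can be illustrated with a small decoder: each axis feeds two sensor bits whose transition order gives the direction of rotation. This is a sketch of the general technique, not of any particular mouse's circuitry.

```python
# Quadrature decoding for one axis: two sensors roughly 90 degrees out of
# phase produce the Gray-code-like sequence 00 -> 01 -> 11 -> 10 in one
# direction and the reverse in the other.

# Transition table: (previous AB state, new AB state) -> step of -1, 0 or +1.
_TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

class QuadratureDecoder:
    def __init__(self):
        self.state = 0b00
        self.position = 0          # accumulated counts for this axis

    def sample(self, a, b):
        """Feed one sample of the two sensor bits; returns the running count."""
        new_state = (a << 1) | b
        self.position += _TRANSITIONS.get((self.state, new_state), 0)
        self.state = new_state
        return self.position
```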
The ball is mostly steel, with a precision spherical rubber surface. The weight of the ball, given an appropriate working surface under the mouse, provides a reliable grip so the mouse's movement is transmitted accurately. Ball mice and wheel mice were manufactured for Xerox by Jack Hawley, doing business as The Mouse House in Berkeley, California, starting in 1975. Based on another invention by Jack Hawley, proprietor of the Mouse House, Honeywell produced another type of mechanical mouse. Instead of a ball, it had two wheels rotating at off axes. Key Tronic later produced a similar product.
Modern computer mice took form at the École Polytechnique Fédérale de Lausanne (EPFL) under the inspiration of Professor Jean-Daniel Nicoud and at the hands of engineer and watchmaker André Guignard. This new design incorporated a single hard rubber mouseball and three buttons, and remained a common design until the mainstream adoption of the scroll-wheel mouse during the 1990s. In 1985, René Sommer added a microprocessor to Nicoud's and Guignard's design. Through this innovation, Sommer is credited with inventing a significant component of the mouse, which made it more "intelligent"; though optical mice from Mouse Systems had incorporated microprocessors by 1984.
Another type of mechanical mouse, the "analog mouse" (now generally regarded as obsolete), uses potentiometers rather than encoder wheels, and is typically designed to be plug compatible with an analog joystick. The "Color Mouse", originally marketed by RadioShack for their Color Computer (but also usable on MS-DOS machines equipped with analog joystick ports, provided the software accepted joystick input), was the best-known example.
Optical mice rely entirely on one or more light-emitting diodes (LEDs) and an imaging array of photodiodes to detect movement relative to the underlying surface, eschewing the internal moving parts a mechanical mouse uses in addition to its optics. A laser mouse is an optical mouse that uses coherent (laser) light.
The earliest optical mice detected movement on pre-printed mousepad surfaces, whereas the modern LED optical mouse works on most opaque diffuse surfaces; it is usually unable to detect movement on specular surfaces like polished stone. Laser diodes are also used for better resolution and precision, improving performance on opaque specular surfaces. Battery powered, wireless optical mice flash the LED intermittently to save power, and only glow steadily when movement is detected.
Often called "air mice" since they do not require a surface to operate, inertial mice use a tuning fork or other accelerometer (US Patent 4787051) to detect rotary movement for every axis supported. The most common models (manufactured by Logitech and Gyration) work using 2 degrees of rotational freedom and are insensitive to spatial translation. The user requires only small wrist rotations to move the cursor, reducing user fatigue or "gorilla arm".
Usually cordless, they often have a switch to deactivate the movement circuitry between use, allowing the user freedom of movement without affecting the cursor position. A patent for an inertial mouse claims that such mice consume less power than optically based mice, and offer increased sensitivity, reduced weight and increased ease-of-use. In combination with a wireless keyboard an inertial mouse can offer alternative ergonomic arrangements which do not require a flat work surface, potentially alleviating some types of repetitive motion injuries related to workstation posture.
Also known as bats, flying mice, or wands, these devices generally function through ultrasound and provide at least three degrees of freedom. Probably the best known example would be 3Dconnexion ("Logitech's SpaceMouse") from the early 1990s. In the late 1990s Kantek introduced the 3D RingMouse. This wireless mouse was worn on a ring around a finger, which enabled the thumb to access three buttons. The mouse was tracked in three dimensions by a base station. Despite a certain appeal, it was finally discontinued because it did not provide sufficient resolution.
One example of a 2000s consumer 3D pointing device is the Wii Remote. While primarily a motion-sensing device (that is, it can determine its orientation and direction of movement), Wii Remote can also detect its spatial position by comparing the distance and position of the lights from the IR emitter using its integrated IR camera (since the nunchuk accessory lacks a camera, it can only tell its current heading and orientation). The obvious drawback to this approach is that it can only produce spatial coordinates while its camera can see the sensor bar. More accurate consumer devices have since been released, including the PlayStation Move, the Razer Hydra and the controllers part of the HTC Vive virtual reality system. All of these devices can accurately detect position and orientation in 3D space regardless of angle relative to the sensor station.
A mouse-related controller called the SpaceBall has a ball placed above the work surface that can easily be gripped. With spring-loaded centering, it sends both translational as well as angular displacements on all six axes, in both directions for each. In November 2010 a German Company called Axsotic introduced a new concept of 3D mouse called 3D Spheric Mouse. This new concept of a true six degree-of-freedom input device uses a ball to rotate in 3 axes without any limitations.
In 2000, Logitech introduced a "tactile mouse" that contained a small actuator to make the mouse vibrate. Such a mouse can augment user-interfaces with haptic feedback, such as giving feedback when crossing a window boundary. To surf by touch requires the user to be able to feel depth or hardness; this ability was realized with the first electrorheological tactile mice but never marketed.
Tablet digitizers are sometimes used with accessories called pucks, devices which rely on absolute positioning, but can be configured for sufficiently mouse-like relative tracking that they are sometimes marketed as mice.
As the name suggests, this type of mouse is intended to provide optimum comfort and avoid injuries such as carpal tunnel syndrome, arthritis and other repetitive strain injuries. It is designed to fit natural hand position and movements, to reduce discomfort.
When holding a typical mouse, the ulna and radius bones of the forearm are crossed. Some designs attempt to place the palm more vertically, so the bones take a more natural parallel position. Some limit wrist movement, encouraging arm movement instead, which may be less precise but better from a health standpoint. A mouse may be angled from the thumb downward to the opposite side – this is known to reduce wrist pronation. However, such optimizations make the mouse specific to the right or left hand, making it more problematic to switch to the other hand when one becomes tired. Time magazine has criticized manufacturers for offering few or no left-handed ergonomic mice: "Oftentimes I felt like I was dealing with someone who’d never actually met a left-handed person before."
Another solution is a pointing bar device. The so-called "roller bar mouse" is positioned snugly in front of the keyboard, thus allowing bi-manual accessibility.
These mice are specifically designed for use in computer games. They typically employ a wide array of controls and buttons and have designs that differ radically from traditional mice. They may also have decorative monochrome or programmable RGB LED lighting. The additional buttons can often be used for changing the sensitivity of the mouse, or they can be assigned to macros (i.e., for opening a program or for use instead of a key combination). It is also common for gaming mice, especially those designed for use in real-time strategy games such as "StarCraft" or in multiplayer online battle arena games such as "Dota 2", to have a relatively high sensitivity, measured in dots per inch (DPI). Some advanced mice from gaming manufacturers also allow users to customize the weight of the mouse by adding or removing weights to allow for easier control. Ergonomic quality is also an important factor in gaming mice, as extended gameplay times may make continued use of the mouse uncomfortable. Some mice have been designed with adjustable features such as removable and/or elongated palm rests, horizontally adjustable thumb rests and pinky rests. Some mice may include several different rests with their products to ensure comfort for a wider range of target consumers. Gaming mice are held by gamers in three styles of grip:
To transmit their input, typical cabled mice use a thin electrical cord terminating in a standard connector, such as RS-232C, PS/2, ADB or USB. Cordless mice instead transmit data via infrared radiation (see IrDA) or radio (including Bluetooth), although many such cordless interfaces are themselves connected through the aforementioned wired serial buses.
While the electrical interface and the format of the data transmitted by commonly available mice is currently standardized on USB, in the past it varied between different manufacturers. A bus mouse used a dedicated interface card for connection to an IBM PC or compatible computer.
Mouse use in DOS applications became more common after the introduction of the Microsoft Mouse, largely because Microsoft provided an open standard for communication between applications and mouse driver software. Thus, any application written to use the Microsoft standard could use a mouse with a driver that implements the same API, even if the mouse hardware itself was incompatible with Microsoft's. This driver provides the state of the buttons and the distance the mouse has moved in units that its documentation calls "mickeys", as does the Allegro library.
In the 1970s, the Xerox Alto mouse, and in the 1980s the Xerox optical mouse, used a quadrature-encoded X and Y interface. This two-bit encoding per dimension had the property that only one bit of the two would change at a time, like a Gray code or Johnson counter, so that the transitions would not be misinterpreted when asynchronously sampled.
The earliest mass-market mice, such as those shipped with the original Macintosh, Amiga, and Atari ST, used a D-subminiature 9-pin connector to send the quadrature-encoded X and Y axis signals directly, plus one pin per mouse button. The mouse was a simple optomechanical device, and the decoding circuitry was all in the main computer.
The DE-9 connectors were designed to be electrically compatible with the joysticks popular on numerous 8-bit systems, such as the Commodore 64 and the Atari 2600. Although the ports could be used for both purposes, the signals must be interpreted differently. As a result, plugging a mouse into a joystick port causes the "joystick" to continuously move in some direction, even if the mouse stays still, whereas plugging a joystick into a mouse port causes the "mouse" to only be able to move a single pixel in each direction.
Because the IBM PC did not have a quadrature decoder built in, early PC mice used the RS-232C serial port to communicate encoded mouse movements, as well as provide power to the mouse's circuits. The Mouse Systems Corporation version used a five-byte protocol and supported three buttons. The Microsoft version used a three-byte protocol and supported two buttons. Due to the incompatibility between the two protocols, some manufacturers sold serial mice with a mode switch: "PC" for MSC mode, "MS" for Microsoft mode.
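As a rough illustration of the Microsoft three-byte serial format, the sketch below decodes one packet. The bit layout follows the commonly documented protocol (a sync bit and the button bits in the first byte, with movement split across the bytes), but it is reconstructed from general documentation rather than from this article, so it should be treated as an assumption.

```python
# Hedged sketch of decoding one 3-byte packet in the Microsoft serial mouse
# protocol, following the commonly documented bit layout:
#   byte 1: 1 L R Y7 Y6 X7 X6   (sync bit set, buttons, movement high bits)
#   byte 2: 0 X5 X4 X3 X2 X1 X0
#   byte 3: 0 Y5 Y4 Y3 Y2 Y1 Y0

def to_signed8(value):
    return value - 256 if value > 127 else value

def decode_ms_packet(b1, b2, b3):
    assert b1 & 0x40, "first byte of a packet has the sync bit set"
    left  = bool(b1 & 0x20)
    right = bool(b1 & 0x10)
    dx = to_signed8(((b1 & 0x03) << 6) | (b2 & 0x3F))
    dy = to_signed8(((b1 & 0x0C) << 4) | (b3 & 0x3F))
    return {"left": left, "right": right, "dx": dx, "dy": dy}
```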
In 1986 Apple first implemented the Apple Desktop Bus, allowing the daisy-chaining (linking together in series, i.e., end to end) of up to 16 devices, including mice and other devices, on the same bus with no configuration whatsoever. Featuring only a single data pin, the bus used a purely polled approach to computer/device communications and survived as the standard on mainstream models (including a number of non-Apple workstations) until 1998, when Apple's iMac line of computers joined the industry-wide switch to USB. Beginning with the Bronze Keyboard PowerBook G3 in May 1999, Apple dropped the external ADB port in favor of USB, but retained an internal ADB connection in the PowerBook G4 for communication with its built-in keyboard and trackpad until early 2005.
With the arrival of the IBM PS/2 personal-computer series in 1987, IBM introduced the eponymous PS/2 interface for mice and keyboards, which other manufacturers rapidly adopted. The most visible change was the use of a round 6-pin mini-DIN, in lieu of the former 5-pin MIDI style full sized DIN 41524 connector. In default mode (called "stream mode") a PS/2 mouse communicates motion, and the state of each button, by means of 3-byte packets. For any motion, button press or button release event, a PS/2 mouse sends, over a bi-directional serial port, a sequence of three bytes, with the following format:
Here, XS and YS represent the sign bits of the movement vectors, XV and YV indicate an overflow in the respective vector component, and LB, MB and RB indicate the status of the left, middle and right mouse buttons (1 = pressed). PS/2 mice also understand several commands for reset and self-test, switching between different operating modes, and changing the resolution of the reported motion vectors.
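A packet of this kind can be unpacked with a few bit masks. The sketch below follows the commonly documented PS/2 bit layout (buttons, sign and overflow flags in the first byte, then the X and Y movement bytes); it is reconstructed from general documentation, so verify against the specification before relying on it.

```python
# Hedged sketch of parsing a 3-byte PS/2 "stream mode" packet.
# First byte (commonly documented layout): YV XV YS XS 1 MB RB LB.

def parse_ps2_packet(b0, b1, b2):
    lb = bool(b0 & 0x01)          # left button
    rb = bool(b0 & 0x02)          # right button
    mb = bool(b0 & 0x04)          # middle button
    xs = bool(b0 & 0x10)          # X sign bit
    ys = bool(b0 & 0x20)          # Y sign bit
    xv = bool(b0 & 0x40)          # X overflow
    yv = bool(b0 & 0x80)          # Y overflow
    dx = b1 - 256 if xs else b1   # 9-bit two's complement movement
    dy = b2 - 256 if ys else b2
    return {"left": lb, "middle": mb, "right": rb,
            "dx": dx, "dy": dy, "x_overflow": xv, "y_overflow": yv}
```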
A Microsoft IntelliMouse relies on an extension of the PS/2 protocol: the ImPS/2 or IMPS/2 protocol (the abbreviation combines the concepts of "IntelliMouse" and "PS/2"). It initially operates in standard PS/2 format, for backwards compatibility. After the host sends a special command sequence, it switches to an extended format in which a fourth byte carries information about wheel movements. The IntelliMouse Explorer works analogously, with the difference that its 4-byte packets also allow for two additional buttons (for a total of five).
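For the extended formats, the extra byte can be decoded similarly. The bit positions shown below follow the commonly documented ImPS/2 and IntelliMouse Explorer layouts and should be treated as an assumption rather than a normative description.

```python
# Hedged sketch of the fourth byte in the IntelliMouse extensions:
# ImPS/2: the byte is signed wheel movement.
# IntelliMouse Explorer: bits 0-3 wheel, bit 4 button 4, bit 5 button 5.

def parse_intellimouse_fourth_byte(b3, explorer=False):
    if explorer:
        z = b3 & 0x0F
        z = z - 16 if z > 7 else z        # 4-bit two's complement wheel movement
        return {"wheel": z, "button4": bool(b3 & 0x10), "button5": bool(b3 & 0x20)}
    z = b3 - 256 if b3 > 127 else b3      # signed 8-bit wheel movement
    return {"wheel": z}
```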
Mouse vendors also use other extended formats, often without providing public documentation. The Typhoon mouse uses 6-byte packets which can appear as a sequence of two standard 3-byte packets, such that an ordinary PS/2 driver can handle them. For 3-D (or 6-degree-of-freedom) input, vendors have made many extensions both to the hardware and to software. In the late 1990s, Logitech created ultrasound based tracking which gave 3D input to a few millimeters accuracy, which worked well as an input device but failed as a profitable product. In 2008, Motion4U introduced its "OptiBurst" system using IR tracking for use as a Maya (graphics software) plugin.
The industry-standard USB (Universal Serial Bus) protocol and its connector have become widely used for mice; it is among the most popular types.
Cordless or wireless mice transmit data via infrared radiation (see IrDA) or radio (including Bluetooth and Wi-Fi). The receiver is connected to the computer through a serial or USB port, or can be built in (as is sometimes the case with Bluetooth and WiFi).
Modern non-Bluetooth and non-WiFi wireless mice use USB receivers. Some of these can be stored inside the mouse for safe transport while not in use, while others use "nano" receivers, designed to be small enough to remain plugged into a laptop during transport, while still being large enough to easily remove.
Some systems allow two or more mice to be used at once as input devices. Late-1980s era home computers such as the Amiga used this to allow computer games with two players interacting on the same computer (Lemmings and The Settlers for example). The same idea is sometimes used in collaborative software, e.g. to simulate a whiteboard that multiple users can draw on without passing a single mouse around.
Microsoft Windows, since Windows 98, has supported multiple simultaneous pointing devices. Because Windows only provides a single screen cursor, using more than one device at the same time requires cooperation of users or applications designed for multiple input devices.
Multiple mice are often used in multi-user gaming in addition to specially designed devices that provide several input interfaces.
Windows also has full support for multiple input/mouse configurations for multi-user environments.
Starting with Windows XP, Microsoft introduced an SDK for developing applications that allow multiple input devices to be used at the same time with independent cursors and independent input points. However, it no longer appears to be available.
The introduction of Vista and Microsoft Surface (now known as Microsoft PixelSense) introduced a new set of input APIs that were adopted into Windows 7, allowing for 50 points/cursors, all controlled by independent users. The new input points provide traditional mouse input; however, they were designed with other input technologies like touch and image in mind. They inherently offer 3D coordinates along with pressure, size, tilt, angle, mask, and even an image bitmap to see and recognize the input point/object on the screen.
As of 2009, Linux distributions and other operating systems that use X.Org, such as OpenSolaris and FreeBSD, support 255 cursors/input points through Multi-Pointer X. However, no window managers currently support Multi-Pointer X, leaving it relegated to custom software usage.
There have also been propositions of having a single operator use two mice simultaneously as a more sophisticated means of controlling various graphics and multimedia applications.
Mouse buttons are microswitches which can be pressed to select or interact with an element of a graphical user interface, producing a distinctive clicking sound.
Since around the late 1990s, the three-button scrollmouse has become the de facto standard. Users most commonly employ the second button to invoke a contextual menu in the computer's software user interface, which contains options specifically tailored to the interface element over which the mouse cursor currently sits. By default, the primary mouse button is located on the left-hand side of the mouse, for the benefit of right-handed users; left-handed users can usually reverse this configuration via software.
Nearly all mice now have an integrated input primarily intended for scrolling on top, usually a single-axis digital wheel or rocker switch which can also be depressed to act as a third button. Though less common, many mice instead have two-axis inputs such as a tiltable wheel, trackball, or touchpad.
Mickeys per second is a unit of measurement for the speed and movement direction of a computer mouse, where direction is often expressed as "horizontal" versus "vertical" mickey count. However, speed can also refer to the ratio between how many pixels the cursor moves on the screen and how far the mouse moves on the mouse pad, which may be expressed as pixels per mickey, pixels per inch, or pixels per centimeter.
The computer industry often measures mouse sensitivity in terms of counts per inch (CPI), commonly expressed as dots per inch (DPI): the number of steps the mouse will report when it moves one inch. In early mice, this specification was called pulses per inch (ppi). The mickey originally referred to one of these counts, or one resolvable step of motion. If the default mouse-tracking condition involves moving the cursor by one screen pixel or dot per reported step, then CPI does equate to DPI: dots of cursor motion per inch of mouse motion. The CPI or DPI reported by manufacturers depends on how they make the mouse; the higher the CPI, the faster the cursor moves with mouse movement. However, software can adjust the mouse sensitivity, making the cursor move faster or slower than its CPI. Software can also change the speed of the cursor dynamically, taking into account the mouse's absolute speed and the movement since the last stop. In most software, the Windows platforms being an example, this setting is named "speed", referring to "cursor precision". Some operating systems name this setting "acceleration" (the typical Apple OS designation), but that term is strictly incorrect: mouse acceleration in most mouse software refers to the change in speed of the cursor over time while the mouse movement is constant.
For simple software, when the mouse starts to move, the software will count the number of "counts" or "mickeys" received from the mouse and will move the cursor across the screen by that number of pixels (or multiplied by a rate factor, typically less than 1). The cursor will move slowly on the screen, with good precision. When the movement of the mouse passes the value set for some threshold, the software will start to move the cursor faster, with a greater rate factor. Usually, the user can set the value of the second rate factor by changing the "acceleration" setting.
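A minimal sketch of this two-rate scheme follows; the factor and threshold values are illustrative and not taken from any particular driver.

```python
# Two-rate cursor mapping: mouse counts ("mickeys") map to pixels at a base
# rate until per-report movement crosses a threshold, after which a larger
# "acceleration" factor applies. All numbers are assumed, illustrative values.

BASE_FACTOR  = 0.75   # pixels per mickey below the threshold
ACCEL_FACTOR = 2.0    # pixels per mickey above the threshold
THRESHOLD    = 6      # mickeys per report at which acceleration kicks in

def mickeys_to_pixels(mickeys):
    factor = ACCEL_FACTOR if abs(mickeys) > THRESHOLD else BASE_FACTOR
    return int(mickeys * factor)
```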
Operating systems sometimes apply acceleration, referred to as "ballistics", to the motion reported by the mouse. For example, versions of Windows prior to Windows XP doubled reported values above a configurable threshold, and then optionally doubled them again above a second configurable threshold. These doublings applied separately in the X and Y directions, resulting in very nonlinear response.
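The legacy doubling behaviour can be sketched per axis as follows; the threshold values are placeholders, since the real ones were user-configurable.

```python
# Pre-Windows-XP style "ballistics", applied to each axis independently:
# reported movement is doubled above one threshold and doubled again above a
# second. THRESHOLD_1 and THRESHOLD_2 are assumed placeholder values.

THRESHOLD_1 = 6    # first doubling threshold (counts per report)
THRESHOLD_2 = 10   # second doubling threshold

def legacy_ballistics(delta):
    magnitude = abs(delta)
    if magnitude > THRESHOLD_1:
        delta *= 2
    if magnitude > THRESHOLD_2:
        delta *= 2
    return delta
```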
Engelbart's original mouse did not require a mousepad; the mouse had two large wheels which could roll on virtually any surface. However, most subsequent mechanical mice starting with the steel roller ball mouse have required a mousepad for optimal performance.
The mousepad, the most common mouse accessory, appears most commonly in conjunction with mechanical mice, because to roll smoothly the ball requires more friction than common desk surfaces usually provide. So-called "hard mousepads" for gamers or optical/laser mice also exist.
Most optical and laser mice do not require a pad, the notable exception being early optical mice which relied on a grid on the pad to detect movement (e.g. Mouse Systems). Whether to use a hard or soft mousepad with an optical mouse is largely a matter of personal preference. One exception occurs when the desk surface creates problems for the optical or laser tracking, for example, a transparent or reflective surface, such as glass.
Around 1981, Xerox included mice with its Xerox Star, based on the mouse used in the 1970s on the Alto computer at Xerox PARC. Sun Microsystems, Symbolics, Lisp Machines Inc., and Tektronix also shipped workstations with mice, starting in about 1981. Later, inspired by the Star, Apple Computer released the Apple Lisa, which also used a mouse. However, none of these products achieved large-scale success. Only with the release of the Apple Macintosh in 1984 did the mouse see widespread use.
The Macintosh design, commercially successful and technically influential, led many other vendors to begin producing mice or including them with their other computer products (by 1986, Atari ST, Amiga, Windows 1.0, GEOS for the Commodore 64, and the Apple IIGS).
The widespread adoption of graphical user interfaces in the software of the 1980s and 1990s made mice all but indispensable for controlling computers. In November 2008, Logitech built their billionth mouse.
The Classic Mac OS Desk Accessory "Puzzle" in 1984 was the first game designed specifically for a mouse. The device often functions as an interface for PC-based computer games and sometimes for video game consoles.
FPSs naturally lend themselves to separate and simultaneous control of the player's movement and aim, and on computers this has traditionally been achieved with a combination of keyboard and mouse. Players use the X-axis of the mouse for looking (or turning) left and right, and the Y-axis for looking up and down; the keyboard is used for movement and supplemental inputs.
Many shooting genre players prefer a mouse over a gamepad analog stick because the wide range of motion offered by a mouse allows for faster and more varied control. Although an analog stick allows the player more granular control, it is poor for certain movements, as the player's input is relayed based on a vector of both the stick's direction and magnitude. Thus, a small but fast movement (known as "flick-shotting") using a gamepad requires the player to quickly move the stick from its rest position to the edge and back again in quick succession, a difficult maneuver. In addition the stick also has a finite magnitude; if the player is currently using the stick to move at a non-zero velocity, their ability to increase the rate of movement of the camera is further limited based on the position their displaced stick was already at before executing the maneuver. The effect of this is that a mouse is well suited not only to small, precise movements but also to large, quick movements and immediate, responsive movements; all of which are important in shooter gaming. This advantage also extends in varying degrees to similar game styles such as third-person shooters.
Some incorrectly ported games or game engines have acceleration and interpolation curves which unintentionally produce excessive, irregular, or even negative acceleration when used with a mouse instead of their native platform's non-mouse default input device. Depending on how deeply hardcoded this misbehavior is, internal user patches or external 3rd-party software may be able to fix it.
Due to their similarity to the WIMP desktop metaphor interface for which mice were originally designed, and to their own tabletop game origins, computer strategy games are most commonly played with mice. In particular, real-time strategy and MOBA games usually require the use of a mouse.
The left button usually controls primary fire. If the game supports multiple fire modes, the right button often provides secondary fire from the selected weapon. Games with only a single fire mode will generally map secondary fire to "ADS". In some games, the right button may also invoke accessories for a particular weapon, such as allowing access to the scope of a sniper rifle or allowing the mounting of a bayonet or silencer.
Gamers can use a scroll wheel for changing weapons (or for controlling scope-zoom magnification, in older games). On most first person shooter games, programming may also assign more functions to additional buttons on mice with more than three controls. A keyboard usually controls movement (for example, WASD for moving forward, left, backward and right, respectively) and other functions such as changing posture. Since the mouse serves for aiming, a mouse that tracks movement accurately and with less lag (latency) will give a player an advantage over players with less accurate or slower mice. In some cases the right mouse button may be used to move the player forward, either in lieu of, or in conjunction with the typical WASD configuration.
Many games provide players with the option of mapping their own choice of a key or button to a certain control. An early technique of players, circle strafing, saw a player continuously strafing while aiming and shooting at an opponent by walking in a circle around the opponent with the opponent at the center of the circle. Players could achieve this by holding down a key for strafing while continuously aiming the mouse towards the opponent.
Games using mice for input are so popular that many manufacturers make mice specifically for gaming. Such mice may feature adjustable weights, high-resolution optical or laser components, additional buttons, ergonomic shape, and other features such as adjustable CPI. Mouse bungees are typically used with gaming mice because they eliminate the annoyance of the cable.
Many games, such as first- or third-person shooters, have a setting named "invert mouse" or similar (not to be confused with "button inversion", sometimes performed by left-handed users) which allows the user to look downward by moving the mouse forward and upward by moving the mouse backward (the opposite of non-inverted movement). This control system resembles that of aircraft control sticks, where pulling back causes pitch up and pushing forward causes pitch down; computer joysticks also typically emulate this control-configuration.
After id Software's commercial hit of "Doom", which did not support vertical aiming, competitor Bungie's "Marathon" became the first first-person shooter to support using the mouse to aim up and down. Games using the Build engine had an option to invert the Y-axis. The "invert" feature actually made the mouse behave in a manner that users regard as non-inverted (by default, moving mouse forward resulted in looking down). Soon after, id Software released "Quake", which introduced the invert feature as users know it.
In 1988, the VTech Socrates educational video game console featured a wireless mouse with an attached mouse pad as an optional controller used for some games. In the early 1990s, the Super Nintendo Entertainment System video game system featured a mouse in addition to its controllers. The "Mario Paint" game in particular used the mouse's capabilities as did its successor on the N64. Sega released official mice for their Genesis/Mega Drive, Saturn and Dreamcast consoles. NEC sold official mice for its PC Engine and PC-FX consoles. Sony released an official mouse product for the PlayStation console, included one along with the Linux for PlayStation 2 kit, as well as allowing owners to use virtually any USB mouse with the PS2, PS3, and PS4. Nintendo's Wii also had this added on in a later software update, retained on the Wii U. | https://en.wikipedia.org/wiki?curid=7056 |
Civil defense
Civil defence (civil defense in US English) or civil protection is an effort to protect the citizens of a state (generally non-combatants) from military attacks and natural disasters. It uses the principles of emergency operations: prevention, mitigation, preparation, response, or emergency evacuation and recovery. Programs of this sort were initially discussed at least as early as the 1920s and were implemented in some countries during the 1930s as the threat of war and aerial bombardment grew. It became widespread after the threat of nuclear weapons was realized.
Since the end of the Cold War, the focus of civil defence has largely shifted from military attack to emergencies and disasters in general. The new concept is described by a number of terms, each of which has its own specific shade of meaning, such as "crisis management", "emergency management", "emergency preparedness", "contingency planning", "civil contingency", "civil aid" and "civil protection".
In some countries, civil defense is seen as a key part of defense in general. For example the Swedish language word "totalförsvar" ("total defense") refers to the commitment of a wide range of national resources to its defense, including the protection of all aspects of civilian life. Some countries have organized civil defense along paramilitary lines, or incorporated it within armed forces, such as the Soviet Civil Defense Forces (Войска гражданской обороны).
The advent of civil defense was stimulated by the experience of the bombing of civilian areas during the First World War. The bombing of the United Kingdom began on 19 January 1915 when German zeppelins dropped bombs on the Great Yarmouth area, killing six people. German bombing operations of the First World War were surprisingly effective, especially after the Gotha bombers surpassed the zeppelins. The most devastating raids inflicted 121 casualties for each ton of bombs dropped; this figure was then used as a basis for predictions.
After the war, attention was turned toward civil defense in the event of war, and the Air Raid Precautions Committee (ARP) was established in 1924 to investigate ways for ensuring the protection of civilians from the danger of air-raids.
The Committee produced figures estimating that in London there would be 9,000 casualties in the first two days and then a continuing rate of 17,500 casualties a week. These rates were thought conservative. It was believed that there would be "total chaos and panic" and hysterical neurosis as the people of London would try to flee the city. To control the population harsh measures were proposed: bringing London under almost military control, and physically cordoning off the city with 120,000 troops to force people back to work. A different government department proposed setting up camps for refugees for a few days before sending them back to London.
A special government department, the Civil Defence Service, was established by the Home Office in 1935. Its remit included the pre-existing ARP as well as wardens, firemen (initially the Auxiliary Fire Service (AFS) and latterly the National Fire Service (NFS)), fire watchers, rescue, first aid post, stretcher party and industry. Over 1.9 million people served within the CD; nearly 2,400 lost their lives to enemy action.
The organization of civil defense was the responsibility of the local authority. Volunteers were ascribed to different units depending on experience or training. Each local civil defense service was divided into several sections. Wardens were responsible for local reconnaissance and reporting, and leadership, organization, guidance and control of the general public. Wardens would also advise survivors of the locations of rest and food centers, and other welfare facilities.
Rescue Parties were required to assess and then access bombed-out buildings and retrieve injured or dead people. In addition they would turn off gas, electricity and water supplies, and repair or pull down unsteady buildings. Medical services, including First Aid Parties, provided on the spot medical assistance.
The expected stream of information that would be generated during an attack was handled by 'Report and Control' teams. A local headquarters would have an ARP controller who would direct rescue, first aid and decontamination teams to the scenes of reported bombing. If local services were deemed insufficient to deal with the incident then the controller could request assistance from surrounding boroughs.
Fire Guards were responsible for a designated area/building and required to monitor the fall of incendiary bombs and pass on news of any fires that had broken out to the NFS. They could deal with an individual magnesium electron incendiary bomb by dousing it with buckets of sand or water or by smothering. Additionally, 'Gas Decontamination Teams' kitted out with gas-tight and waterproof protective clothing were to deal with any gas attacks. They were trained to decontaminate buildings, roads, rail and other material that had been contaminated by liquid or jelly gases.
Little progress was made over the issue of air-raid shelters, because of the apparently irreconcilable conflict between the need to send the public underground for shelter and the need to keep them above ground for protection against gas attacks. In February 1936 the Home Secretary appointed a technical Committee on Structural Precautions against Air Attack. During the Munich crisis, local authorities dug trenches to provide shelter. After the crisis, the British Government decided to make these a permanent feature, with a standard design of precast concrete trench lining. They also decided to issue the Anderson shelter free to poorer households and to provide steel props to create shelters in suitable basements.
During the Second World War, the ARP was responsible for the issuing of gas masks, pre-fabricated air-raid shelters (such as Anderson shelters, as well as Morrison shelters), the upkeep of local public shelters, and the maintenance of the blackout. The ARP also helped rescue people after air raids and other attacks, and some women became ARP Ambulance Attendants whose job was to help administer first aid to casualties, search for survivors, and in many grim instances, help recover bodies, sometimes those of their own colleagues.
As the war progressed, the military effectiveness of Germany's aerial bombardment was very limited. Thanks to the Luftwaffe's shifting aims, the strength of British air defenses, the use of early warning radar and the life-saving actions of local civil defense units, the aerial "Blitz" during the Battle of Britain failed to break the morale of the British people, destroy the Royal Air Force or significantly hinder British industrial production. Despite a significant investment in civil and military defense, British civilian losses during the Blitz were higher than in most strategic bombing campaigns throughout the war. For example, there were 14,000-20,000 UK civilian fatalities during the Battle of Britain, a relatively high number considering that the Luftwaffe dropped only an estimated 30,000 tons of ordnance during the battle. In comparison, Allied strategic bombing of Germany during the war was less lethal, with an estimated 400,000-600,000 German civilian fatalities for approximately 1.35 million tons of bombs dropped on Germany.
In the United States, the Office of Civil Defense was established in May 1941 to coordinate civilian defense efforts. It coordinated with the Department of the Army and established similar groups to the British ARP. One of these groups that still exists today is the Civil Air Patrol, which was originally created as a civilian auxiliary to the Army. The CAP was created on December 1, 1941, with the main civil defense mission of search and rescue. The CAP also sank two Axis submarines and provided aerial reconnaissance for Allied and neutral merchant ships. In 1946, the Civil Air Patrol was barred from combat by Public Law 79-476. The CAP then received its current mission: search and rescue for downed aircraft. When the Air Force was created, in 1947, the Civil Air Patrol became the auxiliary of the Air Force.
The Coast Guard Auxiliary performs a similar role in support of the U.S. Coast Guard. Like the Civil Air Patrol, the Coast Guard Auxiliary was established in the run up to World War II. Auxiliarists were sometimes armed during the war, and extensively participated in port security operations. After the war, the Auxiliary shifted its focus to promoting boating safety and assisting the Coast Guard in performing search and rescue and marine safety and environmental protection.
In the United States a federal civil defense program existed under Public Law 920 of the 81st Congress, as amended, from 1951–1994. That statutory scheme was made so-called all-hazards by Public Law 103-160 in 1993 and largely repealed by Public Law 103-337 in 1994. Parts now appear in Title VI of the Robert T. Stafford Disaster Relief and Emergency Assistance Act, Public Law 100-107 [1988 as amended]. The term "emergency preparedness" was largely codified by that repeal and amendment. See 42 USC Sections 5101 and following.
In most of the states of the North Atlantic Treaty Organization, such as the United States, the United Kingdom and West Germany, as well as the Soviet Bloc, and especially in the neutral countries, such as Switzerland and in Sweden during the 1950s and 1960s, many civil defense practices took place to prepare for the aftermath of a nuclear war, which seemed quite likely at that time.
In the United Kingdom, the Civil Defence Service was disbanded in 1945, followed by the ARP in 1946. With the onset of the growing tensions between East and West, the service was revived in 1949 as the Civil Defence Corps. As a civilian volunteer organization, it was tasked to take control in the aftermath of a major national emergency, principally envisaged as being a Cold War nuclear attack. Although under the authority of the Home Office, with a centralized administrative establishment, the corps was administered locally by Corps Authorities. In general every county was a Corps Authority, as were most county boroughs in England and Wales and large burghs in Scotland.
Each division was divided into several sections, including the Headquarters, Intelligence and Operations, Scientific and Reconnaissance, Warden & Rescue, Ambulance and First Aid and Welfare.
In 1954 Coventry City Council caused international controversy when it announced plans to disband its Civil Defence committee because the councillors had decided that hydrogen bombs meant that there could be no recovery from a nuclear attack. The British government opposed such a move and held a provocative Civil Defence exercise on the streets of Coventry which Labour council members protested against. The government also decided to implement its own committee at the city's cost until the council reinstituted its committee.
In the United States, the sheer power of nuclear weapons and the perceived likelihood of such an attack precipitated a greater response than had yet been required of civil defense. Civil defense, previously considered an important and commonsense step, became divisive and controversial in the charged atmosphere of the Cold War. In 1950, the National Security Resources Board created a 162-page document outlining a model civil defense structure for the U.S. Called the "Blue Book" by civil defense professionals in reference to its solid blue cover, it was the template for legislation and organization for the next 40 years.
Perhaps the most memorable aspect of the Cold War civil defense effort was the educational effort made or promoted by the government. In "Duck and Cover", Bert the Turtle advocated that children "duck and cover" when they "see the flash." Booklets such as "Survival Under Atomic Attack", "Fallout Protection" and "Nuclear War Survival Skills" were also commonplace. The transcribed radio program Stars for Defense combined hit music with civil defense advice. Government institutes created public service announcements including children's songs and distributed them to radio stations to educate the public in case of nuclear attack.
US President John F. Kennedy (1961–63) launched an ambitious effort to install fallout shelters throughout the United States. These shelters would not protect against the blast and heat effects of nuclear weapons, but would provide some protection against the radiation effects that would last for weeks and even affect areas distant from a nuclear explosion. In order for most of these preparations to be effective, there had to be some degree of warning. In 1951, CONELRAD (Control of Electromagnetic Radiation) was established. Under the system, a few primary stations would be alerted of an emergency and would broadcast an alert. All broadcast stations throughout the country would be constantly listening to an upstream station and repeat the message, thus passing it from station to station.
In a once-classified US war game analysis examining varying levels of war escalation, warning and pre-emptive attacks in the late 1950s and early 1960s, it was estimated that approximately 27 million US citizens would have been saved with civil defense education. At the time, however, the cost of a full-scale civil defense program was regarded as less effective in cost-benefit analysis than a ballistic missile defense (Nike Zeus) system, and as the Soviet adversary was increasing their nuclear stockpile, the efficacy of both would follow a diminishing returns trend.
Contrary to the largely noncommittal approach taken in NATO, with its stops and starts in civil defense depending on the whims of each newly elected government, the military strategy in the comparatively more ideologically consistent USSR held that, amongst other things, a winnable nuclear war was possible. To this effect the Soviets planned to minimize, as far as possible, the effects of nuclear weapon strikes on their territory, and therefore spent considerably more thought on civil defense preparations than the U.S. did, with defense plans that have been assessed to be far more effective than those in the U.S.
Soviet Civil Defense Troops played the main role in the massive disaster relief operation following the 1986 Chernobyl nuclear accident. Defense Troop reservists were officially mobilized (as in a case of war) from throughout the USSR to join the Chernobyl task force, which was formed on the basis of the Kiev Civil Defense Brigade. The task force performed some high-risk tasks, including the manual removal of highly radioactive debris after its robotic machinery failed. Many of its personnel were later decorated with medals for their work in containing the release of radiation into the environment, and a number of the 56 deaths from the accident were Civil Defense troops.
In Western countries, strong civil defense policies were never properly implemented, because they were fundamentally at odds with the doctrine of "mutual assured destruction" (MAD) by making provisions for survivors. It was also considered that a full-fledged total defense would not have been worth the very large expense. For whatever reason, the public saw efforts at civil defense as fundamentally ineffective against the powerful destructive forces of nuclear weapons, and therefore a waste of time and money, although detailed scientific research programs did underlie the much-mocked government civil defense pamphlets of the 1950s and 1960s.
Governments in most Western countries, with the sole exception of Switzerland, generally sought to underfund Civil Defense due to its perceived pointlessness. Nevertheless, effective but commonly dismissed civil defense measures against nuclear attack were implemented, in the face of popular apathy and skepticism of authority. After the end of the Cold War, the focus moved from defense against nuclear war to defense against a terrorist attack possibly involving chemical or biological weapons.
The Civil Defence Corps was stood down in Great Britain in 1968 with the tacit realization that nothing practical could be done in the event of an unrestricted nuclear attack. Its neighbors, however, remained committed to Civil Defence, namely the Isle of Man Civil Defence Corps and Civil Defence Ireland (Republic of Ireland).
In the United States, the various civil defense agencies were replaced with the Federal Emergency Management Agency (FEMA) in 1979. In 2002 this became part of the Department of Homeland Security. The focus was shifted from nuclear war to an "all-hazards" approach of Comprehensive Emergency Management. Natural disasters and the emergence of new threats such as terrorism have caused attention to be focused away from traditional civil defense and into new forms of civil protection such as emergency management and homeland security.
Many countries still maintain a national Civil Defence Corps, usually having a wide brief for assisting in large scale civil emergencies such as flood, earthquake, invasion, or civil disorder.
After the September 11 attacks in 2001, in the United States the concept of civil defense has been revisited under the umbrella term of homeland security and all-hazards emergency management.
In Europe, the triangle CD logo continues to be widely used. The old U.S. civil defense logo was used in the FEMA logo until 2006 and is hinted at in the United States Civil Air Patrol logo. Created in 1939 by Charles Coiner of the N. W. Ayer Advertising Agency, it was used throughout World War II and the Cold War era. In 2006, the National Emergency Management Association—a U.S. organization made up of state emergency managers—"officially" retired the Civil Defense triangle logo, replacing it with a stylised EM (standing for Emergency management). The name and logo, however, continue to be used by Hawaii State Civil Defense and Guam Homeland Security/Office of Civil Defense.
The term "civil protection" is currently widely used within the European Union to refer to government-approved systems and resources tasked with protecting the non-combat population, primarily in the event of natural and technological disasters. In recent years there has been emphasis on preparedness for technological disasters resulting from terrorist attack. Within EU countries the term "crisis-management" emphasizes the political and security dimension rather than measures to satisfy the immediate needs of the population.
In Australia, civil defense is the responsibility of the volunteer-based State Emergency Service.
In most former Soviet countries civil defense is the responsibility of governmental ministries, such as Russia's Ministry of Emergency Situations.
Relatively small investments in preparation can speed up recovery by months or years and thereby prevent millions of deaths by hunger, cold and disease. According to human capital theory in economics, a country's population is more valuable than all of the land, factories and other assets that it possesses. People rebuild a country after its destruction, and it is therefore important for the economic security of a country that it protect its people. According to psychology, it is important for people to feel as though they are in control of their own destiny, and preparing for uncertainty via civil defense may help to achieve this.
In the United States, the federal civil defense program was authorized by statute and ran from 1951 to 1994. Originally authorized by Public Law 920 of the 81st Congress, it was repealed by Public Law 103-337 in 1994. Small portions of that statutory scheme were incorporated into the Robert T. Stafford Disaster Relief and Emergency Assistance Act (Public Law 100-707), which in part superseded, in part amended, and in part supplemented the Disaster Relief Act of 1974 (Public Law 93-288). In the portions of the civil defense statute incorporated into the Stafford Act, the primary modification was to use the term "emergency preparedness" wherever the term "civil defense" had previously appeared in the statutory language.
An important concept initiated by President Jimmy Carter was the so-called "Crisis Relocation Program" administered as part of the federal civil defense program. That effort largely lapsed under President Ronald Reagan, who discontinued the Carter initiative because of opposition from areas potentially hosting the relocated population.
Threats to civilians and civilian life are commonly grouped under the acronym NBC (nuclear, biological, and chemical warfare) or the more modern term CBRN (chemical, biological, radiological and nuclear). Threat assessment involves studying each threat so that preventative measures can be built into civilian life.
This category refers to conventional explosives. A shelter designed to protect only from radiation and fallout would be much more vulnerable to conventional explosives. See also fallout shelter.
Shelter intended to protect against nuclear blast effects would include thick concrete and other sturdy elements which are resistant to conventional explosives. The biggest threats from a nuclear attack are effects from the blast, fires and radiation. One of the countries best prepared for a nuclear attack is Switzerland. Almost every building in Switzerland has an "abri" (shelter) against the initial blast of a nuclear explosion and the fallout that follows. Because of this, many people use these shelters as safes to protect valuables, photos, financial information and so on. Switzerland also has air-raid and nuclear-raid sirens in every village.
A "radiologically enhanced weapon", or "dirty bomb", uses an explosive to spread radioactive material. This is a theoretical risk, and such weapons have not been used by terrorists. Depending on the quantity of the radioactive material, the dangers may be mainly psychological. Toxic effects can be managed by standard hazmat techniques.
The threat here is primarily from disease-causing microorganisms such as bacteria and viruses.
Various chemical agents are a threat, such as nerve agents (VX, sarin, and so on).
Mitigation is the process of actively preventing war or the release of nuclear weapons. It includes policy analysis, diplomacy, political measures, nuclear disarmament and more military responses such as a national missile defense and air defense artillery. In the case of counter-terrorism, mitigation would include diplomacy, intelligence gathering and direct action against terrorist groups. Mitigation may also be reflected in long-term planning such as the design of the interstate highway system and the placement of military bases further away from populated areas.
Preparation consists of building blast shelters and pre-positioning information, supplies, and emergency infrastructure. For example, most larger cities in the U.S. now have underground emergency operations centers that can perform civil defense coordination. FEMA also has many underground facilities for the same purpose located near major railheads such as the ones in Denton, Texas and Mount Weather, Virginia.
Other measures would include continual government inventories of grain silos, the Strategic National Stockpile, the uncapping of the Strategic Petroleum Reserve, the dispersal of lorry-transportable bridges, water purification, mobile refineries, mobile de-contamination facilities, mobile general and special purpose disaster mortuary facilities such as Disaster Mortuary Operational Response Team (DMORT) and DMORT-WMD, and other aids such as temporary housing to speed civil recovery.
On an individual scale, one means of preparation for exposure to nuclear fallout is to obtain potassium iodide (KI) tablets as a safety measure to protect the human thyroid gland from the uptake of dangerous radioactive iodine. Another measure is to cover the nose, mouth and eyes with a piece of cloth and sunglasses to protect against alpha particles, which are only an internal hazard.
To support and supplement efforts at national, regional and local level with regard to disaster prevention, the preparedness of those responsible for civil protection and the intervention in the event of disaster
Preparing also includes sharing information:
Response consists first of warning civilians so they can enter fallout shelters and protect assets.
Staffing a response is a persistent problem in a civil defense emergency. After an attack, conventional full-time emergency services are dramatically overloaded, with conventional fire fighting response times often exceeding several days. Some capability is maintained by local and state agencies, and an emergency reserve is provided by specialized military units, especially civil affairs, Military Police, Judge Advocates and combat engineers.
However, the traditional response to massed attack on civilian population centers is to maintain a mass-trained force of volunteer emergency workers. Studies in World War II showed that lightly trained (40 hours or less) civilians in organised teams can perform up to 95% of emergency activities when trained, liaised and supported by local government. In this plan, the populace rescues itself from most situations, and provides information to a central office to prioritize professional emergency services.
In the 1990s, this concept was revived by the Los Angeles Fire Department to cope with civil emergencies such as earthquakes. The program was widely adopted, providing standard terms for organization. In the U.S., this is now official federal policy, and it is implemented by community emergency response teams, under the Department of Homeland Security, which certifies training programs by local governments, and registers "certified disaster service workers" who complete such training.
Recovery consists of rebuilding damaged infrastructure, buildings and production. The recovery phase is the longest and ultimately most expensive phase. Once the immediate "crisis" has passed, cooperation fades away and recovery efforts are often politicized or seen as economic opportunities.
Preparation for recovery can be very helpful. If mitigating resources are dispersed before the attack, cascades of social failures can be prevented. One hedge against bridge damage in riverine cities is to subsidize a "tourist ferry" that performs scenic cruises on the river. When a bridge is down, the ferry takes up the load.
Civil Defense is also the name of a number of organizations around the world dedicated to protecting civilians from military attacks, as well as to providing rescue services after natural and human-made disasters alike.
Worldwide protection is managed by the United Nations Office for the Coordination of Humanitarian Affairs (OCHA).
In a few countries such as Jordan and Singapore (see Singapore Civil Defence Force), civil defense is essentially the same organization as the fire brigade. In most countries, however, civil defense is a government-managed, volunteer-staffed organization, separate from the fire brigade and the ambulance service.
As the threat of the Cold War eased, a number of such civil defense organizations have been disbanded or mothballed (as in the case of the Royal Observer Corps in the United Kingdom and the United States civil defense), while others have changed their focus to providing rescue services after natural disasters (as for the State Emergency Service in Australia). However, the ideals of Civil Defense have been brought back in the United States under FEMA's Citizen Corps and Community Emergency Response Team (CERT).
In the United Kingdom Civil Defence work is carried out by Emergency Responders under the Civil Contingencies Act 2004, with assistance from voluntary groups such as RAYNET, Search and Rescue Teams and 4x4 Response. In Ireland, the Civil Defence is still very much an active organization and is occasionally called upon for its Auxiliary Fire Service and ambulance/rescue services when emergencies such as flash flooding occur and require additional manpower. The organization has units of trained firemen and medical responders based in key areas around the country.
| https://en.wikipedia.org/wiki?curid=7059 |
Community emergency response team
In the United States, community emergency response team (CERT) can refer to
Sometimes programs and organizations take different names, such as Neighborhood Emergency Response Team (NERT), or Neighborhood Emergency Team (NET).
The concept of civilian auxiliaries is similar to civil defense, which has a longer history. The CERT concept differs because it includes nonmilitary emergencies, and is coordinated with all levels of emergency authorities, local to national, via an overarching incident command system.
A local government agency, often a fire department, police department, or emergency management agency, agrees to sponsor CERT within its jurisdiction. The sponsoring agency liaises with, deploys and may train or supervise the training of CERT members. Many sponsoring agencies employ a full-time community-service person as liaison to the CERT members. In some communities, the liaison is a volunteer and CERT member.
As people are trained and agree to join the community emergency response effort, a CERT is formed. Initial efforts may result in a team with only a few members from across the community. As the number of members grows, a single community-wide team may subdivide. Multiple CERTs are organized into a hierarchy of teams consistent with Incident Command System (ICS) principles. This follows the ICS principle of span of control until the ideal distribution is achieved: one or more teams are formed in each neighborhood within a community.
A Teen Community Emergency Response Team (TEEN CERT), or Student Emergency Response Team (SERT), can be formed from any group of teens. A Teen CERT can be formed as a school club, service organization, Venturing Crew, Explorer Post, or the training can be added to a school's graduation curriculum. Some CERTs form a club or service corporation, and recruit volunteers to perform training on behalf of the sponsoring agency. This reduces the financial and human resource burden on the sponsoring agency.
When not responding to disasters or large emergencies, CERTs may
Some sponsoring agencies use state and federal grants to purchase response tools and equipment for their members and team(s) (subject to Stafford Act limitations). Most CERTs also acquire their own supplies, tools, and equipment. As community members, CERTs are aware of the specific needs of their community and equip the teams accordingly.
The basic idea is to use CERT to perform the large number of tasks needed in emergencies. This frees highly trained professional responders for more technical tasks. Much of CERT training concerns the Incident Command System and organization, so CERT members fit easily into larger command structures.
A team may self-activate (self-deploy) when their own neighborhood is affected by disaster. An effort is made to report their response status to the sponsoring agency. A self-activated team will size-up the loss in their neighborhood and begin performing the skills they have learned to minimize further loss of life, property, and environment. They will continue to respond safely until redirected or relieved by the sponsoring agency or professional responders on-scene.
Teams in neighborhoods not affected by disaster may be deployed or activated by the sponsoring agency. The sponsoring agency may communicate with neighborhood CERT leaders through an organic communication team. In some areas the communications may be by amateur radio, FRS, GMRS or MURS radio, dedicated telephone or fire-alarm networks. In other areas, relays of bicycle-equipped runners can effectively carry messages between the teams and the local emergency operations center.
The sponsoring agency may activate and dispatch teams in order to gather or respond to intelligence about an incident. Teams may be dispatched to affected neighborhoods, or organized to support operations. CERT members may augment support staff at an Incident Command Post or Emergency Operations Center. Additional teams may also be created to guard a morgue, locate supplies and food, convey messages to and from other CERTs and local authorities, and other duties on an as-needed basis as identified by the team leader.
In the short term, CERTs perform data gathering, especially to locate mass-casualties requiring professional response, or situations requiring professional rescues, simple fire-fighting tasks (for example, small fires, turning off gas), light search and rescue, damage evaluation of structures, triage and first aid. In the longer term, CERTs may assist in the evacuation of residents, or assist with setting up a neighborhood shelter.
While responding, CERT members are temporary volunteer government workers. In some areas, (such as California, Hawaii and Kansas) registered, activated CERT members are eligible for worker's compensation for on-the-job injuries during declared disasters.
The Federal Emergency Management Agency (FEMA) recommends that the standard, ten-person team be composed as follows:
Because every CERT member in a community receives the same core instruction, any team member has the training necessary to assume any of these roles. This is important during a disaster response because not all members of a regular team may be available to respond. Hasty teams may be formed by whichever members are responding at the time. Additionally, members may need to adjust team roles due to stress, fatigue, injury, or other circumstances.
While state and local jurisdictions will implement training in the manner that best suits the community, FEMA's National CERT Program has an established curriculum. Jurisdictions may augment the training, but are strongly encouraged to deliver the entire core content. The CERT core curriculum for the basic course is composed of the following nine units (time is instructional hours):
CERT training emphasizes safely "doing the most good for the most people as quickly as possible" when responding to a disaster. For this reason, cardiopulmonary resuscitation (CPR) training is not included in the core curriculum, as it is time and responder intensive in a mass-casualty incident. However, many jurisdictions encourage or require CERT members to obtain CPR training. Many CERT programs provide or encourage members to take additional first aid training. Some CERT members may also take training to become a certified first responder or emergency medical technician.
Many CERT programs also provide training in amateur radio operation, shelter operations, flood response, community relations, mass care, the incident command system (ICS), and the National Incident Management System (NIMS).
Each unit of CERT training is ideally delivered by professional responders or other experts in the field addressed by the unit. This is done to help build unity between CERT members and responders, keep the attention of students, and help the professional response organizations be comfortable with the training which CERT members receive.
Each course of instruction is ideally facilitated by one or more instructors certified in the CERT curriculum by the state or sponsoring agency. Facilitating instructors provide continuity between units, and help ensure that the CERT core curriculum is being delivered successfully. Facilitating instructors also perform set-up and tear-down of the classroom, provide instructional materials for the course, record student attendance and other tasks which assist the professional responder in delivering their unit as efficiently as possible.
CERT training is provided free to interested members of the community, and is delivered in a group classroom setting. People may complete the training without obligation to join a CERT. Citizen Corps grant funds can be used to print and provide each student with a printed manual. Some sponsoring agencies use Citizen Corps grant funds to purchase disaster response tool kits. These kits are offered as an incentive to join a CERT, and must be returned to the sponsoring agency when members resign from CERT.
Some sponsoring agencies require a criminal background check of all trainees before allowing them to participate on a CERT. For example, the city of Albuquerque, New Mexico requires all volunteers to pass a background check, while the city of Austin, Texas does not require a background check to take part in training classes but requires members to undergo a background check in order to receive a CERT badge and directly assist first responders during an activation of the Emergency Operations Center. However, most programs do not require a criminal background check in order to participate.
The CERT curriculum (including the Train-the-Trainer and Program Manager courses) was updated during the last half of 2017 to reflect feedback from instructors across the nation. The update is in final review, and is scheduled for release during 2018. | https://en.wikipedia.org/wiki?curid=7061 |
Catapult
A catapult is a ballistic device used to launch a projectile a great distance without the aid of gunpowder or other propellants – particularly various types of ancient and medieval siege engines. A catapult uses the sudden release of stored potential energy to propel its payload. Most convert tension or torsion energy that was more slowly and manually built up within the device before release, via springs, bows, twisted rope, elastic, or any of numerous other materials and mechanisms. The counterweight trebuchet is a type of catapult that uses gravity.
In use since ancient times, the catapult has proven to be one of the most persistently effective mechanisms in warfare. In modern times the term can apply to devices ranging from a simple hand-held implement (also called a "slingshot") to a mechanism for launching aircraft from a ship.
The earliest catapults date to at least the 4th century BC with the advent of the mangonel in ancient China, a type of traction trebuchet and catapult. Early uses were also attributed to Ajatashatru of Magadha in his war against the Licchavis. Greek catapults were invented in the early 4th century BC, being attested by Diodorus Siculus as part of the equipment of a Greek army in 399 BC, and subsequently used at the siege of Motya in 397 BC.
The word 'catapult' comes from the Latin 'catapulta', which in turn comes from the Greek καταπέλτης ("katapeltēs"), itself from κατά ("kata"), "downwards" and πάλλω ("pallō"), "to toss, to hurl". Catapults were invented by the ancient Greeks and in ancient India, where they were used by the Magadhan emperor Ajatashatru around the early to mid 5th century BC.
The catapult and crossbow in Greece are closely intertwined. Primitive catapults were essentially "the product of relatively straightforward attempts to increase the range and penetrating power of missiles by strengthening the bow which propelled them". The historian Diodorus Siculus (fl. 1st century BC) described the invention of a mechanical arrow-firing catapult ("katapeltikon") by a Greek task force in 399 BC. The weapon was soon after employed against Motya (397 BC), a key Carthaginian stronghold in Sicily. Diodorus is assumed to have drawn his description from the highly rated history of Philistus, a contemporary of those events. The introduction of crossbows, however, can be dated further back: according to the inventor Hero of Alexandria (fl. 1st century AD), who referred to the now lost works of the 3rd-century BC engineer Ctesibius, this weapon was inspired by an earlier foot-held crossbow, called the "gastraphetes", which could store more energy than the Greek bows. A detailed description of the "gastraphetes", or the "belly-bow", along with a watercolor drawing, is found in Heron's technical treatise "Belopoeica".
A third Greek author, Biton (fl. 2nd century BC), whose reliability has been positively reevaluated by recent scholarship, described two advanced forms of the "gastraphetes", which he credits to Zopyrus, an engineer from southern Italy. Zopyrus has been plausibly equated with a Pythagorean of that name who seems to have flourished in the late 5th century BC. He probably designed his bow-machines on the occasion of the sieges of Cumae and Miletus between 421 BC and 401 BC. The bows of these machines already featured a winched pull-back system and could apparently throw two missiles at once.
Philo of Byzantium provides probably the most detailed account on the establishment of a theory of belopoietics ("belos" = "projectile"; "poietike" = "(art) of making") circa 200 BC. The central principle to this theory was that "all parts of a catapult, including the weight or length of the projectile, were proportional to the size of the torsion springs". This kind of innovation is indicative of the increasing rate at which geometry and physics were being assimilated into military enterprises.
From the mid-4th century BC onwards, evidence of the Greek use of arrow-shooting machines becomes more dense and varied: arrow firing machines ("katapaltai") are briefly mentioned by Aeneas Tacticus in his treatise on siegecraft written around 350 BC. An extant inscription from the Athenian arsenal, dated between 338 and 326 BC, lists a number of stored catapults with shooting bolts of varying size and springs of sinews. The latter entry is particularly noteworthy as it constitutes the first clear evidence for the switch to torsion catapults, which are more powerful than the more-flexible crossbows and which came to dominate Greek and Roman artillery design thereafter. This move to torsion springs was likely spurred by the engineers of Philip II of Macedonia. Another Athenian inventory from 330 to 329 BC includes catapult bolts with heads and flights. As the use of catapults became more commonplace, so did the training required to operate them. Many Greek children were instructed in catapult usage, as evidenced by "a 3rd Century B.C. inscription from the island of Ceos in the Cyclades [regulating] catapult shooting competitions for the young". Arrow firing machines in action are reported from Philip II's siege of Perinth (Thrace) in 340 BC. At the same time, Greek fortifications began to feature high towers with shuttered windows in the top, which could have been used to house anti-personnel arrow shooters, as in Aigosthena. Projectiles included both arrows and (later) stones that were sometimes lit on fire. Onomarchus of Phocis first used catapults on the battlefield against Philip II of Macedon. Philip's son, Alexander the Great, was the next commander in recorded history to make such use of catapults on the battlefield as well as to use them during sieges.
The Romans started to use catapults as arms for their wars against Syracuse, Macedon, Sparta and Aetolia (3rd and 2nd centuries BC). The Roman machine known as an arcuballista was similar to a large crossbow. Later the Romans used ballista catapults on their warships.
Ajatashatru is recorded in Jaina texts as having used catapults in his campaign against the Licchavis.
King Uzziah, who reigned in Judah until 750 BC, is documented as having overseen the construction of machines to "shoot great stones" in 2 Chronicles 26:15.
The first recorded use of mangonels was in ancient China. They were probably used by the Mohists as early as 4th century BC, descriptions of which can be found in the "Mojing" (compiled in the 4th century BC). In Chapter 14 of the "Mojing", the mangonel is described hurling hollowed out logs filled with burning charcoal at enemy troops. The mangonel was carried westward by the Avars and appeared next in the eastern Mediterranean by the late 6th century AD, where it replaced torsion powered siege engines such as the ballista and onager due to its simpler design and faster rate of fire. The Byzantines adopted the mangonel possibly as early as 587, the Persians in the early 7th century, and the Arabs in the second half of the 7th century. The Franks and Saxons adopted the weapon in the 8th century.
Castles and fortified walled cities were common during this period and catapults were used as siege weapons against them. As well as their use in attempts to breach walls, incendiary missiles, diseased carcasses or garbage could be catapulted over the walls.
Defensive techniques in the Middle Ages progressed to a point that rendered catapults largely ineffective. The Viking siege of Paris (885–6 A.D.) "saw the employment by both sides of virtually every instrument of siege craft known to the classical world, including a variety of catapults", to little effect, resulting in failure.
The most widely used catapults throughout the Middle Ages were as follows:
The last large scale military use of catapults was during the trench warfare of World War I. During the early stages of the war, catapults were used to throw hand grenades across no man's land into enemy trenches. They were eventually replaced by small mortars.
In the 1840s the invention of vulcanized rubber allowed the making of small hand-held catapults, either improvised from Y-shaped sticks or manufactured for sale; both were popular with children and teenagers. These devices were also known as slingshots in the USA.
Special variants called aircraft catapults are used to launch planes from land bases and sea carriers when the takeoff runway is too short for a powered takeoff or simply impractical to extend. Ships also use them to launch torpedoes and deploy bombs against submarines. Small catapults, referred to as "traps", are still widely used to launch clay targets into the air in the sport of clay pigeon shooting.
In the 1990s and into the early 2000s, a powerful catapult, a trebuchet, was used by thrill-seekers, first on private property and in 2001-2002 at Middlemoor Water Park, Somerset, England, to experience being catapulted through the air. The practice has been discontinued due to a fatality at the water park. There had also been an injury when the trebuchet was in use on private property. Both the injury and the death occurred when the participants failed to land on the safety net. The operators of the trebuchet were tried, but found not guilty of manslaughter, though the jury noted that the fatality might have been avoided had the operators "imposed stricter safety measures." Human cannonball circus acts use a catapult launch mechanism, rather than gunpowder, and are risky ventures for the human cannonballs.
Early launched roller coasters used a catapult system powered by a diesel engine or a dropped weight to acquire their momentum, such as Shuttle Loop installations between 1977 and 1978. The catapult system for roller coasters has since been replaced by flywheels and later linear motors.
"Pumpkin chunking" is another widely popularized use, in which people compete to see who can launch a pumpkin the farthest by mechanical means (although the world record is held by a pneumatic air cannon).
In January 2011, a homemade catapult was discovered that was used to smuggle cannabis into the United States from Mexico. The machine was found 20 feet from the border fence with bales of cannabis ready to launch. | https://en.wikipedia.org/wiki?curid=7063 |
Cinquain
Cinquain is a class of poetic forms that employ a 5-line pattern. Earlier used to describe any five-line form, it now refers to one of several forms that are defined by specific rules and guidelines.
The modern form, known as the American Cinquain and inspired by Japanese haiku and tanka, is akin in spirit to the work of the Imagists.
In her 1915 collection titled "Verse", published one year after her death, Adelaide Crapsey included 28 cinquains.
Crapsey's American Cinquain form developed in two stages. The first, fundamental form is a stanza of five lines of accentual verse, in which the lines comprise, in order, 1, 2, 3, 4, and 1 stresses. Then Crapsey decided to make the criterion a stanza of five lines of accentual-syllabic verse, in which the lines comprise, in order, 1, 2, 3, 4, and 1 stresses and 2, 4, 6, 8, and 2 syllables. Iambic feet were meant to be the standard for the cinquain, which made the dual criteria match perfectly. Some resource materials define classic cinquains as solely iambic, but that is not necessarily so. In contrast to the Eastern forms upon which she based them, Crapsey always titled her cinquains, effectively utilizing the title as a sixth line. Crapsey's cinquain depends on strict structure and intense physical imagery to communicate a mood or feeling.
The form is illustrated by Crapsey's "November Night":
Listen...
With faint dry sound,
Like steps of passing ghosts,
The leaves, frost-crisp'd, break from the trees
And fall.
The Scottish poet William Soutar also wrote over one hundred American Cinquains (he labelled them Epigrams) between 1933 and 1940.
The Crapsey cinquain has subsequently seen a number of variations by modern poets, including:
The didactic cinquain is closely related to the Crapsey cinquain. It is an informal cinquain widely taught in elementary schools and has been featured in, and popularized by, children's media resources, including Junie B. Jones and PBS Kids. This form is also embraced by young adults and older poets for its expressive simplicity. The prescriptions of this type of cinquain refer to word count, not syllables and stresses. Ordinarily, the first line is a one-word title, the subject of the poem; the second line is a pair of adjectives describing that title; the third line is a three-word phrase that gives more information about the subject (often a list of three gerunds); the fourth line consists of four words describing feelings related to that subject; and the fifth line is a single word synonym or other reference for the subject from line one.
For example:
Snow
Silent, white
Dancing, falling, drifting
Covering everything it touches
Blanket | https://en.wikipedia.org/wiki?curid=7066 |
Dice
Dice (singular die or dice) are small, throwable objects with marked sides that can rest in multiple positions. They are used for generating random numbers, commonly as part of tabletop games, including dice games, board games, role-playing games, and games of chance.
A traditional die is a cube with each of its six faces marked with a different number of dots (pips) from one to six. When thrown or rolled, the die comes to rest showing a random integer from one to six on its upper surface, with each value being equally likely. Dice may also have polyhedral or irregular shapes and may have faces marked with numerals or symbols instead of pips. Loaded dice are designed to favor some results over others for cheating or entertainment.
Dice have been used since before recorded history, and it is uncertain where they originated. It is theorized that dice developed from the practice of fortune-telling with the talus of hoofed animals, colloquially known as knucklebones. The Egyptian game of senet was played with flat two-sided throwsticks which indicated the number of squares a player could move, and thus functioned as a form of dice. Senet was played before 3000 BC and up to the 2nd century AD. Perhaps the oldest known dice were excavated as part of a backgammon-like game set at the Burnt City, an archeological site in south-eastern Iran, estimated to be from between 2800–2500 BC. Bone dice from Skara Brae have been dated to 3100-2400 BCE. Excavations from graves at Mohenjo-daro, an Indus Valley civilization settlement, unearthed terracotta dice dating to 2500-1900 BCE.
Games involving dice are mentioned in the ancient Indian "Rigveda", "Atharvaveda," and Buddhist games list. There are several biblical references to "casting lots" ( "yappîlū ḡōrāl"), as in Psalm 22, indicating that dicing (or a related activity) was commonplace when the psalm was composed. Knucklebones was a game of skill played in ancient Greece; a derivative form had the four sides of bones receive different values like modern dice.
Although gambling was illegal, many Romans were passionate gamblers who enjoyed dicing, which was known as "aleam ludere" ("to play at dice"). There were two sizes of Roman dice. "Tali" were large dice inscribed with one, three, four, and six on four sides. "Tesserae" were smaller dice with sides numbered from one to six. Twenty-sided dice date back to the 2nd century AD, with examples from Ptolemaic Egypt dating as early as the 2nd century BC.
Dominoes and playing cards originated in China as developments from dice. The transition from dice to playing cards occurred in China around the Tang dynasty, and coincides with the technological transition from rolls of manuscripts to block printed books. In Japan, dice were used to play a popular game called sugoroku. There are two types of sugoroku. "Ban-sugoroku" is similar to backgammon and dates to the Heian period, while "e-sugoroku" is a racing game.
Dice are thrown onto a surface either from the hand or from a container designed for this (such as a cup or tray). The face of the die that is uppermost when it comes to rest provides the value of the throw.
The result of a die roll is determined by the way it is thrown, according to the laws of classical mechanics. A die roll is made random by uncertainty in minor factors such as tiny movements in the thrower's hand; they are thus a crude form of hardware random number generator.
One typical contemporary dice game is craps, where two dice are thrown simultaneously and wagers are made on the total value of the two dice. Dice are frequently used to introduce randomness into board games, where they are often used to decide the distance through which a piece will move along the board (as in backgammon and "Monopoly").
Common dice are small cubes whose faces are numbered from one to six, usually by patterns of round dots called pips. (While the use of Arabic numerals is occasionally seen, such dice are less common.)
Opposite sides of a modern die traditionally add up to seven, requiring the 1, 2, and 3 faces to share a vertex. The faces of a die may be placed clockwise or counterclockwise about this vertex. If the 1, 2, and 3 faces run counterclockwise, the die is called "right-handed". If those faces run clockwise, the die is called "left-handed". Western dice are normally right-handed, and Chinese dice are normally left-handed.
The pips on dice are arranged in specific patterns as shown. Asian style dice bear similar patterns to Western ones, but the pips are closer to the center of the face; in addition, the pips are differently sized on Asian style dice, and the pips are colored red on the 1 and 4 sides. Red fours may be of Indian origin. In some older sets, the "one" pip is a colorless depression.
Non-precision dice are manufactured via the plastic injection molding process. The pips or numbers on the die are a part of the mold. Different pigments can be added to the dice to make them opaque or transparent, or multiple pigments may be added to make the dice speckled or marbled.
The coloring for numbering is achieved by submerging the die entirely in paint, which is allowed to dry. The die is then polished via a tumble finishing process similar to rock polishing. The abrasive agent scrapes off all of the paint except for the indents of the numbering. A finer abrasive is then used to polish the die. This process also creates the smoother, rounded edges on the dice.
Precision casino dice may have a polished or sand finish, making them transparent or translucent respectively. Casino dice have their pips drilled, then filled flush with a paint of the same density as the material used for the dice, such that the center of gravity of the dice is as close to the geometric center as possible. This mitigates concerns that the pips will cause a small bias. All such dice are stamped with a serial number to prevent potential cheaters from substituting a die. Precision backgammon dice are made the same way; they tend to be slightly smaller and have rounded corners and edges, to allow better movement inside the dice cup and stop forceful rolls from damaging the playing surface.
The word die comes from Old French "dé"; from Latin "datum" "something which is given or played".
While the terms "ace", "deuce", "trey", "cater", "cinque" and "sice" are generally obsolete, with the names of the numbers preferred, they are still used by some professional gamblers to designate different sides of the dice. "Ace" is from the Latin "as", meaning "a unit"; the others are 2 to 6 in Old French.
The term "snake eyes" is the outcome of rolling the dice and getting only one pip on each die. The "Online Etymology Dictionary" traces use of the term as far back as 1919.
The term "boxcars", also known as "midnight", is the outcome of rolling the dice and getting a six on each die. The pair of six pips resembles a pair of boxcars on a freight train.
Using Unicode characters, the faces ⚀ ⚁ ⚂ ⚃ ⚄ ⚅ can be shown in text using the range U+2680 to U+2685 (decimal 9856 to 9861).
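As a quick illustration (a minimal sketch, not part of the original article; Python is used only for convenience), the faces can be produced directly from the code points, since the face showing n pips is U+2680 offset by n − 1:

# Print the Unicode die faces U+2680 through U+2685 (decimal 9856 to 9861).
for pips in range(1, 7):
    face = chr(0x2680 + pips - 1)  # 0x2680 is the one-pip face
    print(pips, face)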
A loaded, weighted, cheat, or crooked die is one that has been tampered with so that it will land with a specific side facing upwards more or less often than a fair die would. There are several methods for creating loaded dice, including rounded faces, off-square faces, and weights. Casinos and gambling halls frequently use transparent cellulose acetate dice as tampering is easier to detect than with opaque dice.
Various shapes like two-sided or four-sided dice are documented in archaeological findings e.g. from Ancient Egypt or the Middle East. While the cubical six-sided die became the most common type in many parts of the world, other shapes were always known, like 20-sided dice in Ptolemaic and Roman times.
The modern tradition of using "sets" of polyhedral dice started around the end of the 1960s when non-cubical dice became popular among players of wargames, and since have been employed extensively in role-playing games and trading card games. Dice using both the numerals 6 and 9, which are reciprocally symmetric through rotation, typically distinguish them with a dot or underline.
Dice are often sold in sets, matching in color, of six different shapes. Five of the dice are shaped like the Platonic solids, whose faces are regular polygons. Aside from the cube, the other four Platonic solids have 4, 8, 12, and 20 faces, allowing for those number ranges to be generated. The only other common non-cubical die is the 10-sided die, a pentagonal trapezohedron die, whose faces are ten kites, each with two different edge lengths, three different angles, and two different kinds of vertices. Such sets frequently include a second 10-sided die either of contrasting color or numbered by tens, allowing the pair of 10-sided dice to be combined to generate numbers between 1 and 100.
Using these dice in various ways, games can closely approximate a variety of probability distributions. For instance, 10-sided dice can be rolled in pairs to produce a uniform distribution of random percentages, and summing the values of multiple dice will produce approximations to normal distributions.
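A short simulation makes this concrete. The sketch below is illustrative only (the fixed seed and the sample size of 10,000 rolls are arbitrary choices): it pairs two ten-sided dice to generate uniform percentages and sums three six-sided dice, whose totals cluster around the mean in a roughly normal shape.

import random
from collections import Counter

random.seed(0)

def d(sides):
    # Roll one fair die with the given number of sides.
    return random.randint(1, sides)

# Two d10, one read as tens and one as units, give a uniform result from 1 to 100.
percentiles = [(d(10) - 1) * 10 + d(10) for _ in range(10_000)]

# The sum of several dice clusters around the mean (10.5 for 3d6), approximating a normal curve.
totals = [d(6) + d(6) + d(6) for _ in range(10_000)]

print(min(percentiles), max(percentiles))  # spans 1..100, each value roughly equally often
print(Counter(totals).most_common(3))      # totals of 10 and 11 dominate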
Unlike other common dice, a four-sided (tetrahedral) die does not have a side that faces upward when it is at rest on a surface, so it must be read in a different way. On some four-sided dice, each face features multiple numbers, with same number printed near each vertex on all sides. In this case, the number around the vertex pointing up is used. Alternatively, the numbers on a tetrahedral die can be placed at the middles of the edges, in which case the numbers around the base are used.
Normally, the faces on a die will be placed so opposite faces will add up to one more than the number of faces. (This is not possible with 4-sided dice and dice with an odd-number of faces.) Some dice, such as those with 10 sides, are usually numbered sequentially beginning with 0, in which case the opposite faces will add to one less than the number of faces.
"Uniform fair dice" are dice where all faces have equal probability of outcome due to the symmetry of the die as it is face-transitive. Theoretically, these include:
Two other types of polyhedrons are technically not face-transitive, but are still fair dice due to symmetry:
Long dice and teetotums can in principle be made with any number of faces, including odd numbers. Long dice are based on the infinite set of prisms. All the rectangular faces are mutually face-transitive, so they are equally probable. The two ends of the prism may be rounded or capped with a pyramid, designed so that the die cannot rest on those faces. 4-sided long dice are easier to roll than tetrahedra, and are used in the traditional board games dayakattai and daldøs.
The faces of most dice are labelled using sequences of whole numbers, usually starting at one, expressed with either pips or digits. However, there are some applications that require results other than numbers. Examples include letters for Boggle, directions for "Warhammer Fantasy Battle", Fudge dice, playing card symbols for poker dice, and instructions for sexual acts using sex dice.
Dice may have numbers that do not form a counting sequence starting at one. One variation on the standard die is known as the "average" die. These are six-sided dice with sides numbered 2, 3, 3, 4, 4, 5, which have the same arithmetic mean as a standard die (3.5 for a single die, 7 for a pair of dice), but have a narrower range of possible values (2 through 5 for one, 4 through 10 for a pair). They are used in some table-top wargames, where a narrower range of numbers is required. Other numbered variations include Sicherman dice and nontransitive dice.
A die can be constructed in the shape of a sphere, with the addition of an internal cavity in the shape of the dual polyhedron of the desired die shape and an internal weight. The weight will settle in one of the points of the internal cavity, causing it to settle with one of the numbers uppermost. For instance, a sphere with an octahedral cavity and a small internal weight will settle with one of the 6 points of the cavity held downwards by the weight.
Polyhedral dice are commonly used in role-playing games. The fantasy role-playing game "Dungeons & Dragons" (D&D) is largely credited with popularizing dice in such games. Some games use only one type, like "Exalted" which uses only ten-sided dice. Others use numerous types for different game purposes, such as D&D, which makes use of all common polyhedral dice. Dice are usually used to determine the outcome of events. Games typically determine results either as a total on one or more dice above or below a fixed number, or a certain number of rolls above a certain number on one or more dice. Due to circumstances or character skill, the initial roll may have a number added to or subtracted from the final result, or have the player roll extra or fewer dice. To keep track of rolls easily, dice notation is frequently used.
Many board games use dice to randomize how far pieces move or to settle conflicts. Typically, this has meant that rolling higher numbers is better. Some games, such as "Axis & Allies", have inverted this system by making the lower values more potent. In the modern age, a few games and game designers have approached dice in a different way by making each side of the die similarly valuable. In "Castles of Burgundy", players spend their dice to take actions based on the die's value. In this game, a six is not better than a one, or vice versa. In "Quarriors" (and its descendant, Dicemasters), different sides of the dice can offer completely different abilities. Several sides often give resources while others grant the player useful actions.
Dice can be used for divination and using dice for such a purpose is called cleromancy. A pair of common dice is usual, though other forms of polyhedra can be used. Tibetan Buddhists sometimes use this method of divination. It is highly likely that the Pythagoreans used the Platonic solids as dice. They referred to such dice as "the dice of the gods" and they sought to understand the universe through an understanding of geometry in polyhedra.
Astrological dice are a specialized set of three 12-sided dice for divination; the first die represents planets, the Sun, the Moon, and the nodes of the Moon, the second die represents the 12 zodiac signs, and the third represents the 12 houses. A specialized icosahedron die provides the answers of the Magic 8-Ball, conventionally used to provide answers to yes-or-no questions.
Dice can be used to generate random numbers for use in passwords and cryptography applications. The Electronic Frontier Foundation describes a method by which dice can be used to generate passphrases. Diceware is a method recommended for generating secure but memorable passphrases, by repeatedly rolling five dice and picking the corresponding word from a pre-generated list.
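As a rough sketch of the Diceware idea (this is not the EFF's actual tooling, and the tiny word list here is a stand-in for a real list of 6^5 = 7,776 entries), each word is chosen by reading five die rolls as a base-6 index:

import secrets

# Stand-in word list; a real Diceware list has 7,776 entries, one per
# possible sequence of five six-sided die rolls.
WORDLIST = ["correct", "horse", "battery", "staple", "quartz", "zebra"]

def diceware_word(wordlist):
    # Five physical die rolls would normally supply these values;
    # secrets.randbelow stands in for them here.
    rolls = [secrets.randbelow(6) for _ in range(5)]
    index = 0
    for r in rolls:
        index = index * 6 + r  # read the rolls as a base-6 number (0..7775)
    return wordlist[index % len(wordlist)]  # modulo only because this demo list is tiny

passphrase = " ".join(diceware_word(WORDLIST) for _ in range(6))
print(passphrase)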
In many gaming contexts, especially tabletop role-playing games, shorthand notations representing different dice rolls are used. A "d" or "D" is used to indicate a die with a specific number of sides; for example, d4 denotes a four-sided die. If several dice of the same type are to be rolled, this is indicated by a leading number specifying the number of dice. Hence, 6d8 means the player should roll six eight-sided dice and add the results. Modifiers to a die roll can also be indicated as desired. For example, 3d6+4 instructs the player to roll three six-sided dice, calculate the total, and add four to it. | https://en.wikipedia.org/wiki?curid=8244 |
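A small parser makes the notation described above concrete. The sketch below is an illustrative implementation, not a standard library; it handles expressions of the form NdS+M such as d4, 6d8 and 3d6+4.

import random
import re

_NOTATION = re.compile(r"^(\d*)[dD](\d+)([+-]\d+)?$")

def roll(notation):
    # Roll dice written in NdS+M form, e.g. "d4", "6d8", "3d6+4".
    m = _NOTATION.match(notation.replace(" ", ""))
    if not m:
        raise ValueError("bad dice notation: " + notation)
    count = int(m.group(1) or 1)     # number of dice, defaulting to 1
    sides = int(m.group(2))          # faces per die
    modifier = int(m.group(3) or 0)  # optional +/- adjustment
    return sum(random.randint(1, sides) for _ in range(count)) + modifier

print(roll("3d6+4"))  # a value between 7 and 22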
Dumpster diving
Dumpster diving (also totting, skipping, skip diving or skip salvage) is salvaging from large commercial, residential, industrial and construction containers for unused items discarded by their owners, but deemed useful to the picker. It is not confined to dumpsters and skips specifically, and may cover standard household waste containers, curb sides, landfills or small dumps.
Different terms are used to refer to different forms of this activity. For picking materials from the curbside trash collection, expressions such as curb shopping, trash picking or street scavenging are sometimes used. When seeking primarily metal to be recycled, one is scrapping. When picking the leftover food from traditional or industrial farming left in the fields one is gleaning.
People dumpster dive for items such as clothing, furniture, food, and similar items in good working condition. Some people do this out of necessity due to poverty, while others do so professionally and systematically for profit.
The term "dumpster diving" emerged in the 1980s, combining "diving" with "dumpster", a large commercial trash bin. The term "Dumpster" itself comes from the Dempster Dumpster, a brand of bins manufactured by Dempster Brothers beginning in 1937. "Dumpster" became genericized by the 1970s. According to the "Oxford English Dictionary", the term "dumpster diving" is chiefly found in American English and first appeared in print in 1983, with the verb "dumpster-dive" appearing a few years later. In British English, the practice may be known as "skipping", from skip, another term for this type of container.
Alternative names for the practice include bin-diving, containering, D-mart, dumpstering, totting, and skipping. In Australia, garbage picking is called "skip dipping."
The term "binner" is often used to describe individuals who collect recyclable materials for their deposit value. For example, in Vancouver, British Columbia, binners, or bottle collectors, search garbage cans and dumpsters for recyclable materials that can be redeemed for their deposit value. On average, these binners earn about $40 a day for several garbage bags full of discarded containers.
The karung guni, Zabbaleen, the rag and bone man, waste picker, junk man or bin hoker are terms for people who make their living by sorting and trading trash. A similar process known as gleaning was practised in rural areas and some ancient agricultural societies, where the residue from farmers' fields was collected.
Some dumpster divers, who self-identify as freegans, aim to reduce their ecological footprint by living from dumpster-dived goods, sometimes exclusively.
The activity is performed by people out of necessity in the developing world. Some scavengers perform in organized groups, and some organize on various internet forums and social networking websites. By reusing, or repurposing, resources destined for the landfill, dumpster diving is sometimes considered to be an environmentalist endeavor, and is thus practiced by many pro-green communities. The wastefulness of consumer society and throw-away culture compels some individuals to rescue usable items (for example, computers or smartphones, which are frequently discarded due to the extensive use of planned obsolescence in the technology industry) from destruction and divert them to those who can make use of the items.
A wide variety of things may be disposed while still repairable or in working condition, making salvage of them a source of potentially free items for personal use, or to sell for profit. Irregular, blemished or damaged items that are still otherwise functional are regularly thrown away. Discarded food that might have slight imperfections, near its expiration date, or that is simply being replaced by newer stock is often tossed out despite being still edible. Many retailers are reluctant to sell this stock at reduced prices because of the risks that people will buy it instead of the higher-priced newer stock, that extra handling time is required, and that there are liability risks. In the United Kingdom, cookery books have been written on the cooking and consumption of such foods, which has contributed to the popularity of skipping. Artists often use discarded materials retrieved from trash receptacles to create works of found objects or assemblage.
Students have been known to partake in dumpster diving to obtain high tech items for technical projects, or simply to indulge their curiosity for unusual items. Dumpster diving can additionally be used in support of academic research. Garbage picking serves as the main tool for garbologists, who study the sociology and archeology of trash in modern life. Private and government investigators may pick through garbage to obtain information for their inquiries. Illegal cigarette consumption may be deduced from discarded packages.
Dumpster diving can be hazardous, due to potential exposure to biohazardous matter, broken glass, and overall unsanitary conditions that may exist in dumpsters.
Arguments against garbage picking often focus on the health and cleanliness implications of people rummaging in trash. This exposes the dumpster divers to potential health risks, and, especially if the dumpster diver does not return the non-usable items to their previous location, may leave trash scattered around. Divers can also be seriously injured or killed by garbage collection vehicles; in January 2012, in La Jolla, Swiss-American man Alfonso de Bourbon was killed by a truck while dumpster diving. Further, there are also concerns around the legality of taking items that may still technically belong to the person who threw them away (or to the waste management operator), and whether the taking of some items like discarded documents is a violation of privacy. In general, the legal concept of abandonment of property governs the question of who owns property once it has been disposed of.
Discarded billing records may be used for identity theft. As a privacy violation, discarded medical records as trash led to a $140,000 penalty against Massachusetts billing company Goldthwait Associates and a group of pathology offices in 2013 and a $400,000 settlement between Midwest Women's Healthcare Specialists and 1,532 clients in Kansas City in 2014.
Since dumpsters are usually located on private premises, divers may occasionally get in trouble for trespassing while dumpster diving, though the law is enforced with varying degrees of rigor. Some businesses may lock dumpsters to prevent pickers from congregating on their property, vandalism to their property, and to limit potential liability if a dumpster diver is injured while on their property.
Police searches of discarded waste as well as similar methods are also generally not considered violations; evidence seized in this manner has been permitted in many criminal trials. In the United States this has been affirmed by numerous courts including and up to the Supreme Court, in the decision "California v. Greenwood". The doctrine is not as well established in regards to civil litigation.
Companies run by private investigators specializing in such techniques have emerged as a result of the need for discreet, undetected retrieval of documents and evidence for civil and criminal trials. Private investigators have also written books on "P.I. technique" in which dumpster diving or its equivalent "wastebasket recovery" figures prominently.
In 2009, a Belgian dumpster diver and eco-activist nicknamed Ollie was detained for a month for removing food from a garbage can, and was accused of theft and burglary. On February 25, 2009, he was arrested for removing food from a garbage can at an AD Delhaize supermarket in Bruges. Ollie's trial evoked protests in Belgium against restrictions from taking discarded food items.
In Ontario, Canada, the "Trespass to Property Act"—legislation dating back to the "British North America Act" of 1867—grants property owners and security guards the power to ban anyone from their premises, for any reason, permanently. This is done by issuing a notice to the intruder, who will only be breaking the law upon return. Similar laws exist in Prince Edward Island and Saskatchewan. A recent case in Canada, which involved a police officer who retrieved a discarded weapon from a trash receptacle as evidence, created some controversy. The judge ruled that the officer's actions were legal even though no warrant was present, which led some to interpret the ruling as permitting any Canadian citizen to raid garbage disposals.
Skipping in England and Wales may qualify as theft within the Theft Act 1968 or as common-law theft in Scotland, though there is very little enforcement in practice.
In Germany, dumpster diving is referred to as "containern", and a waste container's contents are regarded as the property of the container's owner. Therefore, taking items from such a container is viewed as theft. However, the police will routinely disregard the illegality of garbage picking since the items found are generally of low value. There has been only one known instance in which people were prosecuted: in 2009, individuals were arrested on suspicion of burglary after climbing over a supermarket's fence; the owner then filed a theft complaint, and the case was later suspended.
In the United States, the 1988 "California v. Greenwood" case in the U.S. Supreme Court held that there is no common law expectation of privacy for discarded materials. There are, however, limits to what can legally be taken from a company's refuse. In a 1983 Minnesota case involving the theft of customer lists from a garbage can, "Tennant Company v. Advance Machine Company" (355 N.W.2d 720), the owner of the discarded information was awarded $500,000 in damages.
Dumpster diving is practiced differently in developed countries than in developing countries.
In the 1960s, Jerry Schneider, using recovered instruction manuals from The Pacific Telephone & Telegraph Company, used the company's own procedures to acquire hundreds of thousands of dollars' worth of telephone equipment over several years until his arrest.
The "Castle Infinity" videogame, after its shutdown in 2005, was brought back from the dead by a fan rescuing its servers from the trash.
Food Not Bombs is an anti-hunger organization that gets a significant amount of its food from dumpster diving from the dumpsters at small markets and corporate grocery stores in the US and UK.
In October 2013, in North London, three men were arrested and charged under the 1824 Vagrancy Act when they were caught taking discarded food: tomatoes, mushrooms, cheese and cakes from bins behind an Iceland supermarket. The charges were dropped on 29 January 2014 after much public criticism as well as a request by Iceland's chief executive, Malcolm Walker. | https://en.wikipedia.org/wiki?curid=8246 |
Digital synthesizer
A digital synthesizer is a synthesizer that uses digital signal processing (DSP) techniques to make musical sounds. This is in contrast to older analog synthesizers, which produce music using analog electronics, and samplers, which play back digital recordings of acoustic, electric, or electronic instruments. Some digital synthesizers emulate analog synthesizers; others include sampling capability in addition to digital synthesis.
The very earliest digital synthesis experiments were made with computers, as part of academic research into sound generation. In 1973, the Japanese company Yamaha licensed the algorithms for frequency modulation synthesis (FM synthesis) from John Chowning, who had experimented with it at Stanford University since 1971. Yamaha's engineers began adapting Chowning's algorithm for use in a commercial digital synthesizer, adding improvements such as the "key scaling" method to avoid the introduction of distortion that normally occurred in analog systems during frequency modulation, though it would take several years before Yamaha were to release their FM digital synthesizers. In the 1970s, Yamaha were granted a number of patents, under the company's former name "Nippon Gakki Seizo Kabushiki Kaisha", evolving Chowning's early work on FM synthesis technology. Yamaha built the first prototype digital synthesizer in 1974.
Released in 1979, the Casio VL-1 was the first commercial digital synthesizer, selling for $69.95. Yamaha eventually commercialized their FM synthesis technology and released the first FM digital synthesizer in 1980, the Yamaha GS-1, but at an expensive retail price of $16,000.
Early commercial digital synthesizers used simple hard-wired digital circuitry to implement techniques such as additive synthesis and FM synthesis. Other techniques, such as wavetable synthesis and physical modeling, only became possible with the advent of high-speed microprocessor and digital signal processing technology. Two other early commercial digital synthesizers were the Fairlight CMI, introduced in 1979, and the New England Digital Synclavier II, introduced in 1980. The Fairlight CMI was a sampling synthesizer, while the Synclavier originally used FM synthesis technology licensed from Yamaha, before adding sampling synthesis later in the 1980s. The Fairlight CMI and the Synclavier were both expensive systems, retailing for more than $20,000 in the early 1980s. The cost of digital synthesizers began falling rapidly in the early 1980s. E-mu Systems introduced the Emulator sampling synthesizer in 1982 at a retail price of $7,900. Although not as flexible or powerful as either the Fairlight CMI or the Synclavier, its lower cost and portability made it popular.
Introduced in 1983, the Yamaha DX7 was the breakthrough digital synthesizer to have a major impact, both innovative and affordable, and thus spelling the decline of analog synthesizers. It used FM synthesis and, although it was incapable of the sampling synthesis of the Fairlight CMI, its price was around $2,000, putting it within range of a much larger number of musicians. The DX7 was also known for its "key scaling" method to avoid distortion and for its recognizably bright tonality that was partly due to its high sampling rate of 57 kHz. It became indispensable to many music artists of the 1980s, and would become one of the best-selling synthesizers of all time.
In 1987, Roland released its own influential synthesizer of the time, the D-50. This popular synth broke new ground in affordably combining short samples and digital oscillators, as well as the innovation of built-in digital effects (reverb, chorus, equalizer). Roland called this Linear Arithmetic (LA) synthesis. This instrument is responsible for some of the very recognisable preset synthesizer sounds of the late 1980s, such as the Pizzagogo sound used on Enya's "Orinoco Flow."
It gradually became feasible to include high quality samples of existing instruments as opposed to synthesizing them. In 1988, Korg introduced the last of the hugely popular trio of digital synthesizers of the 1980s after the DX7 and D50, the M1. This heralded both the increasing popularisation of digital sample-based synthesis, and the rise of 'workstation' synthesizers. After this time, many popular modern digital synthesizers have been described as not being full synthesizers in the most precise sense, as they play back samples stored in their memory. However, they still include options to shape the sounds through use of envelopes, LFOs, filters and effects such as reverb. The Yamaha Motif and Roland Fantom series of keyboards are typical examples of this type, described as 'ROMplers'; at the same time, they are also examples of "workstation" synthesizers.
With the addition of sophisticated on-board sequencers, alongside built-in effects and other features, the 'workstation' synthesizer was born. These always include a multi-track sequencer, and can often record and play back samples, and in later years full audio tracks, to be used to record an entire song. These are usually also ROMplers, playing back samples, to give a wide variety of realistic instrument and other sounds such as drums, string instruments and wind instruments to sequence and compose songs, along with popular keyboard instrument sounds such as electric pianos and organs.
As there was still interest in analog synthesizers, and with the increase of computing power, over the 1990s another type of synthesizer arose: the analog modeling, or "virtual analog" synthesizer. These use computing power to simulate traditional analog waveforms and circuitry such as envelopes and filters, with the most popular examples of this type of instrument including the Nord Lead and Access Virus.
As the cost of processing power and memory fell, new types of synthesizers emerged, offering a variety of novel sound synthesis options. The Korg Oasys was one such example, packaging multiple digital synthesizers into a single unit.
Digital synthesizers can now be completely emulated in software ("softsynth"), and run on conventional PC hardware. Such soft implementations require careful programming and a fast CPU to get the same latency response as their dedicated equivalents. To reduce latency, some professional sound card manufacturers have developed specialized digital signal processing (DSP) hardware. Dedicated digital synthesizers have the advantage of a performance-friendly user interface (physical controls like buttons for selecting features and enabling functionality, and knobs for setting variable parameters). On the other hand, software synthesizers have the advantages afforded by a rich graphical display.
With a focus on performance-oriented keyboards and digital computer technology, manufacturers of commercial electronic instruments created some of the earliest digital synthesizers for studio and experimental use, with computers handling the built-in sound synthesis algorithms.
The main difference is that a digital synthesizer uses digital processors and usually uses the direct digital synthesis architecture, while an analog synthesizer uses analog circuitry and a phase-locked loop. A digital synthesizer uses a numerically-controlled oscillator while an analog synthesizer may use a voltage-controlled oscillator. A digital synthesizer is in essence a computer with (often) a piano-keyboard and an LCD as an interface. An analog synthesizer is made up of sound-generating circuitry and modulators. Because computer technology is rapidly advancing, it is often possible to offer more features in a digital synthesizer than in an analog synthesizer at a given price. However, both technologies have their own merit. Some forms of synthesis, such as sampling and additive synthesis, are not feasible in analog synthesizers, while on the other hand, many musicians prefer the character of analog synthesizers over their digital equivalent.
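As a rough sketch of the numerically-controlled oscillator mentioned above, the following Python fragment implements table-lookup direct digital synthesis: a phase accumulator advances by a fixed increment each sample and indexes a sine table. The sample rate and table size are illustrative values, not taken from any particular instrument.

    import math

    SAMPLE_RATE = 48_000   # samples per second (illustrative)
    TABLE_SIZE = 4096
    SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

    def nco_samples(freq_hz, n_samples):
        # The phase accumulator wraps around [0, TABLE_SIZE); its per-sample increment
        # sets the output frequency, and the table lookup yields the waveform samples.
        phase = 0.0
        increment = freq_hz * TABLE_SIZE / SAMPLE_RATE
        out = []
        for _ in range(n_samples):
            out.append(SINE_TABLE[int(phase) % TABLE_SIZE])
            phase = (phase + increment) % TABLE_SIZE
        return out

    # One second of a 440 Hz tone: nco_samples(440.0, SAMPLE_RATE)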
The new wave era of the 1980s first brought the digital synthesizer to the public ear. Bands like Talking Heads and Duran Duran used the digitally made sounds on some of their most popular albums. Other more pop-inspired bands like Hall & Oates began incorporating the digital synthesizer into their sound in the 1980s. Thanks to breakthroughs in technology in the 1990s, many modern synthesizers now use DSP.
Every digital synthesizer works in more or less the same way and is, in essence, a specialized computer. Digital synthesis produces a stream of numbers at a steady sample rate; this stream is then converted to analog form to produce sound from speakers. Direct digital synthesis is the typical architecture for digital synthesizers. Through signal generation and voice- and instrument-level processing, a signal flow is created and controlled either by MIDI capabilities or by voice- and instrument-level controls. | https://en.wikipedia.org/wiki?curid=8247 |
Definition of music
A definition of music endeavors to give an accurate and concise explanation of music's basic attributes or essential nature and it involves a process of defining what is meant by the term "music". Many authorities have suggested definitions, but defining music turns out to be more difficult than might first be imagined, and there is ongoing debate. A number of explanations start with the notion of music as "organized sound," but they also highlight that this is perhaps too broad a definition and cite examples of organized sound that are not defined as music, such as human speech and sounds found in both natural and industrial environments . The problem of defining music is further complicated by the influence of culture in music cognition.
The "Concise Oxford Dictionary" defines music as "the art of combining vocal or instrumental sounds (or both) to produce beauty of form, harmony, and expression of emotion" . However, some music genres, such as noise music and musique concrète, challenge these ideas by using sounds not widely considered as musical, beautiful or harmonius, like randomly produced electronic distortion, feedback, static, cacophony, and sounds produced using compositional processes which utilize indeterminacy (; ).
An oft cited example of the dilemma in defining music is the work "4'33"" (1952) by the American composer John Cage (1912–1992). The written score has three movements and directs the performer(s) to appear on stage, indicate by gesture or other means when the piece begins, then make no sound throughout the duration of the piece, marking sections and the end by gesture. The audience hears only whatever ambient sounds may occur in the room. Some argue that "4'33" is not music because, among other reasons, it contains no sounds that are conventionally considered "musical" and the composer and performer(s) exert no control over the organization of the sounds heard . Others argue it is music because the conventional definitions of musical sounds are unnecessarily and arbitrarily limited, and control over the organization of the sounds is achieved by the composer and performer(s) through their gestures that divide what is heard into specific sections and a comprehensible form .
Because of differing fundamental concepts of music, the languages of many cultures do not contain a word that can be accurately translated as "music" as that word is generally understood by Western cultures. Inuit and most North American Indian languages do not have a general term for music. Among the Aztecs, the ancient Mexican theory of rhetoric, poetry, dance, and instrumental music used the Nahuatl term "In xochitl-in kwikatl" to refer to a complex mix of music and other poetic verbal and non-verbal elements, and reserved the word "Kwikakayotl" (or cuicacayotl) only for the sung expressions. There is no term for music in the African languages Tiv, Yoruba, Igbo, Efik, Birom, Hausa, Idoma, Eggon or Jarawa. Many other languages have terms which only partly cover what Western culture typically means by the term "music". The Mapuche of Argentina do not have a word for "music", but they do have words for instrumental versus improvised forms ("kantun"), European and non-Mapuche music ("kantun winka"), ceremonial songs ("öl"), and "tayil".
While some languages in West Africa have no term for music, some West African languages accept the general concepts of music. "Musiqi" is the Persian word for the science and art of music, "muzik" being the sound and performance of music, though some things European-influenced listeners would include, such as Quran chanting, are excluded.
Ben Watson points out that Ludwig van Beethoven's "Grosse Fuge" (1825) "sounded like noise" to his audience at the time. Indeed, Beethoven's publishers persuaded him to remove it from its original setting as the last movement of a string quartet. He did so, replacing it with a sparkling "Allegro". They subsequently published it separately . Musicologist Jean-Jacques Nattiez considers the difference between noise and music nebulous, explaining that "The border between music and noise is always culturally defined—which implies that, even within a single society, this border does not always pass through the same place; in short, there is rarely a consensus ... By all accounts there is no "single" and "intercultural" universal concept defining what music might be" .
An often-cited definition of music is that it is "organized sound", a term originally coined by modernist composer Edgard Varèse in reference to his own musical aesthetic. Varèse's concept of music as "organized sound" fits into his vision of "sound as living matter" and of "musical space as open rather than bounded" . He conceived the elements of his music in terms of "sound-masses", likening their organization to the natural phenomenon of crystallization . Varèse thought that "to stubbornly conditioned ears, anything new in music has always been called noise", and he posed the question, "what is music but organized noises?" .
The fifteenth edition of the "Encyclopædia Britannica" states that "while there are no sounds that can be described as inherently unmusical, musicians in each culture have tended to restrict the range of sounds they will admit." A human organizing element is often felt to be implicit in music (sounds produced by non-human agents, such as waterfalls or birds, are often described as "musical", but perhaps less often as "music"). The composer R. Murray states that the sound of classical music "has decays; it is granular; it has attacks; it fluctuates, swollen with impurities—and all this creates a musicality that comes before any 'cultural' musicality." However, in the view of semiologist Jean-Jacques Nattiez, "just as music is whatever people choose to recognize as such, noise is whatever is recognized as disturbing, unpleasant, or both". (See "music as social construct" below.)
Levi R. Bryant defines music not as a language, but as a marked-based, problem-solving method, comparable to mathematics .
Most definitions of music include a reference to sound and a list of universals of music can be generated by stating the elements (or aspects) of sound: pitch, timbre, loudness, duration, spatial location and texture. However, in terms more specifically relating to music: following Wittgenstein, cognitive psychologist Eleanor Rosch proposes that categories are not clean cut but that something may be more or less a member of a category. As such the search for musical universals would fail and would not provide one with a valid definition. This is primarily because other cultures have different understandings in relation to the sounds that English language writers refer to as music.
Many people do, however, share a general idea of music. The "Webster's" definition of music is a typical example: "the science or art of ordering tones or sounds in succession, in combination, and in temporal relationships to produce a composition having unity and continuity" ("Webster's Collegiate Dictionary", online edition).
This approach to the definition focuses not on the "construction" but on the "experience" of music. An extreme statement of the position has been articulated by the Italian composer Luciano Berio: “Music is everything that one listens to with the intention of listening to music” . This approach permits the boundary between music and noise to change over time as the conventions of musical interpretation evolve within a culture, to be different in different cultures at any given moment, and to vary from person to person according to their experience and proclivities. It is further consistent with the subjective reality that even what would commonly be considered music is experienced as non-music if the mind is concentrating on other matters and thus not perceiving the sound's "essence" "as music" .
In his 1983 book, "Music as Heard", which sets out from the phenomenological position of Husserl, Merleau-Ponty, and Ricœur, Thomas Clifton defines music as "an ordered arrangement of sounds and silences whose meaning is presentative rather than denotative. . . . This definition distinguishes music, as an end in itself, from compositional technique, and from sounds as purely physical objects." More precisely, "music is the actualization of the possibility of any sound whatever to present to some human being a meaning which he experiences with his body—that is to say, with his mind, his feelings, his senses, his will, and his metabolism" . It is therefore "a certain reciprocal relation established between a person, his behavior, and a sounding object" .
Clifton accordingly differentiates music from non-music on the basis of the human behavior involved, rather than on either the nature of compositional technique or of sounds as purely physical objects. Consequently, the distinction becomes a question of what is meant by musical behavior: "a musically behaving person is one whose very being is absorbed in the significance of the sounds being experienced." However, "It is not altogether accurate to say that this person is listening "to" the sounds. First, the person is doing more than listening: he is perceiving, interpreting, judging, and feeling. Second, the preposition 'to' puts too much stress on the sounds as such. Thus, the musically behaving person experiences musical significance by means of, or through, the sounds" .
In this framework, Clifton finds that there are two things that separate music from non-music: (1) musical meaning is presentative, and (2) music and non-music are distinguished in the idea of personal involvement. "It is the notion of personal involvement which lends significance to the word "ordered" in this definition of music" . This is not to be understood, however, as a sanctification of extreme relativism, since "it is precisely the 'subjective' aspect of experience which lured many writers earlier in this century down the path of sheer opinion-mongering. Later on this trend was reversed by a renewed interest in 'objective,' scientific, or otherwise non-introspective musical analysis. But we have good reason to believe that a musical experience is not a purely private thing, like seeing pink elephants, and that reporting about such an experience need not be subjective in the sense of it being a mere matter of opinion" .
Clifton's task, then, is to describe musical experience and the objects of this experience which, together, are called "phenomena," and the activity of describing phenomena is called "phenomenology" . It is important to stress that this definition of music says nothing about aesthetic standards. Music is not a fact or a thing in the world, but a meaning constituted by human beings. . . . To talk about such experience in a meaningful way demands several things. First, we have to be willing to let the composition speak to us, to let it reveal its own order and significance. . . . Second, we have to be willing to question our assumptions about the nature and role of musical materials. . . . Last, and perhaps most important, we have to be ready to admit that describing a meaningful experience is itself meaningful.
"Music, often an art/entertainment, is a total social fact whose definitions vary according to era and culture," according to Jean . It is often contrasted with noise. According to musicologist Jean-Jacques Nattiez: "The border between music and noise is always culturally defined—which implies that, even within a single society, this border does not always pass through the same place; in short, there is rarely a consensus... By all accounts there is no "single" and "intercultural" universal concept defining what music might be" . Given the above demonstration that "there is no limit to the number or the genre of variables that might intervene in a definition of the musical," an organization of definitions and elements is necessary.
Nattiez (1990, 17) describes definitions according to a tripartite semiological scheme similar to the following:
There are three levels of description: the poietic (the process by which the work is created), the neutral (the material trace itself, such as a score or a sound), and the esthesic (the process by which listeners perceive and assign meaning to the work).
Table describing types of definitions of music:
Because of this range of definitions, the study of music comes in a wide variety of forms. There is the study of sound and vibration or acoustics, the cognitive study of music, the study of music theory and performance practice or music theory and ethnomusicology and the study of the reception and history of music, generally called musicology.
Composer Iannis Xenakis in "Towards a Metamusic" (chapter 7 of "Formalized Music") defined music in the following way: | https://en.wikipedia.org/wiki?curid=8249 |
Dayton, Ohio
Dayton () is the sixth-largest city in the state of Ohio and the county seat of Montgomery County. A small part of the city extends into Greene County. The 2019 U.S. census estimate put the city population at 140,407, while Greater Dayton was estimated to be at 803,416 residents. This makes Dayton the fourth-largest metropolitan area in Ohio and 63rd in the United States. Dayton is within Ohio's Miami Valley region, just north of Greater Cincinnati.
Ohio's borders are within of roughly 60 percent of the country's population and manufacturing infrastructure, making the Dayton area a logistical centroid for manufacturers, suppliers, and shippers. Dayton also hosts significant research and development in fields like industrial, aeronautical, and astronautical engineering that have led to many technological innovations. Much of this innovation is due in part to Wright-Patterson Air Force Base and its place in the community. With the decline of heavy manufacturing, Dayton's businesses have diversified into a service economy that includes insurance and legal sectors as well as healthcare and government sectors.
Along with defense and aerospace, healthcare accounts for much of the Dayton area's economy. Hospitals in the Greater Dayton area have an estimated combined employment of nearly 32,000 and a yearly economic impact of $6.8 billion. It is estimated that Premier Health Partners, a hospital network, contributes more than $2 billion a year to the region through operating, employment, and capital expenditures. In 2011, Dayton was rated the #3 city in the nation by HealthGrades for excellence in healthcare.
Dayton is also noted for its association with aviation; the city is home to the National Museum of the United States Air Force and is the birthplace of Orville Wright. Other well-known individuals born in the city include poet Paul Laurence Dunbar and entrepreneur John H. Patterson. Dayton is also known for its many patents, inventions, and inventors, most notably the Wright brothers' invention of powered flight. In 2007 Dayton was a part of the top 100 cities in America. In 2008, 2009, and 2010, "Site Selection" magazine ranked Dayton the #1 mid-sized metropolitan area in the nation for economic development. Also in 2010, Dayton was named one of the best places in the United States for college graduates to find a job.
On Memorial Day of 2019 Dayton was affected by a tornado outbreak, in which a total of 15 tornadoes touched down in the Dayton area. One was a half-mile wide EF4 that tore through the heart of the city causing damage.
Dayton was founded on April 1, 1796, by 12 settlers known as the Thompson Party. They traveled in March from Cincinnati up the Great Miami River by pirogue and landed at what is now St. Clair Street, where they found two small camps of Native Americans. Among the Thompson Party was Benjamin Van Cleve, whose memoirs provide insights into the Ohio Valley's history. Two other groups traveling overland arrived several days later.
In 1797, Daniel C. Cooper laid out Mad River Road, the first overland connection between Cincinnati and Dayton, opening the "Mad River Country" to settlement. Ohio was admitted into the Union in 1803, and the village of Dayton was incorporated in 1805 and chartered as a city in 1841. The city was named after Jonathan Dayton, a captain in the American Revolutionary War who signed the U.S. Constitution and owned a significant amount of land in the area. In 1827, construction on the Dayton-Cincinnati canal began, which would provide a better way to transport goods from Dayton to Cincinnati and contribute significantly to Dayton's economic growth during the 1800s.
Innovation led to business growth in the region. In 1884, John Henry Patterson acquired James Ritty's National Manufacturing Company along with his cash register patents and formed the National Cash Register Company (NCR). The company manufactured the first mechanical cash registers and played a crucial role in the shaping of Dayton's reputation as an epicenter for manufacturing in the early 1900s. In 1906, Charles F. Kettering, a leading engineer at the company, helped develop the first electric cash register, which propelled NCR into the national spotlight. NCR also helped develop the US Navy Bombe, a code-breaking machine that helped crack the Enigma machine cipher during World War II.
Dayton has been the home for many patents and inventions since the 1870s. According to the National Park Service, citing information from the U.S. Patent Office, Dayton had granted more patents per capita than any other U.S. city in 1890 and ranked fifth in the nation as early as 1870. | https://en.wikipedia.org/wiki?curid=8253 |
Diode
A diode is a two-terminal electronic component that conducts current primarily in one direction (asymmetric conductance); it has low (ideally zero) resistance in one direction, and high (ideally infinite) resistance in the other. A diode vacuum tube or thermionic diode is a vacuum tube with two electrodes, a heated cathode and a plate, in which electrons can flow in only one direction, from cathode to plate. A semiconductor diode, the most commonly used type today, is a crystalline piece of semiconductor material with a p–n junction connected to two electrical terminals. Semiconductor diodes were the first semiconductor electronic devices. The discovery of asymmetric electrical conduction across the contact between a crystalline mineral and a metal was made by German physicist Ferdinand Braun in 1874. Today, most diodes are made of silicon, but other semiconducting materials such as gallium arsenide and germanium are also used.
The most common function of a diode is to allow an electric current to pass in one direction (called the diode's "forward" direction), while blocking it in the opposite direction (the "reverse" direction). As such, the diode can be viewed as an electronic version of a check valve. This unidirectional behavior is called rectification, and is used to convert alternating current (ac) to direct current (dc). As rectifiers, diodes can be used for such tasks as extracting modulation from radio signals in radio receivers.
However, diodes can have more complicated behavior than this simple on–off action, because of their nonlinear current-voltage characteristics. Semiconductor diodes begin conducting electricity only if a certain threshold voltage or cut-in voltage is present in the forward direction (a state in which the diode is said to be "forward-biased"). The voltage drop across a forward-biased diode varies only a little with the current, and is a function of temperature; this effect can be used as a temperature sensor or as a voltage reference. Also, diodes' high resistance to current flowing in the reverse direction suddenly drops to a low resistance when the reverse voltage across the diode reaches a value called the breakdown voltage.
A semiconductor diode's current–voltage characteristic can be tailored by selecting the semiconductor materials and the doping impurities introduced into the materials during manufacture. These techniques are used to create special-purpose diodes that perform many different functions. For example, diodes are used to regulate voltage (Zener diodes), to protect circuits from high voltage surges (avalanche diodes), to electronically tune radio and TV receivers (varactor diodes), to generate radio-frequency oscillations (tunnel diodes, Gunn diodes, IMPATT diodes), and to produce light (light-emitting diodes). Tunnel, Gunn and IMPATT diodes exhibit negative resistance, which is useful in microwave and switching circuits.
Diodes, both vacuum and semiconductor, can be used as shot-noise generators.
Thermionic (vacuum-tube) diodes and solid-state (semiconductor) diodes were developed separately, at approximately the same time, in the early 1900s, as radio receiver detectors. Until the 1950s, vacuum diodes were used more frequently in radios because the early point-contact semiconductor diodes were less stable. In addition, most receiving sets had vacuum tubes for amplification that could easily have the thermionic diodes included in the tube (for example the 12SQ7 double diode triode), and vacuum-tube rectifiers and gas-filled rectifiers were capable of handling some high-voltage/high-current rectification tasks better than the semiconductor diodes (such as selenium rectifiers) that were available at that time.
In 1873, Frederick Guthrie observed that a grounded, white hot metal ball brought in close proximity to an electroscope would discharge a positively charged electroscope, but not a negatively charged electroscope.
In 1880, Thomas Edison observed unidirectional current between heated and unheated elements in a bulb, later called Edison effect, and was granted a patent on application of the phenomenon for use in a dc voltmeter.
About 20 years later, John Ambrose Fleming (scientific adviser to the Marconi Company and former Edison employee) realized that the Edison effect could be used as a radio detector. Fleming patented the first true thermionic diode, the Fleming valve, in Britain on November 16, 1904 (followed by a U.S. patent in November 1905).
Throughout the vacuum tube era, valve diodes were used in almost all electronics such as radios, televisions, sound systems and instrumentation. They slowly lost market share beginning in the late 1940s due to selenium rectifier technology and then to semiconductor diodes during the 1960s. Today they are still used in a few high power applications where their ability to withstand transient voltages and their robustness gives them an advantage over semiconductor devices, and in musical instrument and audiophile applications.
In 1874, German scientist Karl Ferdinand Braun discovered the "unilateral conduction" across a contact between a metal and a mineral. Jagadish Chandra Bose was the first to use a crystal for detecting radio waves in 1894. The crystal detector was developed into a practical device for wireless telegraphy by Greenleaf Whittier Pickard, who invented a silicon crystal detector in 1903 and received a patent for it on November 20, 1906. Other experimenters tried a variety of other minerals as detectors. Semiconductor principles were unknown to the developers of these early rectifiers. During the 1930s understanding of physics advanced and in the mid 1930s researchers at Bell Telephone Laboratories recognized the potential of the crystal detector for application in microwave technology. Researchers at Bell Labs, Western Electric, MIT, Purdue and in the UK intensively developed point-contact diodes ("crystal rectifiers" or "crystal diodes") during World War II for application in radar. After World War II, AT&T used these in their microwave towers that criss-crossed the United States, and many radar sets use them even in the 21st century. In 1946, Sylvania began offering the 1N34 crystal diode. During the early 1950s, junction diodes were developed.
At the time of their invention, asymmetrical conduction devices were known as rectifiers. In 1919, the year tetrodes were invented, William Henry Eccles coined the term "diode" from the Greek roots "di" (from "δί"), meaning 'two', and "ode" (from "οδός"), meaning 'path'. The word "diode", however, as well as "triode, tetrode, pentode, hexode", were already in use as terms of multiplex telegraphy.
Although all diodes "rectify", the term "rectifier" is usually applied to diodes intended for power supply application in order to differentiate them from diodes intended for small signal circuits.
A thermionic diode is a thermionic-valve device consisting of a sealed, evacuated glass or metal envelope containing two electrodes: a cathode and a plate. The cathode is either "indirectly heated" or "directly heated". If indirect heating is employed, a heater is included in the envelope.
In operation, the cathode is heated to red heat (800–1000 °C, 1500-1800°F). A directly heated cathode is made of tungsten wire and is heated by current passed through it from an external voltage source. An indirectly heated cathode is heated by infrared radiation from a nearby heater that is formed of Nichrome wire and supplied with current provided by an external voltage source.
The operating temperature of the cathode causes it to release electrons into the vacuum, a process called thermionic emission. The cathode is coated with oxides of alkaline earth metals, such as barium and strontium oxides. These have a low work function, meaning that they more readily emit electrons than would the uncoated cathode.
The plate, not being heated, does not emit electrons; but is able to absorb them.
The alternating voltage to be rectified is applied between the cathode and the plate. When the plate voltage is positive with respect to the cathode, the plate electrostatically attracts the electrons from the cathode, so a current of electrons flows through the tube from cathode to plate. When the plate voltage is negative with respect to the cathode, no electrons are emitted by the plate, so no current can pass from the plate to the cathode.
Point-contact diodes were developed starting in the 1930s, out of the early crystal detector technology, and are now generally used in the 3 to 30 gigahertz range. Point-contact diodes use a small diameter metal wire in contact with a semiconductor crystal, and are of either "non-welded" contact type or "welded contact" type. Non-welded contact construction utilizes the Schottky barrier principle. The metal side is the pointed end of a small diameter wire that is in contact with the semiconductor crystal. In the welded contact type, a small P region is formed in the otherwise N type crystal around the metal point during manufacture by momentarily passing a relatively large current through the device. Point contact diodes generally exhibit lower capacitance, higher forward resistance and greater reverse leakage than junction diodes.
A p–n junction diode is made of a crystal of semiconductor, usually silicon, but germanium and gallium arsenide are also used. Impurities are added to it to create a region on one side that contains negative charge carriers (electrons), called an n-type semiconductor, and a region on the other side that contains positive charge carriers (holes), called a p-type semiconductor. When the n-type and p-type materials are attached together, a momentary flow of electrons occurs from the n to the p side, resulting in a third region between the two where no charge carriers are present. This region is called the depletion region because there are no charge carriers (neither electrons nor holes) in it. The diode's terminals are attached to the n-type and p-type regions. The boundary between these two regions, called a p–n junction, is where the action of the diode takes place. When a sufficiently higher electrical potential is applied to the P side (the anode) than to the N side (the cathode), it allows electrons to flow through the depletion region from the N-type side to the P-type side. The junction does not allow the flow of electrons in the opposite direction when the potential is applied in reverse, creating, in a sense, an electrical check valve.
Another type of junction diode, the Schottky diode, is formed from a metal–semiconductor junction rather than a p–n junction, which reduces capacitance and increases switching speed.
A semiconductor diode's behavior in a circuit is given by its current–voltage characteristic, or I–V graph (see graph below). The shape of the curve is determined by the transport of charge carriers through the so-called "depletion layer" or "depletion region" that exists at the p–n junction between differing semiconductors. When a p–n junction is first created, conduction-band (mobile) electrons from the N-doped region diffuse into the P-doped region where there is a large population of holes (vacant places for electrons) with which the electrons "recombine". When a mobile electron recombines with a hole, both hole and electron vanish, leaving behind an immobile positively charged donor (dopant) on the N side and negatively charged acceptor (dopant) on the P side. The region around the p–n junction becomes depleted of charge carriers and thus behaves as an insulator.
However, the width of the depletion region (called the depletion width) cannot grow without limit. For each electron–hole pair recombination made, a positively charged dopant ion is left behind in the N-doped region, and a negatively charged dopant ion is created in the P-doped region. As recombination proceeds and more ions are created, an increasing electric field develops through the depletion zone that acts to slow and then finally stop recombination. At this point, there is a "built-in" potential across the depletion zone.
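The text does not give a value for this built-in potential; as a hedged illustration, a standard expression for an abrupt p–n junction (not stated in the source) is
$$V_\text{bi} = \frac{kT}{q}\,\ln\!\left(\frac{N_\text{A} N_\text{D}}{n_i^2}\right),$$
where $N_\text{A}$ and $N_\text{D}$ are the acceptor and donor doping concentrations and $n_i$ is the intrinsic carrier concentration. With illustrative silicon values of roughly $10^{16}\,\mathrm{cm^{-3}}$ doping on each side and $n_i \approx 10^{10}\,\mathrm{cm^{-3}}$, this gives about $0.026\,\mathrm{V} \times \ln(10^{12}) \approx 0.7\,\mathrm{V}$, consistent with the silicon figure quoted below.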
If an external voltage is placed across the diode with the same polarity as the built-in potential, the depletion zone continues to act as an insulator, preventing any significant electric current flow (unless electron–hole pairs are actively being created in the junction by, for instance, light; see photodiode). This is called the "reverse bias" phenomenon.
However, if the polarity of the external voltage opposes the built-in potential, recombination can once again proceed, resulting in a substantial electric current through the p–n junction (i.e. substantial numbers of electrons and holes recombine at the junction). For silicon diodes, the built-in potential is approximately 0.7 V (0.3 V for germanium and 0.2 V for Schottky). Thus, if an external voltage greater than and opposite to the built-in voltage is applied, a current will flow and the diode is said to be "turned on" as it has been given an external "forward bias". The diode is commonly said to have a forward "threshold" voltage, above which it conducts and below which conduction stops. However, this is only an approximation as the forward characteristic is smooth (see I-V graph above).
A diode's I–V characteristic can be approximated by four regions of operation:
In a small silicon diode operating at its rated currents, the voltage drop is about 0.6 to 0.7 volts. The value is different for other diode types—Schottky diodes can be rated as low as 0.2 V, germanium diodes 0.25 to 0.3 V, and red or blue light-emitting diodes (LEDs) can have values of 1.4 V and 4.0 V respectively.
At higher currents the forward voltage drop of the diode increases. A drop of 1 V to 1.5 V is typical at full rated current for power diodes.
The "Shockley ideal diode equation" or the "diode law" (named after the bipolar junction transistor co-inventor William Bradford Shockley) gives the I–V characteristic of an ideal diode in either forward or reverse bias (or no bias). The following equation is called the "Shockley ideal diode equation" when "n", the ideality factor, is set equal to 1 :
where $I_\text{D}$ is the diode current, $I_\text{S}$ is the reverse-bias saturation current (or scale current), $V_\text{D}$ is the voltage across the diode, $V_\text{T}$ is the thermal voltage, and $n$ is the ideality factor, also known as the quality factor or emission coefficient.
The thermal voltage "V"T is approximately 25.85 mV at 300 K, a temperature close to "room temperature" commonly used in device simulation software. At any temperature it is a known constant defined by $V_\text{T} = kT/q$,
where "k" is the Boltzmann constant, "T" is the absolute temperature of the p–n junction, and "q" is the magnitude of charge of an electron (the elementary charge).
The reverse saturation current, "I"S, is not constant for a given device, but varies with temperature; usually more significantly than "V"T, so that "V"D typically decreases as "T" increases.
The "Shockley ideal diode equation" or the "diode law" is derived with the assumption that the only processes giving rise to the current in the diode are drift (due to electrical field), diffusion, and thermal recombination–generation (R–G) (this equation is derived by setting n = 1 above). It also assumes that the R–G current in the depletion region is insignificant. This means that the "Shockley ideal diode equation" does not account for the processes involved in reverse breakdown and photon-assisted R–G. Additionally, it does not describe the "leveling off" of the I–V curve at high forward bias due to internal resistance. Introducing the ideality factor, n, accounts for recombination and generation of carriers.
Under "reverse bias" voltages the exponential in the diode equation is negligible, and the current is a constant (negative) reverse current value of −"IS". The reverse "breakdown region" is not modeled by the Shockley diode equation.
For even rather small "forward bias" voltages the exponential is very large, since the thermal voltage is very small in comparison. The subtracted '1' in the diode equation is then negligible and the forward diode current can be approximated by $I_\text{D} \approx I_\text{S}\, e^{V_\text{D}/(n V_\text{T})}$.
The use of the diode equation in circuit problems is illustrated in the article on diode modeling.
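A minimal numeric sketch of the diode law stated above, in Python; the saturation current and ideality factor used here are illustrative defaults rather than values for any particular device.

    import math

    K_BOLTZMANN = 1.380649e-23    # Boltzmann constant, J/K
    Q_ELECTRON = 1.602176634e-19  # elementary charge, C

    def thermal_voltage(temp_kelvin=300.0):
        # V_T = kT/q, roughly 25.85 mV at 300 K.
        return K_BOLTZMANN * temp_kelvin / Q_ELECTRON

    def diode_current(v_diode, i_sat=1e-12, n=1.0, temp_kelvin=300.0):
        # Shockley diode equation: I_D = I_S * (exp(V_D / (n * V_T)) - 1).
        vt = thermal_voltage(temp_kelvin)
        return i_sat * (math.exp(v_diode / (n * vt)) - 1.0)

    # diode_current(0.6) is on the order of ten milliamps for I_S = 1 pA,
    # while diode_current(-1.0) is approximately -I_S, the reverse saturation current.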
At forward voltages less than the saturation voltage, the voltage versus current characteristic curve of most diodes is not a straight line. The current can be approximated by $I_\text{D} \approx I_\text{S}\, e^{V_\text{D}/(n V_\text{T})}$, as mentioned in the previous section.
In detector and mixer applications, the current can be estimated by a Taylor's series. The odd terms can be omitted because they produce frequency components that are outside the pass band of the mixer or detector. Even terms beyond the second derivative usually need not be included because they are small compared to the second order term. The desired current component is approximately proportional to the square of the input voltage, so the response is called "square law" in this region.
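As a hedged illustration of the square-law behaviour just described, expanding the diode current about a bias point $V_0$ for a small signal $v$ and keeping terms through second order gives
$$I(V_0 + v) \approx I(V_0) + \left.\frac{dI}{dV}\right|_{V_0} v + \frac{1}{2}\left.\frac{d^2 I}{dV^2}\right|_{V_0} v^2 .$$
The linear term reproduces the input frequencies, while the $v^2$ term contains the sum- and difference-frequency components that a detector or mixer keeps, which is why the useful output is approximately proportional to the square of the input voltage.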
Following the end of forward conduction in a p–n type diode, a reverse current can flow for a short time. The device does not attain its blocking capability until the mobile charge in the junction is depleted.
The effect can be significant when switching large currents very quickly. A certain amount of "reverse recovery time" $t_\text{r}$ (on the order of tens of nanoseconds to a few microseconds) may be required to remove the reverse recovery charge $Q_\text{r}$ from the diode. During this recovery time, the diode can actually conduct in the reverse direction. This might give rise to a large constant current in the reverse direction for a short time while the diode is reverse biased. The magnitude of such a reverse current is determined by the operating circuit (i.e., the series resistance) and the diode is said to be in the storage-phase. In certain real-world cases it is important to consider the losses that are incurred by this non-ideal diode effect. However, when the slew rate of the current is not so severe (e.g., line frequency) the effect can be safely ignored. For most applications, the effect is also negligible for Schottky diodes.
The reverse current ceases abruptly when the stored charge is depleted; this abrupt stop is exploited in step recovery diodes for generation of extremely short pulses.
Normal (p–n) diodes, which operate as described above, are usually made of doped silicon or germanium. Before the development of silicon power rectifier diodes, cuprous oxide and later selenium were used. Their low efficiency required a much higher forward voltage to be applied (typically 1.4 to 1.7 V per "cell", with multiple cells stacked so as to increase the peak inverse voltage rating for application in high voltage rectifiers), and required a large heat sink (often an extension of the diode's metal substrate), much larger than the later silicon diode of the same current ratings would require. The vast majority of all diodes are the p–n diodes found in CMOS integrated circuits, which include two diodes per pin and many other internal diodes.
Other uses for semiconductor diodes include the sensing of temperature, and computing analog logarithms (see Operational amplifier applications#Logarithmic output).
The symbol used to represent a particular type of diode in a circuit diagram conveys the general electrical function to the reader. There are alternative symbols for some types of diodes, though the differences are minor. The triangle in the symbols points to the forward direction, i.e. in the direction of conventional current flow.
There are a number of common, standard and manufacturer-driven numbering and coding schemes for diodes; the two most common being the EIA/JEDEC standard and the European Pro Electron standard:
The standardized 1N-series numbering "EIA370" system was introduced in the US by EIA/JEDEC (Joint Electron Device Engineering Council) about 1960. Most diodes have a 1-prefix designation (e.g., 1N4003). Among the most popular in this series were: 1N34A/1N270 (germanium signal), 1N914/1N4148 (silicon signal), 1N400x (silicon 1A power rectifier), and 1N580x (silicon 3A power rectifier).
The JIS semiconductor designation system has all semiconductor diode designations starting with "1S".
The European Pro Electron coding system for active components was introduced in 1966 and comprises two letters followed by the part code. The first letter represents the semiconductor material used for the component (A = germanium and B = silicon) and the second letter represents the general function of the part (for diodes, A = low-power/signal, B = variable capacitance, X = multiplier, Y = rectifier and Z = voltage reference); for example:
Other common numbering / coding systems (generally manufacturer-driven) include:
In optics, an equivalent device to the diode, but for laser light, is the optical isolator, also known as an optical diode, which allows light to pass in only one direction. It uses a Faraday rotator as its main component.
The first use for the diode was the demodulation of amplitude modulated (AM) radio broadcasts. The history of this discovery is treated in depth in the radio article. In summary, an AM signal consists of alternating positive and negative peaks of a radio carrier wave, whose amplitude or envelope is proportional to the original audio signal. The diode rectifies the AM radio frequency signal, leaving only the positive peaks of the carrier wave. The audio is then extracted from the rectified carrier wave using a simple filter and fed into an audio amplifier or transducer, which generates sound waves.
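A minimal numeric sketch of the rectify-then-filter detection just described: an ideal diode is modeled by keeping only the positive half-cycles, and a first-order low-pass filter stands in for the simple filter. The sample rate, carrier and audio frequencies, and filter constant are illustrative.

    import math

    def am_signal(n, fs=100_000, fc=10_000, fa=440, depth=0.5):
        # AM carrier: amplitude (1 + depth * audio) riding on a carrier at fc hertz.
        return [(1 + depth * math.sin(2 * math.pi * fa * i / fs))
                * math.sin(2 * math.pi * fc * i / fs) for i in range(n)]

    def envelope_detect(samples, alpha=0.05):
        # Ideal-diode rectification (keep positive half-cycles) followed by a
        # first-order low-pass filter that tracks the envelope and smooths the carrier.
        out, y = [], 0.0
        for s in samples:
            rectified = max(s, 0.0)
            y += alpha * (rectified - y)
            out.append(y)
        return out

    recovered_audio = envelope_detect(am_signal(2000))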
In microwave and millimeter wave technology, beginning in the 1930s, researchers improved and miniaturized the crystal detector. Point contact diodes ("crystal diodes") and Schottky diodes are used in radar, microwave and millimeter wave detectors.
Rectifiers are constructed from diodes, where they are used to convert alternating current (AC) electricity into direct current (DC). Automotive alternators are a common example, in which the diode, which rectifies the AC into DC, provides better performance than the commutator of the earlier dynamo. Similarly, diodes are also used in Cockcroft–Walton voltage multipliers to convert AC into higher DC voltages.
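As a worked illustration of the voltage-multiplier idea, the sketch below computes the ideal, no-load output of a Cockcroft–Walton multiplier; the rule of thumb (roughly twice the peak input voltage per stage) and the example values are assumptions for illustration, not figures from the text.

```python
# A minimal sketch (assumed) of the ideal, no-load output of an n-stage
# Cockcroft–Walton voltage multiplier: each stage adds about twice the peak AC input.
import math

def cockcroft_walton_output(v_rms_in, stages):
    """Ideal no-load DC output voltage for a given AC RMS input and stage count."""
    v_peak = v_rms_in * math.sqrt(2)
    return 2 * stages * v_peak

# Example: 230 V RMS input into a 3-stage multiplier
print(round(cockcroft_walton_output(230, 3)))  # roughly 1950 V, before loading and ripple
```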
Since most electronic circuits can be damaged when the polarity of their power supply inputs are reversed, a series diode is sometimes used to protect against such situations. This concept is known by multiple naming variations that mean the same thing: reverse voltage protection, reverse polarity protection, and reverse battery protection.
Diodes are frequently used to conduct damaging high voltages away from sensitive electronic devices. They are usually reverse-biased (non-conducting) under normal circumstances. When the voltage rises above the normal range, the diodes become forward-biased (conducting). For example, diodes are used in (stepper motor and H-bridge) motor controller and relay circuits to de-energize coils rapidly without the damaging voltage spikes that would otherwise occur. (A diode used in such an application is called a flyback diode). Many integrated circuits also incorporate diodes on the connection pins to prevent external voltages from damaging their sensitive transistors. Specialized diodes are used to protect from over-voltages at higher power (see Diode types above).
Diodes can be combined with other components to construct AND and OR logic gates. This is referred to as diode logic.
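A rough model of how such diode gates behave is sketched below; it assumes an idealized diode-resistor arrangement with a fixed 0.7 V silicon forward drop, so the numbers are illustrative rather than a description of any particular circuit.

```python
# A minimal sketch (assumed) of idealized diode-resistor logic.
# An OR gate's output follows the highest input; an AND gate (with a pull-up
# resistor to the supply) follows the lowest input. Each conducting silicon
# diode is assumed to drop about 0.7 V.

V_SUPPLY = 5.0
V_DIODE = 0.7   # assumed silicon forward drop

def diode_or(*inputs):
    return max(0.0, max(inputs) - V_DIODE)

def diode_and(*inputs):
    return min(V_SUPPLY, min(inputs) + V_DIODE)

for a in (0.0, 5.0):
    for b in (0.0, 5.0):
        print(f"A={a:.0f} V, B={b:.0f} V -> OR={diode_or(a, b):.1f} V, AND={diode_and(a, b):.1f} V")
```

The printed table shows why diode logic cannot be cascaded indefinitely: each gate loses one diode drop, so signal levels degrade without amplification.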
In addition to light, mentioned above, semiconductor diodes are sensitive to more energetic radiation. In electronics, cosmic rays and other sources of ionizing radiation cause noise pulses and single and multiple bit errors.
This effect is sometimes exploited by particle detectors to detect radiation. A single particle of radiation, with thousands or millions of electron volts of energy, generates many charge carrier pairs, as its energy is deposited in the semiconductor material. If the depletion layer is large enough to catch the whole shower or to stop a heavy particle, a fairly accurate measurement of the particle's energy can be made, simply by measuring the charge conducted and without the complexity of a magnetic spectrometer, etc.
These semiconductor radiation detectors need efficient and uniform charge collection and low leakage current. They are often cooled by liquid nitrogen. For longer-range (about a centimetre) particles, they need a very large depletion depth and large area. For short-range particles, they need any contact or un-depleted semiconductor on at least one surface to be very thin. The back-bias voltages are near breakdown (around a thousand volts per centimetre). Germanium and silicon are common materials. Some of these detectors sense position as well as energy.
They have a finite life, especially when detecting heavy particles, because of radiation damage. Silicon and germanium are quite different in their ability to convert gamma rays to electron showers.
Semiconductor detectors for high-energy particles are used in large numbers. Because of energy loss fluctuations, accurate measurement of the energy deposited is of less use.
A diode can be used as a temperature measuring device, since the forward voltage drop across the diode depends on temperature, as in a silicon bandgap temperature sensor. From the Shockley ideal diode equation given above, it might "appear" that the voltage has a "positive" temperature coefficient (at a constant current), but usually the variation of the reverse saturation current term is more significant than the variation in the thermal voltage term. Most diodes therefore have a "negative" temperature coefficient, typically −2 mV/°C for silicon diodes. The temperature coefficient is approximately constant for temperatures above about 20 kelvin. Published graphs of this behavior exist for the 1N400x series and for the CY7 cryogenic temperature sensor.
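A minimal sketch of the measurement idea follows, treating the roughly −2 mV/°C coefficient quoted above as constant; the reference voltage and calibration temperature are assumed values for illustration.

```python
# A minimal sketch (assumed values) of using a silicon diode's forward-voltage
# drop as a thermometer, with a constant temperature coefficient of about -2 mV/°C.

V_REF = 0.600    # assumed forward drop at the reference temperature, volts
T_REF = 25.0     # assumed reference (calibration) temperature, °C
TEMPCO = -0.002  # approximately -2 mV/°C for a silicon diode

def temperature_from_vf(v_forward):
    """Estimate junction temperature (°C) from a measured forward voltage."""
    return T_REF + (v_forward - V_REF) / TEMPCO

print(temperature_from_vf(0.600))  # 25.0 °C at the calibration point
print(temperature_from_vf(0.550))  # 50.0 °C: the voltage falls as temperature rises
```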
Diodes will prevent currents in unintended directions. To supply power to an electrical circuit during a power failure, the circuit can draw current from a battery. An uninterruptible power supply may use diodes in this way to ensure that current is only drawn from the battery when necessary. Likewise, small boats typically have two circuits each with their own battery/batteries: one used for engine starting; one used for domestics. Normally, both are charged from a single alternator, and a heavy-duty split-charge diode is used to prevent the higher-charge battery (typically the engine battery) from discharging through the lower-charge battery when the alternator is not running.
Diodes are also used in electronic musical keyboards. To reduce the amount of wiring needed in electronic musical keyboards, these instruments often use keyboard matrix circuits. The keyboard controller scans the rows and columns to determine which note the player has pressed. The problem with matrix circuits is that, when several notes are pressed at once, the current can flow backwards through the circuit and trigger "phantom keys" that cause "ghost" notes to play. To avoid triggering unwanted notes, most keyboard matrix circuits have diodes soldered with the switch under each key of the musical keyboard. The same principle is also used for the switch matrix in solid-state pinball machines.
Diodes can be used to limit the positive or negative excursion of a signal to a prescribed voltage.
A diode clamp circuit can take a periodic alternating current signal that oscillates between positive and negative values, and vertically displace it such that either the positive, or the negative peaks occur at a prescribed level. The clamper does not restrict the peak-to-peak excursion of the signal, it moves the whole signal up or down so as to place the peaks at the reference level.
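The vertical-shift behavior can be demonstrated numerically; the sketch below models only the end result of an ideal positive clamp (a pure vertical shift of the waveform) and does not simulate the diode or capacitor, and the test signal is arbitrary.

```python
# A minimal sketch (assumed) of what an ideal diode clamp does: it shifts the
# whole waveform so its negative peaks sit at a chosen reference level, leaving
# the peak-to-peak amplitude unchanged.
import numpy as np

t = np.linspace(0, 2e-3, 1000)
v_in = 3.0 * np.sin(2 * np.pi * 1_000 * t)   # ±3 V, 1 kHz test signal
v_ref = 0.0                                  # clamp the negative peaks to 0 V

v_out = v_in - v_in.min() + v_ref            # ideal clamp: a pure vertical shift

print(f"input:  {v_in.min():+.1f} V to {v_in.max():+.1f} V")
print(f"output: {v_out.min():+.1f} V to {v_out.max():+.1f} V (same peak-to-peak swing)")
```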
Diodes are usually referred to as "D" for diode on PCBs. Sometimes the abbreviation "CR" for "crystal rectifier" is used. | https://en.wikipedia.org/wiki?curid=8254 |
Drexel University
Drexel University is a private research university with its main campus in Philadelphia, Pennsylvania. It was founded in 1891 by Anthony J. Drexel, a financier and philanthropist. Founded as Drexel Institute of Art, Science, and Industry, it was renamed Drexel Institute of Technology in 1936, before assuming the name Drexel University in 1970.
More than 26,000 students were enrolled in over 70 undergraduate programs and more than 100 master's, doctoral, and professional programs at the university. Drexel's cooperative education program (co-op) is a prominent aspect of the school's degree programs, offering students the opportunity to gain up to 18 months of paid, full-time work experience in a field relevant to their undergraduate major or graduate degree program prior to graduation.
Drexel University was founded in 1891 as the Drexel Institute of Art, Science and Industry, by Philadelphia financier and philanthropist Anthony J. Drexel. The original mission of the institution was to provide educational opportunities in the "practical arts and sciences" for women and men of all backgrounds. The institution became known as the Drexel Institute of Technology in 1936, and in 1970 the Drexel Institute of Technology gained university status, becoming Drexel University.
Although there were many changes during its first century, the university's identity has been held constant as a privately controlled, non-sectarian, coeducational center of higher learning, distinguished by a commitment to practical education and hands-on experience in an occupational setting. The central aspect of Drexel University's focus on career preparation, in the form of its cooperative education program, was introduced in 1919. The program became integral to the university's unique educational experience. Participating students alternate periods of classroom-based study with periods of full-time, practical work experience related to their academic major and career interests.
Between 1995 and 2009, Drexel University underwent a period of significant change to its programs, enrollment, and facilities under the leadership of Dr. Constantine Papadakis, the university's president during that time. Papadakis oversaw Drexel's largest expansion in its history, with a 471 percent increase in its endowment and a 102 percent increase in student enrollment. His leadership also guided the university toward improved performance in collegiate rankings, a more selective approach to admissions, and a more rigorous academic program at all levels. It was during this period of expansion that Drexel acquired and assumed management of the former MCP Hahnemann University, creating the Drexel University College of Medicine in 2002. In 2006, the university established the Thomas R. Kline School of Law, and in 2011 the School of Law achieved full accreditation by the American Bar Association.
Dr. Constantine Papadakis died of pneumonia in April 2009 while still employed as the university's president. His successor, John Anderson Fry, was formerly the president of Franklin & Marshall College and served as the Executive Vice President of the University of Pennsylvania. Under Fry's leadership, Drexel has continued its expansion, including the July 2011 acquisition of The Academy of Natural Sciences.
The College of Arts and Sciences was formed in 1990 when Drexel merged the two existing colleges, the College of Sciences and the College of Humanities.
The College of Media Arts and Design "fosters the study, exploration and management of the arts: media, design, the performing and visual". The college offers sixteen undergraduate programs and six graduate programs in modern art and design fields ranging from graphic design and dance to fashion design and television management. Its wide range of programs has helped the college earn full accreditation from the National Association of Schools of Art and Design, the National Architectural Accrediting Board, and the Council for Interior Design Accreditation.
The Bennett S. LeBow College of Business traces its history to the 1891 founding of the Drexel Institute, which later became Drexel University, and to the establishment of its Business Department in 1896. Today LeBow offers thirteen undergraduate majors, eight graduate programs, and two doctoral programs; 22 percent of Drexel University's undergraduate students are enrolled in a LeBow College of Business program.
The LeBow College of Business has been ranked as the 38th best private business school in the nation. Its online MBA program is ranked 14th in the world by the "Financial Times"; the publication also ranks the undergraduate business program at LeBow as 19th in the United States. The part-time MBA program ranks 1st in academic quality in the 2015 edition of "Business Insider's" rankings. Undergraduate and graduate entrepreneurship programs are ranked 19th in the country by the "Princeton Review".
Economics programs at the LeBow College of Business are housed within the School of Economics. In addition to the undergraduate program in economics, the school is home to an M.S. in Economics program as well as a PhD program in economics. Faculty members in the School of Economics have been published in the "American Economic Review", "Rand Journal of Economics", and "Review of Economics and Statistics." The school has been ranked among the best in the world for its extensive research into matters of international trade.
Drexel's College of Engineering is one of its oldest and largest academic colleges, and served as the original focus of the career-oriented school upon its founding in 1891. The College of Engineering is home to several notable alumni, including two astronauts; financier Bennett S. LeBow, for whom the university's College of Business is named; and Paul Baran, inventor of the packet-switched network. Today, Drexel University's College of Engineering, which is home to 19 percent of the undergraduate student body, is known for creating the world's first engineering degree in appropriate technology. The college is also one of only 17 U.S. universities to offer a bachelor's degree in architectural engineering, and only one of five private institutions to do so.
The 2006 edition of "U.S. News & World Report" ranked the undergraduate engineering program 57th in the country, and the 2007 graduate-school edition ranked the graduate program 61st. The 2008 edition ranked the engineering program 55th, and in the 2009 rankings the university moved up to 52nd.
The engineering curriculum used by the school was originally called E4 (Enhanced Educational Experience for Engineers), established in 1986 and funded in part by the Engineering Directorate of the National Science Foundation. In 1988 the program evolved into tDEC (the Drexel Engineering Curriculum), composed of two full years of rigorous core engineering courses spanning the freshman and sophomore years. The College of Engineering has not used the tDEC curriculum since approximately 2005.
The College of Computing and Informatics is a recent addition to Drexel University, though its programs have been offered to students for many years. The college was formed by the consolidation of the former College of Information Science & Technology (often called the "iSchool"), the Department of Computer Science, and the Computing and Security Technology program. Undergraduate and graduate programs in computer science, software engineering, data science, information systems, and computer security are offered by the college.
The Drexel University College of Medicine was added to the colleges and schools of the university in 2002, having been formed upon the acquisition of MCP Hahnemann University. In addition to its M.D. program, the College of Medicine offers more than 40 graduate programs in its Graduate School of Biomedical Sciences and Professional Studies.
The Graduate School of Biomedical Sciences and Professional studies offers both Master of Science and Doctor of Philosophy degree programs in fields like biochemistry, biotechnology, clinical research, and forensic science. The school also serves as the center for biomedical research at Drexel University.
Founded in 1961 as the United States’ first Biomedical Engineering and Science Institute, the School of Biomedical Engineering, Science and Health Systems focuses on the emerging field of biomedical science at the undergraduate, graduate, and doctoral levels. Primary research areas within the school include bioinformatics, biomechanics, biomaterials, neuroengineering, and cardiovascular engineering.
Formed in 2002 along with the College of Medicine, Drexel's College of Nursing and Health Professions offers more than 25 programs to undergraduate and graduate students in the fields of nursing, nutrition, health sciences, health services, and radiologic technology. The college's research into matters of nutrition and rehabilitation has garnered approximately $2.9 million in external research funding on an annual basis. The physician assistant program at Drexel's College of Nursing and Health Professions is ranked in the top 15 such programs in the United States; its anesthesia programs and physical therapy programs are, respectively, ranked as top-50 programs nationwide.
Established in 1892, the department now known as the College of Professional Studies has focused exclusively on educational programs and pursuits for nontraditional adult learners. Today, the Goodwin College of Professional Studies offers several options designed for adult learners at all stages of career and educational development. Bachelor of Science degree completion programs are offered in part-time evening or weekend formats; master's and doctoral programs are offered at the graduate level, as are self-paced "continuing education" courses and nearly a dozen self-paced certification programs.
The Pennoni Honors College, named for Drexel alumnus and trustee Dr. C.R. "Chuck" Pennoni '63, '66, Hon. '92, and his wife Annette, recognizes and promotes excellence among Drexel students. Students admitted to the Honors College live together and take many of the same classes; the college provides these students with access to unique cultural and social activities and a unique guest speaker series. Students are also involved in the university's Honors Student Advisory Committee and have the opportunity to take part in Drexel's "Alternative Spring Break", an international study tour held each spring.
Upon its founding in 2006, the Thomas R. Kline School of Law, originally known as the Earle Mack School of Law, was the first law school founded in Philadelphia in more than three decades. The School of Law offers LL.M. and Master of Legal Studies degrees in addition to the flagship Juris Doctor program, and uniquely offers cooperative education as part of its curriculum across all programs. In 2015, "Bloomberg Business" ranked the Kline School of Law as the second most underrated law school in the United States.
One of the oldest schools within Drexel University, the modern School of Education dates back to the 1891 founding of the school. Originally, the Department of Education offered teacher training to women as one of its original, career-focused degree programs. Today, the School of Education offers a coeducational approach to teacher training at the elementary and secondary levels for undergraduates. Other undergraduate programs include those focused on the intersection between learning and technology, teacher certification for non-education majors, and a minor in education for students with an interest in instruction. Graduate degrees offered by the School of Education include those in administration and leadership, special education, higher education, mathematics education, international education, and educational creativity and innovation. Doctoral degrees are offered in educational leadership and learning technologies.
The School of Public Health states that its mission is to "provide education, conduct research, and partner with communities and organizations to improve the health of populations". To that end, the school offers both a B.S. and a minor in public health for undergraduate students as well as several options for students pursuing graduate and doctoral degrees in the field. At the graduate level, the Dornsife School offers both a Master of Public Health and an Executive Master of Public Health, as well as an M.S. in biostatistics and an M.S. in epidemiology. Two Doctor of Public Health degrees are also offered, as is a Doctor of Philosophy in epidemiology. The school's graduate and doctoral students are heavily invested in the research activities of the Dornsife School of Public Health, which has helped the school attract annual funding for its four research centers.
The Center for Hospitality and Sport Management was formed in 2013, in an effort to house and consolidate academic programs in hospitality, tourism management, the culinary arts, and sport management. Academic programs combine the unique skills required of the sports and hospitality industries with the principles and curriculum espoused by the management programs within Drexel's LeBow College of Business.
Focusing specifically on the skills required to successfully start and launch a business, the Charles D. Close School of Entrepreneurship is the first and only freestanding school of entrepreneurship in the United States. Undergraduate students take part in a B.A. program in entrepreneurship and innovation, while graduate students can pursue a combined Master of Science degree in biomedicine and entrepreneurship. Minors in entrepreneurship are also offered to undergraduate students.
Housed within the Close School is the Baiada Institute for Entrepreneurship. The institute serves as an incubator for Drexel student startups, providing resources and mentorships to students and some post-graduates who are starting their own business while enrolled in one of the Close School's degree programs or academic minors.
Drexel University launched its first Internet-based education program, a master's degree in Library & Information Science, in 1996. In 2001, Drexel created its wholly owned, for-profit online education subsidiary, Drexel e-Learning, Inc., better known as Drexel University Online. It was announced in October 2013 that Drexel University Online would no longer be a for-profit venture, but rather become an internal division within the university to better serve its online student population. Although headquartered in Philadelphia, Drexel announced a new Washington, D.C., location in December 2012 to serve as both an academic and outreach center, catering to the online student population.
In an effort to create greater awareness of distance learning and to recognize exceptional leaders and best practices in the field, Drexel University Online founded National Distance Learning Week, in conjunction with the United States Distance Learning Association, in 2007. In September 2010, Drexel University Online received the Sloan-C award for institution-wide excellence in online education, indicating that it had exceptional programs of "demonstrably high quality" at the regional and national levels and across disciplines. Drexel University Online won the 2008 United States Distance Learning Association's Best Practices Award for Distance Learning Programming. In 2007, the online education subsidiary had revenue of $40 million. In March 2013, Drexel Online had more than 7,000 unique students from all 50 states and more than 20 countries pursuing a bachelor's degree, master's degree, or certificate. Drexel University Online offers more than 100 fully accredited master's degrees, bachelor's degrees, and certificate programs.
Drexel's longstanding cooperative education, or "co-op", program is one of the largest and oldest in the United States. Drexel has a fully internet-based job database, where students can submit résumés and request interviews with any of the thousands of companies that offer positions. Students also have the option of obtaining a co-op via independent search. A student graduating from Drexel's 5-year degree program typically has a total of 18 months of co-op with up to three different companies. The majority of co-ops are paid, averaging $15,912 per 6-month period, although this figure varies with major. About one third of Drexel graduates are offered full-time positions by their co-op employers right after graduation.
Drexel is classified among doctoral research universities in the Carnegie Classification of Institutions of Higher Education. The university was ranked 51st in the 2018 edition of the "Top 100 Worldwide Universities Granted U.S. Utility Patents" list released by the National Academy of Inventors and the Intellectual Property Owners Association.
Research Centers and Institutes at Drexel include:
In its 2020 rankings, "U.S. News & World Report" ranked Drexel tied for 97th among national universities in the United States, 23rd in the "Most Innovative Schools" category, and 74th in "Best Value Schools".
In its 2018 rankings, "Times Higher Education World University Rankings" and the "Wall Street Journal" ranked Drexel 74th among national universities and 351st-400th among international universities.
In its 2018 rankings, "Forbes" ranked Drexel 24th among STEM universities. In 2019, it also ranked Drexel 226th among 650 national universities, liberal arts colleges and service academies, 120th among research universities, 154th among private universities, and 96th among universities in the Northeast.
In 2016, "Bloomberg Businessweek" ranked the undergraduate business program 78th in the country. In 2014, Business Insider ranked Drexel's graduate business school 19th in the country for networking.
In 2014, "The Princeton Review" ranked Drexel 20th in its list of worst college libraries.
Drexel University's programs are divided across three Philadelphia-area campuses: the University City Campus, the Center City Campus and the Queen Lane College of Medicine Campus.
The University City Main Campus of Drexel University is located just west of the Schuylkill River in the University City district of Philadelphia. It is Drexel's largest and oldest campus; the campus contains the university's administrative offices and serves as the main academic center for students. The northern, residential portion of the main campus is located in the Powelton Village section of West Philadelphia. The two prominent performing stages at Drexel University are the Mandell Theater and the Main Auditorium. The Main Auditorium dates back to the founding of Drexel and construction of its main hall. It features over 1000 seats, and a pipe organ installed in 1928. The organ was purchased by Saturday Evening Post publisher Cyrus H. K. Curtis after he had donated a similar organ, the Curtis Organ, to nearby University of Pennsylvania and it was suggested that he do the same for Drexel. The 424-seat Mandell Theater was built in 1973 and features a more performance-oriented stage, including a full fly system, modern stage lighting facilities, stadium seating, and accommodations for wheelchairs. It is used for the semiannual spring musical, as well as various plays and many events.
The Queen Lane Campus was purchased by Drexel University as part of its acquisition of MCP Hahnemann University. It is located in the East Falls neighborhood of northwest Philadelphia and is primarily utilized by first- and second-year medical students, and researchers. A free shuttle is available, connecting the Queen Lane Campus to the Center City Hahnemann and University City Main campuses.
The Center City Campus is in the middle of Philadelphia, straddling the Vine Street Expressway between Broad and 15th Streets. Shuttle service is offered between the Center City Campus and both the University City and Queen Lane campuses of the university.
In 2011, The Academy of Natural Sciences entered into an agreement to become a subsidiary of Drexel University. Founded in 1812, the Academy of Natural Sciences is America's oldest natural history museum and is a world leader in biodiversity and environmental research.
On January 5, 2009, Drexel University opened the Center for Graduate Studies in Sacramento, California. Eventually renamed Drexel University Sacramento upon the addition of an undergraduate program in business administration, the campus also offered an Ed.D. program in Educational Leadership and Management and master's degree programs in Business Administration, Finance, Higher Education, Human Resource Development, Public Health, and Interdepartmental Medical Science. On March 5, 2015, Drexel University announced the closure of the Sacramento campus, with an 18-month "phase out" period designed to allow current students to complete their degrees.
The Undergraduate Student Government Association of Drexel University works with administrators to solve student problems and tries to promote communication between the students and the administration.
The Graduate Student Association "advocates the interests and addresses concerns of graduate students at Drexel; strives to enhance graduate student life at the University in all aspects, from academic to campus security; and provides a formal means of communication between graduate students and the University community".
The Campus Activities Board (CAB) is an undergraduate, student-run event planning organization. CAB creates events for the undergraduate population. To assist with planning and organization, the Campus Activities Board is broken down into 5 committees: Special Events, Traditions, Marketing, Culture and Discovery, and Performing and Fine Arts.
Drexel has an approximate Jewish population of 5% and has both a Chabad House and a Hillel. Both provide services to Jewish and non-Jewish students at Drexel. Due to a recent influx of Orthodox Jewish students, the Chabad now has its own daily kosher meal plan. The Hillel also offers hot kosher food, but only on select nights. There is also an eruv, which is jointly managed by Jewish students from Drexel and the University of Pennsylvania.
WKDU is Drexel's student-run FM radio station, with membership open to all undergraduate students. Its status as an 800-watt, non-commercial station in a major market city has given it a wider audience and a higher profile than many other college radio stations.
DUTV is Drexel's Philadelphia cable television station. The student-operated station is part of the Paul F. Harron Studios at Drexel University. The purpose of DUTV is to provide "the people of Philadelphia with quality educational television, and providing Drexel students the opportunity to gain experience in television management and production". The programming includes an eclectic variety of shows, from a bi-monthly news show, DNews, to old films, talk shows dealing with important current issues, and music appreciation shows. Over 75 percent of DUTV's programming is student produced.
"The Triangle" has been the university's newspaper since 1926 and currently publishes on a weekly basis every Friday. The yearbook was first published in 1911 and named the Lexerd in 1913. Prior to the publishing of a campus wide yearbook in 1911 "The Hanseatic" and "The Eccentric" were both published in 1896 as class books. Other publications include "MAYA", the undergraduate student literary and artistic magazine; "D&M Magazine", Design & Merchandising students crafted magazine; "The Smart Set from Drexel University", an online magazine founded in 2005; and "The Drexelist" a blog-style news source founded in 2010.
The Drexel Publishing Group serves as a medium for literary publishing on campus. The Drexel Publishing Group oversees "ASK" (The Journal of the College of Arts and Sciences at Drexel University), "Painted Bride Quarterly", a 36-year-old national literary magazine housed at Drexel; "The 33rd", an annual anthology of student and faculty writing at Drexel; "DPG Online Magazine", and "Maya", the undergraduate literary and artistic magazine. The Drexel Publishing Group also serves as a pedagogical organization by allowing students to intern and work on its publications.
Drexel requires all non-commuting first- and second-year students to live in one of its ten residence halls or in "university approved housing". First year students must live in one of the residence halls designated specifically for first-years. These residence halls include Millennium, Bentley, Kelly, Myers, Towers, Van Rensselaer, North, and Race Halls. Kelly, Myers, Towers, and Bentley Halls are traditional residence halls (a bedroom shared with one or more roommate(s) and one bathroom per floor), while Race, North, Caneris, and Van Rensselaer Halls are suite-style residence halls (shared bedrooms, private bathrooms, kitchens, and common area within the suite). Millennium Hall, Drexel's newest residence hall, is a modified suite (a bedroom shared with one roommate, and bathrooms and showers that look like closets with open sinks in the hallway).
Each residence hall is designed to facilitate the Freshman Experience in a slightly different way. Millennium, Kelly, and Towers Halls are all typical residence halls. Myers Hall offers "Living Learning Communities" where a group of students who share common interests such as language or major live together. Most of Bentley Hall is reserved for students of the Pennoni Honors College, although some floors are occupied by other students.
Second-year students have the option of living in a residence hall designated for upperclassmen, or in "university approved housing". The residence halls for upperclassmen are North and Caneris Halls. North Hall operates under the For Students By Students Residential Experience Engagement Model, developed by the Residential Living Office. There are many university-approved apartments that second-year students can choose to live in. Three of the largest apartment buildings that fit this description are Chestnut Square, University Crossings, and The Summit, all owned by American Campus Communities. Many other students live in smaller apartment buildings or individual townhouse-style apartments in Powelton Village. A second-year student can choose one of the already listed university approved housing options or petition the university to add a new property to the approved list. While living in a university approved apartment offers the freedom of living outside a residence hall, the Drexel co-op system leads many students to remain in the residence halls, because the halls operate on a quarter-to-quarter basis and do not lock students into leases.
Graduate students can live in Stiles Hall.
All residence halls except Caneris Hall and Stiles Memorial Hall are located north of Arch Street between 34th Street and 32nd Street in the Powelton Village area.
Drexel University recognizes over 250 student organizations in the following categories:
The following groups are recognized as honors or professional organizations under the Office of Campus Activities and are not considered part of social Greek life at Drexel University.
Approximately 12 percent of Drexel's undergraduate population are members of a social Greek-letter organization. There are currently fourteen Interfraternity Council (IFC) chapters, seven Panhellenic Council (PHC) chapters and thirteen Multi-cultural Greek Council (MGC) chapters.
Two IFC chapters have been awarded Top Chapters in 2008 by their respective national organizations; Pi Kappa Alpha, and Alpha Chi Rho. In 2013, Sigma Phi Epsilon and Alpha Epsilon Pi were awarded the Top Chapter award by their respective national headquarters.
Drexel's school mascot is a dragon known as "Mario the Magnificent", named in honor of alumnus and Board of Trustees member Mario V. Mascioli. The Dragon has been the mascot of the school since around the mid-1920s; the first written reference to the Dragons occurred in 1928, when the football team was called "The Dragons in The Triangle". Before becoming known as the Dragons, the athletic teams had been known by such names as the Blue & Gold, the Engineers, and the Drexelites. The school's sports teams, now known as the Drexel Dragons, participate in the NCAA's Division I as a member of the Colonial Athletic Association. They do not currently field a varsity football team.
In addition to its NCAA Division I teams, Drexel University is home to 33 active club teams including men's ice hockey, lacrosse, water polo, squash, triathlon, and cycling. Other club teams include soccer, baseball, rugby, field hockey, and roller hockey. The club teams operate under the direction of the Club Sports Council and the Recreational Sports Office.
Tradition suggests that rubbing the toe of the bronze "Waterboy" statue, located in the Main Building atrium, can result in receiving good grades on exams. Although the rest of the bronze statue has developed a dark brown patina over the years, the toe has remained highly polished and shines like new.
Frustrated by unresponsive university administrators, students throughout Drexel's history have spoken of a "Drexel Shaft" to describe their interactions with the administration during their academic career at the school. The "Drexel Shaft" was once associated with the Flame of Knowledge fountain, now located in front of North Hall. As the legend of the Drexel Shaft grew larger, however, the "shaft" itself grew alongside the legend. Eventually, the chimney atop the Amtrak Boiler House in Penn Coach Yard, located just east of 32nd Street on the University City main campus, came to embody the unresponsive treatment that frustrated many students during their time at Drexel. The smokestack was demolished, to cheers from students and faculty members alike, on November 15, 2009, in what the university community hopes will be a transformation of both the campus' aesthetics and the legend of the "Drexel Shaft" itself.
Drexel has appeared in news and television media several times. In 2006 Drexel served as the location for ABC Family's reality show "Back on Campus". Also in 2006, the Epsilon Zeta chapter of Delta Zeta won ABC Daytime's Summer of Fun contest. As a result, the sorority was featured in national television spots for a week and hosted an ABC party on campus, which was attended by cast members from "General Hospital" and "All My Children".
John Langdon, who taught typography in the Antoinette Westphal College of Media Arts & Design from 1988 to 2015, created the ambigram featured on the cover of Dan Brown's Angels & Demons; a number of other ambigrams served as the central focus of the book and its corresponding film. It is believed Prof. Langdon was the inspiration for the name of the lead character, played by Tom Hanks in the film adaptation.
In 2007, Drexel was the host of the 2008 Democratic Presidential candidate debate in Philadelphia, televised by MSNBC. The university hosted the US Table Tennis Olympic Trials between January 10 and 13, 2008. Drexel University also hosted the 2011 U.S. Open Squash Championships from October 1–6, 2011, as well as the 2012 U.S. Open Squash Championships from October 4–12, 2012.
Since its founding the university has graduated over 100,000 alumni. Certificate-earning alumni such as artist Violet Oakley and illustrator Frank Schoonover reflect the early emphasis on art as part of the university's curriculum. With World War II, the university's technical programs swelled, and as a result Drexel graduated alumni such as Paul Baran, one of the founding fathers of the Internet and one of the inventors of the packet switching network, and Norman Joseph Woodland, the inventor of barcode technology. In addition to its emphasis on technology, Drexel has graduated several notable athletes, such as National Basketball Association (NBA) basketball players Michael Anderson and Malik Rose, and several notable business people, such as Raj Gupta, former President and Chief Executive Officer (CEO) of Rohm and Haas, and Kenneth C. Dahlberg, former CEO of Science Applications International Corporation (SAIC). Other notable alumni include Alassane Dramane Ouattara, President of the Republic of Ivory Coast. In 2018, Tirthak Saha, a 2016 graduate of the electrical and computer engineering program, was named to the Forbes 30 Under 30 list for achievements in the energy field.
In 1991, the university's centennial anniversary, Drexel created an association called the Drexel 100 for alumni who have demonstrated excellence in work, philanthropy, or public service. After the creation of the association, 100 alumni were inducted in 1992, and since then inductions have occurred on a biennial basis. By 2006, 164 alumni in total had been inducted into the association.
Drexel University created the annual $100,000 Anthony J. Drexel Exceptional Achievement Award to recognize a faculty member from a U.S. institution whose work transforms both research and the society it serves. The first recipient was bioengineer James J. Collins of Boston University (now at MIT) and the Howard Hughes Medical Institute.
In 2004, in conjunction with BAYADA Home Health Care, Drexel University's College of Nursing and Health Professions created the BAYADA Award for Technological Innovation in Nursing Education and Practice. The award honors nursing educators and practicing nurses whose innovation leads to improved patient care or improved nursing education. | https://en.wikipedia.org/wiki?curid=8256 |
Daedalus
In Greek mythology, Daedalus (Etruscan: "Taitale") was a skillful architect, craftsman and artist, and was seen as a symbol of wisdom, knowledge, and power. He is the father of Icarus, the uncle of Perdix, and possibly also the father of Iapyx, although this is unclear. He invented and built the Labyrinth for King Minos of Crete, but shortly after finishing it King Minos had Daedalus imprisoned within the labyrinth. He and his son Icarus devised a plan to escape by using wings made of wax that Daedalus had invented. They escaped, but Icarus did not heed his father's warnings and flew too close to the sun; the wax melted and Icarus fell to his death. The grieving Daedalus flew on to the island of Sicily.
Daedalus's parentage was supplied as a later addition, providing him with a father in Metion, Eupalamus, or Palamaon, and a mother, Alcippe, Iphinoe, or Phrasmede. Daedalus had two sons, Icarus and Iapyx, along with a nephew, named either Talos or Perdix.
Athenians transferred Cretan Daedalus to make him Athenian-born, the grandson of the ancient king Erechtheus, claiming that Daedalus fled to Crete after killing his nephew Talos. Over time, other stories were told of Daedalus.
Daedalus is first mentioned by Homer as the creator of a wide dancing-ground for Ariadne. He also created the Labyrinth on Crete, in which the Minotaur (part man, part bull) was kept. In the story of the labyrinth as told by the Hellenes, the Athenian hero Theseus is challenged to kill the Minotaur, finding his way with the help of Ariadne's thread. Daedalus' appearance in Homer is in an extended metaphor, "plainly not Homer's invention", Robin Lane Fox observes: "He is a point of comparison and so he belongs in stories which Homer's audience already recognized." In Bronze Age Crete, an inscription has been read as referring to a place at Knossos, and a place of worship.
In Homer's language, "daidala" refers to finely crafted objects. They are mostly objects of armor, but fine bowls and furnishings are also "daidala", and on one occasion so are the "bronze-working" of "clasps, twisted brooches, earrings and necklaces" made by Hephaestus while cared for in secret by the goddesses of the sea.
Ignoring Homer, later writers envisaged the Labyrinth as an edifice rather than a single dancing path to the center and out again, and gave it numberless winding passages and turns that opened into one another, seeming to have neither beginning nor end. Ovid, in his "Metamorphoses", suggests that Daedalus constructed the Labyrinth so cunningly that he himself could barely escape it after he built it. Daedalus built the labyrinth for King Minos, who needed it to imprison his wife's son the Minotaur. The story is told that Poseidon had given a white bull to Minos so that he might use it as a sacrifice. Instead, Minos kept it for himself; and in revenge, Poseidon, with the help of Aphrodite, made Pasiphaë, King Minos's wife, lust for the bull. For Pasiphaë, as Greek mythologers interpreted it, Daedalus also built a wooden cow so she could mate with the bull, for the Greeks imagined the Minoan bull of the sun to be an actual, earthly bull, the slaying of which later required a heroic effort by Theseus.
The most familiar literary telling explaining Daedalus' wings is a late one, that of Ovid: in his "Metamorphoses" (VIII:183–235) Daedalus was shut up in a tower to prevent the knowledge of his Labyrinth from spreading to the public. He could not leave Crete by sea, as the king kept a strict watch on all vessels, permitting none to sail without being carefully searched. Since Minos controlled the land and sea routes, Daedalus set to work to fabricate wings for himself and his young son Icarus. He tied feathers together, from smallest to largest so as to form an increasing surface. He secured the feathers at their midpoints with string and at their bases with wax, and gave the whole a gentle curvature like the wings of a bird. When the work was done, the artist, waving his wings, found himself buoyed upward and hung suspended, poising himself on the beaten air. He next equipped his son in the same manner, and taught him how to fly. When both were prepared for flight, Daedalus warned Icarus not to fly too high, because the heat of the sun would melt the wax, nor too low, because the sea foam would soak the feathers.
They had passed Samos, Delos and Lebynthos by the time the boy, forgetting himself, began to soar upward toward the sun. The blazing sun softened the wax that held the feathers together and they came off. Icarus fell into the sea and drowned. His father cried, bitterly lamenting his own arts, and called the island near the place where Icarus fell into the ocean Icaria in memory of his child. Some time later, the goddess Athena visited Daedalus and gave him wings, telling him to fly like a god.
An early image of winged Daedalus appears on an Etruscan jug of ca 630 BC found at Cerveteri, where a winged figure captioned "Taitale" appears on one side of the vessel, paired on the other side, uniquely, with "Metaia", Medea: "its linking of these two mythical figures is unparalleled," Robin Lane Fox observes: "The link was probably based on their wondrous, miraculous art. Magically, Daedalus could fly, and magically Medea was able to rejuvenate the old (the scene on the jug seems to show her doing just this)". The image of Daedalus demonstrates that he was already well known in the West.
Further to the west Daedalus arrived safely in Sicily, in the care of King Cocalus of Kamikos on the island's south coast; there Daedalus built a temple to Apollo, and hung up his wings, an offering to the god. In an invention of Virgil ("Aeneid" VI), Daedalus flies to Cumae and founds his temple there, rather than in Sicily; long afterward Aeneas confronts the sculpted golden doors of the temple.
Minos, meanwhile, searched for Daedalus by traveling from city to city asking a riddle. He presented a spiral seashell and asked for a string to be run through it. When he reached Kamikos, King Cocalus, knowing Daedalus would be able to solve the riddle, privately fetched the old man to him. He tied the string to an ant which, lured by a drop of honey at one end, walked through the seashell stringing it all the way through. Minos then knew Daedalus was in the court of King Cocalus and demanded he be handed over. Cocalus managed to convince Minos to take a bath first, where Cocalus' daughters killed Minos. In some versions, Daedalus himself poured boiling water on Minos and killed him.
The anecdotes are literary and late; however, in the founding tales of the Greek colony of Gela, founded in the 680s on the southwest coast of Sicily, a tradition was preserved that the Greeks had seized cult images wrought by Daedalus from their local predecessors, the Sicani.
Daedalus was so proud of his achievements that he could not bear the idea of a rival. His sister had placed her son, named variously as Perdix, Talos, or Calos, under his charge to be taught the mechanical arts. The nephew was an apt scholar and showed striking evidence of ingenuity. Walking on the seashore, he picked up the spine of a fish. According to Ovid, imitating it, he took a piece of iron and notched it on the edge, and thus invented the saw. He put two pieces of iron together, connecting them at one end with a rivet and sharpening the other ends, and made a pair of compasses. Daedalus was so envious of his nephew's accomplishments that he took an opportunity and caused him to fall from the Acropolis. Athena turned Perdix into a partridge and left a scar shaped like a partridge on Daedalus' right shoulder; because of this, Daedalus left Athens.
Such anecdotal details as these were embroideries upon the reputation of Daedalus as an innovator in many arts. In Pliny's Natural History (7.198) he is credited with inventing carpentry "and with it the saw, axe, plumb-line, drill, glue, and isinglass". Pausanias, in travelling around Greece, attributed to Daedalus numerous archaic wooden cult figures (see "xoana") that impressed him: "All the works of this artist, though somewhat uncouth to look at, nevertheless have a touch of the divine in them."
It is said he first conceived of masts and sails for the ships of Minos' navy. He is said to have carved statues so well that they looked as if alive, even possessing self-motion. They would have escaped if not for the chain that bound them to the wall.
Daedalus gave his name, eponymously, to any Greek artificer and to many Greek contraptions that represented dextrous skill. At Plataea there was a festival, the Daedala, in which a temporary wooden altar was fashioned, and an effigy was made from an oak-tree and dressed in bridal attire. It was carried in a cart with a woman who acted as bridesmaid. The image was called "Daedale", and the archaic ritual was given an explanation through a myth devised for that purpose.
In the period of Romanticism, Daedalus came to denote the classic artist, a skilled mature craftsman, while Icarus symbolized the romantic artist, whose impetuous, passionate and rebellious nature, as well as his defiance of formal aesthetic and social conventions, may ultimately prove to be self-destructive. Stephen Dedalus, in Joyce's "Portrait of the Artist as a Young Man", envisages his future artist-self as "a winged form flying above the waves ... a hawk-like man flying sunward above the sea, a prophecy of the end he had been born to serve".
Daedalus is said to have created statues that were so realistic that they had to be tied down to stop them from wandering off. In "Meno", Socrates and Meno are debating the nature of knowledge and true belief when Socrates refers to Daedalus' statues: "... if they are not fastened up they play truant and run away; but, if fastened, they stay where they are." | https://en.wikipedia.org/wiki?curid=8258 |
Deception Pass
Deception Pass is a strait separating Whidbey Island from Fidalgo Island, in the northwest part of the U.S. state of Washington. It connects Skagit Bay, part of Puget Sound, with the Strait of Juan de Fuca. A pair of bridges known collectively as Deception Pass Bridge cross Deception Pass. The bridges were added to the National Register of Historic Places in 1982.
The Deception Pass area has been home to various Coast Salish tribes for thousands of years. The first Europeans to see Deception Pass were members of the 1790 expedition of Manuel Quimper on the "Princesa Real". The Spanish gave it the name "Boca de Flon". A group of sailors led by Joseph Whidbey, part of the Vancouver Expedition, found and mapped Deception Pass on June 7, 1792. George Vancouver gave it the name "Deception" because it had misled him into thinking Whidbey Island was a peninsula. The "deception" was heightened due to Whidbey's failure to find the strait at first. In May 1792, Vancouver was anchored near the southern end of Whidbey Island. He sent Joseph Whidbey to explore the waters east of Whidbey Island, now known as Saratoga Passage, using small boats. Whidbey reached the northern end of Saratoga Passage and explored eastward into Skagit Bay, which is shallow and difficult to navigate. He returned south to rejoin Vancouver without having found Deception Pass. It appeared that Skagit Bay was a dead-end and that Whidbey Island and Fidalgo Island were a long peninsula attached to the mainland. In June the expedition sailed north along the west coast of Whidbey Island. Vancouver sent Joseph Whidbey to explore inlets leading to the east. The first inlet turned out to be a "very narrow and intricate channel, which...abounded with rocks above and beneath the surface of the water". This channel led to Skagit Bay, thus separating Whidbey Island from the mainland. Vancouver apparently felt he and Joseph Whidbey had been deceived by the tricky strait. Vancouver wrote of Whidbey's efforts: "This determined [the shore they had been exploring] to be an island, which, in consequence of Mr. Whidbey’s circumnavigation, I distinguished by the name of Whidbey’s Island: and this northern pass, leading into [Skagit Bay], Deception Passage".
In the waters of Deception Pass, just east of the present-day Deception Pass Bridge, is a small island known as Ben Ure Island. The island became infamous for the smuggling of Chinese migrants for local labor. Ben Ure and his partner Lawrence "Pirate" Kelly ran a highly profitable smuggling business and played hide-and-seek with the United States Customs Department for years. Ure's own operation at Deception Pass in the late 1880s consisted of Ure and his Native American wife. Local tradition has it that his wife would camp on the nearby Strawberry Island (which was visible from the open sea) and signal him with a fire on the island's summit to alert him to whether or not it was safe to bring his human cargo ashore. For transport, Ure would tie the people up in burlap bags so that, if customs agents approached, he could easily toss the bags overboard. The tidal currents would carry the bodies of the drowned migrants to San Juan Island, to the north and west of the pass, and many ended up in what became known as Dead Man's Bay.
Between 1910 and 1914, a prison rock quarry was operated on the Fidalgo Island side of the pass. Nearby barracks housed some 40 prisoners, members of an honors program out of Walla Walla State Penitentiary; the prison population comprised several types of prisoners, including those convicted of murder. Guards stood watch at the quarry as the prisoners cut the rock into gravel and loaded it onto barges located at the base of the cliff atop the pass's waters. The quarried rock was then taken by barge to the Seattle waterfront. The camp was dismantled in 1924, and although abandoned as a quarry, the remains of the camp can still be found. The location, however, is hazardous, and over the years there have been several fatal accidents when visitors have ventured onto the steep cliffs.
Upon its completion on July 31, 1935, Deception Pass Bridge connected Whidbey Island to the tiny Pass Island, and Pass Island to Fidalgo Island. Prior to the bridge, travelers and businessmen used an inter-island ferry to commute between Fidalgo and Whidbey islands.
Deception Pass is a dramatic seascape where the tidal flow and whirlpools beneath the twin bridges connecting Fidalgo Island to Whidbey Island move quickly. During ebb and flood tide the current speed reaches about , flowing in opposite directions on the ebb and the flood. This swift current can lead to standing waves, large whirlpools, and roiling eddies. The phenomenon can be viewed from the twin bridges' pedestrian walkways or from the trail leading below the larger south bridge from the parking lot on the Whidbey Island side. Boats can be seen waiting on either side of the pass for the current to stop or change direction before going through. Thrill-seeking kayakers go there during large tide changes to surf the standing waves and brave the class 2 and 3 rapid conditions.
Diving Deception Pass is dangerous and only for the most competent and prepared divers. There are a few times each year that the tides are right for a drift dive from the cove, under the bridge, and back to the cove as the tide changes. These must be planned well in advance by divers who know how to read currents and are aware of the dangerous conditions. However, because of the large tidal exchange, Deception Pass hosts some of the most spectacular colors and life in the Pacific Northwest. The walls and bottom are covered in colorful invertebrates, lingcod, greenlings, and barnacles everywhere.
Deception Pass is surrounded by Deception Pass State Park, one of the most visited Washington state parks, with over two million annual visitors.
The park was officially established in 1923, when the original of a military reserve was transferred to Washington State Parks. The park's facilities were greatly enhanced in the 1930s when the Civilian Conservation Corps (CCC) built roads, trails, and buildings in order to develop the park. The road to West Beach was created in 1950, opening up a stretch of beach to hordes of vehicles. The former fish hatchery at Bowman Bay became a part of the park in the early 1970s. The old entrance to the park was closed in 1997 when a new entrance was created at the intersection of Highway 20 and Cornet Bay road, improving access into and out of the park.
The park's recreational facilities include campgrounds, hiking trails, beaches, and tidepools. Several miles of the Pacific Northwest Trail are within the park, most notably including the section that crosses Deception Pass on the Highway 20 bridge. In addition, the Cornet Bay Retreat Center provides cabins and dining and recreation facilities. Cornet Bay offers boat launches and fishing opportunities, while Bowman Bay has an interpretive center that explains the story of the Civilian Conservation Corps throughout Washington state. Near the center is a CCC honor statue, which can be found in 30 different states in the country. Fishing is popular in Pass Lake, on the north side of the bridge. Boat rentals and guided tours of the park are also offered.
Included in the park are ten islands: Northwest Island, Deception Island, Pass Island, Strawberry, Ben Ure, Kiket, Skagit, Hope, and Big and Little Deadman Islands. Ben Ure Island is partially privately owned. The island is not open to the public except for a small rentable cabin available via the state park, which is only accessible by rowboat.
The 2002 horror movie "The Ring" was in part filmed near the pass. The bridge is fictionalized as a toll bridge named "Desolation Bridge" in season one of The Killing. Seattle shoegaze act The Sight Below filmed the 2008 video for their track "Further Away" at Deception Pass, with Deception Island's scenic imagery prominently featured. Seattle grunge band Mudhoney named a song on their 1993 EP Five Dollar Bob's Mock Cooter Stew "Deception Pass." Seattle progressive rock band Queensrÿche filmed scenes of their video "Anybody Listening" near Deception Pass and Deception Island. | https://en.wikipedia.org/wiki?curid=8259 |
Dominoes
Dominoes is a family of tile-based games played with rectangular "domino" tiles. Each domino is a rectangular tile with a line dividing its face into two square "ends". Each end is marked with a number of spots (also called "pips", "nips", or "dobs") or is blank. The backs of the dominoes in a set are indistinguishable, either blank or having some common design. The domino gaming pieces make up a domino set, sometimes called a "deck" or "pack". The traditional Sino-European domino set consists of 28 dominoes, featuring all combinations of spot counts between zero and six. A domino set is a generic gaming device, similar to playing cards or dice, in that a variety of games can be played with a set.
The earliest mention of dominoes is from Song dynasty China found in the text "Former Events in Wulin" by Zhou Mi (1232–1298). Modern dominoes first appeared in Italy during the 18th century, but how Chinese dominoes developed into the modern game is unknown. Italian missionaries in China may have brought the game to Europe.
The name "domino" is most likely from the resemblance to a kind of carnival costume worn during the Venetian Carnival, often consisting of a black-hooded robe and a white mask. Despite the coinage of the word polyomino as a generalization, there is no connection between the word "domino" and the number 2 in any language.
European-style dominoes are traditionally made of bone or ivory, or a dark hardwood such as ebony, with contrasting black or white pips (inlaid or painted). Alternatively, domino sets have been made from many different natural materials: stone (e.g., marble, granite or soapstone); other woods (e.g., ash, oak, redwood, and cedar); metals (e.g., brass or pewter); ceramic clay, or even frosted glass or crystal. These sets have a more novel look, and the often heavier weight makes them feel more substantial; also, such materials and the resulting products are usually much more expensive than polymer materials.
Modern commercial domino sets are usually made of synthetic materials, such as ABS or polystyrene plastics, or Bakelite and other phenolic resins; many sets approximate the look and feel of ivory while others use colored or even translucent plastics to achieve a more contemporary look. Modern sets also commonly use a different color for the dots of each different end value (one-spots might have black pips while two-spots might be green, three red, etc.) to facilitate finding matching ends. Occasionally, one may find a domino set made of card stock like that for playing cards. Such sets are lightweight, compact, and inexpensive, and like cards are more susceptible to minor disturbances such as a sudden breeze. Sometimes, dominoes have a metal pin (called a spinner or pivot) in the middle.
The traditional set of dominoes contains one unique piece for each possible combination of two ends with zero to six spots, and is known as a double-six set because the highest-value piece has six pips on each end (the "double six"). The spots from one to six are generally arranged as they are on six-sided dice, but because blank ends having no spots are used, seven faces are possible, allowing 28 unique pieces in a double-six set.
However, this is a relatively small number, especially when playing with more than four people, so many domino sets are "extended" by introducing ends with greater numbers of spots, which increases the number of unique combinations of ends and thus of pieces. Each progressively larger set increases the maximum number of pips on an end by three, so the common extended sets are double-nine (55 tiles), double-12 (91 tiles), double-15 (136 tiles), and double-18 (190 tiles), which is the maximum in practice. Larger sets such as double-21 (253 tiles) could theoretically exist, but they seem to be extremely rare if not non-existent, as that would be far more than is normally necessary for most domino games even with eight players. As the set becomes larger, identifying the number of pips on each domino becomes more difficult, so some large domino sets use more readable Arabic numerals instead of pips.
The oldest confirmed written mention of dominoes in China comes from the "Former Events in Wulin" (i.e., the capital Hangzhou) written by the Yuan Dynasty (1271–1368) author Zhou Mi (1232–1298), who listed "pupai" (gambling plaques or dominoes), as well as dice as items sold by peddlers during the reign of Emperor Xiaozong of Song (r. 1162–1189). Andrew Lo asserts that Zhou Mi meant dominoes when referring to "pupai", since the Ming author Lu Rong (1436–1494) explicitly defined "pupai" as dominoes (in regard to a story of a suitor who won a maiden's hand by drawing out four winning "pupai" from a set).
The earliest known manual written about dominoes is the "Manual of the Xuanhe Period", written by Qu You (1341–1437), but some Chinese scholars believe this manual is a forgery from a later time.
In the "Encyclopedia of a Myriad of Treasures", Zhang Pu (1602–1641) described the game of laying out dominoes as "pupai", although the character for "pu" had changed, yet retained the same pronunciation. Traditional Chinese domino games include "Tien Gow, Pai Gow, Che Deng", and others. The 32-piece Chinese domino set, made to represent each possible face of two thrown dice and thus have no blank faces, differs from the 28-piece domino set found in the West during the mid 18th century. Chinese dominoes with blank faces were known during the 17th century.
Many different domino sets have been used for centuries in various parts of the world to play a variety of domino games. Each domino originally represented one of the 21 results of throwing two six-sided dice (2d6). One half of each domino is set with the pips from one die and the other half contains the pips from the second die. Chinese sets also introduce duplicates of some throws and divide the dominoes into two suits: military and civil. Chinese dominoes are also longer than typical European dominoes.
Dominoes made their way to Europe in the early 18th century, making their first appearance in Italy. The game changed somewhat in the translation from Chinese to European culture. European domino sets contain neither suit distinctions nor the duplicates that went with them. Instead, European sets contain seven additional dominoes, with six of these representing the values that result from throwing a single die with the other half of the tile left blank, and the seventh domino representing the blank-blank (0–0) combination.
Domino tiles (also known as "bones"), are normally twice as long as they are wide, which makes it easier to re-stack pieces after use. Tiles usually feature a line in the middle to divide them visually into two squares. The value of either side is the number of spots or pips. In the most common variant (double-six), the values range from six pips down to none or blank. The sum of the two values, i.e. the total number of pips, may be referred to as the rank or weight of a tile; a tile may be described as "heavier" than a "lighter" one that has fewer (or no) pips.
Tiles are generally named after their two values; for instance, a tile bearing the values two and five may be described as a "two-five" or a "five-two".
A tile that has the same pips-value on each end is called a double, and is typically referred to as double-zero, double-one, and so on. Conversely, a tile bearing different values is called a single.
Every tile which features a given number is a member of the suit of that number. A single tile is a member of two suits: for example, 0-3 belongs both to the suit of threes and the suit of blanks, or 0 suit.
In some versions the doubles can be treated as an additional suit of doubles. In these versions, the double-six belongs both to the suit of sixes and the suit of doubles. However, the dominant approach is that each double belongs to only one suit.
The most common domino sets commercially available are double six (with 28 tiles) and double nine (with 55 tiles). Larger sets exist and are popular for games involving several players or for players looking for long domino games.
The number of tiles in a double-n set obeys the following formula:
(n + 1)(n + 2) / 2
The total number of pips in a double-n set is found by:
n(n + 1)(n + 2) / 2, i.e. the number of tiles multiplied by the maximum pip-count (n),
e.g. a 6-6 set has (7 × 8) / 2 = 56/2 = 28 tiles; the average number of pips per tile is 6 (the range is from 0 to 12), giving a total pip count of 6 × 28 = 168.
This formula can be simplified a little when D is set equal to the total number of doubles in the domino set, D = n + 1: the total number of pips is then D(D − 1)(D + 1) / 2.
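These counts are easy to check numerically. The following short Python sketch (an illustration for this article, not part of any standard domino library) enumerates a double-n set and confirms both closed-form counts for the common set sizes mentioned above.

```python
from itertools import combinations_with_replacement

def domino_set(n):
    """Enumerate all tiles of a double-n set as (low, high) pairs."""
    return list(combinations_with_replacement(range(n + 1), 2))

for n in (6, 9, 12, 15, 18):
    tiles = domino_set(n)
    total_pips = sum(a + b for a, b in tiles)
    # Closed-form counts from the formulas above.
    assert len(tiles) == (n + 1) * (n + 2) // 2
    assert total_pips == n * (n + 1) * (n + 2) // 2
    print(f"double-{n}: {len(tiles)} tiles, {total_pips} pips")
```

For a double-six set this reports 28 tiles and 168 pips, matching the worked example above.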
The most popular type of play is the layout game, which falls into two main categories: blocking games and scoring games.
The most basic domino variant is for two players and requires a double-six set. The 28 tiles are shuffled face down and form the "stock" or "boneyard". Each player draws seven tiles from the stock. Drawn tiles are typically placed on edge in front of the players, so that each player can see their own tiles but not the values of the other player's tiles. Every player can thus see how many tiles remain in the opponent's hand at all times during gameplay.
One player begins by downing (playing the first tile) one of their tiles. This tile starts the line of play, in which the values of adjacent pairs of tile ends must match. The players alternately extend the line of play with one tile at one of its two ends; if a player is unable to place a valid tile, they must continue drawing tiles from the stock until they are able to place one. The game ends when one player wins by playing their last tile, or when the game is blocked because neither player can play. If that occurs, whoever caused the block scores all of the pips remaining in the other player's hand, not counting their own.
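To make the blocking variant just described concrete, here is a minimal Python sketch of a two-player game with a double-six set. It is a simplified illustration rather than a complete rules engine: the draw-until-playable rule is applied literally, and a blocked game is awarded to the lighter hand (one common convention) rather than to whoever caused the block.

```python
import random

def play_block_game(seed=0):
    """Tiny two-player draw/block dominoes sketch using a double-six set."""
    rng = random.Random(seed)
    stock = [(a, b) for a in range(7) for b in range(a, 7)]
    rng.shuffle(stock)
    hands = [[stock.pop() for _ in range(7)] for _ in range(2)]
    left = right = None          # the two open ends of the line of play
    passes = 0
    player = 0
    while True:
        hand = hands[player]
        move = next((t for t in hand if left is None or left in t or right in t), None)
        while move is None and stock:            # draw until playable or the stock is empty
            hand.append(stock.pop())
            move = next((t for t in hand if left in t or right in t), None)
        if move is None:
            passes += 1
            if passes == 2:                      # both players stuck: blocked game
                pips = [sum(a + b for a, b in h) for h in hands]
                return min((0, 1), key=lambda p: pips[p])   # lighter hand wins
        else:
            passes = 0
            hand.remove(move)
            a, b = move
            if left is None:                     # first tile downed
                left, right = a, b
            elif left in move:                   # attach to the left end
                left = b if a == left else a
            else:                                # attach to the right end
                right = b if a == right else a
            if not hand:                         # played the last tile
                return player
        player = 1 - player

print("winner: player", play_block_game())
```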
Players accrue points during game play for certain configurations, moves, or for emptying their hand. Most scoring games use variations of the draw game. If a player does not call "domino" before the tile is laid on the table, and another player says "domino" after the tile is laid, the first player must pick up an extra domino.
In a draw game (blocking or scoring), players are additionally allowed to draw as many tiles as desired from the stock before playing a tile, and they are not allowed to pass before the stock is (nearly) empty. The score of a game is the number of pips in the losing player's hand plus the number of pips in the stock. Most rules prescribe that two tiles need to remain in the stock. The draw game is often referred to as simply "dominoes".
Adaptations of both games can accommodate more than two players, who may play individually or in teams.
The line of play is the configuration of played tiles on the table. It starts with a single tile and typically grows in two opposite directions when players add matching tiles. In practice, players often play tiles at right angles when the line of play gets too close to the edge of the table.
The rules for the line of play often differ from one variant to another. In many rules, the doubles serve as spinners, i.e., they can be played on all four sides, causing the line of play to branch. Sometimes, the first tile is required to be a double, which serves as the only spinner. In some games such as Chicken Foot, all sides of a spinner must be occupied before anybody is allowed to play elsewhere. Matador has unusual rules for matching. Bendomino uses curved tiles, so one side of the line of play (or both) may be blocked for geometrical reasons.
In Mexican Train and other train games, the game starts with a spinner from which various trains branch off. Most trains are owned by a player and in most situations players are allowed to extend only their own train.
In blocking games, scoring happens at the end of the game. After a player has emptied their hand, thereby winning the game for the team, the score consists of the total pip count of the losing team's hands. In some rules, the pip count of the remaining stock is added. If a game is blocked because no player can move, the winner is often determined by adding the pips in players' hands.
In scoring games, each individual can potentially add to the score. For example, in Bergen, players score two points whenever they cause a configuration in which both open ends have the same value and three points if additionally one open end is formed by a double. In Muggins, players score by ensuring the total pip count of the open ends is a multiple of a certain number. In variants of Muggins, the line of play may branch due to spinners.
In British public houses and social clubs, a scoring version of "5s-and-3s" is used. The game is normally played in pairs (two against two) and is played as a series of "ends". In each "end", the objective is for players to attach a domino from their hand to one end of those already played so that the sum of the end dominoes is divisible by five or three. One point is scored for each time five or three can be divided into the sum of the two dominoes, i.e. four at one end and five at the other makes nine, which is divisible by three three times, resulting in three points. Double five at one end and five at the other makes 15, which is divisible by three five times (five points) and divisible by five three times (three points) for a total of eight points.
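The scoring rule above reduces to a simple calculation on the two open ends. The following Python sketch (an illustrative helper, not an official league implementation) reproduces the two worked examples, remembering that a double counts both of its halves.

```python
def fives_and_threes_score(end_a, end_b):
    """One point each time 5 or 3 divides the sum of the two open ends."""
    total = end_a + end_b
    points = total // 5 if total % 5 == 0 else 0
    points += total // 3 if total % 3 == 0 else 0
    return points

print(fives_and_threes_score(4, 5))    # 4 + 5 = 9: divisible by three 3 times -> 3 points
print(fives_and_threes_score(10, 5))   # double five (10) + 5 = 15 -> 5 + 3 = 8 points
```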
An "end" stops when one of the players is out, i.e., has played all of their dominoes. In the event no player is able to empty their hand, then the player with the lowest domino left in hand is deemed to be out and scores one point. A game consists of any number of ends with points scored in the ends accumulating towards a total. The game ends when one of the pair's total score exceeds a set number of points. A running total score is often kept on a cribbage board. 5s-and-3s is played in a number of competitive leagues in the British Isles.
For 40 years the game has been played by four people, with the winner being the first player to score 150 points, in multiples of five, using the 28 bones of a double-six set and employing mathematical strategic defenses and explosive offense. At times, it has been played with pairs of partners. The double-six set is the preferred deck, having the lowest denomination of game pieces, with 28 dominoes.
In many versions of the game, the player with the highest double leads with that double, for example "double-six". If no one has it, the next-highest double is called: "double-five?", then "double-four?", etc. until the highest double in any of the players' hands is played. If no player has an "opening" double, the next heaviest domino in the highest suit is called - "six-five?", "six-four?". In some variants, players take turns picking dominoes from the stock until an opening double is picked and played. In other variants, the hand is reshuffled and each player picks seven dominoes. After the first hand, the winner (or winning team) of the previous hand is allowed to pick first and begins by playing any domino in his or her hand.
Playing the first bone of a hand is sometimes called setting, leading, downing, or posing the first bone. Dominoes aficionados often call this procedure smacking down the bone. After each hand, bones are shuffled and each player draws the number of bones required, normally seven. Play proceeds clockwise. Players, in turn, must play a bone with an end that matches one of the open ends of the layouts.
In some versions of the games, the pips or points on the end, and the section to be played next to it must add up to a given number. For example, in a double-six set, the "sum" would be six, requiring a blank to be played next to a six, an ace (one) next to a five, a deuce (two) next to a four, etc.
The stock of bones left behind, if any, is called the bone yard, and the bones therein are said to be sleeping. In draw games, players take part in the bone selection, typically drawing from the bone yard when they do not have a "match" in their hands.
If a player inadvertently picks up and sees one or more extra dominoes, those dominoes become part of his or her hand.
A player who can play a tile may be allowed to pass anyway. Passing can be signalled by tapping twice on the table or by saying "go" or "pass".
Play continues until one of the players has played all the dominoes in his or her hand, calls "Out!", "I win", or "Domino!" and wins the hand, or until all players are blocked and no legal plays remain. This is sometimes referred to as locked down or sewed up. In a common version of the game, the next player after the block picks up all the dominoes in the bone yard as if trying to find a (nonexistent) match. If all the players are blocked, or locked out, the player with the lowest hand (pip count) wins. In team play, the team with the lowest individual hand wins. In the case of a tie, the first of tied players or the first "team" in the play rotation wins.
In games where points accrue, the winning player scores a point for each pip on each bone still held by each opponent or the opposing team. If no player went out, the win is determined by the lightest hand, sometimes only the excess points held by opponents.
A game is generally played to 100 points, the tally being kept on paper. In more common games, mainly urban rules, games are played to 150, 200, or 250 points.
In some games, the tally is kept by creating houses, where the beginning of the house (the first 10 points) is a large +, the next 10 points are O, and scoring with a five is a /, and these marks are placed in the four corners of the house. One house is equal to 50 points.
In some versions, if a lock-down occurs, the first person to call a lock-down gains the other players' bones and adds the amount of the pips to his or her house. If a person who calls rocks after a call of lock-down or domino finds that the number of pips a player called is incorrect, those points become his.
When a player plays out of turn or knocks when he could have played and someone calls bogus play, the other person is awarded 50 points.
In some places this is known as a compulsory pass.
Apart from the usual blocking and scoring games, domino games of a very different character are also played, such as solitaire or trick-taking games. Most of these are adaptations of card games and were once popular in certain areas to circumvent religious proscriptions against playing cards.
A very simple example is a Concentration variant played with a double-six set; two tiles are considered to match if their total pip count is 12.
A popular domino game in Texas is 42. The game is similar to the card game spades. It is played with four players paired into teams. Each player draws seven dominoes, and the dominoes are played into tricks. Each trick counts as one point, and any domino with a multiple of five dots counts toward the total of the hand. These 35 points of "five count" and seven tricks equal 42 points, hence the name.
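The arithmetic behind the name is easy to verify: in a double-six set exactly five dominoes have a pip total that is a non-zero multiple of five, and their values sum to 35, which together with the seven one-point tricks gives 42. A quick, purely illustrative Python check:

```python
tiles = [(a, b) for a in range(7) for b in range(a, 7)]
count_tiles = [(a, b) for a, b in tiles if a + b > 0 and (a + b) % 5 == 0]
five_count = sum(a + b for a, b in count_tiles)
print(count_tiles)        # [(0, 5), (1, 4), (2, 3), (4, 6), (5, 5)]
print(five_count + 7)     # 35 points of "five count" + 7 tricks = 42
```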
Dominoes is played at a professional level, similar to poker. Numerous organisations and clubs of amateur domino players exist around the world. Some organizations organize international competitions.
Besides playing games, another use of dominoes is the domino show, which involves standing them on end in long lines so that when the first tile is toppled, it topples the second, which topples the third, etc., resulting in all of the tiles falling. By analogy, the phenomenon of small events causing similar events leading to eventual catastrophe is called the domino effect.
Arrangements of millions of tiles have been made that have taken many minutes, even hours to fall. For large and elaborate arrangements, special blockages (also known as firebreaks) are employed at regular distances to prevent a premature toppling from undoing more than a section of the dominoes while still being able to be removed without damage.
The phenomenon also has some theoretical relevance (amplifier, digital signal, information processing), and this amounts to the theoretical possibility of building domino computers. Dominoes are also commonly used as components in Rube Goldberg machines.
The Netherlands has hosted an annual domino-toppling exhibition called Domino Day since 1986. At the event held on 18 November 2005, a team from Weijers Domino Productions knocked over 4 million dominoes. On Domino Day 2008 (14 November 2008), the Weijers Domino Productions team attempted to set 10 records:
This record attempt was held in Leeuwarden. The artist who toppled the first stone was the Finnish acrobat Salima Peippo.
At one time, Pressman Toys manufactured a product called Domino Rally that contained tiles and mechanical devices for setting up toppling exhibits.
In Berlin on 9 November 2009, giant dominoes were toppled in a 20th-anniversary commemoration of the fall of the Berlin Wall. Former Polish president and Solidarity leader Lech Wałęsa set the toppling in motion.
A 2-1 tile is used in the logo of pizza retailer Domino's Pizza.
Since April 2008, the character encoding standard Unicode includes characters that represent the double-six domino tiles in various orientations. All combinations of blank through six pips on the left or right provide 49 glyphs, the same combinations vertically provide another 49, and a horizontal and a vertical "back" bring the total to 100 glyphs. In this arrangement, both orientations are present: horizontally, both tiles [1|6] and [6|1] exist, while a regular game set only has one such tile. The Unicode range for dominoes is U+1F030–U+1F09F. The naming pattern in Unicode is, by example, DOMINO TILE HORIZONTAL-01-03. Few fonts are known to support these glyphs. While the complete domino set has only 28 tiles, for printing layout reasons the Unicode set needs both horizontal and vertical forms of each tile, distinct glyphs for reversed pairs such as 01-03 (plain) and 03-01 (reversed), and generic backsides. | https://en.wikipedia.org/wiki?curid=8262
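The encoding can be made concrete with a few lines of Python. The sketch below assumes the 49 face-up horizontal tiles follow the HORIZONTAL BACK glyph at U+1F030 in row-major order (left value major, right value minor), which matches the published code chart; treat the offset arithmetic as an assumption of this example rather than a normative statement.

```python
def horizontal_domino(left, right):
    """Return the Unicode glyph for a horizontal domino tile such as [1|6]."""
    if not (0 <= left <= 6 and 0 <= right <= 6):
        raise ValueError("pip values must be between 0 and 6")
    # U+1F030 is DOMINO TILE HORIZONTAL BACK; the face-up tiles are assumed to
    # start at U+1F031 and run in row-major order.
    return chr(0x1F031 + 7 * left + right)

print(horizontal_domino(1, 6), horizontal_domino(6, 1))  # two distinct glyphs, one physical tile
```

Whether the glyphs actually render depends on the font, as noted above.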
Dissociation constant
In chemistry, biochemistry, and pharmacology, a dissociation constant (formula_1) is a specific type of equilibrium constant that measures the propensity of a larger object to separate (dissociate) reversibly into smaller components, as when a complex falls apart into its component molecules, or when a salt splits up into its component ions. The dissociation constant is the inverse of the association constant. In the special case of salts, the dissociation constant can also be called an ionization constant.
For a general reaction:
AxBy ⇌ x A + y B
in which a complex AxBy breaks down into "x" A subunits and "y" B subunits, the dissociation constant is defined as
"K"d = [A]x [B]y / [AxBy]
where [A], [B], and [AxBy] are the equilibrium concentrations of A, B, and the complex AxBy, respectively.
One reason for the popularity of the dissociation constant in biochemistry and pharmacology is that in the frequently encountered case where x = y = 1, "K"d has a simple physical interpretation: when [A] = "K"d, half of B is bound, i.e. [AB] = [B], or equivalently [AB]/([B] + [AB]) = 1/2. In other words, "K"d equals the concentration of free A at which half of the total B molecules are in the complex.
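This half-occupancy interpretation is easy to visualise numerically. The Python sketch below uses the standard 1:1 binding expression for the fraction of B bound as a function of the free concentration [A]; the Kd value and concentrations are hypothetical numbers chosen purely for illustration.

```python
def fraction_bound(conc_a, k_d):
    """Fraction of B in the AB complex for 1:1 binding: [A] / ([A] + Kd)."""
    return conc_a / (conc_a + k_d)

K_D = 1e-6   # hypothetical dissociation constant (1 micromolar)
for conc in (1e-7, 1e-6, 1e-5):
    print(f"[A] = {conc:.0e} M -> fraction of B bound = {fraction_bound(conc, K_D):.2f}")
# At [A] = Kd the fraction bound is exactly 0.50, as described above.
```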
The dissociation constant of water is denoted "K"w:
"K"w = [H+][OH−]
The concentration of water H2O is omitted by convention, which means that the value of "K"w differs from the value of "K"eq that would be computed using that concentration.
The value of "K"w varies with temperature, as shown in the table below. This variation must be taken into account when making precise measurements of quantities such as pH. | https://en.wikipedia.org/wiki?curid=8263 |
Dimensional analysis
In engineering and science, dimensional analysis is the analysis of the relationships between different physical quantities by identifying their base quantities (such as length, mass, time, and electric charge) and units of measure (such as miles vs. kilometres, or pounds vs. kilograms) and tracking these dimensions as calculations or comparisons are performed. The conversion of units from one dimensional unit to another is often easier within the metric or SI system than in others, due to the regular 10-base in all units. Dimensional analysis, or more specifically the factor-label method, also known as the unit-factor method, is a widely used technique for such conversions using the rules of algebra.
The concept of physical dimension was introduced by Joseph Fourier in 1822. Physical quantities that are of the same kind (also called "commensurable") (e.g., length or time or mass) have the same dimension and can be directly compared to other physical quantities of the same kind, even if they are originally expressed in differing units of measure (such as yards and metres). If physical quantities have different dimensions (such as length vs. mass), they cannot be expressed in terms of similar units and cannot be compared in quantity (also called "incommensurable"). For example, asking whether a kilogram is larger than an hour is meaningless.
Any physically meaningful equation (and any inequality) will have the same dimensions on its left and right sides, a property known as "dimensional homogeneity". Checking for dimensional homogeneity is a common application of dimensional analysis, serving as a plausibility check on derived equations and computations. It also serves as a guide and constraint in deriving equations that may describe a physical system in the absence of a more rigorous derivation.
Many parameters and measurements in the physical sciences and engineering are expressed as a concrete number—a numerical quantity and a corresponding dimensional unit. Often a quantity is expressed in terms of several other quantities; for example, speed is a combination of length and time, e.g. 60 kilometres per hour or 1.4 kilometres per second. Compound relations with "per" are expressed with division, e.g. 60 km/1 h. Other relations can involve multiplication (often shown with a centered dot or juxtaposition), powers (like m2 for square metres), or combinations thereof.
A set of base units for a system of measurement is a conventionally chosen set of units, none of which can be expressed as a combination of the others and in terms of which all the remaining units of the system can be expressed. For example, units for length and time are normally chosen as base units. Units for volume, however, can be factored into the base units of length (m3), thus they are considered derived or compound units.
Sometimes the names of units obscure the fact that they are derived units. For example, a newton (N) is a unit of force, which has units of mass (kg) times units of acceleration (m⋅s−2). The newton is defined as 1 N = 1 kg⋅m⋅s−2.
Percentages are dimensionless quantities, since they are ratios of two quantities with the same dimensions. In other words, the % sign can be read as "hundredths", since 1% = 1/100.
Taking a derivative with respect to a quantity adds the dimension of the variable one is differentiating with respect to in the denominator. Thus: position (dimension L) differentiated with respect to time (dimension T) gives velocity, with dimension L/T, and differentiating velocity with respect to time gives acceleration, with dimension L/T2.
In economics, one distinguishes between stocks and flows: a stock has units of "units" (say, widgets or dollars), while a flow is a derivative of a stock, and has units of "units/time" (say, dollars/year).
In some contexts, dimensional quantities are expressed as dimensionless quantities or percentages by omitting some dimensions. For example, debt-to-GDP ratios are generally expressed as percentages: total debt outstanding (dimension of currency) divided by annual GDP (dimension of currency)—but one may argue that, in comparing a stock to a flow, annual GDP should have dimensions of currency/time (dollars/year, for instance) and thus Debt-to-GDP should have units of years, which indicates that Debt-to-GDP is the number of years needed for a constant GDP to pay the debt, if all GDP is spent on the debt and the debt is otherwise unchanged.
In dimensional analysis, a ratio which converts one unit of measure into another without changing the quantity is called a conversion factor. For example, kPa and bar are both units of pressure, and 100 kPa = 1 bar. The rules of algebra allow both sides of an equation to be divided by the same expression, so this is equivalent to 100 kPa / 1 bar = 1. Since any quantity can be multiplied by 1 without changing it, the expression "100 kPa / 1 bar" can be used to convert from bars to kPa by multiplying it with the quantity to be converted, including units. For example, 5 bar × 100 kPa / 1 bar = 500 kPa, because 5 × 100 / 1 = 500 and bar/bar cancels out, so 5 bar = 500 kPa.
The most basic rule of dimensional analysis is that of dimensional homogeneity: only commensurable quantities (physical quantities having the same dimension) may be compared, equated, added, or subtracted.
However, the dimensions form an abelian group under multiplication, so one may take ratios of incommensurable quantities (quantities with different dimensions) and multiply or divide them.
For example, it makes no sense to ask whether 1 hour is more, the same, or less than 1 kilometre, as these have different dimensions, nor to add 1 hour to 1 kilometre. However, it makes perfect sense to ask whether 1 mile is more, the same, or less than 1 kilometre, these being the same dimension of physical quantity even though the units are different. On the other hand, if an object travels 100 km in 2 hours, one may divide these and conclude that the object's average speed was 50 km/h.
The rule implies that in a physically meaningful "expression" only quantities of the same dimension can be added, subtracted, or compared. For example, if "m"man, "m"rat and "L"man denote, respectively, the mass of some man, the mass of a rat and the length of that man, the dimensionally homogeneous expression "m"man + "m"rat is meaningful, but the heterogeneous expression "m"man + "L"man is meaningless. However, "m"man/"L"2man is fine. Thus, dimensional analysis may be used as a sanity check of physical equations: the two sides of any equation must be commensurable or have the same dimensions.
This has the implication that most mathematical functions, particularly the transcendental functions, must have a dimensionless quantity, a pure number, as the argument and must return a dimensionless number as a result. This is clear because many transcendental functions can be expressed as an infinite power series with dimensionless coefficients.
All powers of "x" must have the same dimension for the terms to be commensurable. But if "x" is not dimensionless, then the different powers of "x" will have different, incommensurable dimensions. However, power functions including root functions may have a dimensional argument and will return a result having dimension that is the same power applied to the argument dimension. This is because power functions and root functions are, loosely, just an expression of multiplication of quantities.
Even when two physical quantities have identical dimensions, it may nevertheless be meaningless to compare or add them. For example, although torque and energy share the dimension ML2T−2, they are fundamentally different physical quantities.
To compare, add, or subtract quantities with the same dimensions but expressed in different units, the standard procedure is first to convert them all to the same units. For example, to compare 32 metres with 35 yards, use 1 yard = 0.9144 m to convert 35 yards to 32.004 m.
A related principle is that any physical law that accurately describes the real world must be independent of the units used to measure the physical variables. For example, Newton's laws of motion must hold true whether distance is measured in miles or kilometres. This principle gives rise to the form that conversion factors must take between units that measure the same dimension: multiplication by a simple constant. It also ensures equivalence; for example, if two buildings are the same height in feet, then they must be the same height in metres.
The factor-label method is the sequential application of conversion factors expressed as fractions and arranged so that any dimensional unit appearing in both the numerator and denominator of any of the fractions can be cancelled out until only the desired set of dimensional units is obtained. For example, 10 miles per hour can be converted to meters per second by using a sequence of conversion factors as shown below:
Each conversion factor is chosen based on the relationship between one of the original units and one of the desired units (or some intermediary unit), before being re-arranged to create a factor that cancels out the original unit. For example, as "mile" is the numerator in the original fraction and formula_3, "mile" will need to be the denominator in the conversion factor. Dividing both sides of the equation by 1 mile yields formula_4, which when simplified results in the dimensionless formula_5. Multiplying any quantity (physical quantity or not) by the dimensionless 1 does not change that quantity. Once this and the conversion factor for seconds per hour have been multiplied by the original fraction to cancel out the units "mile" and "hour", 10 miles per hour converts to 4.4704 meters per second.
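The same chain of cancelling factors can be written almost literally in code. The sketch below multiplies out the two conversion factors used in the text (the exact definitions 1 mile = 1609.344 m and 1 hour = 3600 s), reproducing the 4.4704 m/s result.

```python
METRES_PER_MILE = 1609.344    # exact, by definition of the international mile
SECONDS_PER_HOUR = 3600.0     # exact

def mph_to_mps(speed_mph):
    """Factor-label conversion: (mi/h) x (m/mi) / (s/h) leaves m/s."""
    return speed_mph * METRES_PER_MILE / SECONDS_PER_HOUR

print(mph_to_mps(10.0))       # 4.4704
```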
As a more complex example, the concentration of nitrogen oxides (i.e., formula_6) in the flue gas from an industrial furnace can be converted to a mass flow rate expressed in grams per hour (i.e., g/h) of formula_7 by using the following information as shown below:
After canceling out any dimensional units that appear both in the numerators and denominators of the fractions in the above equation, the NOx concentration of 10 ppmv converts to mass flow rate of 24.63 grams per hour.
The factor-label method can also be used on any mathematical equation to check whether or not the dimensional units on the left hand side of the equation are the same as the dimensional units on the right hand side of the equation. Having the same units on both sides of an equation does not ensure that the equation is correct, but having different units on the two sides (when expressed in terms of base units) of an equation implies that the equation is wrong.
For example, check the Universal Gas Law equation PV = nRT, when the pressure P is in pascals (Pa), the volume V is in cubic metres (m3), the amount of substance n is in moles (mol), the absolute temperature T is in kelvins (K), and the gas constant R is in m3⋅Pa⋅K−1⋅mol−1:
As can be seen, when the dimensional units appearing in the numerator and denominator of the equation's right hand side are cancelled out, both sides of the equation have the same dimensional units. Dimensional analysis can be used as a tool to construct equations that relate non-associated physico-chemical properties. The equations may reveal hitherto unknown or overlooked properties of matter, in the form of left-over dimensions – dimensional adjusters – that can then be assigned physical significance. Such "mathematical manipulation" is neither without precedent nor without considerable scientific significance. Indeed, the Planck constant, a fundamental constant of the universe, was "discovered" as a purely mathematical abstraction or representation built on the Rayleigh–Jeans equation for preventing the ultraviolet catastrophe. It was assigned and ascended to its quantum physical significance either in tandem with, or after, this mathematical dimensional adjustment, not earlier.
The factor-label method can convert only unit quantities for which the units are in a linear relationship intersecting at 0. (Ratio scale in Stevens's typology) Most units fit this paradigm. An example for which it cannot be used is the conversion between degrees Celsius and kelvins (or degrees Fahrenheit). Between degrees Celsius and kelvins, there is a constant difference rather than a constant ratio, while between degrees Celsius and degrees Fahrenheit there is neither a constant difference nor a constant ratio. There is, however, an affine transform (formula_10, rather than a linear transform formula_11) between them.
For example, the freezing point of water is 0 °C and 32 °F, and a 5 °C change is the same as a 9 °F change. Thus, to convert from units of Fahrenheit to units of Celsius, one subtracts 32 °F (the offset from the point of reference), divides by 9 °F and multiplies by 5 °C (scales by the ratio of units), and adds 0 °C (the offset from the point of reference). Reversing this yields the formula for obtaining a quantity in units of Celsius from units of Fahrenheit; one could have started with the equivalence between 100 °C and 212 °F, though this would yield the same formula at the end.
Hence, to convert the numerical quantity value of a temperature "T"[F] in degrees Fahrenheit to a numerical quantity value "T"[C] in degrees Celsius, this formula may be used: "T"[C] = ("T"[F] − 32) × 5/9.
To convert "T"[C] in degrees Celsius to "T"[F] in degrees Fahrenheit, this formula may be used: "T"[F] = "T"[C] × 9/5 + 32.
In some applications, non-SI units are used for brevity; in such cases, the numerical calculation of a formula can be done by first working out the pre-factor and then plugging in the numerical values of the given/known quantities.
For example, in the study of Bose–Einstein condensate, atomic mass is usually given in daltons, instead of kilograms, and chemical potential is often given in Boltzmann constant times nanokelvin. The condensate's healing length is given by:
For a 23Na condensate with chemical potential of (Boltzmann constant times) 128 nK, the calculation of healing length (in microns) can be done in two steps:
Assume that formula_13 this gives
which is our pre-factor.
Now, make use of the fact that formula_15. With formula_16, formula_17.
This method is especially useful for programming and/or making a worksheet, where input quantities take multiple different values; for example, with the pre-factor calculated above, it is very easy to see that the healing length of 174Yb with chemical potential 20.3 nK is formula_18.
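The pre-factor calculation sketched above can be reproduced in a few lines. The code below assumes the standard Gross–Pitaevskii healing-length expression ξ = ħ/√(2mμ), since the symbolic formulas are not reproduced in the text; with the mass entered in daltons and the chemical potential in units of kB × nK, the pre-factor comes out to roughly 15.6 µm.

```python
import math

HBAR = 1.054571817e-34     # reduced Planck constant, J*s
K_B = 1.380649e-23         # Boltzmann constant, J/K
DALTON = 1.66053907e-27    # atomic mass unit, kg

# Pre-factor in metres for m in daltons and mu in kB*nK, assuming xi = hbar / sqrt(2*m*mu).
PREFACTOR = HBAR / math.sqrt(2.0 * DALTON * K_B * 1e-9)

def healing_length_um(mass_daltons, mu_nanokelvin):
    """Healing length in micrometres for the given mass (Da) and chemical potential (kB*nK)."""
    return PREFACTOR / math.sqrt(mass_daltons * mu_nanokelvin) * 1e6

print(round(healing_length_um(23, 128), 3))    # sodium-23 at 128 nK -> about 0.287 um
print(round(healing_length_um(174, 20.3), 3))  # ytterbium-174 at 20.3 nK -> about 0.262 um
```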
Dimensional analysis is most often used in physics and chemistry – and in the mathematics thereof – but finds some applications outside of those fields as well.
A simple application of dimensional analysis to mathematics is in computing the form of the volume of an "n"-ball (the solid ball in "n" dimensions), or the area of its surface, the "n"-sphere: being an "n"-dimensional figure, the volume scales as formula_19 while the surface area, being formula_20-dimensional, scales as formula_21 Thus the volume of the "n"-ball in terms of the radius is formula_22 for some constant formula_23 Determining the constant takes more involved mathematics, but the form can be deduced and checked by dimensional analysis alone.
In finance, economics, and accounting, dimensional analysis is most commonly referred to in terms of the distinction between stocks and flows. More generally, dimensional analysis is used in interpreting various financial ratios, economics ratios, and accounting ratios.
In fluid mechanics, dimensional analysis is performed in order to obtain dimensionless Pi terms or groups. According to the principles of dimensional analysis, any prototype can be described by a series of these terms or groups that describe the behaviour of the system. Using suitable Pi terms or groups, it is possible to develop a similar set of Pi terms for a model that has the same dimensional relationships. In other words, Pi terms provide a shortcut to developing a model representing a certain prototype. Common dimensionless groups in fluid mechanics include:
The origins of dimensional analysis have been disputed by historians.
The first written application of dimensional analysis has been credited to an article by François Daviet at the Turin Academy of Science. Daviet was a student of Lagrange.
His fundamental works are contained in acta of the Academy dated 1799.
This work led to the conclusion that meaningful laws must be homogeneous equations in their various units of measurement, a result which was eventually formalized in the Buckingham π theorem.
Siméon Poisson also treated the same problem of the parallelogram law considered by Daviet, in his treatises of 1811 and 1833 (vol. I, p. 39). In the second edition of 1833, Poisson explicitly introduces the term "dimension" instead of Daviet's "homogeneity".
In 1822, the important Napoleonic scientist Joseph Fourier made the first credited important contributions, based on the idea that physical laws like "F" = "ma" should be independent of the units employed to measure the physical variables.
Maxwell played a major role in establishing modern use of dimensional analysis by distinguishing mass, length, and time as fundamental units, while referring to other units as derived. Although Maxwell defined length, time and mass to be "the three fundamental units", he also noted that gravitational mass can be derived from length and time by assuming a form of Newton's law of universal gravitation in which the gravitational constant "G" is taken as unity, thereby defining M = L3T−2. By assuming a form of Coulomb's law in which Coulomb's constant "k"e is taken as unity, Maxwell then determined that the dimensions of an electrostatic unit of charge were Q = M1/2L3/2T−1, which, after substituting his equation for mass, results in charge having the same dimensions as mass, viz. L3T−2.
Dimensional analysis is also used to derive relationships between the physical quantities that are involved in a particular phenomenon that one wishes to understand and characterize. It was used for the first time in this way in 1872 by Lord Rayleigh, who was trying to understand why the sky is blue. Rayleigh first published the technique in his 1877 book "The Theory of Sound".
The original meaning of the word "dimension", in Fourier's "Theorie de la Chaleur", was the numerical value of the exponents of the base units. For example, acceleration was considered to have the dimension 1 with respect to the unit of length, and the dimension −2 with respect to the unit of time. This was slightly changed by Maxwell, who said the dimensions of acceleration are LT−2, instead of just the exponents.
The Buckingham π theorem describes how every physically meaningful equation involving "n" variables can be equivalently rewritten as an equation of "n" − "m" dimensionless parameters, where "m" is the rank of the dimensional matrix. Furthermore, and most importantly, it provides a method for computing these dimensionless parameters from the given variables.
A dimensional equation can have the dimensions reduced or eliminated through nondimensionalization, which begins with dimensional analysis, and involves scaling quantities by characteristic units of a system or natural units of nature. This gives insight into the fundamental properties of the system, as illustrated in the examples below.
The dimension of a physical quantity can be expressed as a product of the basic physical dimensions such as length, mass and time, each raised to a rational power. The "dimension" of a physical quantity is more fundamental than some "scale" unit used to express the amount of that physical quantity. For example, "mass" is a dimension, while the kilogram is a particular scale unit chosen to express a quantity of mass. Except for natural units, the choice of scale is cultural and arbitrary.
There are many possible choices of basic physical dimensions. The SI standard recommends the usage of the following dimensions and corresponding symbols: length (L), mass (M), time (T), electric current (I), absolute temperature (Θ), amount of substance (N) and luminous intensity (J). The symbols are by convention usually written in roman sans serif typeface. Mathematically, the dimension of the quantity "Q" is given by
dim "Q" = L"a"M"b"T"c"I"d"Θ"e"N"f"J"g"
where "a", "b", "c", "d", "e", "f", "g" are the dimensional exponents. Other physical quantities could be defined as the base quantities, as long as they form a linearly independent basis. For instance, one could replace the dimension of electric current (I) of the SI basis with a dimension of electric charge (Q), since Q = IT.
As examples, the dimension of the physical quantity speed "v" is
dim "v" = L/T = LT−1
and the dimension of the physical quantity force "F" is
dim "F" = ML/T2 = MLT−2
The unit chosen to express a physical quantity and its dimension are related, but not identical concepts. The units of a physical quantity are defined by convention and related to some standard; e.g., length may have units of metres, feet, inches, miles or micrometres; but any length always has a dimension of L, no matter what units of length are chosen to express it. Two different units of the same physical quantity have conversion factors that relate them. For example, 1 in = 2.54 cm; in this case (2.54 cm/in) is the conversion factor, which is itself dimensionless. Therefore, multiplying by that conversion factor does not change the dimensions of a physical quantity.
There are also physicists that have cast doubt on the very existence of incompatible fundamental dimensions of physical quantity, although this does not invalidate the usefulness of dimensional analysis.
The dimensions that can be formed from a given collection of basic physical dimensions, such as M, L, and T, form an abelian group: The identity is written as 1; L0 = 1, and the inverse to L is 1/L or L−1. L raised to any rational power "p" is a member of the group, having an inverse of L−"p" or 1/Lp. The operation of the group is multiplication, having the usual rules for handling exponents (L"a" × L"b" = L"a"+"b").
This group can be described as a vector space over the rational numbers, with for example dimensional symbol M"i"L"j"T"k" corresponding to the vector . When physical measured quantities (be they like-dimensioned or unlike-dimensioned) are multiplied or divided by one other, their dimensional units are likewise multiplied or divided; this corresponds to addition or subtraction in the vector space. When measurable quantities are raised to a rational power, the same is done to the dimensional symbols attached to those quantities; this corresponds to scalar multiplication in the vector space.
A basis for such a vector space of dimensional symbols is called a set of base quantities, and all other vectors are called derived units. As in any vector space, one may choose different bases, which yields different systems of units (e.g., choosing whether the unit for charge is derived from the unit for current, or vice versa).
The group identity 1, the dimension of dimensionless quantities, corresponds to the origin in this vector space.
The set of units of the physical quantities involved in a problem correspond to a set of vectors (or a matrix). The nullity describes some number (e.g., "m") of ways in which these vectors can be combined to produce a zero vector. These correspond to producing (from the measurements) a number of dimensionless quantities, {π1, ..., π"m"}. (In fact these ways completely span the null subspace of another different space, of powers of the measurements.) Every possible way of multiplying (and exponentiating) together the measured quantities to produce something with the same units as some derived quantity "X" can be expressed in the general form
Consequently, every possible commensurate equation for the physics of the system can be rewritten in the form "f"(π1, π2, ..., π"m") = 0.
Knowing this restriction can be a powerful tool for obtaining new insight into the system.
The dimension of physical quantities of interest in mechanics can be expressed in terms of base dimensions M, L, and T – these form a 3-dimensional vector space. This is not the only valid choice of base dimensions, but it is the one most commonly used. For example, one might choose force, length and mass as the base dimensions (as some have done), with associated dimensions F, L, M; this corresponds to a different basis, and one may convert between these representations by a change of basis. The choice of the base set of dimensions is thus a convention, with the benefit of increased utility and familiarity. The choice of base dimensions is not arbitrary, because the dimensions must form a basis: they must span the space, and be linearly independent.
For example, F, L, M form a set of fundamental dimensions because they form a basis that is equivalent to M, L, T: the former can be expressed as [F = ML/T2], L, M, while the latter can be expressed as M, L, [T = (ML/F)1/2].
On the other hand, length, velocity and time do not form a valid set of base dimensions, for two reasons: there is no way of obtaining mass, or anything derived from it such as force, without introducing another base dimension; and velocity, being expressible in terms of length and time (velocity = length/time), is redundant, so the set is not linearly independent.
Depending on the field of physics, it may be advantageous to choose one or another extended set of dimensional symbols. In electromagnetism, for example, it may be useful to use dimensions of M, L, T, and Q, where Q represents the dimension of electric charge. In thermodynamics, the base set of dimensions is often extended to include a dimension for temperature, Θ. In chemistry, the amount of substance (the number of molecules divided by the Avogadro constant, ≈ 6.02 × 1023 mol−1) is defined as a base dimension, N, as well.
In the interaction of relativistic plasma with strong laser pulses, a dimensionless relativistic similarity parameter, connected with the symmetry properties of the collisionless Vlasov equation, is constructed from the plasma-, electron- and critical-densities in addition to the electromagnetic vector potential. The choice of the dimensions or even the number of dimensions to be used in different fields of physics is to some extent arbitrary, but consistency in use and ease of communications are common and necessary features.
Scalar arguments to transcendental functions such as exponential, trigonometric and logarithmic functions, or to inhomogeneous polynomials, must be dimensionless quantities. (Note: this requirement is somewhat relaxed in Siano's orientational analysis described below, in which the square of certain dimensioned quantities are dimensionless.)
While most mathematical identities about dimensionless numbers translate in a straightforward manner to dimensional quantities, care must be taken with logarithms of ratios: the identity log(a/b) = log a − log b, where the logarithm is taken in any base, holds for dimensionless numbers a and b, but it does "not" hold if a and b are dimensional, because in this case the left-hand side is well-defined but the right-hand side is not.
Similarly, while one can evaluate monomials ("x""n") of dimensional quantities, one cannot evaluate polynomials of mixed degree with dimensionless coefficients on dimensional quantities: for "x"2, the expression (3 m)2 = 9 m2 makes sense (as an area), while for "x"2 + "x", the expression (3 m)2 + 3 m = 9 m2 + 3 m does not make sense.
However, polynomials of mixed degree can make sense if the coefficients are suitably chosen physical quantities that are not dimensionless. For example,
height = (500 m/s) ⋅ "t" − (4.9 m/s2) ⋅ "t"2
This is the height to which an object rises in time "t" if the acceleration of gravity is 9.8 meter per second per second and the initial upward speed is 500 meter per second. It is not necessary for "t" to be in seconds. For example, suppose "t" = 0.01 minutes. Then the first term would be (500 m/s) × (0.01 min) = 5 m⋅min/s = 5 × (60 s) ⋅ m/s = 300 m.
The value of a dimensional physical quantity "Z" is written as the product of a unit ["Z"] within the dimension and a dimensionless numerical factor, "n".
When like-dimensioned quantities are added or subtracted or compared, it is convenient to express them in consistent units so that the numerical values of these quantities may be directly added or subtracted. But, in concept, there is no problem adding quantities of the same dimension expressed in different units. For example, 1 meter added to 1 foot is a length, but one cannot derive that length by simply adding 1 and 1. A conversion factor, which is a ratio of like-dimensioned quantities and is equal to the dimensionless unity, is needed: using 1 ft = 0.3048 m, one has 1 m + 1 ft = 1 m + 0.3048 m = 1.3048 m.
The factor formula_38 is identical to the dimensionless 1, so multiplying by this conversion factor changes nothing. Then when adding two quantities of like dimension, but expressed in different units, the appropriate conversion factor, which is essentially the dimensionless 1, is used to convert the quantities to identical units so that their numerical values can be added or subtracted.
Only in this manner is it meaningful to speak of adding like-dimensioned quantities of differing units.
Some discussions of dimensional analysis implicitly describe all quantities as mathematical vectors. (In mathematics, scalars are considered a special case of vectors; vectors can be added to or subtracted from other vectors, and, inter alia, multiplied or divided by scalars. If a vector is used to define a position, this assumes an implicit point of reference: an origin.) While this is useful and often perfectly adequate, allowing many important errors to be caught, it can fail to model certain aspects of physics. A more rigorous approach requires distinguishing between position and displacement (or moment in time versus duration, or absolute temperature versus temperature change).
Consider points on a line, each with a position with respect to a given origin, and distances among them. Positions and displacements all have units of length, but their meaning is not interchangeable:
This illustrates the subtle distinction between "affine" quantities (ones modeled by an affine space, such as position) and "vector" quantities (ones modeled by a vector space, such as displacement).
Properly then, positions have dimension of "affine" length, while displacements have dimension of "vector" length. To assign a number to an "affine" unit, one must not only choose a unit of measurement, but also a point of reference, while to assign a number to a "vector" unit only requires a unit of measurement.
Thus some physical quantities are better modeled by vectorial quantities while others tend to require affine representation, and the distinction is reflected in their dimensional analysis.
This distinction is particularly important in the case of temperature, for which the numeric value of absolute zero is not the origin 0 in some scales. For absolute zero,
−273.15 °C = 0 K = −459.67 °F = 0 °R
but for temperature differences,
1 K = 1 °C ≠ 1 °F = 1 °R
(Here °R refers to the Rankine scale, not the Réaumur scale).
Unit conversion for temperature differences is simply a matter of multiplying by, e.g., 1 °F / 1 K (although the ratio is not a constant value). But because some of these scales have origins that do not correspond to absolute zero, conversion from one temperature scale to another requires accounting for that. As a result, simple dimensional analysis can lead to errors if it is ambiguous whether 1 K means the absolute temperature equal to −272.15 °C, or the temperature difference equal to 1 °C.
Similar to the issue of a point of reference is the issue of orientation: a displacement in 2 or 3 dimensions is not just a length, but is a length together with a "direction". (This issue does not arise in 1 dimension, or rather is equivalent to the distinction between positive and negative.) Thus, to compare or combine two dimensional quantities in a multi-dimensional space, one also needs an orientation: they need to be compared to a frame of reference.
This leads to the extensions discussed below, namely Huntley's directed dimensions and Siano's orientational analysis.
What is the period of oscillation "T" of a mass "m" attached to an ideal linear spring with spring constant "k" suspended in gravity of strength "g"? That period is the solution for "T" of some dimensionless equation in the variables "T", "m", "k", and "g".
The four quantities have the following dimensions: [T]; [M]; [M/T2]; and [L/T2]. From these we can form only one dimensionless product of powers of our chosen variables, formula_39 = formula_40 , and putting formula_41 for some dimensionless constant gives the dimensionless equation sought. The dimensionless product of powers of variables is sometimes referred to as a dimensionless group of variables; here the term "group" means "collection" rather than mathematical group. They are often called dimensionless numbers as well.
Note that the variable does not occur in the group. It is easy to see that it is impossible to form a dimensionless product of powers that combines with , , and , because is the only quantity that involves the dimension L. This implies that in this problem the is irrelevant. Dimensional analysis can sometimes yield strong statements about the "irrelevance" of some quantities in a problem, or the need for additional parameters. If we have chosen enough variables to properly describe the problem, then from this argument we can conclude that the period of the mass on the spring is independent of : it is the same on the earth or the moon. The equation demonstrating the existence of a product of powers for our problem can be written in an entirely equivalent way: formula_42, for some dimensionless constant κ (equal to formula_43 from the original dimensionless equation).
When faced with a case where dimensional analysis rejects a variable ("g", here) that one intuitively expects to belong in a physical description of the situation, another possibility is that the rejected variable is in fact relevant, but that some other relevant variable has been omitted, which might combine with the rejected variable to form a dimensionless quantity. That is, however, not the case here.
When dimensional analysis yields only one dimensionless group, as here, there are no unknown functions, and the solution is said to be "complete" – although it still may involve unknown dimensionless constants, such as κ.
Consider the case of a vibrating wire of length "ℓ" (L) vibrating with an amplitude "A" (L). The wire has a linear density "ρ" (M/L) and is under tension "s" (ML/T2), and we want to know the energy "E" (ML2/T2) in the wire. Let "π"1 and "π"2 be two dimensionless products of powers of the variables chosen, given by "π"1 = "E/(As)" and "π"2 = "ℓ/A".
The linear density of the wire is not involved. The two groups found can be combined into an equivalent form as an equation "F(π1, π2) = 0",
where "F" is some unknown function, or, equivalently, as "E = As f(ℓ/A)",
where "f" is some other unknown function. Here the unknown function implies that our solution is now incomplete, but dimensional analysis has given us something that may not have been obvious: the energy is proportional to the first power of the tension. Barring further analytical insight, we might proceed to experiments to discover the form of the unknown function "f". But our experiments are simpler than in the absence of dimensional analysis: we would perform none to verify that the energy is proportional to the tension. Or perhaps we might guess that the energy is proportional to "ℓ", and so infer that "E" is proportional to "ℓs". The power of dimensional analysis as an aid to experiment and forming hypotheses becomes evident.
The power of dimensional analysis really becomes apparent when it is applied to situations, unlike those given above, that are more complicated, where the set of variables involved is not apparent and the underlying equations are hopelessly complex. Consider, for example, a small pebble sitting on the bed of a river. If the river flows fast enough, it will actually raise the pebble and cause it to flow along with the water. At what critical velocity will this occur? Sorting out the guessed variables is not so easy as before. But dimensional analysis can be a powerful aid in understanding problems like this, and is usually the very first tool to be applied to complex problems where the underlying equations and constraints are poorly understood. In such cases, the answer may depend on a dimensionless number such as the Reynolds number, which may be interpreted by dimensional analysis.
Consider the case of a thin, solid, parallel-sided rotating disc of axial thickness "t" (L) and radius "R" (L). The disc has a density "ρ" (M/L3), rotates at an angular velocity "ω" (T−1) and this leads to a stress "S" (ML−1T−2) in the material. There is a theoretical linear elastic solution, given by Lamé, to this problem when the disc is thin relative to its radius, the faces of the disc are free to move axially, and the plane stress constitutive relations can be assumed to be valid. As the disc becomes thicker relative to the radius, the plane stress solution breaks down. If the disc is restrained axially on its free faces then a state of plane strain will occur. However, if this is not the case then the state of stress may only be determined through consideration of three-dimensional elasticity and there is no known theoretical solution for this case. An engineer might, therefore, be interested in establishing a relationship between the five variables. Dimensional analysis for this case leads to the following (5 − 3 = 2) non-dimensional groups:
Through the use of numerical experiments using, for example, the finite element method, the nature of the relationship between the two non-dimensional groups can be obtained as shown in the figure. As this problem only involves two non-dimensional groups, the complete picture is provided in a single plot and this can be used as a design/assessment chart for rotating discs.
Huntley has pointed out that it is sometimes productive to refine our concept of dimension. Two possible refinements are:
As an example of the usefulness of the first refinement, suppose we wish to calculate the distance a cannonball travels when fired with a vertical velocity component "v"y and a horizontal velocity component "v"x, assuming it is fired on a flat surface. Assuming no use of directed lengths, the quantities of interest are then "v"x and "v"y, both dimensioned as LT−1; "R", the distance travelled, having dimension L; and "g", the downward acceleration of gravity, with dimension LT−2.
With these four quantities, we may conclude that the equation for the range may be written: "R = C vx^a vy^b g^c".
Or dimensionally: "L = (LT−1)^(a+b) (LT−2)^c",
from which we may deduce that "a" + "b" + "c" = 1 and "a" + "b" + 2"c" = 0, which leaves one exponent undetermined. This is to be expected since we have two fundamental dimensions L and T, and four parameters, with one equation.
If, however, we use directed length dimensions, then "v"x will be dimensioned as LxT−1, "v"y as LyT−1, "R" as Lx and "g" as LyT−2. The dimensional equation becomes: "Lx = (LxT−1)^a (LyT−1)^b (LyT−2)^c",
and we may solve completely as "a" = 1, "b" = 1 and "c" = −1, so that "R = C vx vy/g". The increase in deductive power gained by the use of directed length dimensions is apparent.
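A brief sketch of the bookkeeping (assuming SymPy; the symbol names and ordering are illustrative): with only L and T the linear system for the exponents is underdetermined, but over the directed dimensions Lx, Ly and T it has the unique solution quoted above.

```python
from sympy import Matrix

# Rows: Lx, Ly, T.  Columns: the exponent vectors of vx, vy, g.
directed = Matrix([
    [1, 0, 0],     # Lx
    [0, 1, 1],     # Ly
    [-1, -1, -2],  # T
])
target = Matrix([1, 0, 0])  # the range R has dimension Lx

# Unique solution a = 1, b = 1, c = -1, i.e. R is proportional to vx*vy/g.
print(directed.solve(target).T)  # -> Matrix([[1, 1, -1]])
```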
In a similar manner, it is sometimes found useful (e.g., in fluid mechanics and thermodynamics) to distinguish between mass as a measure of inertia (inertial mass), and mass as a measure of quantity (substantial mass). For example, consider the derivation of Poiseuille's Law. We wish to find the rate of mass flow of a viscous fluid through a circular pipe. Without drawing distinctions between inertial and substantial mass we may choose as the relevant variables
There are three fundamental dimensions, so the above five variables will yield two dimensionless groups, which we may take to be formula_63 and formula_64, and we may express the dimensional equation as
where "C" and "a" are undetermined constants. If we draw a distinction between inertial mass with dimension formula_66 and substantial mass with dimension formula_67, then mass flow rate and density will use substantial mass as the mass parameter, while the pressure gradient and coefficient of viscosity will use inertial mass. We now have four fundamental parameters, and one dimensionless constant, so that the dimensional equation may be written:
where now only "C" is an undetermined constant (found to be equal to formula_69 by methods outside of dimensional analysis). This equation may be solved for the mass flow rate to yield Poiseuille's law.
Huntley's extension has some serious drawbacks:
It is also often quite difficult to assign the Lx, Ly, Lz symbols to the physical variables involved in the problem of interest. Huntley invokes a procedure that involves the "symmetry" of the physical problem. This is often very difficult to apply reliably: it is unclear to what parts of the problem the notion of "symmetry" is being applied. Is it the symmetry of the physical body that forces are acting upon, or of the points, lines or areas at which forces are being applied? What if more than one body is involved with different symmetries?
Consider the spherical bubble attached to a cylindrical tube, where one wants the flow rate of air as a function of the pressure difference in the two parts. What are the Huntley extended dimensions of the viscosity of the air contained in the connected parts? What are the extended dimensions of the pressure of the two parts? Are they the same or different? These difficulties are responsible for the limited application of Huntley's addition to real problems.
Angles are, by convention, considered to be dimensionless variables. As an example, consider again the projectile problem in which a point mass is launched from the origin "(x,y)=(0,0)" at a speed "v" and angle "θ" above the "x"-axis, with the force of gravity directed along the negative "y"-axis. It is desired to find the range "R", at which point the mass returns to the "x"-axis. Conventional analysis will yield the dimensionless variable "π=R g/v^2", but offers no insight into the relationship between "R" and "θ".
Note that the orientational symbols 1x, 1y, 1z, together with the orientationless symbol 10, form a group (the Klein four-group or "Viergruppe"). In this system, scalars always have the same orientation as the identity element, independent of the "symmetry of the problem". Physical quantities that are vectors have the orientation expected: a force or a velocity in the z-direction has the orientation of 1z. For angles, consider an angle "θ" that lies in the z-plane. Form a right triangle in the z-plane with "θ" being one of the acute angles. The side of the right triangle adjacent to the angle then has an orientation 1x and the side opposite has an orientation 1y. Then, since tan("θ") is the ratio of the opposite side to the adjacent side, we conclude that an angle in the xy-plane must have an orientation 1y/1x = 1z, which is not unreasonable. Analogous reasoning forces the conclusion that sin("θ") has orientation 1z while cos("θ") has orientation 10. These are different, so one concludes (correctly), for example, that there are no solutions of physical equations that are of the form "a" cos("θ") + "b" sin("θ"), where "a" and "b" are real scalars. Note that an expression such as "sin(θ + π/2) = cos(θ)" is not dimensionally inconsistent since it is a special case of the sum of angles formula and should properly be written:
which for formula_73 and formula_74 yields formula_75. Physical quantities may be expressed as complex numbers (e.g. formula_76), which implies that the complex quantity has an orientation equal to that of the angle it is associated with (1z in the above example).
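As a rough illustration (not from the source) of how the orientational symbols multiply (the multiplication table referred to later in the text), a few lines of Python encode the Klein four-group, with '0' standing for the orientationless symbol 10:

```python
# Klein four-group of orientational symbols: '0' is the orientationless 10,
# 'x', 'y', 'z' stand for 1x, 1y, 1z.  Each symbol is its own inverse, and the
# product of two distinct axis symbols is the third.
def omul(a, b):
    if a == '0':
        return b
    if b == '0':
        return a
    if a == b:
        return '0'
    return ({'x', 'y', 'z'} - {a, b}).pop()

print(omul('x', 'y'))  # -> 'z' : an angle in the xy-plane is 1z-oriented
print(omul('z', 'z'))  # -> '0' : e.g. sin^2 of such an angle is orientationless
```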
The assignment of orientational symbols to physical quantities and the requirement that physical equations be orientationally homogeneous can actually be used in a way that is similar to dimensional analysis to derive a little more information about acceptable solutions of physical problems. In this approach, one sets up the dimensional equation and solves it as far as one can. If the lowest power of a physical variable is fractional, both sides of the solution are raised to a power such that all powers are integral, putting it into "normal form". The orientational equation is then solved to give a more restrictive condition on the unknown powers of the orientational symbols, arriving at a solution that is more complete than the one that dimensional analysis alone gives. Often the added information is that one of the powers of a certain variable is even or odd.
As an example, for the projectile problem, using orientational symbols, "θ", being in the xy-plane, will thus have dimension 1z, and the range of the projectile will be of the form "R = C v^a g^b θ^c".
Dimensional homogeneity will now correctly yield "a" = 2 and "b" = −1, and orientational homogeneity requires that "1z^c = 1z"; in other words, "c" must be an odd integer. In fact the required function of theta will be sin("θ")cos("θ"), which is a series of odd powers of "θ".
It is seen that the Taylor series of sin("θ") and cos("θ") are orientationally homogeneous using the above multiplication table, while expressions like cos("θ") + sin("θ") and exp("θ") are not, and are (correctly) deemed unphysical.
In orientational analysis, the unit of angle is considered to be a base unit, rather than dimensionless, which will require more careful specification of the units of physical variables. For example, the question of whether torque and energy have the same units is answered in the negative. Torque will have dimensions "ML2θ/T2" while energy will have units "ML2/T2", where "θ" is a unit of angular measure (radians, degrees, etc.). Since torque "τ" is "τ = r × F", which is proportional to sin("θ"), it can be seen that the units of the cross product of two physical vectors (i.e. pseudovectors) will be the product of the dimensions of the two physical vectors times an angular unit.
The dimensionless constants that arise in the results obtained, such as the C in the Poiseuille's Law problem and the κ in the spring problem discussed above, come from a more detailed analysis of the underlying physics and often arise from integrating some differential equation. Dimensional analysis itself has little to say about these constants, but it is useful to know that they very often have a magnitude of order unity. This observation can allow one to sometimes make "back of the envelope" calculations about the phenomenon of interest, and therefore be able to more efficiently design experiments to measure it, or to judge whether it is important, etc.
Paradoxically, dimensional analysis can be a useful tool even if all the parameters in the underlying theory are dimensionless, e.g., lattice models such as the Ising model can be used to study phase transitions and critical phenomena. Such models can be formulated in a purely dimensionless way. As we approach the critical point closer and closer, the distance over which the variables in the lattice model are correlated (the so-called correlation length, "ξ") becomes larger and larger. Now, the correlation length is the relevant length scale related to critical phenomena, so one can, e.g., surmise on "dimensional grounds" that the non-analytical part of the free energy per lattice site should be proportional to "1/ξ^d", where "d" is the dimension of the lattice.
It has been argued by some physicists, e.g., M. J. Duff, that the laws of physics are inherently dimensionless. The fact that we have assigned incompatible dimensions to Length, Time and Mass is, according to this point of view, just a matter of convention, borne out of the fact that before the advent of modern physics, there was no way to relate mass, length, and time to each other. The three independent dimensionful constants: "c", "ħ", and "G", in the fundamental equations of physics must then be seen as mere conversion factors to convert Mass, Time and Length into each other.
Just as in the case of critical properties of lattice models, one can recover the results of dimensional analysis in the appropriate scaling limit; e.g., dimensional analysis in mechanics can be derived by reinserting the constants "ħ", "c", and "G" (but we can now consider them to be dimensionless) and demanding that a nonsingular relation between quantities exists in the limit "ħ" → 0, "c" → ∞ and "G" → 0. In problems involving a gravitational field the latter limit should be taken such that the field stays finite.
Following are tables of commonly occurring expressions in physics, related to the dimensions of energy, momentum, and force.
If , where "c" is the speed of light and "ħ" is the reduced Planck constant, and a suitable fixed unit of energy is chosen, then all quantities of length "L", mass "M" and time "T" can be expressed (dimensionally) as a power of energy "E", because length, mass and time can be expressed using speed "v", action "S", and energy "E":
though speed and action are dimensionless ( and ) – so the only remaining quantity with dimension is energy. In terms of powers of dimensions:
This is particularly useful in particle physics and high energy physics, in which case the energy unit is the electron volt (eV). Dimensional checks and estimates become very simple in this system.
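A small numerical sketch of this bookkeeping (constants rounded; names illustrative): with "c" = "ħ" = 1, lengths and times carry dimension 1/E and masses carry dimension E, so SI values can be re-expressed in powers of GeV.

```python
HBAR_C_GEV_M = 1.97327e-16  # hbar*c expressed in GeV*m (approximate)
GEV_PER_KG = 5.60959e26     # 1 kg expressed in GeV via E = m c^2 (approximate)

def length_in_inverse_gev(metres):
    # With c = hbar = 1, a length has dimension 1/E.
    return metres / HBAR_C_GEV_M

def mass_in_gev(kilograms):
    # With c = 1, a mass has dimension E.
    return kilograms * GEV_PER_KG

print(length_in_inverse_gev(1e-15))  # a femtometre is about 5.07 GeV^-1
print(mass_in_gev(9.109e-31))        # the electron mass is about 5.11e-4 GeV
```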
However, if electric charges and currents are involved, another unit to be fixed is for electric charge, normally the electron charge "e" though other choices are possible.
Dimensional correctness as part of type checking has been studied since 1977.
Implementations for Ada and C++ were described in 1985 and 1988.
Kennedy's 1996 thesis describes an implementation in Standard ML, and later in F#.
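Kennedy's systems reject dimensionally inconsistent programs at compile time; as a loose, runtime-only analogue (a sketch, not Kennedy's implementation), one can tag each value with exponents of (M, L, T) and check them during arithmetic:

```python
class Quantity:
    """A value tagged with exponents of (M, L, T); arithmetic checks the dimensions."""

    def __init__(self, value, dims):
        self.value, self.dims = value, dims

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"dimension mismatch: {self.dims} vs {other.dims}")
        return Quantity(self.value + other.value, self.dims)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        tuple(a + b for a, b in zip(self.dims, other.dims)))

metre = Quantity(1.0, (0, 1, 0))
second = Quantity(1.0, (0, 0, 1))
speed = Quantity(3.0, (0, 1, -1))   # 3 m/s

print((speed * second).dims)        # (0, 1, 0): multiplying by a time yields a length
# speed + metre                     # would raise TypeError: dimension mismatch
```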
Griffioen's 2019 thesis extended Kennedy's Hindley–Milner type system to support Hart's matrices | https://en.wikipedia.org/wiki?curid=8267 |
Digital television
Digital television (DTV) is the transmission of television audiovisual signals using digital encoding, in contrast to the earlier analog television technology which used analog signals. At the time of its development it was considered an innovative advancement and represented the first significant evolution in television technology since color television in the 1950s. Modern digital television is transmitted in high definition (HDTV) with greater resolution than analog TV. It typically uses a widescreen aspect ratio (commonly 16:9) in contrast to the narrower format of analog TV. It makes more economical use of scarce radio spectrum space; it can transmit up to seven channels in the same bandwidth as a single analog channel, and provides many new features that analog television cannot. A transition from analog to digital broadcasting began around 2000. Different digital television broadcasting standards have been adopted in different parts of the world; below are the more widely used standards:
Digital television's roots have been tied very closely to the availability of inexpensive, high-performance computers. It was not until the 1990s that digital TV became a real possibility. Digital television was previously not practically feasible due to the impractically high bandwidth requirements of uncompressed digital video, requiring around 200 Mbit/s (25 MB/s) bit-rate for a standard-definition television (SDTV) signal, and over 1 Gbit/s for high-definition television (HDTV).
Digital TV became practically feasible in the early 1990s due to a major technological development, discrete cosine transform (DCT) video compression. DCT coding is a lossy compression technique that was first proposed for image compression by Nasir Ahmed in 1972, and was later adapted into a motion-compensated DCT video coding algorithm, for video coding standards such as the H.26x formats from 1988 onwards and the MPEG formats from 1991 onwards. Motion-compensated DCT video compression significantly reduced the amount of bandwidth required for a digital TV signal. DCT coding compressed down the bandwidth requirements of digital television signals to about a 34 Mbit/s bit-rate for SDTV and around 70–140 Mbit/s for HDTV while maintaining near-studio-quality transmission, making digital television a practical reality in the 1990s.
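As a rough, hedged illustration of why DCT coding achieves such reductions (a sketch assuming NumPy and SciPy are available; real broadcast codecs add quantization, entropy coding and motion compensation on top of this):

```python
import numpy as np
from scipy.fft import dctn, idctn

# A smooth 8x8 block of pixel values, like a patch of sky: its energy
# concentrates in a few low-frequency DCT coefficients.
x = np.arange(8, dtype=float)
block = np.add.outer(x, x) * 16

coeffs = dctn(block, norm='ortho')
mask = np.zeros_like(coeffs)
mask[:4, :4] = 1                     # keep only 16 of the 64 coefficients
approx = idctn(coeffs * mask, norm='ortho')

print(np.abs(block - approx).max())  # modest error despite discarding three quarters of the data
```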
A digital TV service was proposed in 1986 by Nippon Telegraph and Telephone (NTT) and the Ministry of Posts and Telecommunication (MPT) in Japan, where there were plans to develop an "Integrated Network System" service. However, it was not possible to practically implement such a digital TV service until the adoption of discrete cosine transform (DCT) video compression technology made it possible in the early 1990s.
In the mid-1980s, as Japanese consumer electronics firms forged ahead with the development of HDTV technology, and as the MUSE analog format was proposed by Japan's public broadcaster NHK as a worldwide standard, Japanese advancements were seen as pacesetters that threatened to eclipse U.S. electronics companies. Until June 1990, the Japanese MUSE standard—based on an analog system—was the front-runner among the more than 23 different technical concepts under consideration.
Between 1988 and 1991, several European organizations were working on DCT-based digital video coding standards for both SDTV and HDTV. The EU 256 project by the CMTT and ETSI, along with research by Italian broadcaster RAI, developed a DCT video codec that broadcast SDTV at a 34 Mbit/s bit-rate and near-studio-quality HDTV at about a 70–140 Mbit/s bit-rate. RAI demonstrated this with a 1990 FIFA World Cup broadcast in March 1990. An American company, General Instrument, also demonstrated the feasibility of a digital television signal in 1990. This led to the FCC being persuaded to delay its decision on an ATV standard until a digitally based standard could be developed.
In March 1990, when it became clear that a digital standard was feasible, the FCC made a number of critical decisions. First, the Commission declared that the new TV standard must be more than an enhanced analog signal, but be able to provide a genuine HDTV signal with at least twice the resolution of existing television images. Then, to ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being "simulcast" on different channels. The new ATV standard also allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements.
The final standard adopted by the FCC did not require a single standard for scanning formats, aspect ratios, or lines of resolution. This outcome resulted from a dispute between the consumer electronics industry (joined by some broadcasters) and the computer industry (joined by the film industry and some public interest groups) over which of the two scanning processes—interlaced or progressive—is superior. Interlaced scanning, which is used in televisions worldwide, scans even-numbered lines first, then odd-numbered ones. Progressive scanning, which is the format used in computers, scans lines in sequence, from top to bottom. The computer industry argued that progressive scanning is superior because it does not "flicker" in the manner of interlaced scanning. It also argued that progressive scanning enables easier connections with the Internet, and is more cheaply converted to interlaced formats than vice versa. The film industry also supported progressive scanning because it offers a more efficient means of converting filmed programming into digital formats. For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest quality pictures then (and currently) feasible, i.e., 1,080 lines per picture and 1,920 pixels per line. Broadcasters also favored interlaced scanning because their vast archive of interlaced programming is not readily compatible with a progressive format.
DirecTV in the U.S. launched the first commercial digital satellite platform in May 1994, using the Digital Satellite System (DSS) standard. Digital cable broadcasts were tested and launched in the U.S. in 1996 by TCI and Time Warner. The first digital terrestrial platform was launched in November 1998 as ONdigital in the United Kingdom, using the DVB-T standard.
Digital television supports many different picture formats defined by the broadcast television systems which are a combination of size and aspect ratio (width to height ratio).
With digital terrestrial television (DTT) broadcasting, the range of formats can be broadly divided into two categories: high definition television (HDTV) for the transmission of high-definition video and standard-definition television (SDTV). These terms by themselves are not very precise, and many subtle intermediate cases exist.
One of several different HDTV formats that can be transmitted over DTV is: 1280 × 720 pixels in progressive scan mode (abbreviated "720p") or 1920 × 1080 pixels in interlaced video mode ("1080i"). Each of these uses a 16:9 aspect ratio. HDTV cannot be transmitted over analog television channels because of channel capacity issues.
SDTV, by comparison, may use one of several different formats taking the form of various aspect ratios depending on the technology used in the country of broadcast. In terms of rectangular pixels, NTSC countries can deliver a 640 × 480 resolution in 4:3 and 854 × 480 in 16:9, while PAL can give 768 × 576 in 4:3 and 1024 × 576 in 16:9. However, broadcasters may choose to reduce these resolutions to reduce bit rate (e.g., many DVB-T channels in the United Kingdom use a horizontal resolution of 544 or 704 pixels per line).
Each commercial broadcasting terrestrial television DTV channel in North America is permitted to be broadcast at a bit rate up to 19 megabits per second. However, the broadcaster does not need to use this entire bandwidth for just one broadcast channel. Instead the broadcast can use the channel to include PSIP and can also subdivide across several video subchannels (a.k.a. feeds) of varying quality and compression rates, including non-video datacasting services that allow one-way high-bit-rate streaming of data to computers like National Datacast.
A broadcaster may opt to use a standard-definition (SDTV) digital signal instead of an HDTV signal, because current convention allows the bandwidth of a DTV channel (or "multiplex") to be subdivided into multiple digital subchannels, (similar to what most FM radio stations offer with HD Radio), providing multiple feeds of entirely different television programming on the same channel. This ability to provide either a single HDTV feed or multiple lower-resolution feeds is often referred to as distributing one's "bit budget" or multicasting. This can sometimes be arranged automatically, using a statistical multiplexer (or "stat-mux"). With some implementations, image resolution may be less directly limited by bandwidth; for example in DVB-T, broadcasters can choose from several different modulation schemes, giving them the option to reduce the transmission bit rate and make reception easier for more distant or mobile viewers.
There are several different ways to receive digital television. One of the oldest means of receiving DTV (and TV in general) is from terrestrial transmitters using an antenna (known as an "aerial" in some countries). This way is known as Digital terrestrial television (DTT). With DTT, viewers are limited to channels that have a terrestrial transmitter in range of their antenna.
Other ways have been devised to receive digital television. Among the most familiar to people are digital cable and digital satellite. In some countries where transmissions of TV signals are normally achieved by microwaves, digital MMDS is used. Other standards, such as Digital multimedia broadcasting (DMB) and DVB-H, have been devised to allow handheld devices such as mobile phones to receive TV signals. Another way is IPTV, that is receiving TV via Internet Protocol, relying on digital subscriber line (DSL) or optical cable line. Finally, an alternative way is to receive digital TV signals via the open Internet (Internet television), whether from a central streaming service or a P2P (peer-to-peer) system.
Some signals carry encryption and specify use conditions (such as "may not be recorded" or "may not be viewed on displays larger than 1 m in diagonal measure") backed up with the force of law under the World Intellectual Property Organization Copyright Treaty (WIPO Copyright Treaty) and national legislation implementing it, such as the U.S. Digital Millennium Copyright Act. Access to encrypted channels can be controlled by a removable smart card, for example via the Common Interface (DVB-CI) standard for Europe and via Point Of Deployment (POD) for IS or named differently CableCard.
Digital television signals must not interfere with each other, and they must also coexist with analog television until it is phased out.
The following table gives allowable signal-to-noise and signal-to-interference ratios for various interference scenarios. This table is a crucial regulatory tool for controlling the placement and power levels of stations. Digital TV is more tolerant of interference than analog TV, and this is the reason a smaller range of channels can carry an all-digital set of television stations.
People can interact with a DTV system in various ways. One can, for example, browse the electronic program guide. Modern DTV systems sometimes use a return path providing feedback from the end user to the broadcaster. This is possible with a coaxial or fiber optic cable, a dialup modem, or Internet connection but is not possible with a standard antenna.
Some of these systems support video on demand using a communication channel localized to a neighborhood rather than a city (terrestrial) or an even larger area (satellite).
1seg (1-segment) is a special form of ISDB. Each channel is further divided into 13 segments. Twelve of these segments are allocated for HDTV and the remaining segment, the 13th, is used for narrow-band receivers such as mobile television or cell phones.
DTV has several advantages over analog TV, the most significant being that digital channels take up less bandwidth, and the bandwidth needs are continuously variable, at a corresponding reduction in image quality depending on the level of compression as well as the resolution of the transmitted image. This means that digital broadcasters can provide more digital channels in the same space, provide high-definition television service, or provide other non-television services such as multimedia or interactivity. DTV also permits special services such as multiplexing (more than one program on the same channel), electronic program guides and additional languages (spoken or subtitled). The sale of non-television services may provide an additional revenue source.
Digital and analog signals react to interference differently. For example, common problems with analog television include ghosting of images, noise from weak signals, and many other potential problems which degrade the quality of the image and sound, although the program material may still be watchable. With digital television, the audio and video must be synchronized digitally, so reception of the digital signal must be very nearly complete; otherwise, neither audio nor video will be usable. Short of this complete failure, "blocky" video is seen when the digital signal experiences interference.
Analog TV began with monophonic sound, and later developed multichannel television sound with two independent audio signal channels. DTV allows up to 5 audio signal channels plus a subwoofer bass channel, with broadcasts similar in quality to movie theaters and DVDs.
DTV images have some picture defects that are not present on analog television or motion picture cinema, because of present-day limitations of bit rate and compression algorithms such as MPEG-2. This defect is sometimes referred to as "mosquito noise".
Because of the way the human visual system works, defects in an image that are localized to particular features of the image or that come and go are more perceptible than defects that are uniform and constant. However, the DTV system is designed to take advantage of other limitations of the human visual system to help mask these flaws, e.g. by allowing more compression artifacts during fast motion where the eye cannot track and resolve them as easily and, conversely, minimizing artifacts in still backgrounds that may be closely examined in a scene (since time allows).
Broadcast, cable, satellite, and Internet DTV operators control the picture quality of television signal encodes using sophisticated, neuroscience-based algorithms, such as the structural similarity (SSIM) video quality measurement tool, whose inventors were each awarded a Primetime Emmy because of its global use. Another tool, called Visual Information Fidelity (VIF), is a top-performing algorithm at the core of the VMAF video quality monitoring system used by Netflix, which accounts for about 35% of all U.S. bandwidth consumption.
Changes in signal reception from factors such as degrading antenna connections or changing weather conditions may gradually reduce the quality of analog TV. The nature of digital TV results in a perfectly decodable video initially, until the receiving equipment starts picking up interference that overpowers the desired signal or the signal becomes too weak to decode. Some equipment will show a garbled picture with significant damage, while other devices may go directly from perfectly decodable video to no video at all or lock up. This phenomenon is known as the digital cliff effect.
Block error may occur when transmission is done with compressed images. A block error in a single frame often results in black boxes in several subsequent frames, making viewing difficult.
For remote locations, distant channels that, as analog signals, were previously usable in a snowy and degraded state may, as digital signals, be perfectly decodable or may become completely unavailable. The use of higher frequencies will add to these problems, especially in cases where a clear line-of-sight from the receiving antenna to the transmitter is not available.
Television sets with only analog tuners cannot decode digital transmissions. When analog broadcasting over the air ceases, users of sets with analog-only tuners may use other sources of programming (e.g. cable, recorded media) or may purchase set-top converter boxes to tune in the digital signals. In the United States, a government-sponsored coupon was available to offset the cost of an external converter box. Analog switch-off (of full-power stations) took place on December 11, 2006 in The Netherlands, June 12, 2009 in the United States for full-power stations, and later for Class-A Stations on September 1, 2016, July 24, 2011 in Japan, August 31, 2011 in Canada, February 13, 2012 in Arab states, May 1, 2012 in Germany, October 24, 2012 in the United Kingdom and Ireland, October 31, 2012 in selected Indian cities, and December 10, 2013 in Australia. Completion of analog switch-off is scheduled for December 31, 2017 in the whole of India, December 2018 in Costa Rica and around 2020 for the Philippines.
Prior to the conversion to digital TV, analog television broadcast audio for TV channels on a separate FM carrier signal from the video signal. This FM audio signal could be heard using standard radios equipped with the appropriate tuning circuits.
However, after the transition of many countries to digital TV, no portable radio manufacturer has yet developed an alternative method for portable radios to play just the audio signal of digital TV channels. (DTV radio is not the same thing.)
The adoption of a broadcast standard incompatible with existing analog receivers has created the problem of large numbers of analog receivers being discarded during digital television transition. One superintendent of public works was quoted in 2009 saying: "some of the studies I've read in the trade magazines say up to a quarter of American households could be throwing a TV out in the next two years following the regulation change". In 2009, an estimated 99 million analog TV receivers were sitting unused in homes in the US alone and, while some obsolete receivers are being retrofitted with converters, many more are simply dumped in landfills where they represent a source of toxic metals such as lead as well as lesser amounts of materials such as barium, cadmium and chromium.
According to one campaign group, a CRT computer monitor or TV contains an average of of lead. According to another source, the lead in glass of a CRT varies from 1.08 lb to 11.28 lb, depending on screen size and type, but the lead is in the form of "stable and immobile" lead oxide mixed into the glass. It is claimed that the lead can have long-term negative effects on the environment if dumped as landfill. However, the glass envelope can be recycled at suitably equipped facilities. Other portions of the receiver may be subject to disposal as hazardous material.
Local restrictions on disposal of these materials vary widely; in some cases second-hand stores have refused to accept working color television receivers for resale due to the increasing costs of disposing of unsold TVs. Those thrift stores which are still accepting donated TVs have reported significant increases in good-condition working used television receivers abandoned by viewers who often expect them not to work after digital transition.
In Michigan in 2009, one recycler estimated that as many as one household in four would dispose of or recycle a TV set in the following year. The digital television transition, migration to high-definition television receivers and the replacement of CRTs with flatscreens are all factors in the increasing number of discarded analog CRT-based television receivers. | https://en.wikipedia.org/wiki?curid=8271 |
Declaration of Arbroath
The Declaration of Arbroath is the name usually given to a letter, dated 6 April 1320 at Arbroath, written by Scottish barons and addressed to Pope John XXII. It constituted King Robert I's response to his excommunication for disobeying the pope's demand in 1317 for a truce in the First War of Scottish Independence. The letter asserted the antiquity of the independence of the Kingdom of Scotland, denouncing English attempts to subjugate it.
Generally believed to have been written in Arbroath Abbey by Bernard of Kilwinning (or of Linton), then Chancellor of Scotland and Abbot of Arbroath, and sealed by fifty-one magnates and nobles, the letter is the sole survivor of three created at the time. The others were a letter from the King of Scots, Robert I, and a letter from four Scottish bishops which all made similar points. The "Declaration" was intended to assert Scotland's status as an independent, sovereign state and defend Scotland's right to use military action when unjustly attacked.
Submitted in Latin, the "Declaration" was little known until the late 17th century and is unmentioned by any of Scotland's major 16th century historians. In the 1680s the Latin text was printed for the first time and translated into English in the wake of the Glorious Revolution, after which time it was sometimes described as a declaration of independence.
The "Declaration" was part of a broader diplomatic campaign, which sought to assert Scotland's position as an independent kingdom, rather than its being a feudal land controlled by England's Norman kings, as well as lift the excommunication of Robert the Bruce. The pope had recognised Edward I of England's claim to overlordship of Scotland in 1305 and Bruce was excommunicated by the Pope for murdering John Comyn before the altar at Greyfriars Church in Dumfries in 1306. This excommunication was lifted in 1308; subsequently the pope threatened Robert with excommunication again if Avignon's demands in 1317 for peace with England were ignored. Warfare continued, and in 1320 John XXII again excommunicated Robert I. In reply, the "Declaration" was composed and signed and, in response, the papacy rescinded King Robert Bruce's excommunication and thereafter addressed him using his royal title.
The wars of Scottish independence began as a result of the deaths of King Alexander III of Scotland in 1286 and his heir the "Maid of Norway" in 1290, which left the throne of Scotland vacant and the subsequent succession crisis of 1290-1296 ignited a struggle among the Competitors for the Crown of Scotland, chiefly between the House of Comyn, the House of Balliol, and the House of Bruce who all claimed the crown. After July 1296's deposition of King John Balliol by Edward of England and then February 1306's killing of John Comyn III, Robert Bruce's rivals to the throne of Scotland were gone, and Robert was crowned king at Scone that year. Edward I, the "Hammer of Scots", died in 1307; his son and successor Edward II did not renew his father's campaigns in Scotland. In 1309 a parliament held at St Andrews acknowledged Robert's right to rule, received emissaries from the Kingdom of France recognising the Bruce's title, and proclaimed the independence of the kingdom from England.
By 1314 only Edinburgh, Berwick-upon-Tweed, Roxburgh, and Stirling remained in English hands. In June 1314 the Battle of Bannockburn had secured Robert Bruce's position as King of Scots; Stirling, the Central Belt, and much of Lothian came under Robert's control while the defeated Edward II's power on escaping to England via Berwick weakened under the sway of his cousin Henry, Earl of Lancaster. King Robert was thus able to consolidate his power, and sent his brother Edward Bruce to claim the Kingdom of Ireland in 1315 with an army landed in Ulster the previous year with the help of Gaelic lords from the Isles. Edward Bruce died in 1318 without achieving success, but the Scots campaigns in Ireland and in northern England were intended to press for the recognition of Robert's crown by King Edward. At the same time, it undermined the House of Plantagenet's claims to overlordship of the British Isles and halted the Plantagenets' effort to absorb Scotland as had been done in Ireland and Wales. Thus were the Scots nobles confident in their letters to Pope John of the distinct and independent nature of Scotland's kingdom; the "Declaration of Arbroath" was one such. According to historian David Crouch, "The two nations were mutually hostile kingdoms and peoples, and the ancient idea of Britain as an informal empire of peoples under the English king's presidency was entirely dead."
The text makes claims about the ancient history of Scotland and especially the "Scoti", forbears of the Scots, who the "Declaration" claims originated in "Scythia Major" and migrated via Spain to Britain, dating their migration to "1,200 years from the Israelite people's crossing of the Red Sea". The "Declaration" describes how the Scots had "thrown out the Britons and completely destroyed the Picts", resisted the invasions of "the Norse, the Danes and the English", and "held itself ever since, free from all slavery". It then claims that in the Kingdom of Scotland, "one hundred and thirteen kings have reigned of their own Blood Royal, without interruption by foreigners". The text compares Robert Bruce with the Biblical warriors Judas Maccabeus and Joshua.
The "Declaration" made a number of points: that Edward I of England had unjustly attacked Scotland and perpetrated atrocities; that Robert the Bruce had delivered the Scottish nation from this peril; and, most controversially, that the independence of Scotland was the prerogative of the Scottish people, rather than the King of Scots. (However this should be taken in the context of the time - ‘Scottish People’ refers to the Scottish nobility, rather than commoners.) In fact it stated that the nobility would choose someone else to be king if Bruce proved to be unfit in maintaining Scotland's independence.
Some have interpreted this last point as an early expression of 'popular sovereignty' – that government is contractual and that kings can be chosen by the community rather than by God alone. Modern Scottish nationalists point to the "Declaration" as evidence of the long-term persistence of the Scots as a distinct national community, giving a very early date for the emergence of nationalism. However, "the overwhelming majority of academics challenge this vision. Scholars point out that definitions change with time. The meaning ascribed to words similar to nation during the ancient and medieval periods was often quite different than it is today."
It has also been argued that the "Declaration" was not a statement of popular sovereignty (and that its signatories would have had no such concept) but a statement of royal propaganda supporting Bruce's faction. A justification had to be given for the rejection of King John Balliol in whose name William Wallace and Andrew de Moray had rebelled in 1297. The reason given in the "Declaration" is that Bruce was able to defend Scotland from English aggression whereas, by implication, King John could not.
Whatever the true motive, the idea of a contract between King and people was advanced to the Pope as a justification for Bruce's coronation whilst John de Balliol still lived in Papal custody.
For the full text in Latin and a translation in English, See on WikiSource.
There are 39 names—eight earls and thirty-one barons—at the start of the document, all of whom may have had their seals appended, probably over the space of some weeks and months, with nobles sending in their seals to be used. On the extant copy of the "Declaration" there are only 19 seals, and of those 19 people only 12 are named within the document. It is thought likely that at least 11 more seals than the original 39 might have been appended. The "Declaration" was then taken to the papal court at Avignon by Bishop Kininmund, Sir Adam Gordon and Sir Odard de Maubuisson.
The Pope heeded the arguments contained in the "Declaration", influenced by the offer of support from the Scots for his long-desired crusade if they no longer had to fear English invasion. He exhorted Edward II in a letter to make peace with the Scots, but the following year was again persuaded by the English to take their side and issued six bulls to that effect.
Eight years later, on 1 March 1328 the new English king, Edward III signed a peace treaty between Scotland and England, the Treaty of Edinburgh-Northampton. In this treaty, which was in effect for five years until 1333, Edward renounced all English claims to Scotland. Eight months later, in October 1328, the interdict on Scotland, and the excommunication of its king, were removed by the Pope.
The original copy of the "Declaration" that was sent to Avignon is lost. The only existing manuscript copy of the "Declaration" survives among Scotland's state papers, measuring 540mm wide by 675mm long (including the seals), it is held by the National Archives of Scotland in Edinburgh, a part of the National Records of Scotland.
The most widely known English language translation was made by Sir James Fergusson, formerly Keeper of the Records of Scotland, from text that he reconstructed using this extant copy and early copies of the original draft.
G. W. S. Barrow has shown that one passage in particular, often quoted from the Fergusson translation, was carefully written using different parts of "The Conspiracy of Catiline" by the Roman author, Sallust (86–35 BC) as the direct source:
Listed below are the signatories of the Declaration of Arbroath in 1320.
The letter itself is written in Latin. It uses the Latin versions of the signatories' titles, and in some cases, the spelling of names has changed over the years. This list generally uses the titles of the signatories' Wikipedia biographies.
In addition, the names of the following do not appear in the document's text, but their names are written on seal tags and their seals are present:
In 1998 former majority leader Trent Lott succeeded in instituting an annual "National Tartan Day" on 6 April by resolution of the United States Senate. US Senate Resolution 155 of 10 November 1997 states that "the Declaration of Arbroath, the Scottish Declaration of Independence, was signed on April 6, 1320 and the American Declaration of Independence was modeled [sic] on that inspirational document". However, although this influence is accepted by some historians, it is disputed by others.
In 2016 the Declaration of Arbroath was placed on the UK Memory of the World Register, part of UNESCO's Memory of the World Programme.
2020 is the 700th anniversary of the Declaration of Arbroath's composition; an "Arbroath 2020" festival was arranged but postponed due to the COVID-19 pandemic. The National Museum of Scotland in Edinburgh planned to display the document to the public for the first time in fifteen years. | https://en.wikipedia.org/wiki?curid=8274 |
Digital data
Digital data, in information theory and information systems, is the discrete, discontinuous representation of information or works. Numbers and letters are commonly used representations.
Digital data can be contrasted with analog signals which behave in a continuous manner, and with continuous functions such as sounds, images, and other measurements.
The word "digital" comes from the same source as the words digit and "digitus" (the Latin word for "finger"), as fingers are often used for discrete counting. Mathematician George Stibitz of Bell Telephone Laboratories used the word "digital" in reference to the fast electric pulses emitted by a device designed to aim and fire anti-aircraft guns in 1942. The term is most commonly used in computing and electronics, especially where real-world information is converted to binary numeric form as in digital audio and digital photography.
Since symbols (for example, alphanumeric characters) are not continuous, representing symbols digitally is rather simpler than conversion of continuous or analog information to digital. Instead of sampling and quantization as in analog-to-digital conversion, such techniques as polling and encoding are used.
A symbol input device usually consists of a group of switches that are polled at regular intervals to see which switches are switched. Data will be lost if, within a single polling interval, two switches are pressed, or a switch is pressed, released, and pressed again. This polling can be done by a specialized processor in the device to prevent burdening the main CPU. When a new symbol has been entered, the device typically sends an interrupt, in a specialized format, so that the CPU can read it.
For devices with only a few switches (such as the buttons on a joystick), the status of each can be encoded as bits (usually 0 for released and 1 for pressed) in a single word. This is useful when combinations of key presses are meaningful, and is sometimes used for passing the status of modifier keys on a keyboard (such as shift and control). But it does not scale to support more keys than the number of bits in a single byte or word.
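A minimal sketch (illustrative names, not a specific device's protocol) of packing a few switch states into one word, one bit per switch:

```python
# One bit per switch, 1 = pressed, so combinations such as Shift+Control stay visible.
BUTTONS = ['fire', 'jump', 'shift', 'control']

def encode(pressed):
    word = 0
    for bit, name in enumerate(BUTTONS):
        if name in pressed:
            word |= 1 << bit
    return word

print(bin(encode({'shift', 'control'})))  # -> 0b1100: the two modifier bits are set
```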
Devices with many switches (such as a computer keyboard) usually arrange these switches in a scan matrix, with the individual switches on the intersections of x and y lines. When a switch is pressed, it connects the corresponding x and y lines together. Polling (often called scanning in this case) is done by activating each x line in sequence and detecting which y lines then have a signal, thus which keys are pressed. When the keyboard processor detects that a key has changed state, it sends a signal to the CPU indicating the scan code of the key and its new state. The symbol is then encoded, or converted into a number, based on the status of modifier keys and the desired character encoding.
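The scanning loop itself is short; the following hedged sketch (the "matrix_read" hardware-access function is hypothetical) activates each x line in turn and samples the y lines to locate pressed keys:

```python
def scan(matrix_read, rows, cols):
    """matrix_read(row, col) -> True if the switch at that x/y intersection is closed."""
    pressed = []
    for r in range(rows):        # activate one x (row) line at a time
        for c in range(cols):    # sample every y (column) line
            if matrix_read(r, c):
                pressed.append((r, c))
    return pressed

# Example: a 4x4 matrix with the key at row 2, column 1 held down.
print(scan(lambda r, c: (r, c) == (2, 1), 4, 4))  # -> [(2, 1)]
```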
A custom encoding can be used for a specific application with no loss of data. However, using a standard encoding such as ASCII is problematic if a symbol such as 'ß' needs to be converted but is not in the standard.
It is estimated that in the year 1986 less than 1% of the world's technological capacity to store information was digital and in 2007 it was already 94%. The year 2002 is assumed to be the year when humankind was able to store more information in digital than in analog format (the "beginning of the digital age").
Digital data come in these three states: data at rest, data in transit and data in use. The confidentiality, integrity and availability have to be managed during the entire lifecycle from 'birth' to the destruction of the data.
All digital information possesses common properties that distinguish it from analog data with respect to communications:
Even though digital signals are generally associated with the binary electronic digital systems used in modern electronics and computing, digital systems are actually ancient, and need not be binary or electronic. | https://en.wikipedia.org/wiki?curid=8276 |
Demon
A demon is a supernatural being, typically associated with evil, prevalent historically in religion, occultism, literature, fiction, mythology, and folklore; as well as in media such as comics, video games, movies and television series.
The original Greek word "daimon" does not carry negative connotations. The Ancient Greek word "daimōn" denotes a spirit or divine power, much like the Latin "genius" or "numen". The Greek conception of a "daimōn" notably appears in the works of Plato, where it describes the divine inspiration of Socrates.
In Ancient Near Eastern religions and in the Abrahamic traditions, including ancient and medieval Christian demonology, a demon is considered a harmful spiritual entity which may cause demonic possession, calling for an exorcism.
In Western occultism and Renaissance magic, which grew out of an amalgamation of Greco-Roman magic, Jewish Aggadah and Christian demonology, a demon is believed to be a spiritual entity that may be conjured and controlled.
The Ancient Greek word "daemon" denotes a spirit or divine power, much like the Latin "genius" or "numen". "Daimōn" most likely came from the Greek verb "daiesthai" (to divide, distribute). The Greek conception of a "daimōn" notably appears in the works of Plato, where it describes the divine inspiration of Socrates. The original Greek word "daimon" does not carry the negative connotation initially understood by implementation of the Koine ("daimonion"), and later ascribed to any cognate words sharing the root.
The Greek terms do not have any connotations of evil or malevolence. In fact, "eudaimonia" (literally "good-spiritedness") means happiness. By the early Roman Empire, cult statues were seen, by pagans and their Christian neighbors alike, as inhabited by the numinous presence of the gods: "Like pagans, Christians still sensed and saw the gods and their power, and as something, they had to assume, lay behind it, by an easy traditional shift of opinion they turned these pagan "daimones" into malevolent 'demons', the troupe of Satan... Far into the Byzantine period Christians eyed their cities' old pagan statuary as a seat of the demons' presence. It was no longer beautiful, it was infested." The term had first acquired its negative connotations in the Septuagint translation of the Hebrew Bible into Greek, which drew on the mythology of ancient Semitic religions. This was then inherited by the Koine text of the New Testament. The Western medieval and neo-medieval conception of a "demon" derives seamlessly from the ambient popular culture of Late Antiquity. The Hellenistic "daemon" eventually came to include many Semitic and Near Eastern gods as evaluated by Christianity.
The supposed existence of demons remains an important concept in many modern religions and occultist traditions. Demons are still feared largely due to their alleged power to possess living creatures. In the contemporary Western occultist tradition (perhaps epitomized by the work of Aleister Crowley), a demon (such as Choronzon, which is Crowley's interpretation of the so-called 'Demon of the Abyss') is a useful metaphor for certain inner psychological processes (inner demons), though some may also regard it as an objectively real phenomenon. Some scholars believe that large portions of the demonology (see Asmodai) of Judaism, a key influence on Christianity and Islam, originated from a later form of Zoroastrianism, and were transferred to Judaism during the Persian era.
Both deities and demons can act as intermediaries to deliver messages to humans. Thus they share some resemblance to the Greek daimonion. The exact definition of "demon" in Egyptology posed a major problem for modern scholarship, since the borders between a deity and a demon are sometimes blurred and the ancient Egyptian language lacks a term for the modern English "demon". However, magical writings indicate that ancient Egyptians acknowledged the existence of malevolent demons by highlighting the demon names with red ink. Demons in this culture appeared to be subordinative and related to a specific deity, yet they may have occasionally acted independent from the divine will. The existence of demons can be related to the realm of chaos, beyond the created world. But even this negative connotation cannot be denied in light of the magical texts. The role of demons in relation to the human world remains ambivalent and largely depends on context.
Ancient Egyptian demons can be divided into two classes: "guardians" and "wanderers." "Guardians" are tied to a specific place; their demonic activity is topographically defined and their function can be benevolent towards those who have the secret knowledge to face them. Demons protecting the underworld may prevent human souls from entering paradise. Only by knowing right charms is the deceased able to enter the "Halls of Osiris". Here, the aggressive nature of the guardian demons is motivated by the need to protect their abodes and not by their evil essence. Accordingly, demons guarded sacred places or the gates to the netherworld. During the Ptolemaic and Roman period, the guardians shifted towards the role of Genius loci and they were the focus of local and private cults.
The "wanderers" are associated with possession, mental illness, death and plagues. Many of them serve as executioners for the major deities, such as Ra or Osiris, when ordered to punish humans on earth or in the netherworld. Wanderers can also be agents of chaos, arising from the world beyond creation to bring about misfortune and suffering without any divine instructions, led only by evil motivations. The influences of the wanderers can be warded off and kept at the borders on the human world by the use of magic, but they can never be destroyed. A sub-category of "wanderers" are nightmare demons, which were believed to cause nightmares by entering a human body.
The ancient Mesopotamians believed that the underworld was home to many demons, which are sometimes referred to as "offspring of "arali"". These demons could sometimes leave the underworld and terrorize mortals on earth. One class of demons that were believed to reside in the underworld were known as "galla"; their primary purpose appears to have been to drag unfortunate mortals back to Kur. They are frequently referenced in magical texts, and some texts describe them as being seven in number. Several extant poems describe the "galla" dragging the god Dumuzid into the underworld. Like other demons, however, "galla" could also be benevolent and, in a hymn from King Gudea of Lagash (c. 2144–2124 BCE), a minor god named Ig-alima is described as "the great "galla" of Girsu".
Lamashtu was a demonic goddess with the "head of a lion, the teeth of a donkey, naked breasts, a hairy body, hands stained (with blood?), long fingers and fingernails, and the feet of Anzû." She was believed to feed on the blood of human infants and was widely blamed as the cause of miscarriages and cot deaths. Although Lamashtu has traditionally been identified as a demoness, the fact that she could cause evil on her own without the permission of other deities strongly indicates that she was seen as a goddess in her own right. Mesopotamian peoples protected against her using amulets and talismans. She was believed to ride in her boat on the river of the underworld and she was associated with donkeys. She was believed to be the daughter of An.
Pazuzu is a demonic god who was well-known to the Babylonians and Assyrians throughout the first millennium BCE. He is shown with "a rather canine face with abnormally bulging eyes, a scaly body, a snake-headed penis, the talons of a bird and usually wings." He was believed to be the son of the god Hanbi. He was usually regarded as evil, but he could also sometimes be a beneficent entity who protected against winds bearing pestilence and he was thought to be able to force Lamashtu back to the underworld. Amulets bearing his image were positioned in dwellings to protect infants from Lamashtu and pregnant women frequently wore amulets with his head on them as protection from her.
Šul-pa-e's name means "youthful brilliance", but he was not envisioned as a youthful god. According to one tradition, he was the consort of Ninhursag, a tradition which contradicts the usual portrayal of Enki as Ninhursag's consort. In one Sumerian poem, offerings are made to Šul-pa-e in the underworld and, in later mythology, he was one of the demons of the underworld.
According to the Jewish Encyclopedia, "In Chaldean mythology the seven evil deities were known as "shedu", storm-demons, represented in ox-like form." They were represented as winged bulls, derived from the colossal bulls used as protective jinn of royal palaces.
Regarding the existence or non-existence of demons ("shedim" or "se'irim"), there are differing opinions within Judaism. There are "practically nil" roles assigned to demons in the Hebrew Bible. In Judaism today, beliefs in "demons" or "evil spirits" are either "midot hasidut" (Hebr. for "customs of the pious"), and therefore not halachah, or notions based on superstition that are non-essential, non-binding parts of Judaism, and therefore not normative Jewish practice. That is to say, Jews are not obligated to believe in the existence of "shedim", as the posek rabbi David Bar-Hayim points out.
The Tanakh mentions two classes of demonic spirits, the "se'irim" and the "shedim". The word "shedim" appears in two places in the Tanakh. The "se'irim" are mentioned once, probably a recalling of Assyrian demons in the shape of goats. The "shedim", in turn, are not pagan demigods, but the foreign gods themselves. Both entities appear in a scriptural context of animal or child sacrifice to "non-existent" false gods.
From Chaldea, the term "shedu" traveled to the Israelites. The writers of the Tanakh applied the word as a dialogism to Canaanite deities.
There are indications that demons in popular Hebrew mythology were believed to come from the nether world. Various diseases and ailments were ascribed to them, particularly those affecting the brain and those of internal nature. Examples include catalepsy, headache, epilepsy and nightmares. There also existed a demon of blindness, "Shabriri" (lit. "dazzling glare") who rested on uncovered water at night and blinded those who drank from it.
Demons supposedly entered the body and caused the disease while overwhelming or "seizing" the victim. To cure such diseases, it was necessary to draw out the evil demons by certain incantations and talismanic performances, at which the Essenes excelled. Josephus, who spoke of demons as "spirits of the wicked which enter into men that are alive and kill them", but which could be driven out by a certain root, witnessed such a performance in the presence of the Emperor Vespasian and ascribed its origin to King Solomon. In mythology, there were few defences against Babylonian demons. The mythical mace Sharur had the power to slay demons such as Asag, a legendary gallu or edimmu of hideous strength.
In the Jerusalem Talmud, notions of "shedim" ("demons" or "spirits") are almost unknown or occur only very rarely, whereas in the Babylonian Talmud there are many references to "shedim" and magical incantations. The existence of "shedim" in general was not questioned by most of the Babylonian Talmudists. As a consequence of the rise of influence of the Babylonian Talmud over that of the Jerusalem Talmud, later rabbis in general took the existence of "shedim" as fact, nor did most of the medieval thinkers question their reality. However, rationalists such as Maimonides, Saadia Gaon, and Abraham ibn Ezra explicitly denied their existence and completely rejected concepts of demons, evil spirits, negative spiritual influences, and attaching or possessing spirits. Their point of view eventually became mainstream Jewish understanding.
In Kabbalah, demons are regarded as a necessary part of the divine emanation in the material world and a byproduct of human sin (Qliphoth). However, spirits such as the "shedim" may also be benevolent and were used in kabbalistic ceremonies (as with the "golem" of Rabbi Yehuda Loevy), while malevolent "shedim" ("Mazikin", from the root meaning "to damage") were often credited with possession.
Aggadic tales from the Persian tradition describe the "shedim", the "mazziḳim" ("harmers"), and the "ruḥin" ("spirits"). There were also "lilin" ("night spirits"), "ṭelane" ("shade", or "evening spirits"), "ṭiharire" ("midday spirits"), and "ẓafrire" ("morning spirits"), as well as the "demons that bring famine" and "such as cause storm and earthquake". According to some aggadic stories, demons were under the dominion of a king or chief, either Asmodai or, in the older Aggadah, Samael ("the angel of death"), who killed via poison. Stories in the fashion of this kind of folklore never became an essential feature of Jewish theology. Although occasionally an angel is called "satan" in the Babylonian Talmud, this does not refer to a demon: "Stand not in the way of an ox when coming from the pasture, for Satan dances between his horns".
To the Qumran community during the Second Temple period this apotropaic prayer was assigned, stating: "And, I the Sage, declare the grandeur of his radiance in order to frighten and terri[fy] all the spirits of the ravaging angels and the bastard spirits, demons, Liliths, owls" ("Dead Sea Scrolls", "Songs of the Sage," Lines 4–5).
In the Dead Sea Scrolls, there exists a fragment entitled "Curses of Belial" ("Curses of Belial (Dead Sea Scrolls, 394, 4Q286(4Q287, fr. 6)=4QBerakhot)"). This fragment contains rich language that reflects the sentiment the Qumran community held towards Belial. In many ways the text shows how these people thought Belial influenced sin, through the way they address him and speak of him. By addressing "Belial and all his guilty lot" (4Q286:2), they make it clear that he is not only impious, but also guilty of sins. Informing this state of uncleanliness are both his "hostile" and "wicked design" (4Q286:3,4). Through this design, Belial poisons the thoughts of those who are not necessarily sinners. Thus a dualism is born between those inclined to be wicked and those who are not. It is clear that Belial directly influences sin by the mention of "abominable plots" and "guilty inclination" (4Q286:8,9). These are both mechanisms by which Belial advances his evil agenda, which the Qumran community has exposed and is calling upon God to protect them from. There is a deep sense of fear that Belial will "establish in their heart their evil devices" (4Q286:11,12). This sense of fear is the stimulus for the prayer in the first place. Without the worry and potential of falling victim to Belial's demonic sway, the Qumran people would never have felt impelled to craft a curse. This very fact illuminates the power Belial was believed to hold over mortals, and the conviction that sin was a temptation that must stem from an impure origin.
In Jubilees 1:20, Belial's appearance continues to support the notion that sin is a direct product of his influence. Moreover, Belial's presence acts as a placeholder for all negative influences or those that would potentially interfere with God's will and a pious existence. Similarly to the "gentiles ... [who] cause them to sin against you" (Jubilees 1:19), Belial is associated with a force that drives one away from God. Coupled in this plea for protection against foreign rule, in this case the Egyptians, is a plea for protection from "the spirit of Belial" (Jubilees 1:19). Belial's tendency is to "ensnare [you] from every path of righteousness" (Jubilees 1:19). This phrase is intentionally vague, allowing room for interpretation. Everyone, in one way or another, finds themselves straying from the path of righteousness, and by pawning this transgression off on Belial, they make him a scapegoat for all misguidance, no matter what the cause. By associating Belial with all sorts of misfortune and negative external influence, the Qumran people are thus absolved of the sins they commit.
Belial's presence is found throughout the War Scrolls, located in the Dead Sea Scrolls, and is established as the force occupying the opposite end of the spectrum from God. In Col. I, verse 1, the very first line of the document, it is stated that "the first attack of the Sons of Light shall be undertaken against the forces of the Sons of Darkness, the army of Belial" (1Q33;1:1). This dichotomy sheds light on the negative connotations that Belial held at the time. Where God and his Sons of Light are forces that protect and promote piety, Belial and his Sons of Darkness cater to the opposite, instilling the desire to sin and encouraging destruction. This opposition is only reinforced later in the document; it continues to read that the "holy ones" will "strike a blow at wickedness", ultimately resulting in the "annihilation of the Sons of Darkness" (1Q33:1:13). This epic battle between good and evil is described in abstract terms, yet it is also applicable to everyday life and serves as a lens through which the Qumran see the world. Every day the Sons of Light battle evil and call upon God to help them overcome it in ways small and large.
Belial's influence is not taken lightly. In Col. XI, verse 8, the text depicts God conquering the "hordes of Belial" (1Q33;11:8). This defeat is indicative of God's power over Belial and his forces of temptation. However, the fact that Belial is the leader of hordes is a testament to how persuasive he can be. If Belial were obviously an arbiter of wrongdoing and blatantly in the wrong, he would not be able to amass an army. This fact serves as a warning message, reasserting God's strength while also making the breadth of Belial's prowess extremely clear. Belial's "council is to condemn and convict", so the Qumran community feels strongly that its people are not only aware of his purpose, but also equipped to combat his influence (1Q33;13:11).
In the Damascus Document, Belial also makes a prominent appearance, being established as a source of evil and an origin of several types of sin. In Column 4, the first mention of Belial reads: "Belial shall be unleashed against Israel" (4Q266). This phrase can be interpreted in myriad ways. Belial is characterized in a wild and uncontrollable fashion, making him seem more dangerous and unpredictable. The notion of being unleashed suggests that once he is free to roam, he is unstoppable and able to carry out his agenda uninhibited. The passage then goes on to enumerate the "three nets" (4Q266;4:16) by which Belial captures his prey and forces them to sin. "Fornication ..., riches ..., [and] the profanation of the temple" (4Q266;4:17,18) make up the three nets. These three temptations were the agents by which people were driven to sin, so the Qumran people subsequently crafted the nets of Belial to rationalize why these specific temptations were so toxic. Later in Column 5, Belial is mentioned again as one of "the removers of bound who led Israel astray" (4Q266;5:20). This statement is a clear display of Belial's influence over man regarding sin. The passage goes on to state: "they preached rebellion against ... God" (4Q266;5:21,22). Belial's purpose is to undermine the teachings of God, and he achieves this by imparting his nets, or the incentive to sin, on humans.
In the "War of the Sons of Light Against the Sons of Darkness", Belial controls scores of demons, which are specifically allotted to him by God for the purpose of performing evil. Belial, despite his malevolent disposition, is considered an angel.
Demonic entities in the Old Testament of the Christian Bible are of two classes: the "satyrs" or "shaggy goats" (from Hebr. "se'irim", "hairy beings", "he-goats" or "fauns") and the "demons" (from Hebr. "shedim", first translated as "daimonion", "daemon").
The term "demon" (from the Koine Greek δαιμόνιον "daimonion") appears 63 times in the New Testament of the Christian Bible, mostly if not all relating to occurrences of possession of individuals and exorcism by Jesus.
Demons are sometimes included into biblical interpretation. In the story of Passover, the Bible tells the story as "the Lord struck down all the firstborn in Egypt" (Exodus 12:21–29). In the Book of Jubilees, which is considered canonical only by the Ethiopian Orthodox Church, this same event is told slightly differently: "All the powers of [the demon] Mastema had been let loose to slay all the first-born in the land of Egypt...And the powers of the Lord did everything according as the Lord commanded them" (Jubilees 49:2–4).
In the Genesis flood narrative the author explains how God noticed "how corrupt the earth had become, for all the people on earth had corrupted their ways" (Genesis 6:12). In Jubilees the sins of man are attributed to "the unclean demons [who] began to lead astray the children of the sons of Noah, and to make to err and destroy them" (Jubilees 10:1). In Jubilees Mastema questions the loyalty of Abraham and tells God to "bid him offer him as a burnt offering on the altar, and Thou wilt see if he will do this command" (Jubilees 17:16). The discrepancy between the story in Jubilees and the story in Genesis 22 exists with the presence of Mastema. In Genesis, God tests the will of Abraham merely to determine whether he is a true follower; in Jubilees, however, Mastema has an agenda behind promoting the sacrifice of Abraham's son, "an even more demonic act than that of the Satan in Job." In Jubilees, Mastema, an angel tasked with tempting mortals into sin and iniquity, requests that God give him a tenth of the spirits of the children of the watchers, demons, in order to aid the process. These demons are passed into Mastema's authority, where once again, an angel is in charge of demonic spirits.
The sources of demonic influence were thought to originate from the Watchers or Nephilim, who are first mentioned in Genesis 6 and are the focus of 1 Enoch Chapters 1–16, and also in Jubilees 10. The Nephilim were seen as the source of the sin and evil on earth because they are referenced in Genesis 6:4 before the story of the Flood. In Genesis 6:5, God sees evil in the hearts of men. The passage states, "the wickedness of humankind on earth was great", and that "Every inclination of the thoughts of their hearts was only continually evil" (Genesis 6:5). The mention of the Nephilim in the preceding sentence connects the spread of evil to the Nephilim. Enoch tells a very similar story to Genesis 6:4–5, and provides further description of the story connecting the Nephilim to the corruption of humans. In Enoch, sin originates when angels descend from heaven and fornicate with women, birthing giants as tall as 300 cubits. The giants and the angels' departure from Heaven and mating with human women are also seen as the source of sorrow and sadness on Earth. The book of Enoch shows that these fallen angels can lead humans to sin through direct interaction or through providing forbidden knowledge. In Enoch, Semyaz leads the angels to mate with women. Angels mating with humans is against God's commands and is a cursed action, resulting in the wrath of God coming upon Earth. Azazel indirectly influences humans to sin by teaching them divine knowledge not meant for humans. Asael brings down the "stolen mysteries" (Enoch 16:3). Asael gives the humans weapons, which they use to kill each other. Humans are also taught other sinful actions such as beautification techniques, alchemy, astrology and how to make medicine (considered forbidden knowledge at the time). Demons originate from the evil spirits of the giants that are cursed by God to wander the earth. These spirits are stated in Enoch to "corrupt, fall, be excited, and fall upon the earth, and cause sorrow" (Enoch 15:11).
The Book of Jubilees conveys that sin occurs when Cainan accidentally transcribes astrological knowledge used by the Watchers (Jubilees 8). This differs from Enoch in that it does not place blame on the Angels. However, in Jubilees 10:4 the evil spirits of the Watchers are discussed as evil and still remain on earth to corrupt the humans. God binds only 90 percent of the Watchers and destroys them, leaving 10 percent to be ruled by Mastema. Because the evil in humans is great, only 10 percent would be needed to corrupt and lead humans astray. These spirits of the giants are also referred to as "the bastards" in the apotropaic prayer Songs of the Sage, which lists the names of demons the narrator hopes to expel.
In Christianity, demons are corrupted spirits carrying out the execution of Satan's desires. They are generally regarded as three different types of spirits:
Often deities of other religions are interpreted or identified as such "demons" (from the Greek Old Testament δαιμόνιον "daimonion"). The evolution of the Christian Devil and pentagram are examples of early rituals and images that showcase evil qualities, as seen by the Christian churches.
Since Early Christianity, demonology has developed from a simple acceptance of demons to a complex study that has grown from the original ideas taken from Jewish demonology and Christian scriptures. Christian demonology is studied in depth within the Roman Catholic Church, although many other Christian churches affirm and discuss the existence of demons.
Building upon the few references to "daemons" in the New Testament, especially the poetry of the Book of Revelation, Christian writers of apocrypha from the 2nd century onwards created a more complicated tapestry of beliefs about "demons" that was largely independent of Christian scripture.
The contemporary Roman Catholic Church unequivocally teaches that angels and demons are real beings rather than just symbolic devices. The Catholic Church has a cadre of officially sanctioned exorcists which perform many exorcisms each year. The exorcists of the Catholic Church teach that demons attack humans continually but that afflicted persons can be effectively healed and protected either by the formal rite of exorcism, authorized to be performed only by bishops and those they designate, or by prayers of deliverance, which any Christian can offer for themselves or others.
At various times in Christian history, attempts have been made to classify demons according to various proposed demonic hierarchies.
In the Gospels, particularly the Gospel of Mark, Jesus cast out many demons from those afflicted with various ailments. He also lent this power to some of his disciples.
Augustine of Hippo's reading of Apuleius is ambiguous as to whether "daemons" had become "demonized" by the early 5th century:
He [Apuleius] also states that the blessed are called in Greek "eudaimones", because they are good souls, that is to say, good demons, confirming his opinion that the souls of men are demons.
Islam and Islam-related beliefs acknowledge the concept of evil spirits known as malevolent jinn, afarit and shayatin. Unlike the belief in angels, belief in demons is not required by the six articles of Islamic faith. However, the existence of several demonic spirits is generally assumed by Islamic theology, and further elaborated beliefs persist in Islamic folklore. The Div, probably adapted under Zoroastrian influences, became another prominent demonic creature in Islamic culture. Just like jinn, they are able to possess humans, but they differ from jinn and shayatin in their physical strength and are thus also equated with ogres or giants. They are in constant war with the peri, a benevolent type of spirit. Among Turks, the term "In", referring to demonic spirits with characteristics comparable to jinn, is found, and the two are usually mentioned together. Nar as-samum ("fires of samum" or "poisonous fire"), described in the Quran in connection with hell, becomes associated with the minions of the Devil in tafsir. Furthermore, the Quran mentions the "Zabaniyya", who torture the damned in hell and may have originated from a class of Arabian demons. However, their execution of punishment is in accordance with God's order, and therefore they are not equated with the shayatin; that is, they are not devils, who in turn are rebellious against the divine will.
Rather than demonic, jinn are depicted as similar to humans, as they live in societies and need dwelling places, food and water. Although their lifespan of multiple centuries exceeds that of humans, they still die and must procreate. As they are created from "smokeless fire," in contrast to humans made from "solid earth," the latter cannot see them. Similar to humans, jinn are subject to temptations of the shayatin and Satan. Therefore, they may either be good or evil. Evil jinn are comparable to demons, scaring or possessing humans. In folklore, some ghouls may also prey on lonely travelers, luring them from their paths and eating their corpses. Although not evil, a jinni may haunt a person because it feels offended by him. Islam has no binding origin story of jinn, but Islamic beliefs commonly assume that the jinn were created on a Thursday thousands of years before mankind. Therefore, Islamic medieval narratives often called them "pre-Adamites". However, just like shayatin, jinn are held responsible for various diseases and possession. Both can be summoned and subjugated by magicians. Both are thought to lurk in dirty and desolate places.
Otherwise, the shayatin are the Islamic equivalent of "demons" in western usage.
Islamic traditions differ regarding the origin of demons. They may either be a class of heavenly creatures cast out of heaven or the descendants of Iblis. Unlike jinn and humans, shayatin do not die until the world ceases to exist; however, prayers can dissolve or banish them. Unlike jinn and humans, shayatin cannot attain salvation. If they attempt to reach heaven, they are chased away by angels and shooting stars. The shayatin usually do not possess people, but seduce them into committing falsehood and sin instead. This is done by whispering directly into humans' minds. These whisperings are called "waswās" and may enter the hearts of humans to amplify strong, negative emotions such as depression or anger.
Another demonic spirit is called "ifrit", and although no descriptions of an ifrit's behavior are found in Islamic canonical texts, folk Islam often depicts them either as malevolent ghosts returning after death or as a subcategory of shayatin drawing on the life-force of those who were murdered. Moreover, they are not exactly shayatin, since they differ in their origin.
In the Bahá'í Faith, demons are not regarded as independent evil spirits as they are in some faiths. Rather, evil spirits described in various faiths' traditions, such as Satan, fallen angels, demons and jinn, are metaphors for the base character traits a human being may acquire and manifest when he turns away from God and follows his lower nature. Belief in the existence of ghosts and earthbound spirits is rejected and considered to be the product of superstition.
While some people fear demons, or attempt to exorcise them, others willfully attempt to summon them for knowledge, assistance, or power. The ceremonial magician usually consults a grimoire, which gives the names and abilities of demons as well as detailed instructions for conjuring and controlling them. Grimoires are not limited to demons – some give the names of angels or spirits which can be called, a process called theurgy. The use of ceremonial magic to call demons is also known as goetia, the name taken from a section in the famous grimoire known as the "Lesser Key of Solomon".
Hindu beliefs include numerous varieties of spirits such as Vetalas, Bhutas and Pishachas. Rakshasas and Asuras are often misunderstood to be demons.
"Asura", in the earliest hymns of the Rigveda, originally meant any supernatural spirit, either good or bad. Since the /s/ of the Indic linguistic branch is cognate with the /h/ of the Early Iranian languages, the word "Asura", representing a category of celestial beings. Ancient Hinduism tells that Devas (also called "suras") and Asuras are half-brothers, sons of the same father Kashyapa; although some of the Devas, such as Varuna, are also called Asuras. Later, during Puranic age, Asura and Rakshasa came to exclusively mean any of a race of anthropomorphic, powerful, possibly evil beings. Daitya (lit. sons of the mother "Diti"), Maya Danava, Rakshasa (lit. from "harm to be guarded against"), and Asura are incorrectly translated into English as "demon".
In post-Vedic Hindu scriptures, pious, highly enlightened Asuras, such as Prahlada and Vibhishana, are not uncommon. The Asura are not fundamentally against the gods, nor do they tempt humans to fall. Many people metaphorically interpret the Asura as manifestations of the ignoble passions in the human mind and as symbolic devices. There were also cases of power-hungry Asuras challenging various aspects of the gods, but only to be defeated eventually and seek forgiveness.
Hinduism advocates the reincarnation and transmigration of souls according to one's karma. Souls (Atman) of the dead are judged by Yama and are accorded various purging punishments before being reborn. Humans that have committed extraordinary wrongs are condemned to roam as lonely, often mischief-making spirits for a length of time before being reborn. Many kinds of such spirits (Vetalas, Pishachas, Bhūta) are recognized in the later Hindu texts.
In Zoroastrian cosmology, evil spirits, commonly referred to as daevas, are the creation of the evil principle Ahriman. The first six archdemons are produced by Ahriman in direct opposition to the holy immortals created by Ahura Mazda, the principle of good. These six archdemons (or seven, if Ahriman is included) give existence to countless malevolent daevas, the Zoroastrian demons. They are the embodiment of evil; they cause moral imperfection, destroy, kill, and torment the wicked souls in the afterlife. Some demons are related to specific vices. Humans in the state of such sin might be possessed by a corresponding demon:
In Manichaean mythology, demons have a real existence, as they derive from the Kingdom of Darkness; they are not metaphors expressing the absence of good, nor are they fallen angels, meaning that they were never originally good but are purely evil entities. The demons came into the world after the Prince of Darkness assaulted the Realm of Light. The demons ultimately failed in their attack and ended up imprisoned in the structures and matter of the contemporary world. Lacking virtues and being in constant conflict with both the divine creatures and themselves, they are inferior to the divine entities and are overcome by the divine beings at the end of time. They are not sophisticated or inventive creatures, but are driven only by their urges.
At the same time, the Manichaean concept of demons remains abstract and is so closely linked to the ethical aspects of evil that many of them appear as personified evil qualities, such as:
The Watchers, another group of demonic entities known from the Enochian writings, appear in the canonical Book of Giants. The Watchers came into existence after the demons were chained up in the sky by the Living Spirit. Later, outwitted by the Third Messenger, they fell to earth, where they had intercourse with human women and begot the monstrous Nephilim. Thereupon they established a tyrannical rule on earth, oppressing mankind, until they were defeated by the angels of punishment, putting an end to their rule.
The Algonquian people traditionally believe in a spirit called a wendigo. The spirit is believed to possess people who then become cannibals. In Athabaskan folklore, there is a belief in wechuge, a similar cannibal spirit.
According to Rosemary Ellen Guiley, "Demons are not courted or worshipped in contemporary Wicca and Paganism. The existence of negative energies is acknowledged."
Psychologist Wilhelm Wundt remarked that "among the activities attributed by myths all over the world to demons, the harmful predominate, so that in popular belief bad demons are clearly older than good ones." Sigmund Freud developed this idea and claimed that the concept of demons was derived from the important relation of the living to the dead: "The fact that demons are always regarded as the spirits of those who have died "recently" shows better than anything the influence of mourning on the origin of the belief in demons."
M. Scott Peck, an American psychiatrist, wrote two books on the subject, "People of the Lie: The Hope For Healing Human Evil" and "Glimpses of the Devil: A Psychiatrist's Personal Accounts of Possession, Exorcism, and Redemption". Peck describes in some detail several cases involving his patients. In "People of the Lie" he provides identifying characteristics of an evil person, whom he classified as having a character disorder. In "Glimpses of the Devil" Peck goes into significant detail describing how he became interested in exorcism in order to debunk the "myth" of possession by evil spirits – only to be convinced otherwise after encountering two cases which did not fit into any category known to psychology or psychiatry. Peck came to the conclusion that possession was a rare phenomenon related to evil and that possessed people are not actually evil; rather, they are doing battle with the forces of evil.
Although Peck's earlier work was met with widespread popular acceptance, his work on the topics of evil and possession has generated significant debate and derision. Much was made of his association with (and admiration for) the controversial Malachi Martin, a Roman Catholic priest and a former Jesuit, despite the fact that Peck consistently called Martin a liar and a manipulator. Richard Woods, a Roman Catholic priest and theologian, has claimed that Dr. Peck misdiagnosed patients based upon a lack of knowledge regarding dissociative identity disorder (formerly known as multiple personality disorder) and had apparently transgressed the boundaries of professional ethics by attempting to persuade his patients into accepting Christianity. Father Woods admitted that he has never witnessed a genuine case of demonic possession in all his years.
According to S. N. Chiu, God is shown sending a demon against Saul in 1 Samuel 16 and 18 in order to punish him for the failure to follow God's instructions, showing God as having the power to use demons for his own purposes, putting the demon under his divine authority. According to the "Britannica Concise Encyclopedia", demons, despite being typically associated with evil, are often shown to be under divine control, and not acting of their own devices. | https://en.wikipedia.org/wiki?curid=8280 |
Domino effect
A domino effect or chain reaction is the cumulative effect produced when one event sets off a chain of similar events. The term is best known as a mechanical effect and is used as an analogy to a falling row of dominoes. It typically refers to a linked sequence of events where the time between successive events is relatively small. It can be used literally (an observed series of actual collisions) or metaphorically (causal linkages within systems such as global finance or politics). The term "domino effect" is used both to imply that an event is inevitable or highly likely (as it has already started to happen), and conversely to imply that an event is impossible or highly unlikely (the one domino left standing).
The domino effect can easily be visualized by placing a row of dominoes upright, each separated by a small distance. Upon pushing the first domino, the next domino in line will be knocked over, and so on, thus firing a linear chain in which each domino's fall is triggered by the domino immediately preceding it. The effect is the same regardless of the length of the chain. The energy used in this chain reaction is the potential energy of the dominoes due to their being in a metastable state; when the first domino is toppled, the energy transferred by its fall is greater than the energy needed to knock over the following domino, and so on.
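To make the energy argument concrete, the following sketch compares the energy needed to tip a single domino past its balance point with the potential energy it releases when it falls flat. The domino dimensions, mass, and resulting figures are illustrative assumptions, not measurements from any particular experiment.

```python
import math

# Illustrative domino chain energetics: each fall releases more potential
# energy than the small push needed to tip the next domino past its balance
# point. All dimensions and masses below are assumed, not measured.

height = 0.048      # m, domino height (assumed)
thickness = 0.0076  # m, domino thickness (assumed)
mass = 0.008        # kg, domino mass (assumed)
g = 9.81            # m/s^2

# Energy needed to tip a domino: raise its centre of mass from the upright
# position to the balance point over one bottom edge (the metastable barrier).
upright_com = height / 2
balanced_com = math.hypot(height, thickness) / 2   # COM height when balanced on edge
tipping_energy = mass * g * (balanced_com - upright_com)

# Energy released once it falls flat: the COM drops from upright to lying down.
released_energy = mass * g * (height / 2 - thickness / 2)

print(f"energy to tip one domino : {tipping_energy * 1e3:.3f} mJ")
print(f"energy released per fall : {released_energy * 1e3:.3f} mJ")
print(f"amplification factor     : {released_energy / tipping_energy:.1f}x")
# Because the released energy exceeds the tipping energy, a single push can
# propagate down an arbitrarily long chain despite losses at each collision.
```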
The domino effect is exploited in Rube Goldberg machines.
Relevant physical theory:
Mathematical theory
Political theory
Social | https://en.wikipedia.org/wiki?curid=8286 |
Diffusion pump
Diffusion pumps use a high-speed jet of vapor to direct gas molecules in the pump throat down into the bottom of the pump and out the exhaust. They were the first type of high vacuum pump operating in the regime of free molecular flow, where the movement of the gas molecules can be better understood as diffusion than by conventional fluid dynamics. Wolfgang Gaede invented the pump in 1915 and named it a "diffusion pump", since his design was based on the finding that gas cannot diffuse against the vapor stream, but will be carried with it to the exhaust. However, the principle of operation might be more precisely described as a gas-jet pump, since diffusion also plays a role in other high vacuum pumps. In modern textbooks, the diffusion pump is categorized as a momentum transfer pump.
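The free molecular flow regime mentioned here is commonly characterized by the Knudsen number, the ratio of the gas's mean free path to a characteristic dimension of the pump. The sketch below estimates this ratio for air at a few inlet pressures using the standard mean-free-path formula; the assumed throat diameter and effective molecular diameter are illustrative values, not specifications of any particular pump.

```python
import math

# Rough check of the flow regime at a diffusion pump inlet. The throat
# diameter and the effective molecular diameter of air are assumptions.

K_B = 1.380649e-23      # J/K, Boltzmann constant
T = 293.0               # K, room temperature
D_MOLECULE = 3.7e-10    # m, effective diameter of an air molecule (approx.)
THROAT_DIAMETER = 0.1   # m, assumed pump inlet diameter

def mean_free_path(pressure_pa: float) -> float:
    """Mean free path of a gas molecule at temperature T and the given pressure."""
    return K_B * T / (math.sqrt(2) * math.pi * D_MOLECULE**2 * pressure_pa)

for p_mbar in (1e-2, 1e-4, 1e-6):
    p_pa = p_mbar * 100.0                    # 1 mbar = 100 Pa
    knudsen = mean_free_path(p_pa) / THROAT_DIAMETER
    if knudsen > 10:
        regime = "free molecular"
    elif knudsen > 0.1:
        regime = "transitional"
    else:
        regime = "viscous (continuum)"
    print(f"{p_mbar:>7.0e} mbar  Kn = {knudsen:8.2f}  -> {regime}")
# At the high-vacuum pressures a diffusion pump handles, Kn >> 1: molecules
# rarely collide with each other and are driven mainly by the vapor jet.
```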
The diffusion pump is widely used in both industrial and research applications. Most modern diffusion pumps use silicone oil or polyphenyl ethers as the working fluid.
In the late 19th century, most vacuums were created using a Sprengel pump, which had the advantage of being very simple to operate and capable of achieving a very good vacuum given enough time. Compared to later pumps, however, its pumping speed was very slow.
Following his invention of the molecular pump, Wolfgang Gaede invented the diffusion pump in 1915; it originally used elemental mercury as the working fluid. After its invention, the design was quickly commercialized by Leybold.
It was then improved by Irving Langmuir and W. Crawford. Cecil Reginald Burch discovered the possibility of using silicone oil in 1928.
An oil diffusion pump is used to achieve a higher vacuum (lower pressure) than is possible by use of positive displacement pumps alone. Although its use has been mainly associated with the high-vacuum range (down to 10⁻⁹ mbar), diffusion pumps today can produce pressures approaching 10⁻¹⁰ mbar when properly used with modern fluids and accessories. The features that make the diffusion pump attractive for high and ultra-high vacuum use are its high pumping speed for all gases and low cost per unit pumping speed when compared with other types of pump used in the same vacuum range. Diffusion pumps cannot discharge directly into the atmosphere, so a mechanical forepump is typically used to maintain an outlet pressure around 0.1 mbar.
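One way to see why a relatively small mechanical forepump suffices is steady-state throughput matching: the gas load Q = p × S passing the diffusion pump inlet must equal the load removed by the forepump at its much higher backing pressure. The sketch below works through an example; the pumping speed and pressures are assumed illustrative values, not catalogue figures.

```python
# Steady-state throughput matching between a diffusion pump and its backing
# (fore) pump: Q = p * S must be the same at both ports. The pumping speed
# and pressures below are illustrative assumptions, not catalogue values.

inlet_pressure_mbar = 1e-6       # high-vacuum side (assumed operating point)
inlet_speed_l_s = 2000.0         # diffusion pump speed in litres/second (assumed)
backing_pressure_mbar = 1e-1     # maximum tolerable outlet (forepump) pressure

throughput = inlet_pressure_mbar * inlet_speed_l_s          # mbar*L/s
required_forepump_speed = throughput / backing_pressure_mbar

print(f"throughput              : {throughput:.2e} mbar*L/s")
print(f"required forepump speed : {required_forepump_speed:.2f} L/s "
      f"({required_forepump_speed * 3.6:.2f} m^3/h)")
# The result is tiny compared with the diffusion pump's own speed: a modest
# rotary pump can exhaust a large diffusion pump because the same gas load is
# carried at a pressure several orders of magnitude higher.
```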
The oil diffusion pump is operated with an oil of low vapor pressure. The high speed jet is generated by boiling the fluid and directing the vapor through a jet assembly. Note that the oil is gaseous when entering the nozzles. Within the nozzles, the flow changes from laminar to supersonic and molecular. Often, several jets are used in series to enhance the pumping action. The outside of the diffusion pump is cooled using either air flow or a water line. As the vapor jet hits the outer cooled shell of the diffusion pump, the working fluid condenses and is recovered and directed back to the boiler. The pumped gases continue flowing to the base of the pump at increased pressure, flowing out through the diffusion pump outlet, where they are compressed to ambient pressure by the secondary mechanical forepump and exhausted.
Unlike turbomolecular pumps and cryopumps, diffusion pumps have no moving parts and as a result are quite durable and reliable. They can function over pressure ranges of 10⁻¹⁰ to 10⁻² mbar. They are driven only by convection and thus have a very low energy efficiency.
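Combining the pressure figures above (an inlet in the high-vacuum range and the roughly 0.1 mbar backing pressure mentioned earlier) gives a sense of the compression ratio a diffusion pump sustains. The sketch below is a back-of-the-envelope illustration; the specific pressures are assumed examples within the quoted ranges, and the comment is a simplified reading of the molecular-flow argument.

```python
# Back-of-the-envelope compression ratio for a diffusion pump. The inlet and
# backing pressures are assumed examples within the ranges quoted above.

inlet_pressure_mbar = 1e-9      # high-vacuum side (within the 1e-10..1e-2 range)
backing_pressure_mbar = 1e-1    # outlet pressure held by the mechanical forepump

compression_ratio = backing_pressure_mbar / inlet_pressure_mbar
print(f"inlet  : {inlet_pressure_mbar:.0e} mbar")
print(f"outlet : {backing_pressure_mbar:.0e} mbar")
print(f"compression ratio sustained across the jets: {compression_ratio:.0e}")
# A ratio of about 1e8 is possible because, in molecular flow, gas reaching the
# dense vapor jet is overwhelmingly carried downward rather than diffusing back
# upstream toward the chamber.
```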
One major disadvantage of diffusion pumps is the tendency to backstream oil into the vacuum chamber. This oil can contaminate surfaces inside the chamber, or, upon contact with hot filaments or electrical discharges, may produce carbonaceous or siliceous deposits. Due to backstreaming, oil diffusion pumps are not suitable for use with highly sensitive analytical equipment or other applications which require an extremely clean vacuum environment, but mercury diffusion pumps may be suitable in the case of ultra-high-vacuum chambers used for metal deposition. Often cold traps and baffles are used to minimize backstreaming, although this results in some loss of pumping ability.
The oil of a diffusion pump cannot be exposed to the atmosphere when hot. If this occurs, the oil will burn and has to be replaced.
The least expensive diffusion pump oils are based on hydrocarbons which have been purified by double distillation. Compared with the other fluids, they have a higher vapor pressure, so they are usually limited to a pressure of 1 × 10⁻⁶ Torr. They are also the most likely to burn or explode if exposed to oxidizers.
The most common silicone oils used in diffusion pumps are trisiloxanes, which contain the chemical group Si-O-Si-O-Si, to which various phenyl or methyl groups are attached. These are available as the so-called 702 and 703 blends, which were formerly manufactured by Dow Corning. These can be further separated into the 704 and 705 oils, which are made up of the isomers of tetraphenyl tetramethyl trisiloxane and pentaphenyl trimethyl trisiloxane, respectively.
For pumping reactive species, usually a polyphenyl ether based oil is used. These oils are the most chemical and heat resistant type of diffusion pump oil.
The steam ejector is a popular form of pump for vacuum distillation and freeze-drying. A jet of steam entrains the vapour that must be removed from the vacuum chamber. Steam ejectors can have single or multiple stages, with and without condensers in between the stages. While both steam ejectors and diffusion pumps use jets of vapor to entrain gas, they work on fundamentally different principles: steam ejectors rely on viscous flow and mixing to pump gas, whereas diffusion pumps use molecular diffusion. This has several consequences. In diffusion pumps, the inlet pressure can be much lower than the static pressure of the jet, whereas in steam ejectors the two pressures are about the same. Also, diffusion pumps are capable of much higher compression ratios, and cannot discharge directly to atmosphere.
Doris Day
Doris Day (born Doris Mary Anne Kappelhoff; April 3, 1922 – May 13, 2019) was an American actress, singer, and animal welfare activist. She began her career as a big band singer in 1939, achieving commercial success in 1945 with two No. 1 recordings, "Sentimental Journey" and "My Dreams Are Getting Better All the Time" with Les Brown & His Band of Renown. She left Brown to embark on a solo career and recorded more than 650 songs from 1947 to 1967.
Day's film career began during the latter part of the Golden Age of Hollywood with the film "Romance on the High Seas" (1948), leading to a 20-year career as a motion picture actress. She starred in films of many genres, including musicals, comedies, dramas, and thrillers. She played the title role in "Calamity Jane" (1953) and starred in Alfred Hitchcock's "The Man Who Knew Too Much" (1956) with James Stewart. Her best-known films are those in which she co-starred with Rock Hudson, chief among them 1959's "Pillow Talk", for which she was nominated for the Academy Award for Best Actress. She also worked with James Garner on both "Move Over, Darling" (1963) and "The Thrill of It All" (1963), and starred alongside Clark Gable, Cary Grant, James Cagney, David Niven, Ginger Rogers, Jack Lemmon, Frank Sinatra, Kirk Douglas, Lauren Bacall, and Rod Taylor in various movies. After ending her film career in 1968, only briefly removed from the height of her popularity, she starred in her own sitcom "The Doris Day Show" (1968–1973).
Day became one of the biggest film stars in the 1950s-1960s era, and as of 2012 was one of eight performers to have been the top box-office earner in the United States four times. In 2011, she released her 29th studio album "My Heart" which contained new material and became a UK Top 10 album. She received the Grammy Lifetime Achievement Award and a Legend Award from the Society of Singers. In 1960, she was nominated for the Academy Award for Best Actress, and was given the Cecil B. DeMille Award for lifetime achievement in motion pictures in 1989. In 2004, she was awarded the Presidential Medal of Freedom; this was followed in 2011 by the Los Angeles Film Critics Association's Career Achievement Award.
Day was born Doris Mary Anne Kappelhoff on April 3, 1922, in Cincinnati, Ohio, the daughter of Alma Sophia ("née" Welz; 1895–1976) and William Joseph Kappelhoff (1892–1967). Her mother was a homemaker, and her father was a music teacher and choirmaster. Her maternal and paternal grandparents were German; her paternal grandfather Franz Joseph Wilhelm Kappelhoff immigrated to the United States in 1875 and settled in Cincinnati, which had a large German community with its own churches, clubs, and German-language newspapers. For most of her life, Day stated she was born in 1924; it was not until her 95th birthday – when the Associated Press found her birth certificate, showing a 1922 date of birth – that she stated otherwise. It was common among Hollywood actresses to claim an age younger than they actually were, because youth was everything when it came to casting.
The youngest of three siblings, she had two older brothers: Richard (who died before her birth) and Paul, two to three years older. Due to her father's alleged infidelity, her parents separated. She developed an early interest in dance, and in the mid-1930s formed a dance duo with Jerry Doherty that performed locally in Cincinnati. A car accident on October 13, 1937 injured her right leg and curtailed her prospects as a professional dancer.
While recovering from her car accident, Kappelhoff started to sing along with the radio and discovered a talent she did not know she had. "During this long, boring period, I used to while away a lot of time listening to the radio, sometimes singing along with the likes of Benny Goodman, Duke Ellington, Tommy Dorsey, and Glenn Miller", she told A. E. Hotchner, one of Day's biographers. "But the one radio voice I listened to above others belonged to Ella Fitzgerald. There was a quality to her voice that fascinated me, and I'd sing along with her, trying to catch the subtle ways she shaded her voice, the casual yet clean way she sang the words."
Observing her daughter sing rekindled Alma's interest in show business, and she decided Doris must have singing lessons. She engaged a teacher, Grace Raine. After three lessons, Raine told Alma that young Doris had "tremendous potential"; Raine was so impressed that she gave Doris three lessons a week for the price of one. Years later, Day said that Raine had the biggest effect on her singing style and career.
During the eight months she was taking singing lessons, Kappelhoff had her first professional jobs as a vocalist, on the WLW radio program "Carlin's Carnival", and in a local restaurant, Charlie Yee's Shanghai Inn. During her radio performances, she first caught the attention of Barney Rapp, who was looking for a female vocalist and asked if she would like to audition for the job. According to Rapp, he had auditioned about 200 singers when Kappelhoff got the job.
While working for Rapp in 1939, she adopted the stage surname "Day", at Rapp's suggestion. Rapp felt that "Kappelhoff" was too long for marquees, and he admired her rendition of the song "Day After Day". After working with Rapp, Day worked with bandleaders Jimmy James, Bob Crosby, and Les Brown. In 1941, Day appeared as a singer in three Soundies with the Les Brown band.
While working with Brown, Day recorded her first hit recording, "Sentimental Journey", released in early 1945. It soon became an anthem of the desire of demobilizing World War II troops to return home. The song continues to be associated with Day, and she re-recorded it on several occasions, including a version in her 1971 television special. During 1945–46, Day (as vocalist with the Les Brown Band) had six other top ten hits on the "Billboard" chart: "My Dreams Are Getting Better All the Time", "'Tain't Me", "Till The End of Time", "You Won't Be Satisfied (Until You Break My Heart)", "The Whole World is Singing My Song", and "I Got the Sun in the Mornin'". Les Brown said, "As a singer Doris belongs in the company of Bing Crosby and Frank Sinatra." (Aljean Harmetz (2019). "Wholesome Box-Office Star and Golden Voice of 'Que Sera, Sera'". "The New York Times", p. 1.)
While singing with the Les Brown band and for nearly two years on Bob Hope's weekly radio program, she toured extensively across the United States.
Her performance of the song "Embraceable You" impressed songwriter Jule Styne and his partner, Sammy Cahn, and they recommended her for a role in "Romance on the High Seas" (1948). Day was cast for the role after auditioning for director Michael Curtiz. She was shocked at being offered the role in the film, and admitted to Curtiz that she was a singer without acting experience. But he said he liked that "she was honest", not afraid to admit it, and he wanted someone who "looked like the All-American Girl". Day was the discovery of which Curtiz was proudest during his career.
The film provided her with a hit recording as a soloist, "It's Magic", which followed by two months her first hit ("Love Somebody" in 1948) recorded as a duet with Buddy Clark. Day recorded "Someone Like You", before the film "My Dream Is Yours" (1949), which featured the song. In 1950, U.S. servicemen in Korea voted her their favorite star.
She continued to make minor and frequently nostalgic period musicals such as "On Moonlight Bay" (1951), "By the Light of the Silvery Moon" (1953), and "Tea For Two" (1950) for Warner Brothers.
Her most commercially successful film for Warner was "I'll See You in My Dreams" (1951), which broke box-office records that had stood for 20 years. The film is a musical biography of lyricist Gus Kahn. It was Day's fourth film directed by Curtiz. Day appeared as the title character in the comedic western-themed musical "Calamity Jane" (1953). A song from the film, "Secret Love", won the Academy Award for Best Original Song and became Day's fourth No. 1 hit single in the United States.
Between 1950 and 1953, the albums from six of her movie musicals charted in the Top 10, three of them at No. 1. After filming "Lucky Me" (1954) with Bob Cummings and "Young at Heart" (1955) with Frank Sinatra, Day chose not to renew her contract with Warner Brothers.
During this period, Day also had her own radio program, "The Doris Day Show". It was broadcast on CBS in 1952–1953.
Having become primarily recognized as a musical-comedy actress, Day gradually took on more dramatic roles to broaden her range. Her dramatic star turn as singer Ruth Etting in "Love Me or Leave Me" (1955), with top billing above James Cagney, received critical and commercial success, becoming Day's biggest hit thus far. Cagney said she had "the ability to project the simple, direct statement of a simple, direct idea without cluttering it", comparing her to Laurette Taylor's Broadway performance in "The Glass Menagerie" (1945), one of the greatest performances by an American actor. Day said it was her best film performance. Producer Joe Pasternak said, "I was stunned that Doris did not get an Oscar nomination." The soundtrack album from that movie was a No. 1 hit.
Day starred in Alfred Hitchcock's suspense film "The Man Who Knew Too Much" (1956) with James Stewart. She sang two songs in the film, "Que Sera, Sera (Whatever Will Be, Will Be)" which won an Academy Award for Best Original Song, and "We'll Love Again". The film was Day's 10th movie to be in the Top 10 at the box office. Day played the title role in the thriller/noir "Julie" (also 1956) with Louis Jourdan.
After three successive dramatic films, Day returned to her musical/comedic roots in "The Pajama Game" (1957) with John Raitt. The film was based on the Broadway play of the same name. She worked with Paramount Pictures for the comedy "Teacher's Pet" (1958), alongside Clark Gable and Gig Young. She co-starred with Richard Widmark and Gig Young in the romantic comedy film "The Tunnel of Love" (also 1958), but found scant success opposite Jack Lemmon in "It Happened to Jane" (1959).
"Billboard" annual nationwide poll of disc jockeys had ranked Day as the No. 1 female vocalist nine times in ten years (1949 through 1958), but her success and popularity as a singer was now being overshadowed by her box-office appeal.
In 1959, Day entered her most successful phase as a film actress with a series of romantic comedies. This success began with "Pillow Talk" (1959), co-starring Rock Hudson, who became a lifelong friend, and Tony Randall. Day received a nomination for an Academy Award for Best Actress. It was the only Oscar nomination she received in her career. Day, Hudson, and Randall made two more films together, "Lover Come Back" (1961) and "Send Me No Flowers" (1964).
Along with David Niven and Janis Paige, Day starred in "Please Don't Eat the Daisies" (1960) and with Cary Grant in the comedy "That Touch of Mink" (1962). In 1960 and again from 1962 to 1964, she ranked number one at the box office, becoming the second woman to be number one four times, an accomplishment equalled by no other actress except Shirley Temple. She set a record that has yet to be equaled, receiving seven consecutive Laurel Awards as the top female box office star.
Day teamed up with James Garner starting with "The Thrill of It All", followed by "Move Over, Darling" (both 1963). The latter film's theme song, "Move Over Darling", co-written by her son, reached the charts in the UK. In between these comedic roles, Day co-starred with Rex Harrison in the movie thriller "Midnight Lace" (1960), an updating of the stage thriller "Gaslight".
By the late 1960s, the sexual revolution of the baby boomer generation had refocused public attitudes about sex. Times changed, but Day's films did not. Day's next film "Do Not Disturb" (1965) was popular with audiences, but her popularity soon waned. Critics and comics dubbed Day "The World's Oldest Virgin", and audiences began to shy away from her films. As a result, she slipped from the list of top box-office stars, last appearing in the top ten with the hit film "The Glass Bottom Boat" (1966). One of the roles she turned down was that of Mrs. Robinson in "The Graduate", a role that eventually went to Anne Bancroft. In her published memoirs, Day said she had rejected the part on moral grounds: she found the script "vulgar and offensive".
She starred in the western film "The Ballad of Josie" (1967). That same year, Day recorded "The Love Album", although it was not released until 1994. The following year (1968), she starred in the comedy film "Where Were You When the Lights Went Out?" which centers on the Northeast blackout of November 9, 1965. Her final feature, the comedy "With Six You Get Eggroll", was released in 1968.
From 1959 to 1970, Day received nine Laurel Award nominations (and won four times) for best female performance in eight comedies and one drama. From 1959 through 1969, she received six Golden Globe nominations for best female performance in three comedies, one drama ("Midnight Lace"), one musical ("Jumbo"), and her television series.
After her third husband Martin Melcher died on April 20, 1968, a shocked Day discovered that Melcher and his business partner and "adviser" Jerome Bernard Rosenthal had squandered her earnings, leaving her deeply in debt. Rosenthal had been her attorney since 1949, when he represented her in her uncontested divorce action against her second husband, saxophonist George W. Weidler. Day filed suit against Rosenthal in February 1969, won a successful decision in 1974, but did not receive compensation until a settlement in 1979.
Day also learned to her displeasure that Melcher had committed her to a television series, which became "The Doris Day Show".
Day hated the idea of performing on television, but felt obligated to do it. The first episode of "The Doris Day Show" aired on September 24, 1968, and, from 1968 to 1973, employed "Que Sera, Sera" as its theme song. Day persevered (she needed the work to help pay off her debts), but only after CBS ceded creative control to her and her son. The successful show enjoyed a five-year run, and functioned as a curtain raiser for the "Carol Burnett Show". It is remembered today for its abrupt season-to-season changes in casting and premise.
By the end of its run in 1973, public tastes had changed, as had those of the television industry, and her firmly established persona was regarded as passé. She largely retired from acting after "The Doris Day Show", but did complete two television specials, "The Doris Mary Anne Kappelhoff Special" (1971) and "Doris Day Today" (1975), and was a guest on various shows in the 1970s.
In the 1985–86 season, Day hosted her own television talk show, "Doris Day's Best Friends", on the Christian Broadcasting Network (CBN). The network canceled the show after 26 episodes, despite the worldwide publicity it received. Much of that attention came from the episode featuring Rock Hudson, in which Hudson was showing the first public symptoms of AIDS including severe weight loss and admitted fatigue; Hudson would die from the disease a year later. Day later said, "He was very sick. But I just brushed that off and I came out and put my arms around him and said, 'Am I glad to see you'."
Day's husband and agent, Martin Melcher, had Beverly Hills lawyer Jerome Rosenthal handle his wife's money from the 1940s onward. "During that period, Rosenthal committed breaches of professional ethics that are difficult to exaggerate", as one court put it.
In October 1985, the California Supreme Court rejected Rosenthal's appeal of the multimillion-dollar judgment against him for legal malpractice, and upheld conclusions of a trial court and a Court of Appeal that Rosenthal acted improperly. In April 1986, the U.S. Supreme Court refused to review the lower court's judgment. In June 1987, Rosenthal filed a $30 million lawsuit against lawyers he claimed cheated him out of millions of dollars in real estate investments. He named Day as a co-defendant, describing her as an "unwilling, involuntary plaintiff whose consent cannot be obtained". Rosenthal claimed that millions of dollars Day lost were in real estate sold after Melcher died in 1968, in which Rosenthal asserted that the attorneys gave Day bad advice, telling her to sell, at a loss, three hotels, in Palo Alto, California, Dallas, Texas, and Atlanta, Georgia, plus some oil leases in Kentucky and Ohio. He claimed he had made the investments under a long-term plan, and did not intend to sell them until they appreciated in value. Two of the hotels sold in 1970 for about $7 million, and their estimated worth in 1986 was $50 million.
Terry Melcher stated that his adoptive father's premature death saved Day from financial ruin. It remains unresolved whether Martin Melcher had himself also been duped. Day stated publicly that she believed her husband innocent of any deliberate wrongdoing, stating that he "simply trusted the wrong person". According to Day's autobiography, as told to A. E. Hotchner, the usually athletic and healthy Martin Melcher had an enlarged heart. Most of the interviews on the subject given to Hotchner (and included in Day's autobiography) paint an unflattering portrait of Melcher. Author David Kaufman asserts that one of Day's costars, actor Louis Jourdan, maintained that Day herself disliked her husband, but Day's public statements regarding Melcher appear to contradict that assertion.
Day was scheduled to present, along with Patrick Swayze and Marvin Hamlisch, the Best Original Score Oscar at the 61st Academy Awards in March 1989 but she suffered a deep leg cut and was unable to attend. She had been walking through the gardens of her hotel when she cut her leg on a sprinkler. The cut required stitches.
Day was inducted into the Ohio Women's Hall of Fame in 1981 and received the Cecil B. DeMille Award for career achievement in 1989. In 1994, Day's "Greatest Hits" album became another entry into the British charts. Her cover of "Perhaps, Perhaps, Perhaps" was included in the soundtrack of the Australian film "Strictly Ballroom."
Day participated in interviews and celebrations of her birthday with an annual Doris Day music marathon. In July 2008, she appeared on the Southern California radio show of longtime friend and newscaster George Putnam.
Day turned down a tribute offer from the American Film Institute and from the Kennedy Center Honors because they require attendance in person. In 2004, she was awarded the Presidential Medal of Freedom by President George W. Bush for her achievements in the entertainment industry and for her work on behalf of animals. President Bush stated:
Columnist Liz Smith and film critic Rex Reed mounted vigorous campaigns to gather support for an Honorary Academy Award for Day to herald her film career and her status as the top female box-office star of all time. According to "The Hollywood Reporter" in 2015, the Academy offered her the Honorary Oscar multiple times, but she declined as she saw the film industry as a part of her past life. Day received a Grammy for Lifetime Achievement in Music in 2008, albeit again in absentia.
She received three Grammy Hall of Fame Awards, in 1998, 1999 and 2012, for her recordings of "Sentimental Journey", "Secret Love", and "Que Sera, Sera", respectively. Day was inducted into the Hit Parade Hall of Fame in 2007, and in 2010 received the first Legend Award ever presented by the Society of Singers.
Day, aged 89, released "My Heart" in the United Kingdom on September 5, 2011, her first new album in nearly two decades since the release of "The Love Album", which, although recorded in 1967, was not released until 1994. The album is a compilation of previously unreleased recordings produced by Day's son, Terry Melcher, before his death in 2004. Tracks include the 1970s Joe Cocker hit "You Are So Beautiful", the Beach Boys' "Disney Girls" and jazz standards such as "My Buddy", which Day originally sang in the film "I'll See You in My Dreams" (1951).
After the disc was released in the United States it soon climbed to No. 12 on Amazon's bestseller list, and helped raise funds for the Doris Day Animal League. Day became the oldest artist to score a UK Top 10 with an album featuring new material.
In January 2012, the Los Angeles Film Critics Association presented Day with a Lifetime Achievement Award.
In April 2014, Day made an unexpected public appearance to attend the annual Doris Day Animal Foundation benefit. The benefit raises money for her Animal Foundation.
Clint Eastwood offered Day a role in a film he was planning to direct in 2015. Although she reportedly was in talks with Eastwood, her neighbor in Carmel, about a role in the film, she eventually declined.
Day granted ABC a telephone interview on her birthday in 2016, which was accompanied by photos of her life and career.
In a rare interview with "The Hollywood Reporter" on April 4, 2019, the day after her 97th birthday, Day talked about her work on the Doris Day Animal Foundation, founded in 1978. On the question of what her favorite film was, she answered "Calamity Jane": "I was such a tomboy growing up, and she was such a fun character to play. Of course, the music was wonderful, too—'Secret Love,' especially, is such a beautiful song."
To commemorate her birthday, her fans gathered each year to take part in a three-day party in her hometown of Carmel, California, in late March. The event was also a fundraiser for her Animal Foundation. During the 2019 event, there was a special screening of her film "Pillow Talk" (1959) to celebrate its 60th anniversary. About the film, Day stated in the same interview that she "had such fun working with my pal, Rock. We laughed our way through three films we made together and remained great friends. I miss him."
Day's interest in animal welfare and related issues apparently dated to her teen years. While recovering from an automobile accident, she took her dog Tiny for a walk without a leash. Tiny ran into the street and was killed by a passing car. Day later expressed guilt and loneliness about Tiny's untimely death. In 1971, she co-founded Actors and Others for Animals, and appeared in a series of newspaper advertisements denouncing the wearing of fur, alongside Mary Tyler Moore, Angie Dickinson, and Jayne Meadows.
In 1978, Day founded the Doris Day Pet Foundation, now the Doris Day Animal Foundation (DDAF). A non-profit 501(c)(3) grant-giving public charity, DDAF funds other non-profit causes throughout the US that share DDAF's mission of helping animals and the people who love them. The DDAF continues to operate independently.
To complement the Doris Day Animal Foundation, Day formed the Doris Day Animal League (DDAL) in 1987, a national non-profit citizen's lobbying organization whose mission is to reduce pain and suffering and protect animals through legislative initiatives. Day actively lobbied the United States Congress in support of legislation designed to safeguard animal welfare on a number of occasions and in 1995 she originated the annual Spay Day USA. The DDAL merged into The Humane Society of the United States (HSUS) in 2006. The HSUS now manages World Spay Day, the annual one-day spay/neuter event that Day originated.
A facility bearing her name, the Doris Day Horse Rescue and Adoption Center, which helps abused and neglected horses, opened in 2011 in Murchison, Texas, on the grounds of an animal sanctuary started by her late friend, author Cleveland Amory. Day contributed $250,000 towards the founding of the center.
A posthumous auction of 1,100 of Day's possessions in April 2020 generated $3 million for the Doris Day Animal Foundation.
After her retirement from films, Day lived in Carmel-by-the-Sea, California. She had many pets and adopted stray animals. She was a lifelong Republican. Her only child was music producer and songwriter Terry Melcher, who had a hit in the 1960s with "Hey Little Cobra" under the name the Rip Chords; he died of melanoma in November 2004. From the 1980s, Day owned a hotel in Carmel-by-the-Sea called the Cypress Inn, which she originally co-owned with her son. It was an early pet-friendly hotel and was featured in "Architectural Digest" in 1999.
Day was married four times. From March 1941 to February 1943, she was married to trombonist Al Jorden, whom she met in Barney Rapp's band; Jorden, a violent schizophrenic, later took his own life. They had one son, Terry Melcher (christened Terence Paul Jorden at birth, 1942–2004). When Day refused to have an abortion, Jorden beat her in an attempt to force a miscarriage.
Her second marriage was to George William Weidler from March 30, 1946, to May 31, 1949, a saxophonist and the brother of actress Virginia Weidler. Weidler and Day met again several years later during a brief reconciliation, and he introduced her to Christian Science.
Day married American film producer Martin Melcher on April 3, 1951, her 29th birthday, and this marriage lasted until he died in April 1968. Melcher adopted Day's son Terry, who became a successful musician and record producer under the name Terry Melcher. Martin Melcher produced many of Day's movies. They were both Christian Scientists, resulting in her not seeing a doctor for some time for symptoms which suggested cancer.
Day's fourth marriage was to Barry Comden (1935–2009) from April 14, 1976, until April 2, 1982. He was the "maître d'hôtel" at one of Day's favorite restaurants. He knew of her great love of dogs and endeared himself to her by giving her a bag of meat scraps and bones on her way out of the restaurant. He later complained that she cared more for her "animal friends" than she did for him.
Day died on May 13, 2019, at the age of 97, after having contracted pneumonia. Her death was announced by her charity, the Doris Day Animal Foundation. Per Day's requests, the Foundation announced that there would be no funeral services, grave marker, or other public memorials.
"Source" | https://en.wikipedia.org/wiki?curid=8300 |
Distillation
Distillation is the process of separating the components or substances from a liquid mixture by using selective boiling and condensation. Distillation may result in essentially complete separation (nearly pure components), or it may be a partial separation that increases the concentration of selected components in the mixture. In either case, the process exploits differences in the relative volatility of the mixture's components. In industrial chemistry, distillation is a unit operation of practically universal importance, but it is a physical separation process, not a chemical reaction.
Distillation has many applications. For example:
An installation used for distillation, especially of distilled beverages, is a distillery. The distillation equipment itself is a still.
Early evidence of distillation was found on Akkadian tablets dated c. 1200 BC describing perfumery operations. The tablets provided textual evidence that an early primitive form of distillation was known to the Babylonians of ancient Mesopotamia. Early evidence of distillation was also found related to alchemists working in Alexandria in Roman Egypt in the 1st century.
Distilled water has been in use since at least c. 200, when Alexander of Aphrodisias described the process. Work on distilling other liquids continued in early Byzantine Egypt under Zosimus of Panopolis in the 3rd century. Distillation was practiced in the ancient Indian subcontinent, which is evident from baked clay retorts and receivers found at Taxila, Shaikhan Dheri, and Charsadda in modern Pakistan, dating to the early centuries of the Common Era. These "Gandhara stills" were only capable of producing very weak liquor, as there was no efficient means of collecting the vapors at low heat.
Distillation in China may have begun during the Eastern Han dynasty (1st–2nd centuries), but the distillation of beverages began in the Jin (12th–13th centuries) and Southern Song (12th–13th centuries) dynasties, according to archaeological evidence.
Clear evidence of the distillation of alcohol comes from the Arab chemist Al-Kindi in 9th-century Iraq; the process was later described by the School of Salerno in the 12th century. Fractional distillation was developed by Tadeo Alderotti in the 13th century. A still was found in an archaeological site in Qinglong, Hebei province, in China, dating back to the 12th century. Distilled beverages were common during the Yuan dynasty (13th–14th centuries).
In 1500, German alchemist Hieronymus Braunschweig published "Liber de arte destillandi" ("The Book of the Art of Distillation"), the first book solely dedicated to the subject of distillation, followed in 1512 by a much expanded version. In 1651, John French published "The Art of Distillation", the first major English compendium on the practice, but it has been claimed that much of it derives from Braunschweig's work. This includes diagrams with people in them showing the industrial rather than bench scale of the operation.
As alchemy evolved into the science of chemistry, vessels called retorts came to be used for distillations. Both alembics and retorts are forms of glassware with long necks pointing to the side at a downward angle to act as air-cooled condensers to condense the distillate and let it drip downward for collection. Later, copper alembics were invented. Riveted joints were often kept tight by using various mixtures, for instance a dough made of rye flour. These alembics often featured a cooling system around the beak, using cold water, for instance, which made the condensation of alcohol more efficient. These were called pot stills. Today, the retorts and pot stills have been largely supplanted by more efficient distillation methods in most industrial processes. However, the pot still is still widely used for the elaboration of some fine alcohols, such as cognac, Scotch whisky, Irish whiskey, tequila, and some vodkas. Pot stills made of various materials (wood, clay, stainless steel) are also used by bootleggers in various countries. Small pot stills are also sold for use in the domestic production of flower water or essential oils.
Early forms of distillation involved batch processes using one vaporization and one condensation. Purity was improved by further distillation of the condensate. Greater volumes were processed by simply repeating the distillation. Chemists reportedly carried out as many as 500 to 600 distillations in order to obtain a pure compound.
In the early 19th century, the basics of modern techniques, including pre-heating and reflux, were developed. In 1822, Anthony Perrier developed one of the first continuous stills, and then, in 1826, Robert Stein improved that design to make his patent still. In 1830, Aeneas Coffey got a patent for improving the design even further. Coffey's continuous still may be regarded as the archetype of modern petrochemical units. The French engineer Armand Savalle developed his steam regulator around 1846. In 1877, Ernest Solvay was granted a U.S. Patent for a tray column for ammonia distillation, and the same and subsequent years saw developments in this theme for oils and spirits.
With the emergence of chemical engineering as a discipline at the end of the 19th century, scientific rather than empirical methods could be applied. The developing petroleum industry in the early 20th century provided the impetus for the development of accurate design methods, such as the McCabe–Thiele method by Ernest Thiele and the Fenske equation. The first industrial plant in the United States to use distillation as a means of ocean desalination opened in Freeport, Texas in 1961 with the hope of bringing water security to the region.
The availability of powerful computers has allowed direct computer simulations of distillation columns.
The application of distillation can roughly be divided into four groups: laboratory scale, industrial distillation, distillation of herbs for perfumery and medicinals (herbal distillate), and food processing. The latter two are distinctively different from the former two in that distillation is not used as a true purification method but more to transfer all volatiles from the source materials to the distillate in the processing of beverages and herbs.
The main difference between laboratory scale distillation and industrial distillation is that laboratory scale distillation is often performed on a batch basis, whereas industrial distillation often occurs continuously. In batch distillation, the composition of the source material, the vapors of the distilling compounds, and the distillate change during the distillation. In batch distillation, a still is charged (supplied) with a batch of feed mixture, which is then separated into its component fractions, which are collected sequentially from most volatile to less volatile, with the bottoms – remaining least or non-volatile fraction – removed at the end. The still can then be recharged and the process repeated.
In continuous distillation, the source materials, vapors, and distillate are kept at a constant composition by carefully replenishing the source material and removing fractions from both vapor and liquid in the system. This results in a more detailed control of the separation process.
The boiling point of a liquid is the temperature at which the vapor pressure of the liquid equals the pressure around the liquid, enabling bubbles to form without being crushed. A special case is the normal boiling point, where the vapor pressure of the liquid equals the ambient atmospheric pressure.
It is a misconception that in a liquid mixture at a given pressure, each component boils at the boiling point corresponding to the given pressure, allowing the vapors of each component to collect separately and purely. However, this does not occur, even in an idealized system. Idealized models of distillation are essentially governed by Raoult's law and Dalton's law and assume that vapor–liquid equilibria are attained.
Raoult's law states that the vapor pressure of a solution is dependent on 1) the vapor pressure of each chemical component in the solution and 2) the fraction of solution each component makes up, a.k.a. the mole fraction. This law applies to ideal solutions, or solutions that have different components but whose molecular interactions are the same as or very similar to pure solutions.
Dalton's law states that the total pressure is the sum of the partial pressures of each individual component in the mixture. When a multi-component liquid is heated, the vapor pressure of each component will rise, thus causing the total vapor pressure to rise. When the total vapor pressure reaches the pressure surrounding the liquid, boiling occurs and liquid turns to gas throughout the bulk of the liquid. A mixture with a given composition has one boiling point at a given pressure when the components are mutually soluble. A mixture of constant composition does not have multiple boiling points.
An implication of one boiling point is that lighter components never cleanly "boil first". At boiling point, all volatile components boil, but for a component, its percentage in the vapor is the same as its percentage of the total vapor pressure. Lighter components have a higher partial pressure and, thus, are concentrated in the vapor, but heavier volatile components also have a (smaller) partial pressure and necessarily vaporize also, albeit at a lower concentration in the vapor. Indeed, batch distillation and fractionation succeed by varying the composition of the mixture. In batch distillation, the batch vaporizes, which changes its composition; in fractionation, liquid higher in the fractionation column contains more lights and boils at lower temperatures. Therefore, starting from a given mixture, it appears to have a boiling range instead of a boiling point, although this is because its composition changes: each intermediate mixture has its own, singular boiling point.
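To make the combined effect of Raoult's and Dalton's laws concrete, here is a minimal Python sketch for an ideal binary mixture. The pure-component vapor pressures are treated as known inputs at the mixture temperature; the numbers used are illustrative (roughly benzene and toluene near 80 °C), not measured data.

```python
def ideal_vapor(x_a, p_sat_a, p_sat_b):
    """Ideal binary mixture: return (total vapor pressure, vapor mole fraction of A).

    x_a     -- liquid mole fraction of the more volatile component A
    p_sat_a -- pure-component vapor pressure of A at the mixture temperature
    p_sat_b -- pure-component vapor pressure of B at the same temperature
    """
    p_a = x_a * p_sat_a            # Raoult's law: partial pressure of A
    p_b = (1.0 - x_a) * p_sat_b    # Raoult's law: partial pressure of B
    p_total = p_a + p_b            # Dalton's law: total pressure is the sum
    y_a = p_a / p_total            # vapor composition follows the partial pressures
    return p_total, y_a

# Equimolar liquid; vapor pressures in kPa are illustrative values near 80 degrees C.
p_total, y_a = ideal_vapor(x_a=0.5, p_sat_a=101.3, p_sat_b=38.8)
print(f"total vapor pressure = {p_total:.1f} kPa, vapor mole fraction of A = {y_a:.2f}")
```

With an equimolar liquid, the lighter component accounts for roughly 72% of this vapor, while the heavier component still contributes its smaller share, as described above.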
The idealized model is accurate in the case of chemically similar liquids, such as benzene and toluene. In other cases, severe deviations from Raoult's law and Dalton's law are observed, most famously in the mixture of ethanol and water. These compounds, when heated together, form an azeotrope, which is when the vapor phase and liquid phase contain the same composition. Although there are computational methods that can be used to estimate the behavior of a mixture of arbitrary components, the only way to obtain accurate vapor–liquid equilibrium data is by measurement.
It is not possible to completely purify a mixture of components by distillation, as this would require each component in the mixture to have a zero partial pressure. If ultra-pure products are the goal, then further chemical separation must be applied. When a binary mixture is vaporized and the other component, e.g., a salt, has zero partial pressure for practical purposes, the process is simpler.
Heating an ideal mixture of two volatile substances, A and B, with A having the higher volatility, or lower boiling point, in a batch distillation setup (such as in an apparatus depicted in the opening figure) until the mixture is boiling results in a vapor above the liquid that contains a mixture of A and B. The ratio between A and B in the vapor will be different from the ratio in the liquid. The ratio in the liquid will be determined by how the original mixture was prepared, while the ratio in the vapor will be enriched in the more volatile compound, A (due to Raoult's Law, see above). The vapor goes through the condenser and is removed from the system. This, in turn, means that the ratio of compounds in the remaining liquid is now different from the initial ratio (i.e., more enriched in B than in the starting liquid).
The result is that the ratio in the liquid mixture is changing, becoming richer in component B. This causes the boiling point of the mixture to rise, which results in a rise in the temperature in the vapor, which results in a changing ratio of A : B in the gas phase (as distillation continues, there is an increasing proportion of B in the gas phase). This results in a slowly changing ratio of A : B in the distillate.
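A small simulation, assuming an ideal binary mixture with a constant relative volatility (the value α = 2.5 below is arbitrary and purely illustrative), shows this drift: small parcels of equilibrium vapor are removed and the remaining liquid grows richer in the less volatile component B.

```python
def equilibrium_vapor(x, alpha=2.5):
    """Vapor mole fraction of the more volatile component A in equilibrium
    with a liquid of mole fraction x, for an assumed constant relative volatility."""
    return alpha * x / (1.0 + (alpha - 1.0) * x)

x = 0.50          # initial liquid mole fraction of A in the still pot
moles = 1.0       # initial moles of liquid charged to the still
d_moles = 0.001   # small parcel vaporized and removed in each step

while moles > 0.5:                      # distill off half of the original charge
    y = equilibrium_vapor(x)            # vapor leaving the pot is richer in A
    x = (x * moles - y * d_moles) / (moles - d_moles)   # mole balance on A
    moles -= d_moles

print(f"after boiling off half the charge, liquid mole fraction of A ~ {x:.2f}")
```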
If the difference in vapour pressure between the two components A and B is large – generally expressed as the difference in boiling points – the mixture in the beginning of the distillation is highly enriched in component A, and when component A has distilled off, the boiling liquid is enriched in component B.
Continuous distillation is an ongoing distillation in which a liquid mixture is continuously (without interruption) fed into the process and the separated fractions are continuously removed as output streams during the operation. Continuous distillation produces a minimum of two output fractions, including at least one volatile distillate fraction, which has boiled and been separately captured as a vapor and then condensed to a liquid. There is always a bottoms (or residue) fraction, which is the least volatile residue that has not been separately captured as a condensed vapor.
Continuous distillation differs from batch distillation in the respect that concentrations should not change over time. Continuous distillation can be run at a steady state for an arbitrary amount of time. For any source material of specific composition, the main variables that affect the purity of products in continuous distillation are the reflux ratio and the number of theoretical equilibrium stages, in practice determined by the number of trays or the height of packing. Reflux is a flow from the condenser back to the column, which generates a recycle that allows a better separation with a given number of trays. Equilibrium stages are ideal steps where compositions achieve vapor–liquid equilibrium, repeating the separation process and allowing better separation given a reflux ratio. A column with a high reflux ratio may have fewer stages, but it refluxes a large amount of liquid, giving a wide column with a large holdup. Conversely, a column with a low reflux ratio must have a large number of stages, thus requiring a taller column.
Both batch and continuous distillations can be improved by making use of a fractionating column on top of the distillation flask. The column improves separation by providing a larger surface area for the vapor and condensate to come into contact. This helps it remain at equilibrium for as long as possible. The column can even consist of small subsystems ('trays' or 'dishes') which all contain an enriched, boiling liquid mixture, all with their own vapor–liquid equilibrium.
There are differences between laboratory-scale and industrial-scale fractionating columns, but the principles are the same. Examples of laboratory-scale fractionating columns (in increasing efficiency) include
Laboratory scale distillations are almost exclusively run as batch distillations. The device used in distillation, sometimes referred to as a "still", consists at a minimum of a reboiler or "pot" in which the source material is heated, a condenser in which the heated vapor is cooled back to the liquid state, and a receiver in which the concentrated or purified liquid, called the distillate, is collected. Several laboratory scale techniques for distillation exist.
A completely sealed distillation apparatus could experience extreme and rapidly varying internal pressure, which could cause it to burst open at the joints. Therefore, some path is usually left open (for instance, at the receiving flask) to allow the internal pressure to equalize with atmospheric pressure. Alternatively, a vacuum pump may be used to keep the apparatus at a lower than atmospheric pressure. If the substances involved are air- or moisture-sensitive, the connection to the atmosphere can be made through one or more drying tubes packed with materials that scavenge the undesired air components, or through bubblers that provide a movable liquid barrier. Finally, the entry of undesired air components can be prevented by pumping a low but steady flow of suitable inert gas, like nitrogen, into the apparatus.
In simple distillation, the vapor is immediately channeled into a condenser. Consequently, the distillate is not pure but rather its composition is identical to the composition of the vapors at the given temperature and pressure. That concentration follows Raoult's law.
As a result, simple distillation is effective only when the liquid boiling points differ greatly (rule of thumb is 25 °C) or when separating liquids from non-volatile solids or oils. For these cases, the vapor pressures of the components are usually different enough that the distillate may be sufficiently pure for its intended purpose.
A cutaway schematic of a simple distillation operation is shown at left. The starting liquid 15 in the boiling flask 2 is heated by a combined hotplate and magnetic stirrer 13 via a silicone oil bath (orange, 14). The vapor flows through a short Vigreux column 3 and then through a Liebig condenser 5, where it is cooled by water (blue) that circulates through ports 6 and 7. The condensed liquid drips into the receiving flask 8, sitting in a cooling bath (blue, 16). The adapter 10 has a connection 9 that may be fitted to a vacuum pump. The components are connected by ground glass joints (gray).
In many cases, the boiling points of the components in the mixture are sufficiently close that Raoult's law must be taken into consideration. Therefore, fractional distillation must be used in order to separate the components by repeated vaporization-condensation cycles within a packed fractionating column. This separation, by successive distillations, is also referred to as rectification.
As the solution to be purified is heated, its vapors rise to the fractionating column. As it rises, it cools, condensing on the condenser walls and the surfaces of the packing material. Here, the condensate continues to be heated by the rising hot vapors; it vaporizes once more. However, the composition of the fresh vapors is determined once again by Raoult's law. Each vaporization-condensation cycle (called a "theoretical plate") will yield a purer solution of the more volatile component. In reality, each cycle at a given temperature does not occur at exactly the same position in the fractionating column; "theoretical plate" is thus a concept rather than an accurate description.
More theoretical plates lead to better separations. A spinning band distillation system uses a spinning band of Teflon or metal to force the rising vapors into close contact with the descending condensate, increasing the number of theoretical plates.
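The benefit of additional theoretical plates can be sketched by applying the same ideal equilibrium relation repeatedly, one application per plate; again, the relative volatility is an assumed constant, so the numbers are only indicative.

```python
def plate(y_in, alpha=2.5):
    """One ideal vaporization-condensation cycle: condensate of composition y_in
    is re-vaporized, giving vapor further enriched in the light component."""
    return alpha * y_in / (1.0 + (alpha - 1.0) * y_in)

y = 0.50  # composition of the vapor leaving the still pot
for n in range(1, 6):
    y = plate(y)
    print(f"after {n} theoretical plate(s): light-component fraction ~ {y:.3f}")
```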
Like vacuum distillation, steam distillation is a method for distilling compounds which are heat-sensitive. The temperature of the steam is easier to control than the surface of a heating element, and allows a high rate of heat transfer without heating at a very high temperature. This process involves bubbling steam through a heated mixture of the raw material. By Raoult's law, some of the target compound will vaporize (in accordance with its partial pressure). The vapor mixture is cooled and condensed, usually yielding a layer of oil and a layer of water.
Steam distillation of various aromatic herbs and flowers can result in two products; an essential oil as well as a watery herbal distillate. The essential oils are often used in perfumery and aromatherapy while the watery distillates have many applications in aromatherapy, food processing and skin care.
Some compounds have very high boiling points. To boil such compounds, it is often better to lower the pressure at which such compounds are boiled instead of increasing the temperature. Once the pressure is lowered to the vapor pressure of the compound (at the given temperature), boiling and the rest of the distillation process can commence. This technique is referred to as vacuum distillation and it is commonly found in the laboratory in the form of the rotary evaporator.
This technique is also very useful for compounds which boil beyond their decomposition temperature at atmospheric pressure and which would therefore be decomposed by any attempt to boil them under atmospheric pressure.
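The integrated Clausius–Clapeyron equation gives a rough estimate of how far the boiling point falls under vacuum. In the sketch below, the normal boiling point (250 °C) and the enthalpy of vaporization (50 kJ/mol) are illustrative assumptions rather than data for any particular compound.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def boiling_point_at_pressure(t_boil_1atm_K, dh_vap_J_mol, p_target_atm):
    """Integrated Clausius-Clapeyron relation:
    ln(P2/P1) = -(dH/R) * (1/T2 - 1/T1), with P1 = 1 atm."""
    inv_T2 = 1.0 / t_boil_1atm_K - R * math.log(p_target_atm) / dh_vap_J_mol
    return 1.0 / inv_T2

# Compound boiling at 250 degrees C at 1 atm, distilled at 10 mmHg (illustrative values).
T2 = boiling_point_at_pressure(523.15, 50_000.0, 10.0 / 760.0)
print(f"estimated boiling point at 10 mmHg ~ {T2 - 273.15:.0f} degrees C")
```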
Molecular distillation is vacuum distillation below the pressure of 0.01 torr. 0.01 torr is one order of magnitude above high vacuum, where fluids are in the free molecular flow regime, i.e. the mean free path of molecules is comparable to the size of the equipment. The gaseous phase no longer exerts significant pressure on the substance to be evaporated, and consequently, rate of evaporation no longer depends on pressure. That is, because the continuum assumptions of fluid dynamics no longer apply, mass transport is governed by molecular dynamics rather than fluid dynamics. Thus, a short path between the hot surface and the cold surface is necessary, typically by suspending a hot plate covered with a film of feed next to a cold plate with a line of sight in between. Molecular distillation is used industrially for purification of oils.
Some compounds have high boiling points as well as being air sensitive. A simple vacuum distillation system as exemplified above can be used, whereby the vacuum is replaced with an inert gas after the distillation is complete. However, this is a less satisfactory system if one desires to collect fractions under a reduced pressure. To do this a "cow" or "pig" adaptor can be added to the end of the condenser, or for better results or for very air sensitive compounds a Perkin triangle apparatus can be used.
The Perkin triangle has means, via a series of glass or Teflon taps, to allow fractions to be isolated from the rest of the still without the main body of the distillation being removed from either the vacuum or heat source, so that it can remain in a state of reflux. To do this, the sample is first isolated from the vacuum by means of the taps; the vacuum over the sample is then replaced with an inert gas (such as nitrogen or argon), and the sample can then be stoppered and removed. A fresh collection vessel can then be added to the system, evacuated and linked back into the distillation system via the taps to collect a second fraction, and so on, until all fractions have been collected.
Short path distillation is a distillation technique that involves the distillate travelling a short distance, often only a few centimeters, and is normally done at reduced pressure. A classic example would be a distillation involving the distillate travelling from one glass bulb to another, without the need for a condenser separating the two chambers. This technique is often used for compounds which are unstable at high temperatures or to purify small amounts of compound. The advantage is that the heating temperature can be considerably lower (at reduced pressure) than the boiling point of the liquid at standard pressure, and the distillate only has to travel a short distance before condensing. A short path ensures that little compound is lost on the sides of the apparatus. The Kugelrohr is a kind of short path distillation apparatus which often contains multiple chambers to collect distillate fractions.
Zone distillation is a distillation process carried out in a long container, in which the material to be refined is partially melted in a moving liquid zone and the vapor condenses in the solid phase as the condensate is drawn into a cold area. The process has been worked out in theory. When the zone heater moves from the top to the bottom of the container, a solid condensate with an irregular impurity distribution forms, and the purest part of the condensate can be extracted as product. The process may be iterated many times by moving the condensate obtained (without turning it over) to the bottom part of the container, in place of the refined material. The irregularity of the impurity distribution in the condensate (that is, the efficiency of purification) increases with the number of iterations.
Zone distillation is the distillation analog of zone recrystallization. The impurity distribution in the condensate is described by the known equations of zone recrystallization, with the distribution coefficient k of crystallization replaced by the separation factor α of distillation.
The unit process of evaporation may also be called "distillation":
Other uses:
Interactions between the components of the solution create properties unique to the solution, as most processes entail nonideal mixtures, where Raoult's law does not hold. Such interactions can result in a constant-boiling azeotrope which behaves as if it were a pure compound (i.e., boils at a single temperature instead of a range). At an azeotrope, the solution contains the given component in the same proportion as the vapor, so that evaporation does not change the purity, and distillation does not effect separation. For example, ethyl alcohol and water form an azeotrope containing 95.6% ethanol, which boils at 78.1 °C.
If the azeotrope is not considered sufficiently pure for use, there exist some techniques to break the azeotrope to give a pure distillate. This set of techniques are known as azeotropic distillation. Some techniques achieve this by "jumping" over the azeotropic composition (by adding another component to create a new azeotrope, or by varying the pressure). Others work by chemically or physically removing or sequestering the impurity. For example, to purify ethanol beyond 95%, a drying agent (or desiccant, such as potassium carbonate) can be added to convert the soluble water into insoluble water of crystallization. Molecular sieves are often used for this purpose as well.
Immiscible liquids, such as water and toluene, easily form azeotropes. Commonly, these azeotropes are referred to as a low boiling azeotrope because the boiling point of the azeotrope is lower than the boiling point of either pure component. The temperature and composition of the azeotrope is easily predicted from the vapor pressure of the pure components, without use of Raoult's law. The azeotrope is easily broken in a distillation set-up by using a liquid–liquid separator (a decanter) to separate the two liquid layers that are condensed overhead. Only one of the two liquid layers is refluxed to the distillation set-up.
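Because each immiscible liquid exerts essentially its full pure-component vapor pressure, the mixture boils at the temperature where the two vapor-pressure curves sum to the surrounding pressure. The sketch below finds that temperature for a water–toluene mixture by bisection; the Antoine constants (for pressure in mmHg and temperature in °C) are approximate literature values included only for illustration.

```python
def p_sat(t_c, a, b, c):
    """Antoine equation: vapor pressure in mmHg at temperature t_c in degrees C."""
    return 10 ** (a - b / (c + t_c))

WATER = (8.07131, 1730.63, 233.426)    # approximate Antoine constants, illustrative
TOLUENE = (6.95464, 1344.80, 219.48)   # approximate Antoine constants, illustrative

def mixture_boiling_point(p_ambient_mmhg=760.0):
    """Bisect on temperature until the summed pure vapor pressures reach ambient pressure."""
    lo, hi = 25.0, 110.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_sat(mid, *WATER) + p_sat(mid, *TOLUENE) < p_ambient_mmhg:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"estimated boiling point of the water-toluene mixture ~ {mixture_boiling_point():.1f} degrees C")
```

The estimate, roughly 84 °C, lies below the boiling point of either pure liquid, consistent with the low-boiling behavior described above.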
High boiling azeotropes, such as a 20 weight percent mixture of hydrochloric acid in water, also exist. As implied by the name, the boiling point of the azeotrope is greater than the boiling point of either pure component.
To break azeotropic distillations and cross distillation boundaries, such as in the DeRosier Problem, it is necessary to increase the composition of the light key in the distillate.
The boiling points of components in an azeotrope overlap to form a band. By exposing an azeotrope to a vacuum or positive pressure, it is possible to bias the boiling point of one component away from the other by exploiting the differing vapor pressure curves of each; the curves may overlap at the azeotropic point, but are unlikely to remain identical further along the pressure axis on either side of the azeotropic point. When the bias is great enough, the two boiling points no longer overlap and so the azeotropic band disappears.
This method can remove the need to add other chemicals to a distillation, but it has two potential drawbacks.
Under negative pressure, power for a vacuum source is needed, and the reduced boiling points of the distillates require that the condenser be run cooler to prevent distillate vapors being lost to the vacuum source. Increased cooling demands will often require additional energy and possibly new equipment or a change of coolant.
Alternatively, if positive pressures are required, standard glassware cannot be used, energy must be used for pressurization, and there is a higher chance of side reactions occurring in the distillation, such as decomposition, due to the higher temperatures required to effect boiling.
A unidirectional distillation will rely on a pressure change in one direction, either positive or negative.
Pressure-swing distillation is essentially the same as the unidirectional distillation used to break azeotropic mixtures, but here both positive and negative pressures may be employed.
This improves the selectivity of the distillation and allows a chemist to optimize distillation by avoiding extremes of pressure and temperature that waste energy. This is particularly important in commercial applications.
One example of the application of pressure-swing distillation is during the industrial purification of ethyl acetate after its catalytic synthesis from ethanol.
Large scale industrial distillation applications include both batch and continuous fractional, vacuum, azeotropic, extractive, and steam distillation. The most widely used industrial applications of continuous, steady-state fractional distillation are in petroleum refineries, petrochemical and chemical plants and natural gas processing plants.
To control and optimize such industrial distillation, a standardized laboratory method, ASTM D86, has been established. This test method covers the atmospheric distillation of petroleum products, using a laboratory batch distillation unit to quantitatively determine the boiling range characteristics of petroleum products.
Industrial distillation is typically performed in large, vertical cylindrical columns known as distillation towers or distillation columns with diameters ranging from about 65 centimeters to 16 meters and heights ranging from about 6 meters to 90 meters or more. When the process feed has a diverse composition, as in distilling crude oil, liquid outlets at intervals up the column allow for the withdrawal of different "fractions" or products having different boiling points or boiling ranges. The "lightest" products (those with the lowest boiling point) exit from the top of the columns and the "heaviest" products (those with the highest boiling point) exit from the bottom of the column and are often called the bottoms.
Industrial towers use reflux to achieve a more complete separation of products. Reflux refers to the portion of the condensed overhead liquid product from a distillation or fractionation tower that is returned to the upper part of the tower as shown in the schematic diagram of a typical, large-scale industrial distillation tower. Inside the tower, the downflowing reflux liquid provides cooling and condensation of the upflowing vapors thereby increasing the efficiency of the distillation tower. The more reflux that is provided for a given number of theoretical plates, the better the tower's separation of lower boiling materials from higher boiling materials. Alternatively, the more reflux that is provided for a given desired separation, the fewer the number of theoretical plates required. Chemical engineers must choose what combination of reflux rate and number of plates is both economically and physically feasible for the products purified in the distillation column.
Such industrial fractionating towers are also used in cryogenic air separation, producing liquid oxygen, liquid nitrogen, and high purity argon. Distillation of chlorosilanes also enables the production of high-purity silicon for use as a semiconductor.
Design and operation of a distillation tower depends on the feed and desired products. Given a simple, binary component feed, analytical methods such as the McCabe–Thiele method or the Fenske equation can be used. For a multi-component feed, simulation models are used both for design and operation. Moreover, the efficiencies of the vapor–liquid contact devices (referred to as "plates" or "trays") used in distillation towers are typically lower than that of a theoretical 100% efficient equilibrium stage. Hence, a distillation tower needs more trays than the number of theoretical vapor–liquid equilibrium stages. A variety of models have been postulated to estimate tray efficiencies.
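For a simple binary feed with a roughly constant relative volatility, the Fenske equation estimates the minimum number of theoretical stages needed at total reflux. The product purities and the relative volatility in the sketch below are illustrative assumptions.

```python
import math

def fenske_min_stages(x_distillate, x_bottoms, alpha):
    """Fenske equation: N_min = ln[(xD/(1-xD)) * ((1-xB)/xB)] / ln(alpha),
    with compositions given as mole fractions of the light component."""
    ratio = (x_distillate / (1.0 - x_distillate)) * ((1.0 - x_bottoms) / x_bottoms)
    return math.log(ratio) / math.log(alpha)

# 95% light component overhead, 5% in the bottoms, alpha = 2.5 (all illustrative).
print(f"minimum theoretical stages ~ {fenske_min_stages(0.95, 0.05, 2.5):.1f}")
```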
In modern industrial uses, a packing material is used in the column instead of trays when low pressure drops across the column are required. Other factors that favor packing are: vacuum systems, smaller diameter columns, corrosive systems, systems prone to foaming, systems requiring low liquid holdup, and batch distillation. Conversely, factors that favor plate columns are: presence of solids in feed, high liquid rates, large column diameters, complex columns, columns with wide feed composition variation, columns with a chemical reaction, absorption columns, columns limited by foundation weight tolerance, low liquid rate, large turn-down ratio and those processes subject to process surges.
This packing material can either be random dumped packing (1–3" wide) such as Raschig rings or structured sheet metal. Liquids tend to wet the surface of the packing and the vapors pass across this wetted surface, where mass transfer takes place. Unlike conventional tray distillation in which every tray represents a separate point of vapor–liquid equilibrium, the vapor–liquid equilibrium curve in a packed column is continuous. However, when modeling packed columns, it is useful to compute a number of "theoretical stages" to denote the separation efficiency of the packed column with respect to more traditional trays. Differently shaped packings have different surface areas and void space between packings. Both of these factors affect packing performance.
Another factor in addition to the packing shape and surface area that affects the performance of random or structured packing is the liquid and vapor distribution entering the packed bed. The number of theoretical stages required to make a given separation is calculated using a specific vapor to liquid ratio. If the liquid and vapor are not evenly distributed across the superficial tower area as it enters the packed bed, the liquid to vapor ratio will not be correct in the packed bed and the required separation will not be achieved. The packing will appear not to be working properly. The height equivalent to a theoretical plate (HETP) will be greater than expected. The problem is not the packing itself but the mal-distribution of the fluids entering the packed bed. Liquid mal-distribution is more frequently the problem than vapor. The design of the liquid distributors used to introduce the feed and reflux to a packed bed is critical to making the packing perform to its maximum efficiency. Methods of evaluating the effectiveness of a liquid distributor to evenly distribute the liquid entering a packed bed can be found in references. Considerable work has been done on this topic by Fractionation Research, Inc. (commonly known as FRI).
The goal of multi-effect distillation is to increase the energy efficiency of the process, for use in desalination, or in some cases one stage in the production of ultrapure water. The energy required per volume of water recovered, in kW·h/m3, is roughly inversely proportional to the number of effects; single-effect distillation requires roughly 636 kW·h/m3.
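The single-effect figure follows roughly from the latent heat of vaporization of water, and the idealized 1/N scaling shows why additional effects reduce the energy per cubic metre; the values below are approximate and purely illustrative.

```python
latent_heat_kj_per_kg = 2260.0   # approximate latent heat of vaporization of water near 100 degrees C
density_kg_per_m3 = 1000.0       # approximate density of water

# Energy to evaporate one cubic metre once, converted from kJ to kWh (divide by 3600).
kwh_per_m3_single = latent_heat_kj_per_kg * density_kg_per_m3 / 3600.0
print(f"single effect ~ {kwh_per_m3_single:.0f} kWh/m3")   # ~628, close to the quoted 636

for n_effects in (2, 4, 8):
    # Idealized scaling: each effect reuses the heat of condensation roughly once more.
    print(f"{n_effects} effects ~ {kwh_per_m3_single / n_effects:.0f} kWh/m3")
```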
There are many other types of multi-effect distillation processes, including one referred to as simply multi-effect distillation (MED), in which multiple chambers, with intervening heat exchangers, are employed.
Carbohydrate-containing plant materials are allowed to ferment, producing a dilute solution of ethanol in the process. Spirits such as whiskey and rum are prepared by distilling these dilute solutions of ethanol. Components other than ethanol, including water, esters, and other alcohols, are collected in the condensate and account for the flavor of the beverage. Some of these beverages are then stored in barrels or other containers to acquire more flavor compounds and characteristic flavors. | https://en.wikipedia.org/wiki?curid=8301 |
David Hilbert
David Hilbert (; ; 23 January 1862 – 14 February 1943) was a German mathematician and one of the most influential and universal mathematicians of the 19th and early 20th centuries. Hilbert discovered and developed a broad range of fundamental ideas in many areas, including invariant theory, the calculus of variations, commutative algebra, algebraic number theory, the foundations of geometry, spectral theory of operators and its application to integral equations, mathematical physics, and foundations of mathematics (particularly proof theory).
Hilbert adopted and warmly defended Georg Cantor's set theory and transfinite numbers. A famous example of his leadership in mathematics is his 1900 presentation of a collection of problems that set the course for much of the mathematical research of the 20th century.
Hilbert and his students contributed significantly to establishing rigor and developed important tools used in modern mathematical physics. Hilbert is known as one of the founders of proof theory and mathematical logic.
Hilbert, the first of two children and only son of Otto and Maria Therese (Erdtmann) Hilbert, was born in the Province of Prussia, Kingdom of Prussia, either in Königsberg (according to Hilbert's own statement) or in Wehlau (known since 1946 as Znamensk) near Königsberg where his father worked at the time of his birth.
In late 1872, Hilbert entered the Friedrichskolleg Gymnasium ("Collegium fridericianum", the same school that Immanuel Kant had attended 140 years before); but, after an unhappy period, he transferred to (late 1879) and graduated from (early 1880) the more science-oriented Wilhelm Gymnasium. Upon graduation, in autumn 1880, Hilbert enrolled at the University of Königsberg, the "Albertina". In early 1882, Hermann Minkowski (two years younger than Hilbert and also a native of Königsberg, who had gone to Berlin for three semesters) returned to Königsberg and entered the university. Hilbert developed a lifelong friendship with the shy, gifted Minkowski.
In 1884, Adolf Hurwitz arrived from Göttingen as an Extraordinarius (i.e., an associate professor). An intense and fruitful scientific exchange among the three began, and Minkowski and Hilbert especially would exercise a reciprocal influence over each other at various times in their scientific careers. Hilbert obtained his doctorate in 1885, with a dissertation, written under Ferdinand von Lindemann, titled "Über invariante Eigenschaften spezieller binärer Formen, insbesondere der Kugelfunktionen" ("On the invariant properties of special binary forms, in particular the spherical harmonic functions").
Hilbert remained at the University of Königsberg as a "Privatdozent" (senior lecturer) from 1886 to 1895. In 1895, as a result of intervention on his behalf by Felix Klein, he obtained the position of Professor of Mathematics at the University of Göttingen. During the Klein and Hilbert years, Göttingen became the preeminent institution in the mathematical world. He remained there for the rest of his life.
Among Hilbert's students were Hermann Weyl, chess champion Emanuel Lasker, Ernst Zermelo, and Carl Gustav Hempel. John von Neumann was his assistant. At the University of Göttingen, Hilbert was surrounded by a social circle of some of the most important mathematicians of the 20th century, such as Emmy Noether and Alonzo Church.
Among his 69 Ph.D. students in Göttingen were many who later became famous mathematicians, including (with date of thesis): Otto Blumenthal (1898), Felix Bernstein (1901), Hermann Weyl (1908), Richard Courant (1910), Erich Hecke (1910), Hugo Steinhaus (1911), and Wilhelm Ackermann (1925). Between 1902 and 1939 Hilbert was editor of the "Mathematische Annalen", the leading mathematical journal of the time.
Around 1925, Hilbert developed pernicious anemia, a then-untreatable vitamin deficiency whose primary symptom is exhaustion; his assistant Eugene Wigner described him as subject to "enormous fatigue" and how he "seemed quite old", and that even after eventually being diagnosed and treated, he "was hardly a scientist after 1925, and certainly not a Hilbert."
Hilbert lived to see the Nazis purge many of the prominent faculty members at University of Göttingen in 1933. Those forced out included Hermann Weyl (who had taken Hilbert's chair when he retired in 1930), Emmy Noether and Edmund Landau. One who had to leave Germany, Paul Bernays, had collaborated with Hilbert in mathematical logic, and co-authored with him the important book "Grundlagen der Mathematik" (which eventually appeared in two volumes, in 1934 and 1939). This was a sequel to the Hilbert-Ackermann book "Principles of Mathematical Logic" from 1928. Hermann Weyl's successor was Helmut Hasse.
About a year later, Hilbert attended a banquet and was seated next to the new Minister of Education, Bernhard Rust. Rust asked whether "the "Mathematical Institute" really suffered so much because of the departure of the Jews". Hilbert replied,
"Suffered? It doesn't exist any longer, does it!"
By the time Hilbert died in 1943, the Nazis had nearly completely restaffed the university, as many of the former faculty had either been Jewish or married to Jews. Hilbert's funeral was attended by fewer than a dozen people, only two of whom were fellow academics, among them Arnold Sommerfeld, a theoretical physicist and also a native of Königsberg. News of his death only became known to the wider world six months after he died.
The epitaph on his tombstone in Göttingen consists of the famous lines he spoke at the conclusion of his retirement address to the Society of German Scientists and Physicians on 8 September 1930. The words were given in response to the Latin maxim: "Ignoramus et ignorabimus" or "We do not know, we shall not know":
In English:
The day before Hilbert pronounced these phrases at the 1930 annual meeting of the Society of German Scientists and Physicians, Kurt Gödel—in a round table discussion during the Conference on Epistemology held jointly with the Society meetings—tentatively announced the first expression of his incompleteness theorem. Gödel's incompleteness theorems show that even elementary axiomatic systems such as Peano arithmetic either are self-contradictory or contain logical propositions that are impossible to prove or disprove.
In 1892, Hilbert married Käthe Jerosch (1864–1945), from a German Jewish family, "the daughter of a Königsberg merchant, an outspoken young lady with an independence of mind that matched his own". While at Königsberg they had their one child, Franz Hilbert (1893–1969).
Hilbert's son Franz suffered throughout his life from an undiagnosed mental illness. His inferior intellect was a terrible disappointment to his father and this misfortune was a matter of distress to the mathematicians and students at Göttingen.
Hilbert considered the mathematician Hermann Minkowski to be his "best and truest friend".
Hilbert was baptized and raised a Calvinist in the Prussian Evangelical Church. He later left the Church and became an agnostic. He also argued that mathematical truth was independent of the existence of God or other "a priori" assumptions. When Galileo Galilei was criticized for failing to stand up for his convictions on the Heliocentric theory, Hilbert objected: "But [Galileo] was not an idiot. Only an idiot could believe that scientific truth needs martyrdom; that may be necessary in religion, but scientific results prove themselves in due time."
Hilbert's first work on invariant functions led him to the demonstration in 1888 of his famous "finiteness theorem". Twenty years earlier, Paul Gordan had demonstrated the theorem of the finiteness of generators for binary forms using a complex computational approach. Attempts to generalize his method to functions with more than two variables failed because of the enormous difficulty of the calculations involved. In order to solve what had become known in some circles as "Gordan's Problem", Hilbert realized that it was necessary to take a completely different path. As a result, he demonstrated "Hilbert's basis theorem", showing the existence of a finite set of generators, for the invariants of quantics in any number of variables, but in an abstract form. That is, while demonstrating the existence of such a set, it was not a constructive proof — it did not display "an object" — but rather, it was an existence proof and relied on use of the law of excluded middle in an infinite extension.
Hilbert sent his results to the "Mathematische Annalen". Gordan, the house expert on the theory of invariants for the "Mathematische Annalen", could not appreciate the revolutionary nature of Hilbert's theorem and rejected the article, criticizing the exposition because it was insufficiently comprehensive. His comment was:
Klein, on the other hand, recognized the importance of the work, and guaranteed that it would be published without any alterations. Encouraged by Klein, Hilbert extended his method in a second article, providing estimations on the maximum degree of the minimum set of generators, and he sent it once more to the "Annalen". After having read the manuscript, Klein wrote to him, saying:
Later, after the usefulness of Hilbert's method was universally recognized, Gordan himself would say:
For all his successes, the nature of his proof stirred up more trouble than Hilbert could have imagined at the time. Although Kronecker had conceded, Hilbert would later respond to others' similar criticisms that "many different constructions are subsumed under one fundamental idea" — in other words (to quote Reid): "Through a proof of existence, Hilbert had been able to obtain a construction"; "the proof" (i.e. the symbols on the page) "was" "the object". Not all were convinced. While Kronecker would die soon afterwards, his constructivist philosophy would continue with the young Brouwer and his developing intuitionist "school", much to Hilbert's torment in his later years. Indeed, Hilbert would lose his "gifted pupil" Weyl to intuitionism — "Hilbert was disturbed by his former student's fascination with the ideas of Brouwer, which aroused in Hilbert the memory of Kronecker". Brouwer the intuitionist in particular opposed the use of the Law of Excluded Middle over infinite sets (as Hilbert had used it). Hilbert would respond:
The text "Grundlagen der Geometrie" (tr.: "Foundations of Geometry") published by Hilbert in 1899 proposes a formal set, called Hilbert's axioms, substituting for the traditional axioms of Euclid. They avoid weaknesses identified in those of Euclid, whose works at the time were still used textbook-fashion. It is difficult to specify the axioms used by Hilbert without referring to the publication history of the "Grundlagen" since Hilbert changed and modified them several times. The original monograph was quickly followed by a French translation, in which Hilbert added V.2, the Completeness Axiom. An English translation, authorized by Hilbert, was made by E.J. Townsend and copyrighted in 1902. This translation incorporated the changes made in the French translation and so is considered to be a translation of the 2nd edition. Hilbert continued to make changes in the text and several editions appeared in German. The 7th edition was the last to appear in Hilbert's lifetime. New editions followed the 7th, but the main text was essentially not revised.
Hilbert's approach signaled the shift to the modern axiomatic method. In this, Hilbert was anticipated by Moritz Pasch's work from 1882. Axioms are not taken as self-evident truths. Geometry may treat "things", about which we have powerful intuitions, but it is not necessary to assign any explicit meaning to the undefined concepts. The elements, such as point, line, plane, and others, could be substituted, as Hilbert is reported to have said to Schoenflies and Kötter, by tables, chairs, glasses of beer and other such objects. It is their defined relationships that are discussed.
Hilbert first enumerates the undefined concepts: point, line, plane, lying on (a relation between points and lines, points and planes, and lines and planes), betweenness, congruence of pairs of points (line segments), and congruence of angles. The axioms unify both the plane geometry and solid geometry of Euclid in a single system.
Hilbert put forth a most influential list of 23 unsolved problems at the International Congress of Mathematicians in Paris in 1900. This is generally reckoned as the most successful and deeply considered compilation of open problems ever to be produced by an individual mathematician.
After re-working the foundations of classical geometry, Hilbert could have extrapolated to the rest of mathematics. His approach differed, however, from the later 'foundationalist' Russell-Whitehead or 'encyclopedist' Nicolas Bourbaki, and from his contemporary Giuseppe Peano. The mathematical community as a whole could enlist in the problems he had identified as crucial aspects of the areas of mathematics he took to be key.
The problem set was launched as a talk "The Problems of Mathematics" presented during the course of the Second International Congress of Mathematicians held in Paris. The introduction of the speech that Hilbert gave said:
He presented fewer than half the problems at the Congress, which were published in the acts of the Congress. In a subsequent publication, he extended the panorama, and arrived at the formulation of the now-canonical 23 Problems of Hilbert. See also Hilbert's twenty-fourth problem. The full text is important, since how the questions are to be interpreted can still be a matter of debate whenever it is asked how many have been solved.
Some of these were solved within a short time. Others have been discussed throughout the 20th century, with a few now taken to be unsuitably open-ended to come to closure. Some even continue to this day to remain a challenge for mathematicians.
In an account that had become standard by the mid-century, Hilbert's problem set was also a kind of manifesto that opened the way for the development of the formalist school, one of three major schools of mathematics of the 20th century. According to the formalists, mathematics is the manipulation of symbols according to agreed-upon formal rules. It is therefore an autonomous activity of thought. There is, however, room to doubt whether Hilbert's own views were simplistically formalist in this sense.
In 1920 he proposed explicitly a research project (in "metamathematics", as it was then termed) that became known as Hilbert's program. He wanted mathematics to be formulated on a solid and complete logical foundation. He believed that in principle this could be done, by showing that:
1. all of mathematics follows from a correctly chosen finite system of axioms; and
2. some such axiom system is provably consistent through some means such as the epsilon calculus.
He seems to have had both technical and philosophical reasons for formulating this proposal. It affirmed his dislike of what had become known as the "ignorabimus", still an active issue in his time in German thought, and traced back in that formulation to Emil du Bois-Reymond.
This program is still recognizable in the most popular philosophy of mathematics, where it is usually called "formalism". For example, the Bourbaki group adopted a watered-down and selective version of it as adequate to the requirements of their twin projects of (a) writing encyclopedic foundational works, and (b) supporting the axiomatic method as a research tool. This approach has been successful and influential in relation with Hilbert's work in algebra and functional analysis, but has failed to engage in the same way with his interests in physics and logic.
Hilbert wrote in 1919:
Hilbert published his views on the foundations of mathematics in the 2-volume work Grundlagen der Mathematik.
Hilbert and the mathematicians who worked with him in his enterprise were committed to the project. His attempt to support axiomatized mathematics with definitive principles, which could banish theoretical uncertainties, ended in failure.
Gödel demonstrated that any non-contradictory formal system comprehensive enough to include at least arithmetic cannot demonstrate its own consistency by means of its own axioms. In 1931 his incompleteness theorem showed that Hilbert's grand plan was impossible as stated. The second point cannot in any reasonable way be combined with the first point, as long as the axiom system is genuinely finitary.
Nevertheless, the subsequent achievements of proof theory at the very least "clarified" consistency as it relates to theories of central concern to mathematicians. Hilbert's work had started logic on this course of clarification; the need to understand Gödel's work then led to the development of recursion theory and then mathematical logic as an autonomous discipline in the 1930s. The basis for later theoretical computer science, in the work of Alonzo Church and Alan Turing, also grew directly out of this 'debate'.
Around 1909, Hilbert dedicated himself to the study of differential and integral equations; his work had direct consequences for important parts of modern functional analysis. In order to carry out these studies, Hilbert introduced the concept of an infinite dimensional Euclidean space, later called Hilbert space. His work in this part of analysis provided the basis for important contributions to the mathematics of physics in the next two decades, though from an unanticipated direction.
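In present-day terms, the structure later named after him can be summarized as follows; this is the standard modern definition rather than Hilbert's own formulation.

```latex
% Standard modern definition of a Hilbert space, with the classical example
A Hilbert space is a real or complex inner-product space
$(H, \langle\cdot,\cdot\rangle)$ that is complete in the norm
$\|x\| = \sqrt{\langle x, x\rangle}$. The prototypical infinite-dimensional
example, arising from the study of integral equations, is the sequence space
\[
  \ell^2 = \Big\{ (x_n)_{n \ge 1} : \sum_{n=1}^{\infty} |x_n|^2 < \infty \Big\},
  \qquad
  \langle x, y \rangle = \sum_{n=1}^{\infty} x_n \overline{y_n}.
\]
```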
Later on, Stefan Banach amplified the concept, defining Banach spaces. Hilbert spaces are an important class of objects in the area of functional analysis, particularly of the spectral theory of self-adjoint linear operators, that grew up around it during the 20th century.
Until 1912, Hilbert was almost exclusively a "pure" mathematician. When planning a visit from Bonn, where he was immersed in studying physics, his fellow mathematician and friend Hermann Minkowski joked he had to spend 10 days in quarantine before being able to visit Hilbert. In fact, Minkowski seems responsible for most of Hilbert's physics investigations prior to 1912, including their joint seminar in the subject in 1905.
In 1912, three years after his friend's death, Hilbert turned his focus to the subject almost exclusively. He arranged to have a "physics tutor" for himself. He started studying kinetic gas theory and moved on to elementary radiation theory and the molecular theory of matter. Even after the war started in 1914, he continued seminars and classes where the works of Albert Einstein and others were followed closely.
By 1907, Einstein had framed the fundamentals of the theory of gravity, but then struggled for nearly 8 years with a confounding problem of putting the theory into final form. By early summer 1915, Hilbert's interest in physics had focused on general relativity, and he invited Einstein to Göttingen to deliver a week of lectures on the subject. Einstein received an enthusiastic reception at Göttingen. Over the summer, Einstein learned that Hilbert was also working on the field equations and redoubled his own efforts. During November 1915, Einstein published several papers culminating in "The Field Equations of Gravitation" (see Einstein field equations). Nearly simultaneously, David Hilbert published "The Foundations of Physics", an axiomatic derivation of the field equations (see Einstein–Hilbert action). Hilbert fully credited Einstein as the originator of the theory, and no public priority dispute concerning the field equations ever arose between the two men during their lives. See more at priority.
Additionally, Hilbert's work anticipated and assisted several advances in the mathematical formulation of quantum mechanics. His work was a key aspect of Hermann Weyl and John von Neumann's work on the mathematical equivalence of Werner Heisenberg's matrix mechanics and Erwin Schrödinger's wave equation, and his namesake Hilbert space plays an important part in quantum theory. In 1926, von Neumann showed that, if quantum states were understood as vectors in Hilbert space, they would correspond with both Schrödinger's wave function theory and Heisenberg's matrices.
Throughout this immersion in physics, Hilbert worked on putting rigor into the mathematics of physics. While physics was highly dependent on higher mathematics, physicists tended to be "sloppy" with it. To a "pure" mathematician like Hilbert, this was both "ugly" and difficult to understand. As he began to understand physics and how physicists were using mathematics, he developed a coherent mathematical theory for what he found, most importantly in the area of integral equations. When his colleague Richard Courant wrote the now classic "Methoden der mathematischen Physik" (Methods of Mathematical Physics) including some of Hilbert's ideas, he added Hilbert's name as author even though Hilbert had not directly contributed to the writing. Hilbert said "Physics is too hard for physicists", implying that the necessary mathematics was generally beyond them; the Courant-Hilbert book made it easier for them.
Hilbert unified the field of algebraic number theory with his 1897 treatise "Zahlbericht" (literally "report on numbers"). He also resolved a significant number-theory problem formulated by Waring in 1770. As with the finiteness theorem, he used an existence proof that shows there must be solutions for the problem rather than providing a mechanism to produce the answers. He then had little more to publish on the subject; but the emergence of Hilbert modular forms in the dissertation of a student means his name is further attached to a major area.
He made a series of conjectures on class field theory. The concepts were highly influential, and his own contribution lives on in the names of the Hilbert class field and of the Hilbert symbol of local class field theory. Results were mostly proved by 1930, after work by Teiji Takagi.
Hilbert did not work in the central areas of analytic number theory, but his name has become known for the Hilbert–Pólya conjecture, for reasons that are anecdotal.
His collected works ("Gesammelte Abhandlungen") have been published several times. The original versions of his papers contained "many technical errors of varying degree"; when the collection was first published, the errors were corrected and it was found that this could be done without major changes in the statements of the theorems, with one exception—a claimed proof of the continuum hypothesis. The errors were nonetheless so numerous and significant that it took Olga Taussky-Todd three years to make the corrections. | https://en.wikipedia.org/wiki?curid=8302 |
Down syndrome
Down syndrome or Down's syndrome, also known as trisomy 21, is a genetic disorder caused by the presence of all or part of a third copy of chromosome 21. It is usually associated with physical growth delays, mild to moderate intellectual disability, and characteristic facial features. The average IQ of a young adult with Down syndrome is 50, equivalent to the mental ability of an 8- or 9-year-old child, but this can vary widely.
The parents of the affected individual are usually genetically normal. The probability increases from less than 0.1% in 20-year-old mothers to 3% in those of age 45. The extra chromosome is believed to occur by chance, with no known behavioral activity or environmental factor that changes the probability. Down syndrome can be identified during pregnancy by prenatal screening followed by diagnostic testing or after birth by direct observation and genetic testing. Since the introduction of screening, pregnancies with the diagnosis are often terminated. Regular screening for health problems common in Down syndrome is recommended throughout the person's life.
There is no cure for Down syndrome. Education and proper care have been shown to improve quality of life. Some children with Down syndrome are educated in typical school classes, while others require more specialized education. Some individuals with Down syndrome graduate from high school, and a few attend post-secondary education. In adulthood, about 20% in the United States do paid work in some capacity, with many requiring a sheltered work environment. Support in financial and legal matters is often needed. Life expectancy is around 50 to 60 years in the developed world with proper health care.
Down syndrome is the most common chromosome abnormality in humans. It occurs in about 1 in 1,000 babies born each year. In 2015, Down syndrome was present in 5.4 million individuals globally and resulted in 27,000 deaths, down from 43,000 deaths in 1990. It is named after British doctor John Langdon Down, who fully described the syndrome in 1866. Some aspects of the condition were described earlier by French psychiatrist Jean-Étienne Dominique Esquirol in 1838 and French physician Édouard Séguin in 1844. The genetic cause of Down syndrome was discovered in 1959.
Those with Down syndrome nearly always have physical and intellectual disabilities. As adults, their mental abilities are typically similar to those of an 8- or 9-year-old. They also typically have poor immune function and generally reach developmental milestones at a later age. They have an increased risk of a number of other health problems, including congenital heart defect, epilepsy, leukemia, thyroid diseases, and mental disorders.
People with Down syndrome may have some or all of these physical characteristics: a small chin, slanted eyes, poor muscle tone, a flat nasal bridge, a single crease of the palm, and a protruding tongue due to a small mouth and relatively large tongue. These airway changes lead to obstructive sleep apnea in around half of those with Down syndrome. Other common features include: a flat and wide face, a short neck, excessive joint flexibility, extra space between big toe and second toe, abnormal patterns on the fingertips and short fingers. Instability of the atlantoaxial joint occurs in about 20% and may lead to spinal cord injury in 1–2%. Hip dislocations may occur without trauma in up to a third of people with Down syndrome.
Growth in height is slower, resulting in adults who tend to have short stature—the average height for men is 154 cm (5 ft 1 in) and for women is 142 cm (4 ft 8 in). Individuals with Down syndrome are at increased risk for obesity as they age. Growth charts have been developed specifically for children with Down syndrome.
This syndrome causes about a third of cases of intellectual disability. Many developmental milestones are delayed, with the ability to crawl typically occurring around 8 months rather than 5 months, and the ability to walk independently typically occurring around 21 months rather than 14 months.
Most individuals with Down syndrome have mild (IQ: 50–69) or moderate (IQ: 35–50) intellectual disability with some cases having severe (IQ: 20–35) difficulties. Those with mosaic Down syndrome typically have IQ scores 10–30 points higher. As they age, people with Down syndrome typically perform worse than their same-age peers.
Commonly, individuals with Down syndrome have better language understanding than ability to speak. Between 10 and 45% have either a stutter or rapid and irregular speech, making it difficult to understand them. After reaching 30 years of age, some may lose their ability to speak.
They typically do fairly well with social skills. Behavior problems are not generally as great an issue as in other syndromes associated with intellectual disability. In children with Down syndrome, mental illness occurs in nearly 30% with autism occurring in 5–10%. People with Down syndrome experience a wide range of emotions. While people with Down syndrome are generally happy, symptoms of depression and anxiety may develop in early adulthood.
Children and adults with Down syndrome are at increased risk of epileptic seizures, which occur in 5–10% of children and up to 50% of adults. This includes an increased risk of a specific type of seizure called infantile spasms. Many (15%) who live 40 years or longer develop Alzheimer’s disease. In those who reach 60 years of age, 50–70% have the disease.
Hearing and vision disorders occur in more than half of people with Down syndrome.
Vision problems occur in 38 to 80%. Between 20 and 50% have strabismus, in which the two eyes do not move together. Cataracts (cloudiness of the lens of the eye) occur in 15%, and may be present at birth. Keratoconus (a thin, cone-shaped cornea) and glaucoma (increased eye pressure) are also more common, as are refractive errors requiring glasses or contacts. Brushfield spots (small white or grayish/brown spots on the outer part of the iris) are present in 38 to 85% of individuals.
Hearing problems are found in 50–90% of children with Down syndrome. This is often the result of otitis media with effusion which occurs in 50–70% and chronic ear infections which occur in 40 to 60%. Ear infections often begin in the first year of life and are partly due to poor eustachian tube function. Excessive ear wax can also cause hearing loss due to obstruction of the outer ear canal. Even a mild degree of hearing loss can have negative consequences for speech, language understanding, and academics. Additionally, it is important to rule out hearing loss as a factor in social and cognitive deterioration. Age-related hearing loss of the sensorineural type occurs at a much earlier age and affects 10–70% of people with Down syndrome.
The rate of congenital heart disease in newborns with Down syndrome is around 40%. Of those with heart disease, about 80% have an atrioventricular septal defect or ventricular septal defect with the former being more common. Mitral valve problems become common as people age, even in those without heart problems at birth. Other problems that may occur include tetralogy of Fallot and patent ductus arteriosus. People with Down syndrome have a lower risk of hardening of the arteries.
Although the overall risk of cancer in Down syndrome is not changed, the risk of testicular cancer and certain blood cancers, including acute lymphoblastic leukemia (ALL) and acute megakaryoblastic leukemia (AMKL) is increased while the risk of other non-blood cancers is decreased. People with Down syndrome are believed to have an increased risk of developing cancers derived from germ cells whether these cancers are blood or non-blood related.
Leukemia is 10 to 15 times more common in children with Down syndrome. In particular, acute lymphoblastic leukemia is 20 times more common and the megakaryoblastic form of acute myeloid leukemia (acute megakaryoblastic leukemia) is 500 times more common. Acute megakaryoblastic leukemia (AMKL) is a leukemia of megakaryoblasts, the precursor cells to megakaryocytes, which form blood platelets. Acute lymphoblastic leukemia in Down syndrome accounts for 1–3% of all childhood cases of ALL. It occurs most often in those older than nine years or those having a white blood cell count greater than 50,000 per microliter, and is rare in those younger than one year old. ALL in Down syndrome tends to have poorer outcomes than ALL in people without Down syndrome.
In Down syndrome, AMKL is typically preceded by transient myeloproliferative disease (TMD), a disorder of blood cell production in which non-cancerous megakaryoblasts with a mutation in the "GATA1" gene rapidly divide during the later period of pregnancy. The condition affects 3–10% of babies with Down syndrome. While it often spontaneously resolves within three months of birth, it can cause serious blood, liver, or other complications. In about 10% of cases, TMD progresses to AMKL during the three months to five years following its resolution.
People with Down syndrome have a lower risk of all major solid cancers, including those of the lung, breast, and cervix, with the lowest relative rates occurring in those aged 50 years or older. This low risk is thought to be due to an increase in the expression of tumor suppressor genes present on chromosome 21. One exception is testicular germ cell cancer, which occurs at a higher rate in Down syndrome.
Problems of the thyroid gland occur in 20–50% of individuals with Down syndrome. Low thyroid is the most common form, occurring in almost half of all individuals. Thyroid problems can be due to a poorly or nonfunctioning thyroid at birth (known as congenital hypothyroidism) which occurs in 1% or can develop later due to an attack on the thyroid by the immune system resulting in Graves' disease or autoimmune hypothyroidism. Type 1 diabetes mellitus is also more common.
Constipation occurs in nearly half of people with Down syndrome and may result in changes in behavior. One potential cause is Hirschsprung's disease, occurring in 2–15%, which is due to a lack of nerve cells controlling the colon. Other frequent congenital problems include duodenal atresia, pyloric stenosis, Meckel diverticulum, and imperforate anus. Celiac disease affects about 7–20% and gastroesophageal reflux disease is also more common.
Individuals with Down syndrome tend to be more susceptible to gingivitis as well as early, severe periodontal disease, necrotising ulcerative gingivitis, and early tooth loss, especially in the lower front teeth. While plaque and poor oral hygiene are contributing factors, the severity of these periodontal diseases cannot be explained solely by external factors. Research suggests that the severity is likely a result of a weakened immune system. The weakened immune system also contributes to increased incidence of yeast infections in the mouth (from Candida albicans).
Individuals with Down syndrome also tend to have a more alkaline saliva resulting in a greater resistance to tooth decay, despite decreased quantities of saliva, less effective oral hygiene habits, and higher plaque indexes.
Higher rates of tooth wear and bruxism are also common. Other common oral manifestations of Down syndrome include enlarged hypotonic tongue, crusted and hypotonic lips, mouth breathing, narrow palate with crowded teeth, class III malocclusion with an underdeveloped maxilla and posterior crossbite, delayed exfoliation of baby teeth and delayed eruption of adult teeth, shorter roots on teeth, and often missing and malformed (usually smaller) teeth. Less common manifestations include cleft lip and palate and enamel hypocalcification (20% prevalence).
Males with Down syndrome usually do not father children, while females have lower rates of fertility relative to those who are unaffected. Fertility is estimated to be present in 30–50% of females. Menopause usually occurs at an earlier age. The poor fertility in males is thought to be due to problems with sperm development; however, it may also be related to not being sexually active. As of 2006, three instances of males with Down syndrome fathering children and 26 cases of females having children have been reported. Without assisted reproductive technologies, around half of the children of someone with Down syndrome will also have the syndrome.
Down syndrome is caused by having three copies of the genes on chromosome 21, rather than the usual two. The parents of the affected individual are typically genetically normal. Those who have one child with Down syndrome have about a 1% risk of having a second child with the syndrome, if both parents are found to have normal karyotypes.
The extra chromosome content can arise through several different ways. The most common cause (about 92–95% of cases) is a complete extra copy of chromosome 21, resulting in trisomy 21. In 1.0 to 2.5% of cases, some of the cells in the body are normal and others have trisomy 21, known as mosaic Down syndrome. The other common mechanisms that can give rise to Down syndrome include: a Robertsonian translocation, isochromosome, or ring chromosome. These contain additional material from chromosome 21 and occur in about 2.5% of cases. An isochromosome results when the two long arms of a chromosome separate together rather than the long and short arm separating together during egg or sperm development.
Trisomy 21 (also known by the karyotype 47,XX,+21 for females and 47,XY,+21 for males) is caused by a failure of the 21st chromosome to separate during egg or sperm development (nondisjunction). As a result, a sperm or egg cell is produced with an extra copy of chromosome 21; this cell thus has 24 chromosomes. When combined with a normal cell from the other parent, the baby has 47 chromosomes, with three copies of chromosome 21. About 88% of cases of trisomy 21 result from nonseparation of the chromosomes in the mother, 8% from nonseparation in the father, and 3% after the egg and sperm have merged.
The extra chromosome 21 material may also occur due to a Robertsonian translocation in 2–4% of cases. In this situation, the long arm of chromosome 21 is attached to another chromosome, often chromosome 14. In a male affected with Down syndrome, it results in a karyotype of 46XY,t(14q21q). This may be a new mutation or previously present in one of the parents. The parent with such a translocation is usually normal physically and mentally; however, during production of egg or sperm cells, a higher chance of creating reproductive cells with extra chromosome 21 material exists. This results in a 15% chance of having a child with Down syndrome when the mother is affected and a less than 5% probability if the father is affected. The probability of this type of Down syndrome is not related to the mother's age. Some children without Down syndrome may inherit the translocation and have a higher probability of having children of their own with Down syndrome. In this case it is sometimes known as familial Down syndrome.
The extra genetic material present in Down syndrome results in overexpression of a portion of the 310 genes located on chromosome 21. This overexpression has been estimated at around 50%, due to the third copy of the chromosome present. Some research has suggested the Down syndrome critical region is located at bands 21q22.1–q22.3, with this area including genes for amyloid, superoxide dismutase, and likely the ETS2 proto oncogene. Other research, however, has not confirmed these findings. microRNAs are also proposed to be involved.
The dementia that occurs in Down syndrome is due to an excess of amyloid beta peptide produced in the brain and is similar to Alzheimer's disease, which also involves amyloid beta build-up. Amyloid beta is processed from amyloid precursor protein, the gene for which is located on chromosome 21. Senile plaques and neurofibrillary tangles are present in nearly all by 35 years of age, though dementia may not be present. Those with Down syndrome also lack a normal number of lymphocytes and produce fewer antibodies, which contributes to their increased risk of infection.
Down syndrome is associated with an increased risk of many chronic diseases that are typically associated with older age, such as Alzheimer's disease. The accelerated aging suggests that trisomy 21 increases the biological age of tissues, but molecular evidence for this hypothesis is sparse. According to a biomarker of tissue age known as the epigenetic clock, trisomy 21 increases the age of blood and brain tissue (on average by 6.6 years).
When screening tests predict a high risk of Down syndrome, a more invasive diagnostic test (amniocentesis or chorionic villus sampling) is needed to confirm the diagnosis. The false-positive rate with screening is about 2–5% (see section Screening below). Amniocentesis and chorionic villus sampling are more reliable tests, but they increase the risk of miscarriage by between 0.5 and 1%. The risk of limb problems may be increased in the offspring if chorionic villus sampling is performed before 10 weeks. The risk from the procedure is greater the earlier it is performed; thus, amniocentesis is not recommended before 15 weeks gestational age and chorionic villus sampling not before 10 weeks gestational age.
About 92% of pregnancies in Europe with a diagnosis of Down syndrome are terminated. As a result, there is almost no one with Down's in Iceland and Denmark, where screening is commonplace. In the United States, the termination rate after diagnosis is around 75%, but varies from 61% to 93% depending on the population surveyed. Rates are lower among women who are younger and have decreased over time. When asked if they would have a termination if their fetus tested positive, 23–33% said yes; when high-risk pregnant women were asked, 46–86% said yes; and when women who screened positive were asked, 89–97% said yes.
The diagnosis can often be suspected based on the child's physical appearance at birth. An analysis of the child's chromosomes is needed to confirm the diagnosis, and to determine if a translocation is present, as this may help determine the risk of the child's parents having further children with Down syndrome. Parents generally wish to know the possible diagnosis once it is suspected and do not wish pity.
Guidelines recommend screening for Down syndrome to be offered to all pregnant women, regardless of age. A number of tests are used, with varying levels of accuracy. They are typically used in combination to increase the detection rate. None can be definitive; thus, if screening is positive, either amniocentesis or chorionic villus sampling is required to confirm the diagnosis. Screening in both the first and second trimesters is better than just screening in the first trimester. The different screening techniques in use are able to pick up 90–95% of cases, with a false-positive rate of 2–5%. If Down syndrome occurs in one in 500 pregnancies and the test used has a 5% false-positive rate, this means, of 26 women who test positive on screening, only one will have Down syndrome confirmed. If the screening test has a 2% false-positive rate, this means one of eleven who test positive on screening has a fetus with Down syndrome.
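The arithmetic behind these figures is a positive-predictive-value calculation. The sketch below reproduces it under stated, purely illustrative assumptions (a hypothetical cohort of 500 pregnancies, exactly one affected, and a test that detects every true case); it is not drawn from any particular study.

```python
# Rough sketch of the screening arithmetic above. Assumes 1-in-500 prevalence,
# 100% detection of true cases, and a hypothetical cohort of 500 pregnancies;
# only the false-positive rate is varied.

def screen_positives(prevalence, false_positive_rate, cohort=500):
    """Return expected (true positives, false positives) in the cohort."""
    affected = cohort * prevalence                 # e.g. 1 affected pregnancy
    unaffected = cohort - affected                 # e.g. 499 unaffected
    true_pos = affected                            # every true case is flagged
    false_pos = unaffected * false_positive_rate   # unaffected flagged in error
    return true_pos, false_pos

for fpr in (0.05, 0.02):
    tp, fp = screen_positives(prevalence=1 / 500, false_positive_rate=fpr)
    total = tp + fp
    print(f"False-positive rate {fpr:.0%}: about {round(total)} screen positive, "
          f"of whom {round(tp)} is confirmed (PPV about {tp / total:.0%})")
```

With a 5% false-positive rate this gives roughly 26 screen positives per confirmed case, and with 2% roughly 11, matching the figures quoted above.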
Ultrasound imaging can be used to screen for Down syndrome. Findings that indicate increased risk when seen at 14 to 24 weeks of gestation include a small or no nasal bone, large ventricles, increased nuchal fold thickness, and an abnormal right subclavian artery, among others. Assessing the presence or absence of several markers together is more accurate than relying on any single one. Increased fetal nuchal translucency (NT) indicates an increased risk of Down syndrome, picking up 75–80% of cases with a false-positive rate of 6%.
Several blood markers can be measured to predict the risk of Down syndrome during the first or second trimester. Testing in both trimesters is sometimes recommended, and test results are often combined with ultrasound results. In the second trimester, two or three of the following markers are often used in combination: α-fetoprotein, unconjugated estriol, total hCG, and free βhCG; together these detect about 60–70% of cases.
Testing of the mother's blood for fetal DNA is being studied and appears promising in the first trimester. The International Society for Prenatal Diagnosis considers it a reasonable screening option for those women whose pregnancies are at a high risk for trisomy 21. Accuracy has been reported at 98.6% in the first trimester of pregnancy. Confirmatory testing by invasive techniques (amniocentesis, CVS) is still required to confirm the screening result.
Efforts such as early childhood intervention, screening for common problems, medical treatment where indicated, a good family environment, and work-related training can improve the development of children with Down syndrome. Education and proper care can improve quality of life. Raising a child with Down syndrome is more work for parents than raising an unaffected child. Typical childhood vaccinations are recommended.
A number of health organizations have issued recommendations for screening those with Down syndrome for particular diseases. This is recommended to be done systematically.
At birth, all children should get an electrocardiogram and ultrasound of the heart. Surgical repair of heart problems may be required as early as three months of age. Heart valve problems may occur in young adults, and further ultrasound evaluation may be needed in adolescents and in early adulthood. Due to the elevated risk of testicular cancer, some recommend checking the person's testicles yearly.
Hearing aids or other amplification devices can be useful for language learning in those with hearing loss. Speech therapy may be useful and is recommended to be started around nine months of age. As those with Down syndrome typically have good hand-eye coordination, learning sign language may be possible. Augmentative and alternative communication methods, such as pointing, body language, objects, or pictures, are often used to help with communication. Behavioral issues and mental illness are typically managed with counseling or medications.
Education programs before reaching school age may be useful. School-age children with Down syndrome may benefit from inclusive education (whereby students of differing abilities are placed in classes with their peers of the same age), provided some adjustments are made to the curriculum. Evidence to support this, however, is not very strong. In the United States, the Individuals with Disabilities Education Act of 1975 requires public schools generally to allow attendance by students with Down syndrome.
Individuals with Down syndrome may learn better visually. Drawing may help with language, speech, and reading skills. Children with Down syndrome still often have difficulty with sentence structure and grammar, as well as developing the ability to speak clearly. Several types of early intervention can help with cognitive development. Efforts to develop motor skills include physical therapy, speech and language therapy, and occupational therapy. Physical therapy focuses specifically on motor development and teaching children to interact with their environment. Speech and language therapy can help prepare for later language. Lastly, occupational therapy can help with skills needed for later independence.
Tympanostomy tubes are often needed, often more than one set during the person's childhood. Tonsillectomy is also often done to help with sleep apnea and throat infections. Surgery, however, does not always address the sleep apnea, and a continuous positive airway pressure (CPAP) machine may be useful. Physical therapy and participation in physical education may improve motor skills. Evidence to support this in adults, however, is not very good.
Efforts to prevent respiratory syncytial virus (RSV) infection with human monoclonal antibodies should be considered, especially in those with heart problems. In those who develop dementia there is no evidence for memantine, donepezil, rivastigmine, or galantamine.
Plastic surgery has been suggested as a method of improving the appearance and thus the acceptance of people with Down syndrome. It has also been proposed as a way to improve speech. Evidence, however, does not support a meaningful difference in either of these outcomes. Plastic surgery on children with Down syndrome is uncommon, and continues to be controversial. The U.S. National Down Syndrome Society views the goal as one of mutual respect and acceptance, not appearance.
Many alternative medical techniques are used in Down syndrome; however, they are poorly supported by evidence. These include: dietary changes, massage, animal therapy, chiropractic and naturopathy, among others. Some proposed treatments may also be harmful.
Between 5 and 15% of children with Down syndrome in Sweden attend regular school. Some graduate from high school; however, most do not. Of those with intellectual disability in the United States who attended high school about 40% graduated. Many learn to read and write and some are able to do paid work. In adulthood about 20% in the United States do paid work in some capacity. In Sweden, however, less than 1% have regular jobs. Many are able to live semi-independently, but they often require help with financial, medical, and legal matters. Those with mosaic Down syndrome usually have better outcomes.
Individuals with Down syndrome have a higher risk of early death than the general population. This is most often from heart problems or infections. Following improved medical care, particularly for heart and gastrointestinal problems, life expectancy has increased. This increase has been from 12 years in 1912, to 25 years in the 1980s, to 50 to 60 years in the developed world in the 2000s. Currently, between 4 and 12% die in the first year of life. The probability of long-term survival is partly determined by the presence of heart problems. In those with congenital heart problems, 60% survive to 10 years and 50% survive to 30 years of age. In those without heart problems, 85% survive to 10 years and 80% survive to 30 years of age. About 10% live to 70 years of age. The National Down Syndrome Society provides information regarding raising a child with Down syndrome.
Down syndrome is the most common chromosomal abnormality in humans. Globally, Down syndrome occurs in about 1 per 1,000 births and results in about 17,000 deaths. More children are born with Down syndrome in countries where abortion is not allowed and in countries where pregnancy more commonly occurs at a later age. About 1.4 per 1,000 live births in the United States and 1.1 per 1,000 live births in Norway are affected. In the 1950s, in the United States, it occurred in 2 per 1,000 live births, with the decrease since then due to prenatal screening and abortions. The number of pregnancies with Down syndrome is more than twice the number of live births with the condition, as many end in spontaneous abortion. It is the cause of 8% of all congenital disorders.
Maternal age affects the chances of having a pregnancy with Down syndrome. At age 20, the chance is 1 in 1,441; at age 30, it is 1 in 959; at age 40, it is 1 in 84; and at age 50 it is 1 in 44. Although the probability increases with maternal age, 70% of children with Down syndrome are born to women 35 years of age and younger, because younger people have more children. The father's older age is also a risk factor in women older than 35, but not in women younger than 35, and may partly explain the increase in risk as women age.
English physician John Langdon Down first described Down syndrome in 1862, recognizing it as a distinct type of mental disability, and again in a more widely published report in 1866. Édouard Séguin described it as separate from cretinism in 1844. By the 20th century, Down syndrome had become the most recognizable form of mental disability.
In antiquity, many infants with disabilities were either killed or abandoned.
In June 2020, the earliest incidence of Down syndrome was found in genomic evidence from an infant that was buried before 3200 BC at Poulnabrone dolmen in Ireland.
Researchers believe that a number of historical pieces of art portray Down syndrome, including pottery from the pre-Columbian Tumaco-La Tolita culture in present-day Colombia and Ecuador, and the 16th-century painting "The Adoration of the Christ Child".
In the 20th century, many individuals with Down syndrome were institutionalized, few of the associated medical problems were treated, and most people died in infancy or early adulthood. With the rise of the eugenics movement, 33 of the then 48 U.S. states and several countries began programs of forced sterilization of individuals with Down syndrome and comparable degrees of disability. Action T4 in Nazi Germany made public policy of a program of systematic involuntary euthanization.
With the discovery of karyotype techniques in the 1950s it became possible to identify abnormalities of chromosomal number or shape. In 1959 Jérôme Lejeune reported the discovery that Down syndrome resulted from an extra chromosome. However, Lejeune's claim to the discovery has been disputed, and in 2014 the Scientific Council of the French Federation of Human Genetics unanimously awarded its Grand Prize to his colleague Marthe Gautier for her role in this discovery. The discovery took place in the laboratory of Raymond Turpin at the Hôpital Trousseau in Paris, France. Jérôme Lejeune and Marthe Gautier were both his students.
As a result of this discovery, the condition became known as trisomy 21. Even before the discovery of its cause, the presence of the syndrome in all races, its association with older maternal age, and its rarity of recurrence had been noticed. Medical texts had assumed it was caused by a combination of inheritable factors that had not been identified. Other theories had focused on injuries sustained during birth.
Due to his perception that children with Down syndrome shared facial similarities with those of Blumenbach's Mongolian race, John Langdon Down used the term "mongoloid". He felt that the existence of Down syndrome confirmed that all peoples were genetically related. In the 1950s with discovery of the underlying cause as being related to chromosomes, concerns about the race-based nature of the name increased.
In 1961, 19 scientists suggested that "mongolism" had "misleading connotations" and had become "an embarrassing term". The World Health Organization (WHO) dropped the term in 1965 after a request by the delegation from the Mongolian People's Republic. While the term mongoloid (also mongolism, Mongolian imbecility or idiocy) continued to be used until the early 1980s, it is now considered unacceptable and is no longer in common use.
In 1975, the United States National Institutes of Health (NIH) convened a conference to standardize the naming and recommended replacing the possessive form, "Down's syndrome" with "Down syndrome". However, both the possessive and nonpossessive forms remain in use by the general population. The term "trisomy 21" is also commonly used.
Most obstetricians argue that not offering screening for Down syndrome is unethical. As it is a medically reasonable procedure, per informed consent, people should at least be given information about it. It will then be the woman's choice, based on her personal beliefs, how much or how little screening she wishes. When results from testing become available, it is also considered unethical not to give the results to the person in question.
Some bioethicists deem it reasonable for parents to select a child who would have the highest well-being. One criticism of this reasoning is that it often values those with disabilities less. Some parents argue that Down syndrome shouldn't be prevented or cured and that eliminating Down syndrome amounts to genocide. The disability rights movement does not have a position on screening, although some members consider testing and abortion discriminatory. Some in the United States who are anti-abortion support abortion if the fetus is disabled, while others do not. Of a group of 40 mothers in the United States who have had one child with Down syndrome, half agreed to screening in the next pregnancy.
Within the US, some Protestant denominations see abortion as acceptable when a fetus has Down syndrome while Orthodox Christianity and Roman Catholicism do not. Some of those against screening refer to it as a form of "eugenics". Disagreement exists within Islam regarding the acceptability of abortion in those carrying a fetus with Down syndrome. Some Islamic countries allow abortion, while others do not. Women may face stigmatization whichever decision they make.
Advocacy groups for individuals with Down syndrome began to be formed after the Second World War. These were organizations advocating for the inclusion of people with Down syndrome into the general school system and for a greater understanding of the condition among the general population, as well as groups providing support for families with children living with Down syndrome. Before this individuals with Down syndrome were often placed in mental hospitals or asylums. Organizations included the Royal Society for Handicapped Children and Adults founded in the UK in 1946 by Judy Fryd, Kobato Kai founded in Japan in 1964, the National Down Syndrome Congress founded in the United States in 1973 by Kathryn McGee and others, and the National Down Syndrome Society founded in 1979 in the United States. The first Roman Catholic order of nuns for women with Down Syndrome, Little Sisters Disciples of the Lamb, was founded in 1985 in France.
The first World Down Syndrome Day was held on 21 March 2006. The day and month were chosen to correspond with 21 and trisomy, respectively. It was recognized by the United Nations General Assembly in 2011.
Efforts are underway to determine how the extra chromosome 21 material causes Down syndrome, as currently this is unknown, and to develop treatments to improve intelligence in those with the syndrome. Two efforts being studied are the use of stem cells and gene therapy. Other methods being studied include the use of antioxidants, gamma secretase inhibition, adrenergic agonists, and memantine. Research is often carried out on an animal model, the Ts65Dn mouse.
Down syndrome may also occur in animals other than humans. In great apes chromosome 22 corresponds to the human chromosome 21 and thus trisomy 22 causes Down syndrome in apes. The condition was observed in a common chimpanzee in 1969 and a Bornean orangutan in 1979, but neither lived very long. The common chimpanzee Kanako (born around 1993, in Japan) has become the longest-lived known example of this condition. Kanako has some of the same symptoms that are common in human Down syndrome. It is unknown how common this condition is in chimps but it is plausible it could be roughly as common as Down syndrome is in humans. | https://en.wikipedia.org/wiki?curid=8303 |
Dyslexia
Dyslexia, also known as reading disorder, is characterized by trouble with reading despite normal intelligence. Different people are affected to varying degrees. Problems may include difficulties in spelling words, reading quickly, writing words, "sounding out" words in the head, pronouncing words when reading aloud and understanding what one reads. Often these difficulties are first noticed at school. When someone who previously could read loses their ability, it is known as "alexia". The difficulties are involuntary and people with this disorder have a normal desire to learn. People with dyslexia have higher rates of attention deficit hyperactivity disorder (ADHD), developmental language disorders, and difficulties with numbers.
Dyslexia is believed to be caused by the interaction of genetic and environmental factors. Some cases run in families. Dyslexia that develops due to a traumatic brain injury, stroke, or dementia is called "acquired dyslexia". The underlying mechanisms of dyslexia are problems within the brain's language processing. Dyslexia is diagnosed through a series of tests of memory, vision, spelling, and reading skills. Dyslexia is separate from reading difficulties caused by hearing or vision problems or by insufficient teaching or opportunity to learn.
Treatment involves adjusting teaching methods to meet the person's needs. While not curing the underlying problem, it may decrease the degree or impact of symptoms. Treatments targeting vision are not effective. Dyslexia is the most common learning disability and occurs in all areas of the world. It affects 3–7% of the population; however, up to 20% of the general population may have some degree of symptoms. While dyslexia is more often diagnosed in men, it has been suggested that it affects men and women equally. Some believe that dyslexia is best considered as a different way of learning, with both benefits and downsides.
Dyslexia is divided into developmental and acquired forms. This article is primarily about "developmental dyslexia", i.e., dyslexia that begins in early childhood. Acquired dyslexia occurs subsequent to neurological insult, such as traumatic brain injury or stroke. People with acquired dyslexia exhibit some of the signs or symptoms of the developmental disorder, but require different assessment strategies and treatment approaches.
In early childhood, symptoms that correlate with a later diagnosis of dyslexia include delayed onset of speech and a lack of phonological awareness. A common myth closely associates dyslexia with mirror writing and reading letters or words backwards. These behaviors are seen in many children as they learn to read and write, and are not considered to be defining characteristics of dyslexia.
School-age children with dyslexia may exhibit signs of difficulty in identifying or generating rhyming words, or counting the number of syllables in words–both of which depend on phonological awareness. They may also show difficulty in segmenting words into individual sounds or may blend sounds when producing words, indicating reduced phonemic awareness. Difficulties with word retrieval or naming things are also associated with dyslexia. People with dyslexia are commonly poor spellers, a feature sometimes called dysorthographia or dysgraphia, which depends on orthographic coding.
Problems persist into adolescence and adulthood and may include difficulties with summarizing stories, memorization, reading aloud, or learning foreign languages. Adults with dyslexia can often read with good comprehension, though they tend to read more slowly than others without a learning difficulty and perform worse in spelling tests or when reading nonsense words–a measure of phonological awareness.
Dyslexia often co-occurs with other learning disorders, but the reasons for this comorbidity have not been clearly identified. These associated disabilities include dysgraphia, dyscalculia, attention deficit hyperactivity disorder (ADHD), auditory processing disorder, and developmental coordination disorder (dyspraxia).
Researchers have been trying to find the neurobiological basis of dyslexia since the condition was first identified in 1881. For example, some have tried to associate the common problem among people with dyslexia of not being able to see letters clearly to abnormal development of their visual nerve cells.
Neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), have revealed both functional and structural differences in the brains of children with reading difficulties. Some people with dyslexia show less electrical activation in parts of the left hemisphere of the brain involved with reading, such as the inferior frontal gyrus, inferior parietal lobule, and the middle and ventral temporal cortex. Over the past decade, brain activation studies using PET to study language have produced a breakthrough in the understanding of the neural basis of language. Neural bases for the visual lexicon and for auditory verbal short-term memory components have been proposed, with some implication that the observed neural manifestation of developmental dyslexia is task-specific (i.e., functional rather than structural). fMRIs of people with dyslexia indicate an interactive role of the cerebellum and cerebral cortex as well as other brain structures in reading.
The cerebellar theory of dyslexia proposes that impairment of cerebellum-controlled muscle movement affects the formation of words by the tongue and facial muscles, resulting in the fluency problems that some people with dyslexia experience. The cerebellum is also involved in the automatization of some tasks, such as reading. The fact that some children with dyslexia have motor task and balance impairments could be consistent with a cerebellar role in their reading difficulties. However, the cerebellar theory has not been supported by controlled research studies.
Research into potential genetic causes of dyslexia has its roots in post-autopsy examination of the brains of people with dyslexia. Observed anatomical differences in the language centers of such brains include microscopic cortical malformations known as ectopias, and more rarely, vascular micro-malformations, and microgyrus—a smaller than usual size for the gyrus. The previously cited studies and others suggest that abnormal cortical development, presumed to occur before or during the sixth month of fetal brain development, may have caused the abnormalities. Abnormal cell formations in people with dyslexia have also been reported in non-language cerebral and subcortical brain structures. Several genes have been associated with dyslexia, including DCDC2 and KIAA0319 on chromosome 6, and DYX1C1 on chromosome 15.
The contribution of gene–environment interaction to reading disability has been intensely studied using twin studies, which estimate the proportion of variance associated with a person's environment and the proportion associated with their genes. Both environmental and genetic factors appear to contribute to reading development. Studies examining the influence of environmental factors such as parental education and teaching quality have determined that genetics have greater influence in supportive, rather than less optimal, environments. However, more optimal conditions may just allow those genetic risk factors to account for more of the variance in outcome because the environmental risk factors have been minimized.
As environment plays a large role in learning and memory, it is likely that epigenetic modifications play an important role in reading ability. Measures of gene expression, histone modifications, and methylation in the human periphery are used to study epigenetic processes; however, all of these have limitations in the extrapolation of results for application to the human brain.
The orthographic complexity of a language directly affects how difficult it is to learn to read it. English and French have comparatively "deep" phonemic orthographies within the Latin alphabet writing system, with complex structures employing spelling patterns on several levels: letter-sound correspondence, syllables, and morphemes. Languages such as Spanish, Italian and Finnish have mostly alphabetic orthographies, which primarily employ letter-sound correspondence—so-called "shallow" orthographies—which makes them easier to learn for people with dyslexia. Logographic writing systems, such as Chinese characters, have extensive symbol use; and these also pose problems for dyslexic learners.
Most people who are right-hand dominant have the left hemisphere of their brain specialize more in language processing. In terms of the mechanism of dyslexia, fMRI studies suggest that this specialization may be less pronounced or even absent in cases with dyslexia. Additionally, anatomical differences in the corpus callosum, the bundle of nerve fibers that connects the left and right hemispheres, have been linked to dyslexia via different studies.
Data via diffusion tensor MRI indicate changes in connectivity or in gray matter density in areas related to reading/language. Finally, the left inferior frontal gyrus has shown differences in phonological processing in people with dyslexia. Neurophysiological and imaging procedures are being used to ascertain phenotypic characteristics in people with dyslexia thus identifying the effects of certain genes.
The dual-route theory of reading aloud was first described in the early 1970s. This theory suggests that two separate mental mechanisms, or cognitive routes, are involved in reading aloud. One mechanism is the lexical route, which is the process whereby skilled readers can recognize known words by sight alone, through a "dictionary" lookup procedure. The other mechanism is the nonlexical or sublexical route, which is the process whereby the reader can "sound out" a written word. This is done by identifying the word's constituent parts (letters, phonemes, graphemes) and applying knowledge of how these parts are associated with each other, for example, how a string of neighboring letters sound together. The dual-route system could explain the different rates of dyslexia occurrence between different languages (e.g., the consistency of phonological rules in the Spanish language could account for the fact that Spanish-speaking children show a higher level of performance in non-word reading, when compared to English-speakers).
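As a purely illustrative sketch (not a model drawn from the dyslexia literature), the two routes can be caricatured in a few lines of code: known words are retrieved whole from a small lexicon, while unfamiliar words and non-words fall back to letter-to-sound rules. The lexicon entries and rules here are hypothetical and drastically simplified.

```python
# Toy dual-route reader: lexical lookup first, sublexical "sounding out" otherwise.
LEXICON = {"yacht": "/jɒt/", "colonel": "/ˈkɜːnəl/"}        # irregular sight words
RULES = {"sh": "ʃ", "ch": "tʃ", "th": "θ",                   # grapheme -> phoneme
         "a": "æ", "e": "ɛ", "i": "ɪ", "o": "ɒ", "u": "ʌ"}

def read_aloud(word):
    if word in LEXICON:                    # lexical route: whole-word retrieval
        return LEXICON[word]
    phonemes, i = [], 0                    # sublexical route: sound it out
    while i < len(word):
        for size in (2, 1):                # prefer two-letter graphemes
            chunk = word[i:i + size]
            if chunk in RULES:
                phonemes.append(RULES[chunk])
                i += size
                break
        else:                              # no rule: keep the letter itself
            phonemes.append(word[i])
            i += 1
    return "/" + "".join(phonemes) + "/"

print(read_aloud("yacht"))   # known word, handled by the lexical route
print(read_aloud("shig"))    # non-word, sounded out sublexically -> /ʃɪg/
```

On this picture, weakness in the sublexical branch corresponds to the difficulty people with dyslexia tend to show when reading non-words, while familiar words can still be read via the lexical branch.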
Dyslexia is a heterogeneous, dimensional learning disorder that impairs accurate and fluent word reading and spelling. Typical—but not universal—features include difficulties with phonological awareness; inefficient and often inaccurate processing of sounds in oral language ("phonological processing"); and verbal working memory deficits.
Dyslexia is a neurodevelopmental disorder, subcategorized in diagnostic guides as a "learning disorder with impairment in reading" (ICD-11 prefixes "developmental" to "learning disorder"; DSM-5 uses "specific"). Dyslexia is not a problem with intelligence. Emotional problems often arise secondary to learning difficulties. The National Institute of Neurological Disorders and Stroke describes dyslexia as "difficulty with phonological processing (the manipulation of sounds), spelling, and/or rapid visual-verbal responding".
The British Dyslexia Association defines dyslexia as "a learning difficulty that primarily affects the skills involved in accurate and fluent word reading and spelling" and is characterized by "difficulties in phonological awareness, verbal memory and verbal processing speed". "Phonological awareness" enables one to identify, discriminate, remember (working memory), and mentally manipulate the sound structures of language—phonemes, onset-rime segments, syllables, and words.
There is a wide range of tests that are used in clinical and educational settings to evaluate the possibility that a person might have dyslexia. If initial testing suggests that a person might have dyslexia, such tests are often followed up with a full diagnostic assessment to determine the extent and nature of the disorder. Some tests can be administered by a teacher or computer; others require specialized training and are given by psychologists. Some test results indicate how to carry out teaching strategies. Because a variety of different cognitive, behavioral, emotional, and environmental factors all could contribute to difficulty learning to read, a comprehensive evaluation should consider these different possibilities. These tests and observations can include:
Screening procedures seek to identify children who show signs of possible dyslexia. In the preschool years, a family history of dyslexia, particularly in biological parents and siblings, predicts an eventual dyslexia diagnosis better than any test. In primary school (ages 5–7), the ideal screening procedure consists of training primary school teachers to carefully observe and record their pupils' progress through the phonics curriculum, and thereby identify children progressing slowly. When teachers identify such students they can supplement their observations with screening tests such as the "Phonics screening check" used by United Kingdom schools during Year One.
In the medical setting, child and adolescent psychiatrist M. S. Thambirajah emphasizes that "[g]iven the high prevalence of developmental disorders in school-aged children, all children seen in clinics should be systematically screened for developmental disorders irrespective of the presenting problem/s." Thambirajah recommends screening for developmental disorders, including dyslexia, by conducting a brief developmental history, a preliminary psychosocial developmental examination, and obtaining a school report regarding academic and social functioning.
Through the use of compensation strategies, therapy and educational support, individuals with dyslexia can learn to read and write. There are techniques and technical aids that help to manage or conceal symptoms of the disorder. Reducing stress and anxiety can sometimes improve written comprehension. For dyslexia intervention with alphabet-writing systems, the fundamental aim is to increase a child's awareness of correspondences between graphemes (letters) and phonemes (sounds), and to relate these to reading and spelling by teaching how sounds blend into words. Reinforced collateral training focused on reading and spelling may yield longer-lasting gains than oral phonological training alone. Early intervention can be successful in reducing reading failure.
Research does not suggest that specially-tailored fonts (such as Dyslexie and OpenDyslexic) help with reading. Children with dyslexia read text set in a regular font such as Times New Roman and Arial just as quickly, and they show a preference for regular fonts over specially-tailored fonts. Some research has pointed to increased letter-spacing being beneficial.
There is currently no evidence showing that music education significantly improves the reading skills of adolescents with dyslexia.
Dyslexic children require special instruction for word analysis and spelling from an early age. The prognosis, generally speaking, is positive for individuals who are identified in childhood and receive support from friends and family. The New York educational system (NYED) recommends "a daily uninterrupted 90 minute block of instruction in reading" and, furthermore, "instruction in phonemic awareness, phonics, vocabulary development, reading fluency" so as to improve the individual's reading ability.
The percentage of people with dyslexia is unknown, but it has been estimated to be as low as 5% and as high as 17% of the population. While it is diagnosed more often in males, some believe that it affects males and females equally.
There are different definitions of dyslexia used throughout the world, but despite significant differences in writing systems, dyslexia occurs in different populations. Dyslexia is not limited to difficulty in converting letters to sounds, and Chinese people with dyslexia may have difficulty converting Chinese characters into their meanings. The Chinese vocabulary uses logographic, monographic, non-alphabet writing where one character can represent an individual phoneme.
The phonological-processing hypothesis attempts to explain why dyslexia occurs in a wide variety of languages. Furthermore, the relationship between phonological capacity and reading appears to be influenced by orthography.
Dyslexia was clinically described by Oswald Berkhan in 1881, but the term "dyslexia" was coined in 1883 by Rudolf Berlin, an ophthalmologist in Stuttgart. He used the term to refer to the case of a young boy who had severe difficulty learning to read and write, despite showing typical intelligence and physical abilities in all other respects. In 1896, W. Pringle Morgan, a British physician from Seaford, East Sussex, published a description of a reading-specific learning disorder in a report to the "British Medical Journal" titled "Congenital Word Blindness". The distinction between phonological versus surface types of dyslexia is only descriptive, and without any etiological assumption as to the underlying brain mechanisms. However, studies have alluded to potential differences due to variation in performance.
As is the case with any disorder, society often makes an assessment based on incomplete information. Before the 1980s, dyslexia was thought to be a consequence of education, rather than a neurological disability. As a result, society often misjudges those with the disorder. There is also sometimes a workplace stigma and negative attitude towards those with dyslexia. If the instructors of a person with dyslexia lack the necessary training to support a child with the condition, there is often a negative effect on the student's learning participation.
Most dyslexia research relates to alphabetic writing systems, and especially to European languages. However, substantial research is also available regarding people with dyslexia who speak Arabic, Chinese, Hebrew, or other languages. The outward expression of individuals with reading disability and regular poor readers is the same in some respects. | https://en.wikipedia.org/wiki?curid=8305 |
Delft
Delft () is a city and municipality in the province of South Holland, Netherlands. It is located between Rotterdam, to the southeast, and The Hague, to the northwest. Together with them, it is part of both Rotterdam–The Hague metropolitan area and the Randstad.
Delft is a popular tourist destination in the Netherlands. It is home to Delft University of Technology (TU Delft), regarded as a center of technological research and development in the Netherlands, Delft Blue pottery, and the currently reigning House of Orange-Nassau. Historically, Delft played a highly influential role in the Dutch Golden Age. Delft also has a special place in the history of science and technology: thanks to the pioneering contributions of Antonie van Leeuwenhoek and Martinus Beijerinck, it can be considered the true birthplace of microbiology and of several of its sub-disciplines, such as bacteriology, protozoology, and virology.
The city of Delft came into being beside a canal, the 'Delf', which comes from the word "delven", meaning to delve or dig, and this led to the name Delft. At the elevated place where this 'Delf' crossed the creek wall of the silted up river Gantel, a Count established his manor, probably around 1075. Partly because of this, Delft became an important market town, the evidence for which can be seen in the size of its central market square.
Having been a rural village in the early Middle Ages, Delft developed into a city, and on 15 April 1246, Count Willem II granted Delft its city charter. Trade and industry flourished. In 1389 the Delfshavensche Schie canal was dug through to the river Maas, where the port of Delfshaven was built, connecting Delft to the sea.
Until the 17th century, Delft was one of the major cities of the then county (and later province) of Holland. In 1400, for example, the city had 6,500 inhabitants, making it the third largest city after Dordrecht (8,000) and Haarlem (7,000). In 1560, Amsterdam, with 28,000 inhabitants, had become the largest city, followed by Delft, Leiden and Haarlem, which each had around 14,000 inhabitants.
In 1536, a large part of the city was destroyed by the great fire of Delft.
The town's association with the House of Orange started when William of Orange (Willem van Oranje), nicknamed William the Silent (Willem de Zwijger), took up residence in 1572 in the former Saint-Agatha convent (subsequently called the Prinsenhof). At the time he was the leader of the growing Dutch national resistance against Spanish occupation, known as the Eighty Years' War. By then Delft was one of the leading cities of Holland and it was equipped with the necessary city walls to serve as a headquarters. In October 1573, an attack by Spanish forces was repelled in the Battle of Delft.
After the Act of Abjuration was proclaimed in 1581, Delft became the "de facto" capital of the newly independent Netherlands, as the seat of the Prince of Orange.
When William was shot dead on July 10, 1584, by Balthazar Gerards in the hall of the Prinsenhof (now the Prinsenhof Museum), the family's traditional burial place in Breda was still in the hands of the Spanish. Therefore, he was buried in the Delft Nieuwe Kerk (New Church), starting a tradition for the House of Orange that has continued to the present day.
Around this time, Delft also occupied a prominent position in the field of printing.
A number of Italian glazed earthenware makers settled in the city and introduced a new style. The tapestry industry also flourished when famous manufacturer François Spierincx moved to the city. In the 17th century, Delft experienced a new heyday, thanks to the presence of an office of the Dutch East India Company (VOC) (opened in 1602) and the manufacture of Delft Blue china.
A number of notable artists based themselves in the city, including Leonard Bramer, Carel Fabritius, Pieter de Hoogh, Gerard Houckgeest, Emanuel de Witte, Jan Steen, and Johannes Vermeer.
Reinier de Graaf and Antonie van Leeuwenhoek received international attention for their scientific research.
The Delft Explosion, also known in history as the Delft Thunderclap, occurred on 12 October 1654 when a gunpowder store exploded, destroying much of the city. Over a hundred people were killed and thousands were injured.
About of gunpowder were stored in barrels in a magazine in a former Clarist convent in the Doelenkwartier district, where the Paardenmarkt is now located. Cornelis Soetens, the keeper of the magazine, opened the store to check a sample of the powder and a huge explosion followed. Luckily, many citizens were away, visiting a market in Schiedam or a fair in The Hague.
Today, the explosion is primarily remembered for killing Rembrandt's most promising pupil, Carel Fabritius, and destroying almost all of his works.
Delft artist Egbert van der Poel painted several pictures of Delft showing the devastation.
The gunpowder store was subsequently re-housed, a 'cannonball's distance away', outside the city, in a new building designed by architect Pieter Post.
The city centre retains a large number of monumental buildings, while in many streets there are canals of which the banks are connected by typical bridges, altogether making this city a notable tourist destination.
Historical buildings and other sights of interest include:
Delft is well known for the Delft pottery ceramic products which were styled on the imported Chinese porcelain of the 17th century. The city had an early start in this area since it was a home port of the Dutch East India Company. It can still be seen at the pottery factories De Koninklijke Porceleyne Fles (or Royal Delft) and De Delftse Pauw, while new ceramics and ceramic art can be found at the Gallery Terra Delft.
The painter Johannes Vermeer (1632–1675) was born in Delft. Vermeer used Delft streets and home interiors as the subject or background in his paintings.
Several other famous painters lived and worked in Delft at that time, such as Pieter de Hoogh, Carel Fabritius, Nicolaes Maes, Gerard Houckgeest and Hendrick Cornelisz. van Vliet. They were all members of the Delft School. The Delft School is known for its images of domestic life and views of households, church interiors, courtyards, squares and the streets of Delft. The painters also produced pictures showing historic events, flowers, portraits for patrons and the court as well as decorative pieces of art.
Delft supports creative arts companies. From 2001 the , a building that had been disused since 1951, began to house small companies in the creative arts sector. However, demolition of the building started in December 2009, making way for the construction of the new railway tunnel in Delft. The occupants of the building, as well as the name 'Bacinol', moved to another building in the city. The name Bacinol relates to Dutch penicillin research during WWII.
Delft University of Technology (TU Delft) is one of four universities of technology in the Netherlands. It was founded as an academy for civil engineering in 1842 by King William II. Today well over 21,000 students are enrolled.
The UNESCO-IHE Institute for Water Education, providing postgraduate education for people from developing countries, draws on the strong tradition in water management and hydraulic engineering of the Delft university.
Essential elements of the local economy include:
East of Delft lies a relatively large nature and recreation area called the "Delftse Hout" ("Delft Wood"). Through the forest lie bike, horse-riding and footpaths. It also includes a vast lake (suitable for swimming and windsurfing), narrow beaches, a restaurant, and community gardens, plus camping ground and other recreational and sports facilities. (There is also a facility for renting bikes from the station.)
Inside the city, apart from a central park, there are several smaller town parks, including "Nieuwe Plantage", "Agnetapark", "Kalverbos".
There is also the Botanical Garden of the TU and an arboretum in Delftse Hout.
Delft was the birthplace of:
Delft is twinned with:
Delft's longstanding connection with Rishon LeZion ended in 2016 after the supporting organizations shut down in both countries.
Trains stopping at these stations connect Delft with, among others, the nearby cities of Rotterdam and The Hague, as often as every five minutes, for most of the day.
There are several bus routes from Delft to similar destinations. Trams frequently travel between Delft and The Hague via special double tracks crossing the city. | https://en.wikipedia.org/wiki?curid=8308 |
Duesberg hypothesis
The Duesberg hypothesis is the claim, associated with University of California, Berkeley professor Peter Duesberg, that various noninfectious factors, such as, but not limited to, recreational and pharmaceutical drug use, are the cause of AIDS, and that HIV (human immunodeficiency virus) is merely a harmless passenger virus. The scientific consensus is that the Duesberg hypothesis is incorrect and that HIV is the cause of AIDS. The most prominent supporters of this hypothesis are Duesberg himself, biochemist and vitamin proponent David Rasnick, and journalist Celia Farber. The scientific community contends that Duesberg's arguments are the result of cherry-picking predominantly outdated scientific data and selectively ignoring evidence in favor of HIV's role in AIDS.
Duesberg argues that there is a statistical correlation between trends in recreational drug use and trends in AIDS cases. He argues that the epidemic of AIDS cases in the 1980s corresponds to a supposed epidemic of recreational drug use in the United States and Europe during the same time frame.
These claims are not supported by epidemiologic data. The average yearly increase in opioid-related deaths from 1990 to 2002 was nearly three times the yearly increase from 1979–90, with the greatest increase in 2000–02, yet AIDS cases and deaths fell dramatically during the mid-to-late-1990s. Duesberg's claim that recreational drug use, rather than HIV, was the cause of AIDS has been specifically examined and found to be false. Cohort studies have found that only HIV-positive drug users develop opportunistic infections; HIV-negative drug users do not develop such infections, indicating that HIV rather than drug use is the cause of AIDS.
Duesberg has also argued that nitrite inhalants were the cause of the epidemic of Kaposi sarcoma (KS) in gay men. However, this argument has been described as an example of the fallacy of a statistical confounding effect; it is now known that a herpesvirus, potentiated by HIV, is responsible for AIDS-associated KS.
Moreover, in addition to recreational drugs, Duesberg argues that anti-HIV drugs such as zidovudine (AZT) can cause AIDS. Duesberg's claim that antiviral medication causes AIDS is regarded as disproven by the scientific community. Placebo-controlled studies have found that AZT as a single agent produces modest and short-lived improvements in survival and delays the development of opportunistic infections; it certainly did not cause AIDS, which develops in both treated and untreated study patients. With the subsequent development of protease inhibitors and highly active antiretroviral therapy, numerous studies have documented the fact that anti-HIV drugs prevent the development of AIDS and substantially prolong survival, further disproving the claim that these drugs "cause" AIDS.
Several studies have specifically addressed Duesberg's claim that recreational drug abuse or sexual promiscuity were responsible for the manifestations of AIDS. An early study of his claims, published in "Nature" in 1993, found Duesberg's drug abuse-AIDS hypothesis to have "no basis in fact."
A large prospective study followed a group of 715 homosexual men in the Vancouver, Canada, area; approximately half were HIV-seropositive or became so during the follow-up period, and the remainder were HIV-seronegative. After more than 8 years of follow-up, despite similar rates of drug use, sexual contact, and other supposed risk factors in both groups, only the HIV-positive group suffered from opportunistic infections. Similarly, CD4 counts dropped in the patients who were HIV-infected, but remained stable in the HIV-negative patients, despite similar rates of risk behavior. The authors concluded that "the risk-AIDS hypothesis ... is clearly rejected by our data," and that "the evidence supports the hypothesis that HIV-1 has an integral role in the CD4 depletion and progressive immune dysfunction that characterise AIDS."
Similarly, the Multicenter AIDS Cohort Study (MACS) and the Women's Interagency HIV Study (WIHS)—which between them observed more than 8,000 Americans—demonstrated that "the presence of HIV infection is the only factor that is strongly and consistently associated with the conditions that define AIDS." A 2008 study found that recreational drug use (including cannabis, cocaine, poppers, and amphetamines) had no effect on CD4 or CD8 T-cell counts, providing further evidence against a role of recreational drugs as a cause of AIDS.
Duesberg argued in 1989 that a significant number of AIDS victims had died without proof of HIV infection. However, with the use of modern culture techniques and polymerase chain reaction testing, HIV can be demonstrated in virtually all patients with AIDS. Since AIDS is now defined partially by the presence of HIV, Duesberg claims it is impossible by definition to offer evidence that AIDS doesn't require HIV. However, the first definitions of AIDS mentioned no cause and the first AIDS diagnoses were made before HIV was discovered. The addition of HIV positivity to surveillance criteria as an absolutely necessary condition for case reporting occurred only in 1993, after a scientific consensus was established that HIV caused AIDS.
According to the Duesberg hypothesis, AIDS is not found in Africa. What Duesberg calls "the myth of an African AIDS epidemic" exists, in his view, for several reasons, including:
Duesberg states that African AIDS cases are "a collection of long-established, indigenous diseases, such as chronic fevers, weight loss, alias "slim disease," diarrhea, and tuberculosis" that result from malnutrition and poor sanitation. African AIDS cases, though, have increased in the last three decades as HIV's prevalence has increased but as malnutrition percentages and poor sanitation have declined in many African regions. In addition, while HIV and AIDS are more prevalent in urban than in rural settings in Africa, malnutrition and poor sanitation are found more commonly in rural than in urban settings.
According to Duesberg, common diseases are easily misdiagnosed as AIDS in Africa because "the diagnosis of African AIDS is arbitrary" and does not include HIV testing. A definition of AIDS agreed upon in 1985 by the World Health Organization in Bangui did not require a positive HIV test, but since 1985, many African countries have added positive HIV tests to the Bangui criteria for AIDS or changed their definitions to match those of the U.S. Centers for Disease Control. One of the reasons for using more HIV tests despite their expense is that, rather than overestimating AIDS as Duesberg suggests, the Bangui definition alone excluded nearly half of African AIDS patients.
Duesberg notes that diseases associated with AIDS differ between African and Western populations, concluding that the causes of immunodeficiency must be different. Tuberculosis is much more commonly diagnosed among AIDS patients in Africa than in Western countries, while PCP conforms to the opposite pattern. Tuberculosis, though, had higher prevalence in Africa than in the West before the spread of HIV. In Africa and the United States, HIV has spurred a similar percentage increase in tuberculosis cases. PCP may be underestimated in Africa: since the machinery required for accurate testing is relatively rare in many resource-poor areas, including large parts of Africa, PCP is likely to be underdiagnosed there. Consistent with this, studies that report the highest rates of PCP in Africa are those that use the most advanced diagnostic methods. Duesberg also claims that Kaposi's sarcoma is "exclusively diagnosed in male homosexual risk groups using nitrite inhalants and other psychoactive drugs as aphrodisiacs", but the cancer is fairly common among heterosexuals in some parts of Africa, and is found in heterosexuals in the United States as well.
Because reported AIDS cases in Africa and other parts of the developing world include a larger proportion of people who do not belong to Duesberg's preferred risk groups of drug addicts and male homosexuals, Duesberg writes on his website that "There are no risk groups in Africa, like drug addicts and homosexuals." However, many studies have addressed the issue of risk groups in Africa and concluded that the risk of AIDS is not equally distributed. In addition, AIDS in Africa largely kills sexually active working-age adults.
South African president Thabo Mbeki accepted Duesberg's hypothesis and, through the mid-2000s, rejected offers of medical assistance to fight HIV infection, a policy of inaction that cost over 300,000 lives.
Duesberg argues that retroviruses like HIV must be harmless to survive: they do not kill cells and they do not cause cancer, he maintains. Duesberg writes, "retroviruses do not kill cells because they depend on viable cells for the replication of their RNA from viral DNA integrated into cellular DNA." Duesberg elsewhere states that "the typical virus reproduces by entering a living cell and commandeering the cell's resources in order to make new virus particles, a process that ends with the disintegration of the dead cell."
Duesberg also rejects the involvement of retroviruses and other viruses in cancer. To him, virus-associated cancers are "freak accidents of nature" that do not warrant research programs such as the war on cancer. Duesberg rejects a role in cancer for numerous viruses, including leukemia viruses, Epstein–Barr virus, human papilloma virus, hepatitis B, feline leukemia virus, and human T-lymphotropic virus.
Duesberg claims that the supposedly innocuous nature of all retroviruses is supported by what he considers to be their normal mode of proliferation: infection from mother to child "in utero". Duesberg does not suggest that HIV is an endogenous retrovirus, a virus integrated into the germline and genetically heritable.
The consensus in the scientific community is that the Duesberg hypothesis has been refuted by a large and growing mass of evidence showing that HIV causes AIDS, that the amount of virus in the blood correlates with disease progression, that a plausible mechanism for HIV's action has been proposed, and that anti-HIV medication decreases mortality and opportunistic infection in people with AIDS.
In the 9 December 1994 issue of "Science" (Vol. 266, No. 5191), Duesberg's methods and claims were evaluated in a group of articles. The authors concluded that
The vast majority of people with AIDS have never received antiretroviral drugs, including those in developed countries prior to the licensure of AZT (zidovudine) in 1987, and people in developing countries today where very few individuals have access to these medications.
The NIAID reports that "in the mid-1980s, clinical trials enrolling patients with AIDS found that AZT given as single-drug therapy conferred a modest survival advantage compared [with] placebo. Among HIV-infected patients who had not yet developed AIDS, placebo-controlled trials found that AZT given as single-drug therapy delayed, for a year or two, the onset of AIDS-related illnesses. Significantly, long-term follow-up of these trials did not show a prolonged benefit of AZT, but also did not indicate that the drug increased disease progression or mortality. The lack of excess AIDS cases and death in the AZT arms of these placebo-controlled trials in effect counters the argument that AZT causes AIDS. Subsequent clinical trials found that patients receiving two-drug combinations had up to 50 percent improvements in time to progression to AIDS and in survival when compared with people receiving single-drug therapy. In more recent years, three-drug combination therapies have produced another 50 to 80 percent improvement in progression to AIDS and in survival when compared with two-drug regimens in clinical trials." "Use of potent anti-HIV combination therapies has contributed to dramatic reductions in the incidence of AIDS and AIDS-related deaths in populations where these drugs are widely available, an effect which clearly would not be seen if antiretroviral drugs caused AIDS."
Duesberg claims as support for his idea that many drug-free HIV-positive people have not yet developed AIDS; HIV/AIDS scientists note that many drug-free HIV-positive people have developed AIDS, and that, in the absence of medical treatment or rare genetic factors postulated to delay disease progression, it is very likely that nearly all HIV-positive people will eventually develop AIDS. Scientists also note that HIV-negative drug users do not suffer from immune system collapse. | https://en.wikipedia.org/wiki?curid=8309 |
DSL (disambiguation)
DSL or digital subscriber line is a family of technologies that provide digital data transmission over the wires of a local telephone network.
DSL may also refer to: | https://en.wikipedia.org/wiki?curid=8310 |
Dinosaur
Dinosaurs are a diverse group of reptiles of the clade Dinosauria. They first appeared during the Triassic period, between 243 and 233.23 million years ago, although the exact origin and timing of the evolution of dinosaurs is the subject of active research. They became the dominant terrestrial vertebrates after the Triassic–Jurassic extinction event 201.3 million years ago; their dominance continued throughout the Jurassic and Cretaceous periods. The fossil record demonstrates that birds are modern feathered dinosaurs, having evolved from earlier theropods during the Late Jurassic epoch. As such, birds were the only dinosaur lineage to survive the Cretaceous–Paleogene extinction event approximately 66 million years ago. Dinosaurs can therefore be divided into avian dinosaurs, or birds; and non-avian dinosaurs, which are all dinosaurs other than birds.
Dinosaurs are a varied group of animals from taxonomic, morphological and ecological standpoints. Birds, at over 10,000 living species, are the most diverse group of vertebrates besides perciform fish. Using fossil evidence, paleontologists have identified over 500 distinct genera and more than 1,000 different species of non-avian dinosaurs. Dinosaurs are represented on every continent by both extant species (birds) and fossil remains. Through the first half of the 20th century, before birds were recognized to be dinosaurs, most of the scientific community believed dinosaurs to have been sluggish and cold-blooded. Most research conducted since the 1970s, however, has indicated that all dinosaurs were active animals with elevated metabolisms and numerous adaptations for social interaction. Some were herbivorous, others carnivorous. Evidence suggests that all dinosaurs were egg-laying; and that nest-building was a trait shared by many dinosaurs, both avian and non-avian.
While dinosaurs were ancestrally bipedal, many extinct groups included quadrupedal species, and some were able to shift between these stances. Elaborate display structures such as horns or crests are common to all dinosaur groups, and some extinct groups developed skeletal modifications such as bony armor and spines. While the dinosaurs' modern-day surviving avian lineage (birds) are generally small due to the constraints of flight, many prehistoric dinosaurs (non-avian and avian) were large-bodied—the largest sauropod dinosaurs are estimated to have reached lengths of and heights of and were the largest land animals of all time. Still, the idea that non-avian dinosaurs were uniformly gigantic is a misconception based in part on preservation bias, as large, sturdy bones are more likely to last until they are fossilized. Many dinosaurs were quite small: "Xixianykus", for example, was only about long.
Since the first dinosaur fossils were recognized in the early 19th century, mounted fossil dinosaur skeletons have been major attractions at museums around the world, and dinosaurs have become an enduring part of world culture. The large sizes of some dinosaur groups, as well as their seemingly monstrous and fantastic nature, have ensured dinosaurs' regular appearance in best-selling books and films, such as "Jurassic Park". Persistent public enthusiasm for the animals has resulted in significant funding for dinosaur science, and new discoveries are regularly covered by the media.
The taxon 'Dinosauria' was formally named in 1842 by paleontologist Sir Richard Owen, who used it to refer to the "distinct tribe or sub-order of Saurian Reptiles" that were then being recognized in England and around the world. The term is derived from the Greek words "deinos" ("terrible" or "fearfully great") and "sauros" ("lizard" or "reptile"). Though the taxonomic name has often been interpreted as a reference to dinosaurs' teeth, claws, and other fearsome characteristics, Owen intended it merely to evoke their size and majesty.
Other prehistoric animals, including pterosaurs, mosasaurs, ichthyosaurs, plesiosaurs, and "Dimetrodon", while often popularly conceived of as dinosaurs, are not taxonomically classified as dinosaurs. Pterosaurs are distantly related to dinosaurs, being members of the clade Ornithodira. The other groups mentioned are, like dinosaurs and pterosaurs, members of Sauropsida (the reptile and bird clade), except "Dimetrodon" (which is a synapsid).
Under phylogenetic nomenclature, dinosaurs are usually defined as the group consisting of the most recent common ancestor (MRCA) of "Triceratops" and modern birds (Neornithes), and all its descendants. It has also been suggested that Dinosauria be defined with respect to the MRCA of "Megalosaurus" and "Iguanodon", because these were two of the three genera cited by Richard Owen when he recognized the Dinosauria. Both definitions result in the same set of animals being defined as dinosaurs: "Dinosauria = Ornithischia + Saurischia", encompassing ankylosaurians (armored herbivorous quadrupeds), stegosaurians (plated herbivorous quadrupeds), ceratopsians (herbivorous quadrupeds with horns and frills), ornithopods (bipedal or quadrupedal herbivores including "duck-bills"), theropods (mostly bipedal carnivores and birds), and sauropodomorphs (mostly large herbivorous quadrupeds with long necks and tails).
Birds are now recognized as being the sole surviving lineage of theropod dinosaurs. In traditional taxonomy, birds were considered a separate class that had evolved from dinosaurs, a distinct superorder. However, a majority of contemporary paleontologists concerned with dinosaurs reject the traditional style of classification in favor of phylogenetic taxonomy; this approach requires that, for a group to be natural, all descendants of members of the group must be included in the group as well. Birds are thus considered to be dinosaurs and dinosaurs are, therefore, not extinct. Birds are classified as belonging to the subgroup Maniraptora, which are coelurosaurs, which are theropods, which are saurischians, which are dinosaurs.
Research by Matthew G. Baron, David B. Norman, and Paul M. Barrett in 2017 suggested a radical revision of dinosaurian systematics. Phylogenetic analysis by Baron "et al." recovered the Ornithischia as being closer to the Theropoda than the Sauropodomorpha, as opposed to the traditional union of theropods with sauropodomorphs. They resurrected the clade Ornithoscelida to refer to the group containing Ornithischia and Theropoda. Dinosauria itself was re-defined as the last common ancestor of "Triceratops horridus", "Passer domesticus" and "Diplodocus carnegii", and all of its descendants, to ensure that sauropods and kin remain included as dinosaurs.
Using one of the above definitions, dinosaurs can be generally described as archosaurs with hind limbs held erect beneath the body. Many prehistoric animal groups are popularly conceived of as dinosaurs, such as ichthyosaurs, mosasaurs, plesiosaurs, pterosaurs, and pelycosaurs (especially "Dimetrodon"), but are not classified scientifically as dinosaurs, and none had the erect hind limb posture characteristic of true dinosaurs. Dinosaurs were the dominant terrestrial vertebrates of the Mesozoic Era, especially the Jurassic and Cretaceous periods. Other groups of animals were restricted in size and niches; mammals, for example, rarely exceeded the size of a domestic cat, and were generally rodent-sized carnivores of small prey.
Dinosaurs have always been an extremely varied group of animals; according to a 2006 study, over 500 non-avian dinosaur genera have been identified with certainty so far, and the total number of genera preserved in the fossil record has been estimated at around 1850, nearly 75% of which remain to be discovered. An earlier study predicted that about 3,400 dinosaur genera existed, including many that would not have been preserved in the fossil record. By September 17, 2008, 1,047 different species of dinosaurs had been named.
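As a rough consistency check on the figures quoted above (both of which are estimates, not exact counts), about 500 identified genera out of an estimated 1,850 preserved in the fossil record leaves roughly 73% still to be found, consistent with the "nearly 75%" figure:

```python
# Quick arithmetic check of the genus estimates quoted above (estimates only).
identified = 500          # non-avian dinosaur genera identified with certainty
estimated_total = 1850    # estimated genera preserved in the fossil record

undiscovered_fraction = 1 - identified / estimated_total
print(f"{undiscovered_fraction:.0%} of preserved genera remain to be discovered")
# -> 73% of preserved genera remain to be discovered
```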
In 2016, the estimated number of dinosaur species that existed in the Mesozoic was estimated to be 1,543–2,468. Some are herbivorous, others carnivorous, including seed-eaters, fish-eaters, insectivores, and omnivores. While dinosaurs were ancestrally bipedal (as are all modern birds), some prehistoric species were quadrupeds, and others, such as "Anchisaurus" and "Iguanodon", could walk just as easily on two or four legs. Cranial modifications like horns and crests are common dinosaurian traits, and some extinct species had bony armor. Although known for large size, many Mesozoic dinosaurs were human-sized or smaller, and modern birds are generally small in size. Dinosaurs today inhabit every continent, and fossils show that they had achieved global distribution by at least the Early Jurassic epoch. Modern birds inhabit most available habitats, from terrestrial to marine, and there is evidence that some non-avian dinosaurs (such as "Microraptor") could fly or at least glide, and others, such as spinosaurids, had semiaquatic habits.
While recent discoveries have made it more difficult to present a universally agreed-upon list of dinosaurs' distinguishing features, nearly all dinosaurs discovered so far share certain modifications to the ancestral archosaurian skeleton, or are clear descendants of older dinosaurs showing these modifications. Although some later groups of dinosaurs featured further modified versions of these traits, they are considered typical for Dinosauria; the earliest dinosaurs had them and passed them on to their descendants. Such modifications, originating in the most recent common ancestor of a certain taxonomic group, are called the synapomorphies of such a group.
A detailed assessment of archosaur interrelations by Sterling Nesbitt confirmed or found the following twelve unambiguous synapomorphies, some previously known:
Nesbitt found a number of further potential synapomorphies and discounted a number of synapomorphies previously suggested. Some of these are also present in silesaurids, which Nesbitt recovered as a sister group to Dinosauria, including a large anterior trochanter, metatarsals II and IV of subequal length, reduced contact between ischium and pubis, the presence of a cnemial crest on the tibia and of an ascending process on the astragalus, and many others.
A variety of other skeletal features are shared by dinosaurs. However, because they are either common to other groups of archosaurs or were not present in all early dinosaurs, these features are not considered to be synapomorphies. For example, as diapsids, dinosaurs ancestrally had two pairs of temporal fenestrae (openings in the skull behind the eyes), and as members of the diapsid group Archosauria, had additional openings in the snout and lower jaw. Additionally, several characteristics once thought to be synapomorphies are now known to have appeared before dinosaurs, or were absent in the earliest dinosaurs and independently evolved by different dinosaur groups. These include an elongated scapula, or shoulder blade; a sacrum composed of three or more fused vertebrae (three are found in some other archosaurs, but only two are found in "Herrerasaurus"); and a perforate acetabulum, or hip socket, with a hole at the center of its inside surface (closed in "Saturnalia tupiniquim", for example). Another difficulty of determining distinctly dinosaurian features is that early dinosaurs and other archosaurs from the Late Triassic epoch are often poorly known and were similar in many ways; these animals have sometimes been misidentified in the literature.
Dinosaurs stand with their hind limbs erect in a manner similar to most modern mammals, but distinct from most other reptiles, whose limbs sprawl out to either side. This posture is due to the development of a laterally facing recess in the pelvis (usually an open socket) and a corresponding inwardly facing distinct head on the femur. Their erect posture enabled early dinosaurs to breathe easily while moving, which likely permitted stamina and activity levels that surpassed those of "sprawling" reptiles. Erect limbs probably also helped support the evolution of large size by reducing bending stresses on limbs. Some non-dinosaurian archosaurs, including rauisuchians, also had erect limbs but achieved this by a "pillar-erect" configuration of the hip joint, where instead of having a projection from the femur insert on a socket on the hip, the upper pelvic bone was rotated to form an overhanging shelf.
Dinosaurs diverged from their archosaur ancestors during the Middle to Late Triassic epochs, roughly 20 million years after the devastating Permian–Triassic extinction event wiped out an estimated 96% of all marine species and 70% of terrestrial vertebrate species approximately 252 million years ago. Radiometric dating of the rock formation that contained fossils from the early dinosaur genus "Eoraptor" at 231.4 million years old establishes its presence in the fossil record at this time. Paleontologists think that "Eoraptor" resembles the common ancestor of all dinosaurs; if this is true, its traits suggest that the first dinosaurs were small, bipedal predators. The discovery of primitive, dinosaur-like ornithodirans such as "Marasuchus" and "Lagerpeton" in Argentinian Middle Triassic strata supports this view; analysis of recovered fossils suggests that these animals were indeed small, bipedal predators. Dinosaurs may have appeared as early as 243 million years ago, as evidenced by remains of the genus "Nyasasaurus" from that period, though known fossils of these animals are too fragmentary to tell if they are dinosaurs or very close dinosaurian relatives. Paleontologist Max C. Langer "et al." (2018) determined that "Staurikosaurus" from the Santa Maria Formation dates to 233.23 million years ago, making it older in geologic age than "Eoraptor".
When dinosaurs appeared, they were not the dominant terrestrial animals. The terrestrial habitats were occupied by various types of archosauromorphs and therapsids, like cynodonts and rhynchosaurs. Their main competitors were the pseudosuchians, such as aetosaurs, ornithosuchids and rauisuchians, which were more successful than the dinosaurs. Most of these other animals became extinct in the Triassic, in one of two events. First, at about 215 million years ago, a variety of basal archosauromorphs, including the protorosaurs, became extinct. This was followed by the Triassic–Jurassic extinction event (about 201 million years ago), which saw the end of most of the other groups of early archosaurs, like aetosaurs, ornithosuchids, phytosaurs, and rauisuchians. Rhynchosaurs and dicynodonts survived (at least in some areas) at least as late as early-mid Norian and late Norian or earliest Rhaetian stages, respectively, and the exact date of their extinction is uncertain. These losses left behind a land fauna of crocodylomorphs, dinosaurs, mammals, pterosaurians, and turtles. The first few lines of early dinosaurs diversified through the Carnian and Norian stages of the Triassic, possibly by occupying the niches of the groups that became extinct. Also notably, there was a heightened rate of extinction during the Carnian Pluvial Event.
Dinosaur evolution after the Triassic follows changes in vegetation and the location of continents. In the Late Triassic and Early Jurassic, the continents were connected as the single landmass Pangaea, and there was a worldwide dinosaur fauna mostly composed of coelophysoid carnivores and early sauropodomorph herbivores. Gymnosperm plants (particularly conifers), a potential food source, radiated in the Late Triassic. Early sauropodomorphs did not have sophisticated mechanisms for processing food in the mouth, and so must have employed other means of breaking down food farther along the digestive tract. The general homogeneity of dinosaurian faunas continued into the Middle and Late Jurassic, where most localities had predators consisting of ceratosaurians, spinosauroids, and carnosaurians, and herbivores consisting of stegosaurian ornithischians and large sauropods. Examples of this include the Morrison Formation of North America and Tendaguru Beds of Tanzania. Dinosaurs in China show some differences, with specialized sinraptorid theropods and unusual, long-necked sauropods like "Mamenchisaurus". Ankylosaurians and ornithopods were also becoming more common, but prosauropods had become extinct. Conifers and pteridophytes were the most common plants. Sauropods, like the earlier prosauropods, were not oral processors, but ornithischians were evolving various means of dealing with food in the mouth, including potential cheek-like organs to keep food in the mouth, and jaw motions to grind food. Another notable evolutionary event of the Jurassic was the appearance of true birds, descended from maniraptoran coelurosaurians.
By the Early Cretaceous and the ongoing breakup of Pangaea, dinosaurs were becoming strongly differentiated by landmass. The earliest part of this time saw the spread of ankylosaurians, iguanodontians, and brachiosaurids through Europe, North America, and northern Africa. These were later supplemented or replaced in Africa by large spinosaurid and carcharodontosaurid theropods, and rebbachisaurid and titanosaurian sauropods, also found in South America. In Asia, maniraptoran coelurosaurians like dromaeosaurids, troodontids, and oviraptorosaurians became the common theropods, and ankylosaurids and early ceratopsians like "Psittacosaurus" became important herbivores. Meanwhile, Australia was home to a fauna of basal ankylosaurians, hypsilophodonts, and iguanodontians. The stegosaurians appear to have gone extinct at some point in the late Early Cretaceous or early Late Cretaceous. A major change in the Early Cretaceous, which would be amplified in the Late Cretaceous, was the evolution of flowering plants. At the same time, several groups of dinosaurian herbivores evolved more sophisticated ways to orally process food. Ceratopsians developed a method of slicing with teeth stacked on each other in batteries, and iguanodontians refined a method of grinding with dental batteries, taken to its extreme in hadrosaurids. Some sauropods also evolved tooth batteries, best exemplified by the rebbachisaurid "Nigersaurus".
There were three general dinosaur faunas in the Late Cretaceous. In the northern continents of North America and Asia, the major theropods were tyrannosaurids and various types of smaller maniraptoran theropods, with a predominantly ornithischian herbivore assemblage of hadrosaurids, ceratopsians, ankylosaurids, and pachycephalosaurians. In the southern continents that had made up the now-splitting Gondwana, abelisaurids were the common theropods, and titanosaurian sauropods the common herbivores. Finally, in Europe, dromaeosaurids, rhabdodontid iguanodontians, nodosaurid ankylosaurians, and titanosaurian sauropods were prevalent. Flowering plants were greatly radiating, with the first grasses appearing by the end of the Cretaceous. Grinding hadrosaurids and shearing ceratopsians became extremely diverse across North America and Asia. Theropods were also radiating as herbivores or omnivores, with therizinosaurians and ornithomimosaurians becoming common.
The Cretaceous–Paleogene extinction event, which occurred approximately 66 million years ago at the end of the Cretaceous, caused the extinction of all dinosaur groups except for the neornithine birds. Some other diapsid groups, such as crocodilians, sebecosuchians, turtles, lizards, snakes, sphenodontians, and choristoderans, also survived the event.
The surviving lineages of neornithine birds, including the ancestors of modern ratites, ducks and chickens, and a variety of waterbirds, diversified rapidly at the beginning of the Paleogene period, entering ecological niches left vacant by the extinction of Mesozoic dinosaur groups such as the arboreal enantiornithines, aquatic hesperornithines, and even the larger terrestrial theropods (in the form of "Gastornis", eogruiids, bathornithids, ratites, geranoidids, mihirungs, and "terror birds"). It is often cited that mammals out-competed the neornithines for dominance of most terrestrial niches but many of these groups co-existed with rich mammalian faunas for most of the Cenozoic Era. Terror birds and bathornithids occupied carnivorous guilds alongside predatory mammals, and ratites are still fairly successful as mid-sized herbivores; eogruiids similarly lasted from the Eocene to Pliocene, only becoming extinct very recently after over 20 million years of co-existence with many mammal groups.
Dinosaurs belong to a group known as archosaurs, which also includes modern crocodilians. Within the archosaur group, dinosaurs are differentiated most noticeably by their gait. Dinosaur legs extend directly beneath the body, whereas the legs of lizards and crocodilians sprawl out to either side.
Collectively, dinosaurs as a clade are divided into two primary branches, Saurischia and Ornithischia. Saurischia includes those taxa sharing a more recent common ancestor with birds than with Ornithischia, while Ornithischia includes all taxa sharing a more recent common ancestor with "Triceratops" than with Saurischia. Anatomically, these two groups can be distinguished most noticeably by their pelvic structure. Early saurischians—"lizard-hipped", from the Greek "sauros" (σαῦρος) meaning "lizard" and "ischion" (ἰσχίον) meaning "hip joint"—retained the hip structure of their ancestors, with a pubis bone directed cranially, or forward. This basic form was modified by rotating the pubis backward to varying degrees in several groups ("Herrerasaurus", therizinosauroids, dromaeosaurids, and birds). Saurischia includes the theropods (exclusively bipedal and with a wide variety of diets) and sauropodomorphs (long-necked herbivores which include advanced, quadrupedal groups).
By contrast, ornithischians—"bird-hipped", from the Greek "ornitheios" (ὀρνίθειος) meaning "of a bird" and "ischion" (ἰσχίον) meaning "hip joint"—had a pelvis that superficially resembled a bird's pelvis: the pubic bone was oriented caudally (rear-pointing). Unlike birds, the ornithischian pubis also usually had an additional forward-pointing process. Ornithischia includes a variety of species that were primarily herbivores. (NB: the terms "lizard hip" and "bird hip" are misnomers – birds evolved from dinosaurs with "lizard hips".)
The following is a simplified classification of dinosaur groups based on their evolutionary relationships, and organized based on the list of Mesozoic dinosaur species provided by Holtz (2007). A more detailed version can be found at Dinosaur classification.
The dagger (†) is used to signify groups with no living members.
Knowledge about dinosaurs is derived from a variety of fossil and non-fossil records, including fossilized bones, feces, trackways, gastroliths, feathers, impressions of skin, internal organs and soft tissues. Many fields of study contribute to our understanding of dinosaurs, including physics (especially biomechanics), chemistry, biology, and the Earth sciences (of which paleontology is a sub-discipline). Two topics of particular interest and study have been dinosaur size and behavior.
Current evidence suggests that dinosaur average size varied through the Triassic, Early Jurassic, Late Jurassic and Cretaceous. Predatory theropod dinosaurs, which occupied most terrestrial carnivore niches during the Mesozoic, most often fall into the category when sorted by estimated weight into categories based on order of magnitude, whereas recent predatory carnivoran mammals peak in the category. The mode of Mesozoic dinosaur body masses is between . This contrasts sharply with the average size of Cenozoic mammals, estimated by the National Museum of Natural History as about .
The sauropods were the largest and heaviest dinosaurs. For much of the dinosaur era, the smallest sauropods were larger than anything else in their habitat, and the largest were an order of magnitude more massive than anything else that has since walked the Earth. Giant prehistoric mammals such as "Paraceratherium" (the largest land mammal ever) were dwarfed by the giant sauropods, and only modern whales approach or surpass them in size. There are several proposed advantages for the large size of sauropods, including protection from predation, reduction of energy use, and longevity, but it may be that the most important advantage was dietary. Large animals are more efficient at digestion than small animals, because food spends more time in their digestive systems. This also permits them to subsist on food with lower nutritive value than smaller animals. Sauropod remains are mostly found in rock formations interpreted as dry or seasonally dry, and the ability to eat large quantities of low-nutrient browse would have been advantageous in such environments.
Scientists will probably never be certain of the largest and smallest dinosaurs to have ever existed. This is because only a tiny percentage of animals were ever fossilized and most of these remain buried in the earth. Few of the specimens that are recovered are complete skeletons, and impressions of skin and other soft tissues are rare. Rebuilding a complete skeleton by comparing the size and morphology of bones to those of similar, better-known species is an inexact art, and reconstructing the muscles and other organs of the living animal is, at best, a process of educated guesswork.
The tallest and heaviest dinosaur known from good skeletons is "Giraffatitan brancai" (previously classified as a species of "Brachiosaurus"). Its remains were discovered in Tanzania between 1907 and 1912. Bones from several similar-sized individuals were incorporated into the skeleton now mounted and on display at the Museum für Naturkunde in Berlin; this mount is tall and long, and would have belonged to an animal that weighed between and kilograms ( and lb). The longest complete dinosaur is the long "Diplodocus", which was discovered in Wyoming in the United States and displayed in Pittsburgh's Carnegie Museum of Natural History in 1907. The longest dinosaur known from good fossil material is the "Patagotitan": the skeleton mount in the American Museum of Natural History in New York is long. The Museo Municipal Carmen Funes in Plaza Huincul, Argentina, has an "Argentinosaurus" reconstructed skeleton mount long.
There were larger dinosaurs, but knowledge of them is based entirely on a small number of fragmentary fossils. Most of the largest herbivorous specimens on record were discovered in the 1970s or later, and include the massive "Argentinosaurus", which may have weighed to kilograms (90 to 110 short tons) and reached length of ; some of the longest were the long "Diplodocus hallorum" (formerly "Seismosaurus"), the long "Supersaurus" and long "Patagotitan"; and the tallest, the tall "Sauroposeidon", which could have reached a sixth-floor window. The heaviest and longest dinosaur may have been "Maraapunisaurus", known only from a now lost partial vertebral neural arch described in 1878. Extrapolating from the illustration of this bone, the animal may have been long and weighed kg ( lb). However, as no further evidence of sauropods of this size has been found, and the discoverer, Edward Drinker Cope, had made typographic errors before, it is likely to have been an extreme overestimation.
The largest carnivorous dinosaur was "Spinosaurus", reaching a length of , and weighing . Other large carnivorous theropods included "Giganotosaurus", "Carcharodontosaurus" and "Tyrannosaurus". "Therizinosaurus" and "Deinocheirus" were among the tallest of the theropods. The largest ornithischian dinosaur was probably the hadrosaurid "Shantungosaurus giganteus" which measured . The largest individuals may have weighed as much as .
The smallest dinosaur known is the bee hummingbird, with a length of only and mass of around . The smallest known non-avialan dinosaurs were about the size of pigeons and were those theropods most closely related to birds. For example, "Anchiornis huxleyi" is currently the smallest non-avialan dinosaur described from an adult specimen, with an estimated weight of 110 grams and a total skeletal length of . The smallest herbivorous non-avialan dinosaurs included "Microceratus" and "Wannanosaurus", at about long each.
Many modern birds are highly social, often found living in flocks. There is general agreement that some behaviors that are common in birds, as well as in crocodiles (birds' closest living relatives), were also common among extinct dinosaur groups. Interpretations of behavior in fossil species are generally based on the pose of skeletons and their habitat, computer simulations of their biomechanics, and comparisons with modern animals in similar ecological niches.
The first potential evidence for herding or flocking as a widespread behavior common to many dinosaur groups in addition to birds was the 1878 discovery of 31 "Iguanodon bernissartensis", ornithischians that were then thought to have perished together in Bernissart, Belgium, after they fell into a deep, flooded sinkhole and drowned. Other mass-death sites have been discovered subsequently. Those, along with multiple trackways, suggest that gregarious behavior was common in many early dinosaur species. Trackways of hundreds or even thousands of herbivores indicate that duck-billed dinosaurs (hadrosaurids) may have moved in great herds, like the American bison or the African springbok. Sauropod tracks document that these animals traveled in groups composed of several different species, at least in Oxfordshire, England, although there is no evidence for specific herd structures. Congregating into herds may have evolved for defense, for migratory purposes, or to provide protection for young. There is evidence that many types of slow-growing dinosaurs, including various theropods, sauropods, ankylosaurians, ornithopods, and ceratopsians, formed aggregations of immature individuals. One example is a site in Inner Mongolia that has yielded the remains of over 20 "Sinornithomimus", from one to seven years old. This assemblage is interpreted as a social group that was trapped in mud. The interpretation of dinosaurs as gregarious has also extended to depicting carnivorous theropods as pack hunters working together to bring down large prey. However, this lifestyle is uncommon among modern birds, crocodiles, and other reptiles, and the taphonomic evidence suggesting mammal-like pack hunting in such theropods as "Deinonychus" and "Allosaurus" can also be interpreted as the results of fatal disputes between feeding animals, as is seen in many modern diapsid predators.
The crests and frills of some dinosaurs, like the marginocephalians, theropods and lambeosaurines, may have been too fragile to be used for active defense, and so they were likely used for sexual or aggressive displays, though little is known about dinosaur mating and territorialism. Head wounds from bites suggest that theropods, at least, engaged in active aggressive confrontations.
From a behavioral standpoint, one of the most valuable dinosaur fossils was discovered in the Gobi Desert in 1971. It included a "Velociraptor" attacking a "Protoceratops", providing evidence that dinosaurs did indeed attack each other. Additional evidence for attacking live prey is the partially healed tail of an "Edmontosaurus", a hadrosaurid dinosaur; the tail is damaged in such a way that shows the animal was bitten by a tyrannosaur but survived. Cannibalism amongst some species of dinosaurs was confirmed by tooth marks found in Madagascar in 2003, involving the theropod "Majungasaurus".
Comparisons between the scleral rings of dinosaurs and modern birds and reptiles have been used to infer daily activity patterns of dinosaurs. Although it has been suggested that most dinosaurs were active during the day, these comparisons have shown that small predatory dinosaurs such as dromaeosaurids, "Juravenator", and "Megapnosaurus" were likely nocturnal. Large and medium-sized herbivorous and omnivorous dinosaurs such as ceratopsians, sauropodomorphs, hadrosaurids, and ornithomimosaurs may have been cathemeral, active during short intervals throughout the day, although the small ornithischian "Agilisaurus" was inferred to be diurnal.
Based on current fossil evidence from dinosaurs such as "Oryctodromeus", some ornithischian species seem to have led a partially fossorial (burrowing) lifestyle. Many modern birds are arboreal (tree climbing), and this was also true of many Mesozoic birds, especially the enantiornithines. While some early bird-like species may have already been arboreal as well (including dromaeosaurids such as "Microraptor") most non-avialan dinosaurs seem to have relied on land-based locomotion. A good understanding of how dinosaurs moved on the ground is key to models of dinosaur behavior; the science of biomechanics, pioneered by Robert McNeill Alexander, has provided significant insight in this area. For example, studies of the forces exerted by muscles and gravity on dinosaurs' skeletal structure have investigated how fast dinosaurs could run, whether diplodocids could create sonic booms via whip-like tail snapping, and whether sauropods could float.
Modern birds are known to communicate using visual and auditory signals, and the wide diversity of visual display structures among fossil dinosaur groups, such as horns, frills, crests, sails and feathers, suggests that visual communication has always been important in dinosaur biology. Reconstructions of the plumage color of "Anchiornis huxleyi" suggest the importance of color in visual communication in non-avian dinosaurs. The evolution of dinosaur vocalization is less certain. Paleontologist Phil Senter suggests that non-avian dinosaurs relied mostly on visual displays and possibly non-vocal acoustic sounds like hissing, jaw grinding or clapping, splashing and wing beating (possible in winged maniraptoran dinosaurs). He states they were unlikely to have been capable of vocalizing since their closest relatives, crocodilians and birds, use different means to vocalize, the former via the larynx and the latter through the unique syrinx, suggesting they evolved independently and their common ancestor was mute.
The earliest remains of a syrinx, which has enough mineral content for fossilization, were found in a specimen of the duck-like "Vegavis iaai" dated to 69–66 million years ago, and this organ is unlikely to have existed in non-avian dinosaurs. However, in contrast to Senter, the researchers have suggested that dinosaurs could vocalize and that the syrinx-based vocal system of birds evolved from a larynx-based one, rather than the two systems evolving independently. A 2016 study suggests that dinosaurs produced closed mouth vocalizations like cooing, which occur in both crocodilians and birds as well as other reptiles. Such vocalizations evolved independently in extant archosaurs numerous times, following increases in body size. The crests of the Lambeosaurini and nasal chambers of ankylosaurids have been suggested to function in vocal resonance, though Senter states that the presence of resonance chambers in some dinosaurs is not necessarily evidence of vocalization as modern snakes have such chambers which intensify their hisses.
All dinosaurs laid amniotic eggs with hard shells made mostly of calcium carbonate. Dinosaur eggs were usually laid in a nest. Most species create somewhat elaborate nests which can be cups, domes, plates, beds, scrapes, mounds, or burrows. Some species of modern bird have no nests; the cliff-nesting common guillemot lays its eggs on bare rock, and male emperor penguins keep eggs between their body and feet. Primitive birds and many non-avialan dinosaurs often lay eggs in communal nests, with males primarily incubating the eggs. While modern birds have only one functional oviduct and lay one egg at a time, more primitive birds and dinosaurs had two oviducts, like crocodiles. Some non-avialan dinosaurs, such as "Troodon", exhibited iterative laying, where the adult might lay a pair of eggs every one or two days, and then ensured simultaneous hatching by delaying brooding until all eggs were laid.
When laying eggs, females grow a special type of bone between the hard outer bone and the marrow of their limbs. This medullary bone, which is rich in calcium, is used to make eggshells. A discovery of features in a "Tyrannosaurus rex" skeleton provided evidence of medullary bone in extinct dinosaurs and, for the first time, allowed paleontologists to establish the sex of a fossil dinosaur specimen. Further research has found medullary bone in the carnosaur "Allosaurus" and the ornithopod "Tenontosaurus". Because the line of dinosaurs that includes "Allosaurus" and "Tyrannosaurus" diverged from the line that led to "Tenontosaurus" very early in the evolution of dinosaurs, this suggests that the production of medullary tissue is a general characteristic of all dinosaurs.
Another widespread trait among modern birds (but see below in regards to fossil groups and extant megapodes) is parental care for young after hatching. Jack Horner's 1978 discovery of a "Maiasaura" ("good mother lizard") nesting ground in Montana demonstrated that parental care continued long after birth among ornithopods. A specimen of the Mongolian oviraptorid "Citipati osmolskae" was discovered in a chicken-like brooding position in 1993, which may indicate that they had begun using an insulating layer of feathers to keep the eggs warm. A dinosaur embryo (pertaining to the prosauropod "Massospondylus") was found without teeth, indicating that some parental care was required to feed the young dinosaurs. Trackways have also confirmed parental behavior among ornithopods from the Isle of Skye in northwestern Scotland.
However, there is ample evidence of precociality or superprecociality among many dinosaur species, particularly theropods. For instance, non-ornithuromorph birds have been abundantly demonstrated to have had slow growth rates, megapode-like egg burying behavior and the ability to fly soon after birth. Both "Tyrannosaurus rex" and "Troodon formosus" display juveniles with clear superprecociality that likely occupied different ecological niches from the adults. Superprecociality has been inferred for sauropods.
Because both modern crocodilians and birds have four-chambered hearts (albeit modified in crocodilians), it is likely that this is a trait shared by all archosaurs, including all dinosaurs. While all modern birds have high metabolisms and are "warm-blooded" (endothermic), a vigorous debate has been ongoing since the 1960s regarding how far back in the dinosaur lineage this trait extends. Scientists disagree as to whether non-avian dinosaurs were endothermic, ectothermic, or some combination of both.
After non-avian dinosaurs were discovered, paleontologists first posited that they were ectothermic. This supposed "cold-bloodedness" was used to imply that the ancient dinosaurs were relatively slow, sluggish organisms, even though many modern reptiles are fast and light-footed despite relying on external sources of heat to regulate their body temperature. The idea of dinosaurs as ectothermic remained a prevalent view until Robert T. "Bob" Bakker, an early proponent of dinosaur endothermy, published an influential paper on the topic in 1968.
Modern evidence indicates that some non-avian dinosaurs thrived in cooler temperate climates and that some early species must have regulated their body temperature by internal biological means (aided by the animals' bulk in large species and feathers or other body coverings in smaller species). Evidence of endothermy in Mesozoic dinosaurs includes the discovery of polar dinosaurs in Australia and Antarctica as well as analysis of blood-vessel structures within fossil bones that are typical of endotherms. Scientific debate continues regarding the specific ways in which dinosaur temperature regulation evolved.
In saurischian dinosaurs, higher metabolisms were supported by the evolution of the avian respiratory system, characterized by an extensive system of air sacs that extended the lungs and invaded many of the bones in the skeleton, making them hollow. Early avian-style respiratory systems with air sacs may have been capable of sustaining higher activity levels than those of mammals of similar size and build. In addition to providing a very efficient supply of oxygen, the rapid airflow would have been an effective cooling mechanism, which is essential for animals that are active but too large to get rid of all the excess heat through their skin.
Like other reptiles, dinosaurs are primarily uricotelic, that is, their kidneys extract nitrogenous wastes from their bloodstream and excrete them as uric acid instead of urea or ammonia via the ureters into the intestine. In most living species, uric acid is excreted along with feces as a semisolid waste. However, at least some modern birds (such as hummingbirds) can be facultatively ammonotelic, excreting most of the nitrogenous wastes as ammonia. This material, as well as the output of the intestines, emerges from the cloaca. In addition, many species regurgitate pellets, and fossil pellets that may have come from dinosaurs are known from as long ago as the Cretaceous.
The possibility that dinosaurs were the ancestors of birds was first suggested in 1868 by Thomas Henry Huxley. After the work of Gerhard Heilmann in the early 20th century, the theory of birds as dinosaur descendants was abandoned in favor of the idea of their being descendants of generalized thecodonts, with the key piece of evidence being the supposed lack of clavicles in dinosaurs. However, as later discoveries showed, clavicles (or a single fused wishbone, which derived from separate clavicles) were not actually absent; they had been found as early as 1924 in "Oviraptor", but misidentified as an interclavicle. In the 1970s, John Ostrom revived the dinosaur–bird theory, which gained momentum in the coming decades with the advent of cladistic analysis, and a great increase in the discovery of small theropods and early birds. Of particular note have been the fossils of the Yixian Formation, where a variety of theropods and early birds have been found, often with feathers of some type. Birds share over a hundred distinct anatomical features with theropod dinosaurs, which are now generally accepted to have been their closest ancient relatives. They are most closely allied with maniraptoran coelurosaurs. A minority of scientists, most notably Alan Feduccia and Larry Martin, have proposed other evolutionary paths, including revised versions of Heilmann's basal archosaur proposal, or that maniraptoran theropods are the ancestors of birds but themselves are not dinosaurs, only convergent with dinosaurs.
Feathers are one of the most recognizable characteristics of modern birds, and a trait that was shared by many other dinosaur groups. Based on the current distribution of fossil evidence, it appears that feathers were an ancestral dinosaurian trait, though one that may have been selectively lost in some species. Direct fossil evidence of feathers or feather-like structures has been discovered in a diverse array of species in many non-avian dinosaur groups, both among saurischians and ornithischians. Simple, branched, feather-like structures are known from heterodontosaurids, primitive neornithischians and theropods, and primitive ceratopsians. Evidence for true, vaned feathers similar to the flight feathers of modern birds has been found only in the theropod subgroup Maniraptora, which includes oviraptorosaurs, troodontids, dromaeosaurids, and birds. Feather-like structures known as pycnofibres have also been found in pterosaurs, suggesting the possibility that feather-like filaments may have been common in the bird lineage and evolved before the appearance of dinosaurs themselves. Research into the genetics of American alligators has also revealed that crocodylian scutes do possess feather-keratins during embryonic development, but these keratins are not expressed by the animals before hatching.
"Archaeopteryx" was the first fossil found that revealed a potential connection between dinosaurs and birds. It is considered a transitional fossil, in that it displays features of both groups. Brought to light just two years after Charles Darwin's seminal "On the Origin of Species" (1859), its discovery spurred the nascent debate between proponents of evolutionary biology and creationism. This early bird is so dinosaur-like that, without a clear impression of feathers in the surrounding rock, at least one specimen was mistaken for "Compsognathus". Since the 1990s, a number of additional feathered dinosaurs have been found, providing even stronger evidence of the close relationship between dinosaurs and modern birds. Most of these specimens were unearthed in the lagerstätte of the Yixian Formation, Liaoning, northeastern China, which was part of an island continent during the Cretaceous. Though feathers have been found in only a few locations, it is possible that non-avian dinosaurs elsewhere in the world were also feathered. The lack of widespread fossil evidence for feathered non-avian dinosaurs may be because delicate features like skin and feathers are not often preserved by fossilization and thus are absent from the fossil record.
The description of feathered dinosaurs has not been without controversy; perhaps the most vocal critics have been Alan Feduccia and Theagarten Lingham-Soliar, who have proposed that some purported feather-like fossils are the result of the decomposition of collagenous fiber that underlaid the dinosaurs' skin, and that maniraptoran dinosaurs with vaned feathers were not actually dinosaurs, but convergent with dinosaurs. However, their views have for the most part not been accepted by other researchers, to the point that the scientific nature of Feduccia's proposals has been questioned.
In 2016, it was reported that a dinosaur tail with feathers had been found enclosed in amber. The fossil is about 99 million years old.
Because feathers are often associated with birds, feathered dinosaurs are often touted as the missing link between birds and dinosaurs. However, the multiple skeletal features also shared by the two groups represent another important line of evidence for paleontologists. Areas of the skeleton with important similarities include the neck, pubis, wrist (semi-lunate carpal), arm and pectoral girdle, furcula (wishbone), and breast bone. Comparison of bird and dinosaur skeletons through cladistic analysis strengthens the case for the link.
Large meat-eating dinosaurs had a complex system of air sacs similar to those found in modern birds, according to a 2005 investigation led by Patrick M. O'Connor. The lungs of theropod dinosaurs (carnivores that walked on two legs and had bird-like feet) likely pumped air into hollow sacs in their skeletons, as is the case in birds. "What was once formally considered unique to birds was present in some form in the ancestors of birds", O'Connor said. In 2008, scientists described "Aerosteon riocoloradensis", the skeleton of which supplies the strongest evidence to date of a dinosaur with a bird-like breathing system. CT scanning of "Aerosteon"'s fossil bones revealed evidence for the existence of air sacs within the animal's body cavity.
Fossils of the troodonts "Mei" and "Sinornithoides" demonstrate that some dinosaurs slept with their heads tucked under their arms. This behavior, which may have helped to keep the head warm, is also characteristic of modern birds. Several deinonychosaur and oviraptorosaur specimens have also been found preserved on top of their nests, likely brooding in a bird-like manner. The ratio between egg volume and body mass of adults among these dinosaurs suggests that the eggs were primarily brooded by the male, and that the young were highly precocial, similar to many modern ground-dwelling birds.
Some dinosaurs are known to have used gizzard stones like modern birds. These stones are swallowed by animals to aid digestion and break down food and hard fibers once they enter the stomach. When found in association with fossils, gizzard stones are called gastroliths.
The discovery that birds are a type of dinosaur showed that dinosaurs in general are not, in fact, extinct as is commonly stated. However, all non-avian dinosaurs, estimated to have been 628-1078 species, as well as many groups of birds did suddenly become extinct approximately 66 million years ago. It has been suggested that because small mammals, squamata and birds occupied the ecological niches suited for small body size, non-avian dinosaurs never evolved a diverse fauna of small-bodied species, which led to their downfall when large-bodied terrestrial tetrapods were hit by the mass extinction event. Many other groups of animals also became extinct at this time, including ammonites (nautilus-like mollusks), mosasaurs, plesiosaurs, pterosaurs, and many groups of mammals. Significantly, the insects suffered no discernible population loss, which left them available as food for other survivors. This mass extinction is known as the Cretaceous–Paleogene extinction event. The nature of the event that caused this mass extinction has been extensively studied since the 1970s; at present, several related theories are supported by paleontologists. Though the consensus is that an impact event was the primary cause of dinosaur extinction, some scientists cite other possible causes, or support the idea that a confluence of several factors was responsible for the sudden disappearance of dinosaurs from the fossil record.
The asteroid impact hypothesis, which was brought to wide attention in 1980 by Walter Alvarez and colleagues, links the extinction event at the end of the Cretaceous to a bolide impact approximately 66 million years ago. Alvarez "et al." proposed that a sudden increase in iridium levels, recorded around the world in the period's rock stratum, was direct evidence of the impact. The bulk of the evidence now suggests that a bolide hit in the vicinity of the Yucatán Peninsula (in southeastern Mexico), creating the Chicxulub crater and triggering the mass extinction. Scientists are not certain whether dinosaurs were thriving or declining before the impact event. Some scientists propose that the meteorite impact caused a long and unnatural drop in Earth's atmospheric temperature, while others claim that it would have instead created an unusual heat wave. The consensus among scientists who support this hypothesis is that the impact caused extinctions both directly (by heat from the meteorite impact) and also indirectly (via a worldwide cooling brought about when matter ejected from the impact crater reflected thermal radiation from the sun). Although the speed of extinction cannot be deduced from the fossil record alone, various models suggest that the extinction was extremely rapid, being down to hours rather than years. In 2019, scientists drilling into the seafloor off Mexico extracted a unique geologic record of what they believe to be the day a city-sized asteroid smashed into the planet.
Diamagnetism
Diamagnetic materials are repelled by a magnetic field; an applied magnetic field creates an induced magnetic field in them in the opposite direction, causing a repulsive force. In contrast, paramagnetic and ferromagnetic materials are attracted by a magnetic field. Diamagnetism is a quantum mechanical effect that occurs in all materials; when it is the only contribution to the magnetism, the material is called diamagnetic. In paramagnetic and ferromagnetic substances, the weak diamagnetic force is overcome by the attractive force of magnetic dipoles in the material. The magnetic permeability of diamagnetic materials is less than the permeability of vacuum, μ0. In most materials, diamagnetism is a weak effect which can only be detected by sensitive laboratory instruments, but a superconductor acts as a strong diamagnet because it repels a magnetic field entirely from its interior.
Diamagnetism was first discovered when Anton Brugmans observed in 1778 that bismuth was repelled by magnetic fields. In 1845, Michael Faraday demonstrated that it was a property of matter and concluded that every material responded (in either a diamagnetic or paramagnetic way) to an applied magnetic field. On a suggestion by William Whewell, Faraday first referred to the phenomenon as "diamagnetic" (the prefix "dia-" meaning "through" or "across"), then later changed it to "diamagnetism".
A simple rule of thumb is used in chemistry to determine whether a particle (atom, ion, or molecule) is paramagnetic or diamagnetic: If all electrons in the particle are paired, then the substance made of this particle is diamagnetic; If it has unpaired electrons, then the substance is paramagnetic.
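As a minimal sketch of this rule (Python, with illustrative valence molecular-orbital occupancies; determining the occupancies themselves requires working out the electron configuration and is not shown):

```python
def magnetic_character(orbital_occupancies):
    """Apply the pairing rule of thumb: any singly occupied orbital
    (one unpaired electron) makes the particle paramagnetic; if every
    occupied orbital holds a pair, it is diamagnetic."""
    unpaired = sum(1 for n in orbital_occupancies if n == 1)
    return "paramagnetic" if unpaired else "diamagnetic"

# Illustrative valence molecular-orbital occupancies:
# N2 -> all electrons paired; O2 -> two unpaired pi* electrons.
print(magnetic_character([2, 2, 2, 2, 2]))        # diamagnetic (N2)
print(magnetic_character([2, 2, 2, 2, 2, 1, 1]))  # paramagnetic (O2)
```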
Diamagnetism is a property of all materials, and always makes a weak contribution to the material's response to a magnetic field. However, other forms of magnetism (such as ferromagnetism or paramagnetism) are so much stronger that, when multiple different forms of magnetism are present in a material, the diamagnetic contribution is usually negligible. Substances where the diamagnetic behaviour is the strongest effect are termed diamagnetic materials, or diamagnets. Diamagnetic materials are those that some people generally think of as "non-magnetic", and include water, wood, most organic compounds such as petroleum and some plastics, and many metals including copper, particularly the heavy ones with many core electrons, such as mercury, gold and bismuth. The magnetic susceptibility values of various molecular fragments are called Pascal's constants.
Diamagnetic materials, like water, or water-based materials, have a relative magnetic permeability that is less than or equal to 1, and therefore a magnetic susceptibility less than or equal to 0, since susceptibility is defined as χv = μr − 1. This means that diamagnetic materials are repelled by magnetic fields. However, since diamagnetism is such a weak property, its effects are not observable in everyday life. For example, the magnetic susceptibility of diamagnets such as water is . The most strongly diamagnetic material is bismuth, , although pyrolytic carbon may have a susceptibility of in one plane. Nevertheless, these values are orders of magnitude smaller than the magnetism exhibited by paramagnets and ferromagnets. Note that because χv is derived from the ratio of the internal magnetic field to the applied field, it is a dimensionless value.
In rare cases, the diamagnetic contribution can be stronger than the paramagnetic contribution. This is the case for gold, which has a magnetic susceptibility less than 0 (and is thus by definition a diamagnetic material), but when measured carefully with X-ray magnetic circular dichroism, has an extremely weak paramagnetic contribution that is overcome by a stronger diamagnetic contribution.
Superconductors may be considered perfect diamagnets (χv = −1), because they expel all magnetic fields (except in a thin surface layer) due to the Meissner effect.
If a powerful magnet (such as a supermagnet) is covered with a layer of water (that is thin compared to the diameter of the magnet) then the field of the magnet significantly repels the water. This causes a slight dimple in the water's surface that may be seen by its reflection.
Diamagnets may be levitated in stable equilibrium in a magnetic field, with no power consumption. Earnshaw's theorem seems to preclude the possibility of static magnetic levitation. However, Earnshaw's theorem applies only to objects with positive susceptibilities, such as ferromagnets (which have a permanent positive moment) and paramagnets (which induce a positive moment). These are attracted to field maxima, which do not exist in free space. Diamagnets (which induce a negative moment) are attracted to field minima, and there can be a field minimum in free space.
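A rough sense of why such levitation demands very strong magnets comes from the usual force-balance estimate, (|χ|/μ0)·B·(dB/dz) = ρg. The sketch below uses approximate literature values for water and is intended only as an order-of-magnitude illustration:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T*m/A)

def required_B_dBdz(density, susceptibility, g=9.81):
    """Product B*dB/dz (T^2/m) needed for diamagnetic levitation,
    from the force balance (|chi|/mu0) * B * dB/dz = rho * g."""
    return MU0 * density * g / abs(susceptibility)

# Water: rho ~ 1000 kg/m^3, chi ~ -9e-6 (approximate, dimensionless SI)
print(f"{required_B_dBdz(1000, -9e-6):.0f} T^2/m")  # roughly 1.4e3 T^2/m
```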
A thin slice of pyrolytic graphite, which is an unusually strong diamagnetic material, can be stably floated in a magnetic field, such as that from rare earth permanent magnets. This can be done with all components at room temperature, making a visually effective demonstration of diamagnetism.
The Radboud University Nijmegen, the Netherlands, has conducted experiments where water and other substances were successfully levitated. Most spectacularly, a live frog (see figure) was levitated.
In September 2009, NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California announced it had successfully levitated mice using a superconducting magnet, an important step forward since mice are closer biologically to humans than frogs. JPL said it hopes to perform experiments regarding the effects of microgravity on bone and muscle mass.
Recent experiments studying the growth of protein crystals have led to a technique using powerful magnets to allow growth in ways that counteract Earth's gravity.
A simple homemade device for demonstration can be constructed out of bismuth plates and a few permanent magnets that levitate a permanent magnet.
The electrons in a material generally settle in orbitals, where they circulate with effectively zero resistance and act like current loops. Thus it might be imagined that diamagnetism effects in general would be common, since any applied magnetic field would generate currents in these loops that would oppose the change, in a similar way to superconductors, which are essentially perfect diamagnets. However, since the electrons are rigidly held in orbitals by the charge of the protons and are further constrained by the Pauli exclusion principle, many materials exhibit diamagnetism, but typically respond very little to the applied field.
The Bohr–van Leeuwen theorem proves that there cannot be any diamagnetism or paramagnetism in a purely classical system. However, the classical theory of Langevin for diamagnetism gives the same prediction as the quantum theory. The classical theory is given below.
Paul Langevin's theory of diamagnetism (1905) applies to materials containing atoms with closed shells (see dielectrics). A field with intensity B, applied to an electron with charge e and mass m, gives rise to Larmor precession with frequency ω = eB / 2m. The number of revolutions per unit time is ω / 2π, so the current for an atom with Z electrons is (in SI units) I = −Ze²B / 4πm.
The magnetic moment of a current loop is equal to the current times the area of the loop. Suppose the field is aligned with the z axis. The average loop area can be given as π⟨ρ²⟩, where ⟨ρ²⟩ is the mean square distance of the electrons perpendicular to the z axis. The magnetic moment is therefore μ = −Ze²B⟨ρ²⟩ / 4m.
If the distribution of charge is spherically symmetric, we can suppose that the distribution of x, y, z coordinates are independent and identically distributed. Then ⟨x²⟩ = ⟨y²⟩ = ⟨z²⟩ = ⅓⟨r²⟩, so ⟨ρ²⟩ = ⟨x²⟩ + ⟨y²⟩ = ⅔⟨r²⟩, where ⟨r²⟩ is the mean square distance of the electrons from the nucleus. Therefore, μ = −Ze²B⟨r²⟩ / 6m. If n is the number of atoms per unit volume, the volume diamagnetic susceptibility in SI units is χ = μ₀nμ / B = −μ₀nZe²⟨r²⟩ / 6m.
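A quick numerical check of this Langevin result, using assumed order-of-magnitude inputs rather than data for any particular element, reproduces the 10⁻⁵–10⁻⁶ susceptibility magnitudes typical of diamagnets:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability (T*m/A)
E_CHARGE = 1.602e-19       # elementary charge (C)
M_E = 9.109e-31            # electron mass (kg)

def langevin_chi(n, Z, r2_mean):
    """Volume susceptibility chi = -mu0 * n * Z * e^2 * <r^2> / (6 m_e)."""
    return -MU0 * n * Z * E_CHARGE**2 * r2_mean / (6 * M_E)

# Assumed inputs: 3e28 atoms/m^3, Z = 10 electrons, <r^2> = (1 Angstrom)^2
print(langevin_chi(3e28, 10, (1e-10) ** 2))  # about -1.8e-5
```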
The Langevin theory is not the full picture for metals because there are also non-localized electrons. The theory that describes diamagnetism in a free electron gas is called Landau diamagnetism, named after Lev Landau, and instead considers the weak counteracting field that forms when the electrons' trajectories are curved due to the Lorentz force. Landau diamagnetism, however, should be contrasted with Pauli paramagnetism, an effect associated with the polarization of delocalized electrons' spins. For the bulk case of a 3D system and low magnetic fields, the (volume) diamagnetic susceptibility can be calculated using Landau quantization, which in SI units is χ = −(μ₀e² / 12π²mħ)√(2mE_F),
where E_F is the Fermi energy. This is equivalent to −μ₀μ_B²g(E_F) / 3, exactly −1/3 times the Pauli paramagnetic susceptibility, where μ_B is the Bohr magneton and g(E_F) is the density of states (number of states per energy per volume). This formula takes into account the spin degeneracy of the carriers (spin ½ electrons).
In doped semiconductors the ratio between Landau and Pauli susceptibilities may change due to the effective mass of the charge carriers differing from the electron mass in vacuum, increasing the diamagnetic contribution. The formula presented here only applies for the bulk; in confined systems like quantum dots, the description is altered due to quantum confinement. Additionally, for strong magnetic fields, the susceptibility of delocalized electrons oscillates as a function of the field strength, a phenomenon known as the de Haas–van Alphen effect, also first described theoretically by Landau.
Duke of Marlborough (title)
Duke of Marlborough (pronounced ) is a title in the Peerage of England. It was created by Queen Anne in 1702 for John Churchill, 1st Earl of Marlborough (1650–1722), the noted military leader. In historical texts, it is often to him that an unqualified use of the title refers. The name of the dukedom refers to Marlborough in Wiltshire.
The earldom of Marlborough was held by the family of Ley from its creation in 1626 until its extinction with the death of the 4th earl in 1679. The title was recreated 10 years later for John Churchill (in 1689).
Churchill had been made "Lord Churchill of Eyemouth" (1682) in the Peerage of Scotland, and "Baron Churchill" of Sandridge (1685) and "Earl of Marlborough" (1689) in the Peerage of England. Shortly after her accession to the throne in 1702, Queen Anne made Churchill the first "Duke of Marlborough" and granted him the subsidiary title "Marquess of Blandford".
In 1678, Churchill married Sarah Jennings (1660–1744), a courtier and influential favourite of the queen. They had seven children, of whom four daughters married into some of the most important families in Great Britain; one daughter and one son died in infancy. He was pre-deceased by his son, John Churchill, Marquess of Blandford, in 1703; so, to prevent the extinction of the titles, a special Act of Parliament was passed. When the 1st Duke of Marlborough died in 1722 his title as "Lord Churchill of Eyemouth" in the Peerage of Scotland became extinct and the Marlborough titles passed, according to the Act, to his eldest daughter Henrietta (1681–1733), the 2nd Duchess of Marlborough. She was married to the 2nd Earl of Godolphin and had a son who predeceased her.
When Henrietta died in 1733, the Marlborough titles passed to her nephew Charles Spencer (1706–1758), the third son of her late sister Anne (1683–1716), who had married the 3rd Earl of Sunderland in 1699. After his older brother's death in 1729, Charles Spencer had already inherited the Spencer family estates and the titles of "Earl of Sunderland" (1643) and "Baron Spencer" of Wormleighton (1603), all in the Peerage of England. Upon his maternal aunt Henrietta's death in 1733, Charles Spencer succeeded to the Marlborough family estates and titles and became the 3rd Duke. When he died in 1758, his titles passed to his eldest son George (1739–1817), who was succeeded by his eldest son George, the 5th Duke (1766–1840). In 1815, Francis Spencer (the younger son of the 4th Duke) was created "Baron Churchill" in the Peerage of the United Kingdom. In 1902, his grandson, the 3rd Baron Churchill, was created Viscount Churchill.
In 1817, the 5th Duke obtained permission to assume and bear the surname of Churchill in addition to his surname of Spencer, to perpetuate the name of his illustrious great-great-grandfather. At the same time he received Royal Licence to quarter the coat of arms of Churchill with his paternal arms of Spencer. The modern Dukes thus originally bore the surname "Spencer": the double-barrelled surname of "Spencer-Churchill" as used since 1817 remains in the family, although many members have preferred to style themselves simply as "Churchill".
The 7th Duke was the paternal grandfather of the British Prime Minister Sir Winston Churchill, born at Blenheim Palace on 30 November 1874.
The 11th Duke, John Spencer-Churchill died in 2014, having assumed the title in 1972. The 12th and present Duke is Charles James Spencer-Churchill.
The family seat is Blenheim Palace in Woodstock, Oxfordshire.
After his leadership in the victory against the French in the Battle of Blenheim on 13 August 1704, the 1st Duke was honoured by Queen Anne granting him the royal manor of Woodstock, and building him a house at her expense to be called Blenheim. Construction started in 1705 and the house was completed in 1722, the year of the 1st Duke's death. Blenheim Palace has since remained in the Churchill and Spencer-Churchill family.
With the exception of the 10th Duke and his first wife, the Dukes and Duchesses of Marlborough are buried in Blenheim Palace's chapel. Most other members of the Spencer-Churchill family are interred in St. Martin's parish churchyard at Bladon, a short distance from the palace.
The dukedom can theoretically pass through a female line. However, unlike the remainder to heirs general found in most other peerages that allow male-preference primogeniture, the grant does not allow for abeyance and follows a more restrictive Semi-Salic formula designed to keep succession wherever possible in the male line. The succession is as follows:
Succession to the title under the first and second contingencies have lapsed; holders of the title from the 3rd Duke trace their status from the third contingency.
It is now very unlikely that the dukedom will be passed to a woman or through a woman, since all the male-line descendants of the 1st Duke's second daughter Anne Spencer, Countess of Sunderland—including the lines of the Viscounts Churchill and Barons Churchill of Wychwood and of the Earl Spencer and of the entire Spencer-Churchill and Spencer family—would have to become extinct.
If that were to happen, the Churchill titles would pass to the Earl of Jersey (and merge with the earldom as long as it is extant), the heir-male of the 1st Duke's granddaughter Anne Villiers (born Egerton), Countess of Jersey, daughter of Elizabeth Egerton, Duchess of Bridgewater, the third daughter of the first Duke.
The next heir would be the Duke of Buccleuch, the heir-male of the 1st Duke's great-granddaughter Elizabeth Montagu, Duchess of Buccleuch, the daughter of Mary Montagu, Duchess of Montagu (1766 creation), the daughter of the 1st Duke's youngest daughter Mary, Duchess of Montagu (1705 creation).
The fourth surviving line is represented by the Earl of Chichester and his family, the heir-male of the 1st Duke's most senior great-great-granddaughter Mary Henrietta Osborne, Countess of Chichester, daughter of Francis Osborne, 5th Duke of Leeds, only child of Mary Godolphin, Duchess of Leeds, daughter of the 1st Duke's eldest daughter Henrietta Godolphin, 2nd Duchess of Marlborough, by her husband Francis Godolphin, 2nd Earl of Godolphin.
The Duke holds subsidiary titles: "Marquess of Blandford" (created in 1702 for John Churchill), "Earl of Sunderland" (created in 1643 for the Spencer family), "Earl of Marlborough" (created in 1689 for John Churchill), "Baron Spencer" of Wormleighton (created in 1603 for the Spencer family), and "Baron Churchill" of Sandridge (created in 1685 for John Churchill), all in the Peerage of England.
The title "Marquess of Blandford" is used as the courtesy title for the Duke's eldest son and heir. The Duke's eldest son's eldest son can use the courtesy title "Earl of Sunderland", and the duke's eldest son's eldest son's eldest son (not necessarily the eldest great-grandson) the title "Lord Spencer of Wormleighton" (not to be confused with Earl Spencer).
The title of "Earl of Marlborough", created for John Churchill in 1689, had previously been created for James Ley, in 1626, becoming extinct in 1679.
The 1st Duke was honoured with land and titles in the Holy Roman Empire: Emperor Leopold I created him a Prince in 1704, and in 1705, his successor Emperor Joseph I gave him the principality of Mindelheim (once the lordship of the noted soldier Georg von Frundsberg). He was obliged to surrender Mindelheim in 1714 by the Treaty of Utrecht, which returned it to Bavaria. He tried to obtain Nellenburg in Austria in exchange, which at that time was only a county ('Landgrafschaft'), but this failed, partially because Austrian law did not allow for Nellenburg to be converted into a sovereign principality. The 1st Duke's princely title of Mindelheim became extinct either on the return of the land to Bavaria or on his death, as the Empire operated Salic Law, which prevented female succession.
The original arms of Sir Winston Churchill (1620–1688), father of the 1st Duke of Marlborough, were simple and in use by his own father in 1619. The shield was Sable a lion rampant Argent, debruised by a bendlet Gules. The addition of a canton of Saint George (see below) rendered the distinguishing mark of the bendlet unnecessary.
The Churchill crest is blazoned as a lion couchant guardant Argent, supporting with its dexter forepaw a banner Gules, charged with a dexter hand appaumée of the first, staff Or.
In recognition of Sir Winston's services to King Charles I as Captain of the Horse, and his loyalty to King Charles II as a Member of Parliament, he was awarded an augmentation of honour to his arms around 1662. This rare mark of royal favour took the form of a canton of Saint George. At the same time, he was authorised to omit the bendlet, which had served the purpose of distinguishing this branch of the Churchill family from others which bore an undifferenced lion.
Sir Winston's shield and crest were inherited by his son John Churchill, 1st Duke of Marlborough. Minor modifications reflected the bearer's social rise: the helm was now shown in profile and had a closed grille to signify the bearer's rank as a peer, and there were now supporters placed on either side of the shield. They were the mythical Griffin (part lion, part eagle) and Wyvern (a dragon without hind legs). The supporters were derived from the arms of the family of the 1st Duke's mother, Drake of Ash (Argent, a wyvern gules; these arms can be seen on the monument in Musbury Church to Sir Bernard Drake, d.1586).
The motto was "Fiel pero desdichado" (Spanish for "Faithful but unfortunate"). The 1st Duke was also entitled to a coronet indicating his rank.
When the 1st Duke was made a Prince of the Holy Roman Empire in 1705, two unusual features were added: the Imperial Eagle and a Princely Coronet. His estates in Germany, such as Mindelheim, were represented in his arms by additional quarterings.
In 1817, the 5th Duke received Royal Licence to place the quarter of Churchill ahead of his paternal arms of Spencer. The shield of the Spencer family arms is: quarterly Argent and Gules, in the second and third quarters a fret Or, over all on a bend Sable three escallops of the first. The Spencer crest is: out of a ducal coronet Or, a griffin's head between two wings expanded Argent, gorged with a collar gemel and armed Gules. Paul Courtenay observes that "It would be normal in these circumstances for the paternal arms (Spencer) to take precedence over the maternal (Churchill), but because the Marlborough dukedom was senior to the Sunderland earldom, the procedure was reversed in this case."
Also in 1817, a further augmentation of honour was added to his armorial achievement. This incorporated the bearings from the standard of the Manor of Woodstock and was borne on an escutcheon, displayed over all in the centre chief point, as follows: Argent a cross of Saint George surmounted by an inescutcheon Azure, charged with three fleurs-de-lys Or, two over one. This inescutcheon represents the royal arms of France.
These quartered arms, incorporating the two augmentations of honour, have been the arms of all subsequent Dukes of Marlborough.
The motto "Fiel pero desdichado" is Spanish for "Faithful though Joyless". "Desdichado" means without happiness or without joy, alluding to the first Duke's father, Winston, who was a royalist and faithful supporter of the king during the English Civil War but was not compensated for his losses after the restoration. Charles II knighted Winston Churchill and other Civil War royalists but did not compensate them for their wartime losses, thereby inducing Winston to adopt the motto. It is unusual for the motto of an Englishman of the era to be in Spanish rather than Latin, and it is not known why this is the case.
The earldom of Marlborough was held by the family of Ley from 1626 to 1679. James Ley, the 1st Earl (c. 1550 – 1629), was lord chief justice of the King’s Bench in Ireland and then in England; he was an English member of parliament and was lord high treasurer from 1624 to 1628. In 1624 he was created Baron Ley and in 1626 Earl of Marlborough. The 3rd earl was his grandson James (1618–1665), a naval officer who was killed in action with the Dutch. James was succeeded by his uncle William, a younger son of the 1st earl, on whose death in 1679 the earldom became extinct.
The heir apparent to the dukedom is George John Godolphin Spencer-Churchill, Marquess of Blandford (b. 1992), eldest son of the 12th Duke.
Difference engine
A difference engine, first created by Charles Babbage, is an automatic mechanical calculator designed to tabulate polynomial functions. Its name is derived from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients. Most mathematical functions commonly used by engineers, scientists and navigators, including logarithmic and trigonometric functions, can be approximated by polynomials, so a difference engine can compute many useful tables of numbers.
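To make the method concrete, here is a minimal Python sketch (illustrative only, not a model of Babbage's hardware): once a short difference table is seeded from a few initial values of a polynomial, every further table entry follows by repeated addition alone, which is exactly the operation the engine mechanized.

```python
def difference_table(values, order):
    """Initial column of a difference table: f(0) and its forward
    differences up to the given order, from the first order+1 values."""
    col, diffs = list(values[: order + 1]), []
    for _ in range(order + 1):
        diffs.append(col[0])
        col = [b - a for a, b in zip(col, col[1:])]
    return diffs

def tabulate(diffs, count):
    """Extend the table by addition only, as a difference engine does:
    each step adds every difference into the entry above it."""
    d, out = list(diffs), []
    for _ in range(count):
        out.append(d[0])
        for i in range(len(d) - 1):
            d[i] += d[i + 1]
    return out

# Example: f(x) = x**2 + 4, whose second-order differences are constant.
f = lambda x: x * x + 4
seed = difference_table([f(x) for x in range(3)], order=2)   # [4, 1, 2]
print(tabulate(seed, 8))   # [4, 5, 8, 13, 20, 29, 40, 53]
```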
The historical difficulty in producing error-free tables by teams of mathematicians and human "computers" spurred Charles Babbage's desire to build a mechanism to automate the process.
The notion of a mechanical calculator for mathematical functions can be traced back to the Antikythera mechanism of the 2nd century BC, while early modern examples are attributed to Pascal and Leibniz in the 17th century.
In 1784 J. H. Müller, an engineer in the Hessian army, devised and built an adding machine and described the basic principles of a difference machine in a book published in 1786 (the first written reference to a difference machine is dated to 1784), but he was unable to obtain funding to progress with the idea.
Charles Babbage began to construct a small difference engine in c. 1819 and had completed it by 1822 (Difference Engine 0). He announced his invention on 14 June 1822, in a paper to the Royal Astronomical Society, entitled "Note on the application of machinery to the computation of astronomical and mathematical tables". This machine used the decimal number system and was powered by cranking a handle. The British government was interested, since producing tables was time-consuming and expensive and they hoped the difference engine would make the task more economical.
In 1823, the British government gave Babbage £1700 to start work on the project. Although Babbage's design was feasible, the metalworking techniques of the era could not economically make parts in the precision and quantity required. Thus the implementation proved to be much more expensive and doubtful of success than the government's initial estimate. In 1832, Babbage and Joseph Clement produced a small working model (one-seventh of the calculating section of Difference Engine No. 1, which was intended to operate on 20-digit numbers and sixth-order differences) which operated on 6-digit numbers and second-order differences. Lady Byron described seeing the working prototype in 1833: "We both went to see the thinking machine (or so it seems) last Monday. It raised several Nos. to the 2nd and 3rd powers, and extracted the root of a Quadratic equation." Work on the larger engine was suspended in 1833.
By the time the government abandoned the project in 1842, Babbage had received and spent over £17,000 on development, which still fell short of achieving a working engine. The government valued only the machine's output (economically produced tables), not the development (at unknown and unpredictable cost to complete) of the machine itself. Babbage did not, or was unwilling to, recognize that predicament. Meanwhile, Babbage's attention had moved on to developing an analytical engine, further undermining the government's confidence in the eventual success of the difference engine. By improving the concept as an analytical engine, Babbage had made the difference engine concept obsolete, and the project to implement it an utter failure in the view of the government.
The incomplete Difference Engine No. 1 was put on display to the public at the 1862 International Exhibition in South Kensington, London.
Babbage went on to design his much more general analytical engine, but later produced an improved "Difference Engine No. 2" design (31-digit numbers and seventh-order differences), between 1846 and 1849. Babbage was able to take advantage of ideas developed for the analytical engine to make the new difference engine calculate more quickly while using fewer parts.
Inspired by Babbage's difference engine in 1834, Per Georg Scheutz built several experimental models. In 1837 his son Edward proposed to construct a working model in metal, and in 1840 finished the calculating part, capable of calculating series with 5-digit numbers and first-order differences, which was later extended to third-order (1842). In 1843, after adding the printing part, the model was completed.
In 1851, funded by the government, construction of the larger and improved (15-digit numbers and fourth-order differences) machine began, and finished in 1853. The machine was demonstrated at the World's Fair in Paris, 1855 and then sold in 1856 to the Dudley Observatory in Albany, New York. Delivered in 1857, it was the first printing calculator sold. In 1857 the British government ordered the next of Scheutz's difference machines, which was built in 1859. It had the same basic construction as the previous one, weighing about .
Martin Wiberg improved Scheutz's construction (c. 1859, his machine has the same capacity as Scheutz's - 15-digit and fourth-order) but used his device only for producing and publishing printed tables (interest tables in 1860, and logarithmic tables in 1875).
Alfred Deacon of London in c. 1862 produced a small difference engine (20-digit numbers and third-order differences).
American George B. Grant started working on his calculating machine in 1869, unaware of the works of Babbage and Scheutz. One year later (1870) he learned about difference engines and proceeded to design one himself, describing his construction in 1871. In 1874 the Boston Thursday Club raised a subscription for the construction of a large-scale model, which was built in 1876. It could be expanded to enhance precision and weighed about .
Christel Hamann built one machine (16-digit numbers and second-order differences) in 1909 for the "Tables of Bauschinger and Peters" ("Logarithmic-Trigonometrical Tables with eight decimal places"), which was first published in Leipzig in 1910. It weighed about .
Burroughs Corporation in about 1912 built a machine for the Nautical Almanac Office which was used as a difference engine of second-order. It was later replaced in 1929 by a Burroughs Class 11 (13-digit numbers and second-order differences, or 11-digit numbers and [at least up to] fifth-order differences).
Alexander John Thompson about 1927 built an "integrating and differencing machine" (13-digit numbers and fifth-order differences) for his table of logarithms "Logarithmetica britannica". This machine was composed of four modified Triumphator calculators.
Leslie Comrie in 1928 described how to use the Brunsviga-Dupla calculating machine as a difference engine of second-order (15-digit numbers). He also noted in 1931 that National Accounting Machine Class 3000 could be used as a difference engine of sixth-order.
During the 1980s, Allan G. Bromley, an associate professor at the University of Sydney, Australia, studied Babbage's original drawings for the Difference and Analytical Engines at the Science Museum library in London. This work led the Science Museum to construct a working calculating section of difference engine No. 2 from 1985 to 1991, under Doron Swade, the then Curator of Computing. This was to celebrate the 200th anniversary of Babbage's birth in 1991. In 2002, the printer which Babbage originally designed for the difference engine was also completed. The conversion of the original design drawings into drawings suitable for engineering manufacturers' use revealed some minor errors in Babbage's design (possibly introduced as a protection in case the plans were stolen), which had to be corrected. Once completed, both the engine and its printer worked flawlessly, and still do. The difference engine and printer were constructed to tolerances achievable with 19th-century technology, resolving a long-standing debate as to whether Babbage's design would have worked. (One of the reasons formerly advanced for the non-completion of Babbage's engines had been that engineering methods were insufficiently developed in the Victorian era.)
The printer's primary purpose is to produce stereotype plates for use in printing presses, which it does by pressing type into soft plaster to create a flong. Babbage intended that the Engine's results be conveyed directly to mass printing, having recognized that many errors in previous tables were not the result of human calculating mistakes but from error in the manual typesetting process. The printer's paper output is mainly a means of checking the engine's performance.
In addition to funding the construction of the output mechanism for the Science Museum's difference engine, Nathan Myhrvold commissioned the construction of a second complete Difference Engine No. 2, which was on exhibit at the Computer History Museum in Mountain View, California from 10 May 2008 until 31 January 2016.
Divergence
In vector calculus, divergence is a vector operator that operates on a vector field, producing a scalar field giving the quantity of the vector field's source at each point. More technically, the divergence represents the volume density of the outward flux of a vector field from an infinitesimal volume around a given point.
As an example, consider air as it is heated or cooled. The velocity of the air at each point defines a vector field. While air is heated in a region, it expands in all directions, and thus the velocity field points outward from that region. The divergence of the velocity field in that region would thus have a positive value. While the air is cooled and thus contracting, the divergence of the velocity has a negative value.
In physical terms, the divergence of a vector field is the extent to which the vector field flux behaves like a source at a given point. It is a local measure of its "outgoingness" – the extent to which there is more of the field vectors exiting an infinitesimal region of space than entering it. A point at which the flux is outgoing has positive divergence, and is often called a "source" of the field. A point at which the flux is directed inward has negative divergence, and is often called a "sink" of the field. The greater the flux of field through a small surface enclosing a given point, the greater the value of divergence at that point. A point at which there is zero flux through an enclosing surface has zero divergence.
The divergence of a vector field is often illustrated using the example of the velocity field of a fluid, a liquid or gas. A moving gas has a velocity, a speed and direction, at each point which can be represented by a vector, so the velocity of the gas forms a vector field. If a gas is heated, it will expand. This will cause a net motion of gas particles outward in all directions. Any closed surface in the gas will enclose gas which is expanding, so there will be an outward flux of gas through the surface. So the velocity field will have positive divergence everywhere. Similarly, if the gas is cooled, it will contract. There will be more room for gas particles in any volume, so the external pressure of the fluid will cause a net flow of gas volume inward through any closed surface. Therefore the velocity field has negative divergence everywhere. In contrast in an unheated gas with a constant density, the gas may be moving, but the volume rate of gas flowing into any closed surface must equal the volume rate flowing out, so the "net" flux of fluid through any closed surface is zero. Thus the gas velocity has zero divergence everywhere. A field which has zero divergence everywhere is called solenoidal.
If the fluid is heated only at one point or small region, or a small tube is introduced which supplies a source of additional fluid at one point, the fluid there will expand, pushing fluid particles around it outward in all directions. This will cause an outward velocity field throughout the fluid, centered on the heated point. Any closed surface enclosing the heated point will have a flux of fluid particles passing out of it, so there is positive divergence at that point. However any closed surface "not" enclosing the point will have a constant density of fluid inside, so just as many fluid particles are entering as leaving the volume, thus the net flux out of the volume is zero. Therefore the divergence at any other point is zero.
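This source/sink picture can be checked numerically. The following sketch (Python with NumPy; the example fields are arbitrary choices, not anything from the text above) approximates the divergence of a radially expanding field and of a rigid rotation on a grid:

```python
import numpy as np

# A radially expanding "velocity field" v = (x, y) has constant positive
# divergence 2 (a source everywhere); a rigid rotation v = (-y, x) has
# zero divergence everywhere (solenoidal).
x, y = np.meshgrid(np.linspace(-1, 1, 101), np.linspace(-1, 1, 101),
                   indexing="ij")
h = x[1, 0] - x[0, 0]  # grid spacing (same in both directions here)

def divergence_2d(vx, vy, spacing):
    """div v = d(vx)/dx + d(vy)/dy via centered finite differences."""
    return np.gradient(vx, spacing, axis=0) + np.gradient(vy, spacing, axis=1)

print(divergence_2d(x, y, h)[50, 50])    # ~2.0 (a source)
print(divergence_2d(-y, x, h)[50, 50])   # ~0.0 (solenoidal)
```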
The divergence of a vector field F(x) at a point x₀ is defined as the limit of the ratio of the surface integral of F out of the closed surface of a volume V enclosing x₀ to the volume of V, as V shrinks to zero:
div F|ₓ₀ = lim_{V→0} (1/|V|) ∮_{S(V)} F · n̂ dS,
where |V| is the volume of V, S(V) is the boundary of V, and n̂ is the outward unit normal to that surface. It can be shown that the above limit always converges to the same value for any sequence of volumes that contain x₀ and approach zero volume. The result, div F, is a scalar function of x₀.
Since this definition is coordinate-free, it shows that the divergence is the same in any coordinate system. However it is not often used practically to calculate divergence; when the vector field is given in a coordinate system the coordinate definitions below are much simpler to use.
A vector field with zero divergence everywhere is called "solenoidal" – in which case any closed surface has no net flux across it.
In three-dimensional Cartesian coordinates, the divergence of a continuously differentiable vector field F = Fₓ i + F_y j + F_z k is defined as the scalar-valued function:
div F = ∇ · F = ∂Fₓ/∂x + ∂F_y/∂y + ∂F_z/∂z.
Although expressed in terms of coordinates, the result is invariant under rotations, as the physical interpretation suggests. This is because the trace of the Jacobian matrix of an -dimensional vector field in -dimensional space is invariant under any invertible linear transformation.
The common notation for the divergence is a convenient mnemonic, where the dot denotes an operation reminiscent of the dot product: take the components of the operator (see del), apply them to the corresponding components of , and sum the results. Because applying an operator is different from multiplying the components, this is considered an abuse of notation.
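As a concrete illustration of the Cartesian formula, the short SymPy sketch below (the example fields are arbitrary) evaluates the divergence term by term:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

def div(F):
    """Divergence of a 3-component field F = (Fx, Fy, Fz) in Cartesian
    coordinates: dFx/dx + dFy/dy + dFz/dz."""
    return sum(sp.diff(comp, var) for comp, var in zip(F, (x, y, z)))

# F = (x**2, y**2, z**2): div F = 2x + 2y + 2z
print(div((x**2, y**2, z**2)))
# A solenoidal example, F = (-y, x, 0): div F = 0
print(div((-y, x, sp.Integer(0))))
```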
For a vector expressed in local unit cylindrical coordinates as F = F_r e_r + F_θ e_θ + F_z e_z,
where e_a is the unit vector in direction a, the divergence is
div F = ∇ · F = (1/r) ∂(r F_r)/∂r + (1/r) ∂F_θ/∂θ + ∂F_z/∂z.
The use of local coordinates is vital for the validity of the expression. If we consider the position vector x and the functions r(x), θ(x), and z(x), which assign the corresponding global cylindrical coordinate to a vector, in general r(F(x)) ≠ F_r(x), θ(F(x)) ≠ F_θ(x), and z(F(x)) ≠ F_z(x). In particular, if we consider the identity function F(x) = x, we find that:
$$\theta(\mathbf{x}) \neq F_\theta(\mathbf{x}) = 0.$$
In spherical coordinates, with θ the angle with the z axis and φ the rotation around the z axis, and F again written in local unit coordinates, the divergence is
$$\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 F_r\right) + \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\, F_\theta\right) + \frac{1}{r\sin\theta}\frac{\partial F_\varphi}{\partial\varphi}.$$
Let A be a continuously differentiable second-order tensor field, written in a Cartesian basis as follows:
$$\mathbf{A} = A_{ij}\,\mathbf{e}_i \otimes \mathbf{e}_j.$$
The divergence of A in a Cartesian coordinate system is a first-order tensor field and can be defined in two ways:
and
We have
Note that if the tensor A is symmetric, the two definitions coincide; as a consequence, in the literature these two definitions (and the corresponding symbols div A and ∇·A) are often switched and used interchangeably (especially in mechanics equations where tensor symmetry is assumed).
Expressions of ∇·A in cylindrical and spherical coordinates are given in the article del in cylindrical and spherical coordinates.
Using Einstein notation we can consider the divergence in general coordinates, which we write as x¹, …, xⁱ, …, xⁿ, where n is the number of dimensions of the domain. Here, the upper index refers to the number of the coordinate or component, so x² refers to the second component, and not the quantity x squared. The index variable i is used to refer to an arbitrary component, such as xⁱ. The divergence can then be written via the Voss–Weyl formula, as:
$$\operatorname{div}(\mathbf{F}) = \frac{1}{\rho}\frac{\partial\left(\rho\, F^i\right)}{\partial x^i},$$
where ρ is the local coefficient of the volume element and Fⁱ are the components of F with respect to the local unnormalized covariant basis (sometimes written as e_i = ∂x/∂xⁱ). The Einstein notation implies summation over i, since it appears as both an upper and lower index.
The volume coefficient ρ is a function of position which depends on the coordinate system. In Cartesian, cylindrical and spherical coordinates, using the same conventions as before, we have ρ = 1, ρ = r and ρ = r² sin θ, respectively. It can also be expressed as ρ = √|det g_ab|, where g_ab is the metric tensor. Since the determinant is a scalar quantity which doesn't depend on the indices, we can suppress them and simply write ρ = √|det g|. Another expression comes from computing the determinant of the Jacobian for transforming from Cartesian coordinates, which for n = 3 gives
$$\rho = \left| \frac{\partial(x, y, z)}{\partial(x^1, x^2, x^3)} \right|.$$
Some conventions expect all local basis elements to be normalized to unit length, as was done in the previous sections. If we write ê_i for the normalized basis, and F̂ⁱ for the components of F with respect to it, we have that
$$\mathbf{F} = F^i \mathbf{e}_i = F^i \lVert \mathbf{e}_i \rVert \hat{\mathbf{e}}_i = F^i \sqrt{g_{ii}}\, \hat{\mathbf{e}}_i,$$
using one of the properties of the metric tensor. By dotting both sides of the last equality with the contravariant element ê^i, we can conclude that Fⁱ = F̂ⁱ / √(g_ii). After substituting, the formula becomes:
$$\operatorname{div}(\mathbf{F}) = \frac{1}{\rho}\frac{\partial}{\partial x^i}\!\left(\frac{\rho}{\sqrt{g_{ii}}}\,\hat{F}^i\right).$$
See "" for further discussion.
The following properties can all be derived from the ordinary differentiation rules of calculus. Most importantly, the divergence is a linear operator, i.e.,
$$\operatorname{div}(a\mathbf{F} + b\mathbf{G}) = a\operatorname{div}\mathbf{F} + b\operatorname{div}\mathbf{G}$$
for all vector fields F and G and all real numbers a and b.
There is a product rule of the following type: if φ is a scalar-valued function and F is a vector field, then
$$\operatorname{div}(\varphi\mathbf{F}) = \operatorname{grad}\varphi \cdot \mathbf{F} + \varphi\operatorname{div}\mathbf{F},$$
or in more suggestive notation
$$\nabla\cdot(\varphi\mathbf{F}) = (\nabla\varphi)\cdot\mathbf{F} + \varphi\,(\nabla\cdot\mathbf{F}).$$
Another product rule for the cross product of two vector fields F and G in three dimensions involves the curl and reads as follows:
$$\operatorname{div}(\mathbf{F}\times\mathbf{G}) = \operatorname{curl}\mathbf{F}\cdot\mathbf{G} - \mathbf{F}\cdot\operatorname{curl}\mathbf{G},$$
or
$$\nabla\cdot(\mathbf{F}\times\mathbf{G}) = (\nabla\times\mathbf{F})\cdot\mathbf{G} - \mathbf{F}\cdot(\nabla\times\mathbf{G}).$$
The Laplacian of a scalar field is the divergence of the field's gradient:
$$\operatorname{div}(\nabla\varphi) = \Delta\varphi.$$
The divergence of the curl of any vector field F (in three dimensions) is equal to zero:
$$\nabla\cdot(\nabla\times\mathbf{F}) = 0.$$
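As an aside (added here, not part of the original article), these identities are easy to verify symbolically. The sketch below uses SymPy's vector module; the polynomial field F and the scalar φ are arbitrary examples chosen only for illustration.

```python
from sympy import simplify
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')                      # Cartesian system with base vectors i, j, k
x, y, z = N.x, N.y, N.z

# An arbitrary smooth vector field, chosen only for illustration.
F = (x * y * z) * N.i + (x + y**2) * N.j + (z**3 - x) * N.k

print(simplify(divergence(curl(F))))     # 0: div(curl F) vanishes identically

phi = x**2 * y + z                       # arbitrary scalar field
lhs = divergence(phi * F)                # div(phi F)
rhs = gradient(phi).dot(F) + phi * divergence(F)
print(simplify(lhs - rhs))               # 0: the product rule holds
```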
If a vector field F with zero divergence is defined on a ball in R³, then there exists some vector field G on the ball with F = curl G. For regions in R³ more topologically complicated than this, the latter statement might be false (see Poincaré lemma). The degree of "failure" of the truth of the statement, measured by the homology of the chain complex
$$\{\text{scalar fields on } U\} \xrightarrow{\operatorname{grad}} \{\text{vector fields on } U\} \xrightarrow{\operatorname{curl}} \{\text{vector fields on } U\} \xrightarrow{\operatorname{div}} \{\text{scalar fields on } U\}$$
serves as a nice quantification of the complicatedness of the underlying region U. These are the beginnings and main motivations of de Rham cohomology.
It can be shown that any stationary flux v(r) that is twice continuously differentiable in R³ and vanishes sufficiently fast for |r| → ∞ can be decomposed uniquely into an "irrotational part" E(r) and a "source-free part" B(r). Moreover, these parts are explicitly determined by the respective "source densities" (see above) and "circulation densities" (see the article Curl):
For the irrotational part one has
$$\mathbf{E} = -\nabla \Phi(\mathbf{r}),$$
with
$$\Phi(\mathbf{r}) = \int_{\mathbb{R}^3} \frac{\operatorname{div}\mathbf{v}(\mathbf{r}')}{4\pi\,|\mathbf{r} - \mathbf{r}'|}\, d^3\mathbf{r}'.$$
The source-free part, B, can be similarly written: one only has to replace the "scalar potential" Φ(r) by a "vector potential" A(r) and the terms −∇Φ by +∇×A, and the source density div v
by the circulation density ∇×v.
This "decomposition theorem" is a by-product of the stationary case of electrodynamics. It is a special case of the more general Helmholtz decomposition, which works in dimensions greater than three as well.
One can express the divergence as a particular case of the exterior derivative, which takes a 2-form to a 3-form in R³. Define the current two-form as
$$j = F_1\, dy \wedge dz + F_2\, dz \wedge dx + F_3\, dx \wedge dy.$$
It measures the amount of "stuff" flowing through a surface per unit time in a "stuff fluid" of density ρ = 1 dx ∧ dy ∧ dz moving with local velocity F. Its exterior derivative dj is then given by
$$dj = \operatorname{div}(\mathbf{F})\, dx \wedge dy \wedge dz.$$
Thus, the divergence of the vector field F can be expressed as:
$$\operatorname{div}\mathbf{F} = \star\, d \star \big(\mathbf{F}^\flat\big).$$
Here the superscript ♭ is one of the two musical isomorphisms, and ⋆ is the Hodge star operator. Working with the current two-form and the exterior derivative is usually easier than working with the vector field and divergence, because unlike the divergence, the exterior derivative commutes with a change of (curvilinear) coordinate system.
The divergence of a vector field can be defined in any number of dimensions. If
$$\mathbf{F} = (F_1, F_2, \ldots, F_n)$$
in a Euclidean coordinate system with coordinates x₁, x₂, …, x_n, define
$$\operatorname{div}\mathbf{F} = \nabla\cdot\mathbf{F} = \frac{\partial F_1}{\partial x_1} + \frac{\partial F_2}{\partial x_2} + \cdots + \frac{\partial F_n}{\partial x_n}.$$
The appropriate expression is more complicated in curvilinear coordinates.
In the case of one dimension, F reduces to a regular function, and the divergence reduces to the derivative.
For any n, the divergence is a linear operator, and it satisfies the "product rule"
$$\nabla\cdot(\varphi\mathbf{F}) = (\nabla\varphi)\cdot\mathbf{F} + \varphi\,(\nabla\cdot\mathbf{F})$$
for any scalar-valued function φ.
The divergence of a vector field extends naturally to any differentiable manifold of dimension n that has a volume form (or density) μ, e.g. a Riemannian or Lorentzian manifold. Generalising the construction of a two-form for a vector field on R³, on such a manifold a vector field X defines an (n − 1)-form j = i_X μ obtained by contracting X with μ. The divergence is then the function defined by
$$dj = (\operatorname{div} X)\, \mu.$$
Standard formulas for the Lie derivative allow us to reformulate this as
$$\mathcal{L}_X \mu = (\operatorname{div} X)\, \mu.$$
This means that the divergence measures the rate of expansion of a volume element as we let it flow with the vector field.
On a pseudo-Riemannian manifold, the divergence with respect to the metric volume form can be computed in terms of the Levi-Civita connection ∇:
$$\operatorname{div} X = \nabla\cdot X = {X^a}_{;a} = \nabla_a X^a,$$
where the second expression is the contraction of the vector-field-valued 1-form ∇X with itself and the last expression is the traditional coordinate expression from Ricci calculus.
An equivalent expression without using a connection is
$$\operatorname{div}(X) = \frac{1}{\sqrt{|\det g|}}\, \partial_a\!\left(\sqrt{|\det g|}\, X^a\right),$$
where g is the metric and ∂_a denotes the partial derivative with respect to coordinate x^a.
Divergence can also be generalised to tensors. In Einstein notation, the divergence of a contravariant vector F^μ is given by
$$\operatorname{div}(\mathbf{F}) = \nabla_\mu F^\mu,$$
where ∇_μ denotes the covariant derivative.
Equivalently, some authors define the divergence of a mixed tensor by using the musical isomorphism ♯: if T is a (p, q)-tensor (p for the contravariant vector and q for the covariant one), then we define the "divergence of T" to be the (p, q − 1)-tensor
$$(\operatorname{div} T)(Y_1, \ldots, Y_{q-1}) = \operatorname{trace}\Big(X \mapsto \sharp(\nabla T)(X, \cdot, Y_1, \ldots, Y_{q-1})\Big);$$
that is, we take the trace over the "first two" covariant indices of the covariant derivative.
Decision problem
In computability theory and computational complexity theory, a decision problem is a problem that can be posed as a yes-no question of the input values. An example of a decision problem is deciding whether a given natural number is prime. Another is the problem "given two numbers "x" and "y", does "x" evenly divide "y"?". The answer is either 'yes' or 'no' depending upon the values of "x" and "y". A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the decision problem "given two numbers "x" and "y", does "x" evenly divide "y"?" would give the steps for determining whether "x" evenly divides "y". One such algorithm is long division. If the remainder is zero the answer is 'yes', otherwise it is 'no'. A decision problem which can be solved by an algorithm is called "decidable".
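As a brief illustration (an addition, not from the original article), this divisibility decision procedure can be written in a few lines; the function below answers 'yes' or 'no' by checking whether the remainder of the division is zero.

```python
def divides(x: int, y: int) -> bool:
    """Decision procedure for: 'given two numbers x and y, does x evenly divide y?'"""
    if x == 0:
        return y == 0          # convention: only 0 is evenly divided by 0
    return y % x == 0          # remainder zero means the answer is 'yes'

print(divides(3, 12))  # True  ('yes')
print(divides(5, 12))  # False ('no')
```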
Decision problems typically appear in mathematical questions of decidability, that is, the question of the existence of an effective method to determine the existence of some object or its membership in a set; some of the most important problems in mathematics are undecidable.
The field of computational complexity categorizes "decidable" decision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. The field of recursion theory, meanwhile, categorizes "undecidable" decision problems by Turing degree, which is a measure of the noncomputability inherent in any solution.
A "decision problem" is a yes-or-no question on an infinite set of inputs. It is traditional to define the decision problem as the set of possible inputs together with the set of inputs for which the answer is "yes".
These inputs can be natural numbers, but can also be values of some other kind, like binary strings or strings over some other alphabet. The subset of strings for which the problem returns "yes" is a formal language, and often decision problems are defined as formal languages.
Using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers.
A classic example of a decidable decision problem is the set of prime numbers. It is possible to effectively decide whether a given natural number is prime by testing every possible nontrivial factor. Although much more efficient methods of primality testing are known, the existence of any effective method is enough to establish decidability.
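To make the notion of an effective method concrete (this sketch is an addition, not from the article), trial division is already a decision procedure for the set of primes, even though far faster algorithms exist:

```python
def is_prime(n: int) -> bool:
    """Decide membership of n in the set of prime numbers by trial division."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:          # test every possible nontrivial factor up to sqrt(n)
        if n % d == 0:
            return False
        d += 1
    return True

print([k for k in range(20) if is_prime(k)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```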
A decision problem "A" is "decidable" or "effectively solvable" if "A" is a recursive set. A problem is "partially decidable", "semidecidable", "solvable", or "provable" if "A" is a recursively enumerable set. Problems that are not decidable are "undecidable". For those it is not possible to create an algorithm, efficient or otherwise, that solves them.
The halting problem is an important undecidable decision problem; for more examples, see list of undecidable problems.
Decision problems can be ordered according to many-one reducibility and related to feasible reductions such as polynomial-time reductions. A decision problem "P" is said to be "complete" for a set of decision problems "S" if "P" is a member of "S" and every problem in "S" can be reduced to "P". Complete decision problems are used in computational complexity theory to characterize complexity classes of decision problems. For example, the Boolean satisfiability problem is complete for the class NP of decision problems under polynomial-time reducibility.
Decision problems are closely related to function problems, which can have answers that are more complex than a simple 'yes' or 'no'. A corresponding function problem is "given two numbers "x" and "y", what is "x" divided by "y"?".
A function problem consists of a partial function "f"; the informal "problem" is to compute the values of "f" on the inputs for which it is defined.
Every function problem can be turned into a decision problem; the decision problem is just the graph of the associated function. (The graph of a function "f" is the set of pairs ("x","y") such that "f"("x") = "y".) If this decision problem were effectively solvable then the function problem would be as well. This reduction does not respect computational complexity, however. For example, it is possible for the graph of a function to be decidable in polynomial time (in which case running time is computed as a function of the pair ("x","y")) when the function is not computable in polynomial time (in which case running time is computed as a function of "x" alone). The function "f"("x") = 2^"x" has this property.
Every decision problem can be converted into the function problem of computing the characteristic function of the set associated to the decision problem. If this function is computable then the associated decision problem is decidable. However, this reduction is more liberal than the standard reduction used in computational complexity (sometimes called polynomial-time many-one reduction); for example, the complexity of the characteristic functions of an NP-complete problem and its co-NP-complete complement is exactly the same even though the underlying decision problems may not be considered equivalent in some typical models of computation.
Unlike decision problems, for which there is only one correct answer for each input, optimization problems are concerned with finding the "best" answer to a particular input. Optimization problems arise naturally in many applications, such as the traveling salesman problem and many questions in linear programming.
There are standard techniques for transforming function and optimization problems into decision problems. For example, in the traveling salesman problem, the optimization problem is to produce a tour with minimal weight. The associated decision problem is: for each "N", to decide whether the graph has any tour with weight less than "N". By repeatedly answering the decision problem, it is possible to find the minimal weight of a tour.
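As an illustrative sketch (added here; the decision oracle `decide` is a hypothetical procedure, not part of the article), repeatedly answering the yes/no question lets one recover the optimal value, for example by binary search over candidate thresholds:

```python
def minimal_tour_weight(graph, decide, upper_bound: int) -> int:
    """Recover the minimal tour weight using only a yes/no decision oracle.

    `decide(graph, n)` is assumed to answer the decision problem:
    'does the graph have a tour with weight less than n?' (hypothetical,
    for illustration). A tour of weight <= upper_bound is assumed to exist.
    """
    lo, hi = 0, upper_bound
    while lo < hi:                       # invariant: the optimum lies in [lo, hi]
        mid = (lo + hi) // 2
        if decide(graph, mid + 1):       # is there a tour of weight <= mid?
            hi = mid
        else:
            lo = mid + 1
    return lo
```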
Because the theory of decision problems is very well developed, research in complexity theory has typically focused on decision problems. Optimization problems themselves are still of interest in computability theory, as well as in fields such as operations research.
Domain Name System
The Domain Name System (DNS) is a hierarchical and decentralized naming system for computers, services, or other resources connected to the Internet or a private network. It associates various information with domain names assigned to each of the participating entities. Most prominently, it translates more readily memorized domain names to the numerical IP addresses needed for locating and identifying computer services and devices with the underlying network protocols. By providing a worldwide, distributed directory service, the Domain Name System has been an essential component of the functionality of the Internet since 1985.
The Domain Name System delegates the responsibility of assigning domain names and mapping those names to Internet resources by designating authoritative name servers for each domain. Network administrators may delegate authority over sub-domains of their allocated name space to other name servers. This mechanism provides distributed and fault-tolerant service and was designed to avoid a single large central database.
The Domain Name System also specifies the technical functionality of the database service that is at its core. It defines the DNS protocol, a detailed specification of the data structures and data communication exchanges used in the DNS, as part of the Internet Protocol Suite.
The Internet maintains two principal namespaces, the domain name hierarchy and the Internet Protocol (IP) address spaces. The Domain Name System maintains the domain name hierarchy and provides translation services between it and the address spaces. Internet name servers and a communication protocol implement the Domain Name System. A DNS name server is a server that stores the DNS records for a domain; a DNS name server responds with answers to queries against its database.
The most common types of records stored in the DNS database are for Start of Authority (SOA), IP addresses (A and AAAA), SMTP mail exchangers (MX), name servers (NS), pointers for reverse DNS lookups (PTR), and domain name aliases (CNAME). Although not intended to be a general purpose database, DNS has been expanded over time to store records for other types of data for either automatic lookups, such as DNSSEC records, or for human queries such as "responsible person" (RP) records. As a general purpose database, the DNS has also been used in combating unsolicited email (spam) by storing a real-time blackhole list (RBL). The DNS database is traditionally stored in a structured text file, the zone file, but other database systems are common.
An often-used analogy to explain the Domain Name System is that it serves as the phone book for the Internet by translating human-friendly computer hostnames into IP addresses. For example, the domain name www.example.com translates to its IPv4 and IPv6 addresses. The DNS can be quickly and transparently updated, allowing a service's location on the network to change without affecting the end users, who continue to use the same hostname. Users take advantage of this when they use meaningful Uniform Resource Locators (URLs) and e-mail addresses without having to know how the computer actually locates the services.
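As a small aside (not from the article), the operating system's stub resolver can be exercised directly from Python's standard library; the hostname below is just the reserved example domain, and the addresses printed will be whatever the name currently resolves to.

```python
import socket

# Ask the system's DNS resolver for the addresses behind a hostname.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80,
                                                     proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])   # one line per address record returned
```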
An important and ubiquitous function of DNS is its central role in distributed Internet services such as cloud services and content delivery networks. When a user accesses a distributed Internet service using a URL, the domain name of the URL is translated to the IP address of a server that is proximal to the user. The key functionality of DNS exploited here is that different users can "simultaneously" receive different translations for the "same" domain name, a key point of divergence from a traditional phone-book view of the DNS. This process of using the DNS to assign proximal servers to users is key to providing faster and more reliable responses on the Internet and is widely used by most major Internet services.
The DNS reflects the structure of administrative responsibility in the Internet. Each subdomain is a zone of administrative autonomy delegated to a manager. For zones operated by a registry, administrative information is often complemented by the registry's RDAP and WHOIS services. That data can be used to gain insight on, and track responsibility for, a given host on the Internet.
Using a simpler, more memorable name in place of a host's numerical address dates back to the ARPANET era. The Stanford Research Institute (now SRI International) maintained a text file named HOSTS.TXT that mapped host names to the numerical addresses of computers on the ARPANET. Elizabeth Feinler developed and maintained the first ARPANET directory. Maintenance of numerical addresses, called the Assigned Numbers List, was handled by Jon Postel at the University of Southern California's Information Sciences Institute (ISI), whose team worked closely with SRI.
Addresses were assigned manually. Computers, including their hostnames and addresses, were added to the primary file by contacting the SRI's Network Information Center (NIC), directed by Elizabeth Feinler, by telephone during business hours. Later, Feinler set up a WHOIS directory on a server in the NIC for retrieval of information about resources, contacts, and entities. She and her team developed the concept of domains. Feinler suggested that domains should be based on the location of the physical address of the computer. Computers at educational institutions would have the domain "edu", for example. She and her team managed the Host Naming Registry from 1972 to 1989.
By the early 1980s, maintaining a single, centralized host table had become slow and unwieldy and the emerging network required an automated naming system to address technical and personnel issues. Postel directed the task of forging a compromise between five competing proposals of solutions to Paul Mockapetris. Mockapetris instead created the Domain Name System in 1983.
The Internet Engineering Task Force published the original specifications in RFC 882 and RFC 883 in November 1983.
In 1984, four UC Berkeley students, Douglas Terry, Mark Painter, David Riggle, and Songnian Zhou, wrote the first Unix name server implementation for the Berkeley Internet Name Domain, commonly referred to as BIND. In 1985, Kevin Dunlap of DEC substantially revised the DNS implementation. Mike Karels, Phil Almquist, and Paul Vixie have maintained BIND since then. In the early 1990s, BIND was ported to the Windows NT platform. It was widely distributed, especially on Unix systems, and is still the most widely used DNS software on the Internet.
In November 1987, RFC 1034 and RFC 1035 superseded the 1983 DNS specifications. Several additional Request for Comments have proposed extensions to the core DNS protocols.
The domain name space consists of a tree data structure. Each node or leaf in the tree has a "label" and zero or more "resource records" (RR), which hold information associated with the domain name. The domain name itself consists of the label, concatenated with the name of its parent node on the right, separated by a dot.
The tree sub-divides into "zones" beginning at the root zone. A DNS zone may consist of only one domain, or may consist of many domains and sub-domains, depending on the administrative choices of the zone manager. DNS can also be partitioned according to "class" where the separate classes can be thought of as an array of parallel namespace trees.
Administrative responsibility for any zone may be divided by creating additional zones. Authority over the new zone is said to be "delegated" to a designated name server. The parent zone ceases to be authoritative for the new zone.
The definitive descriptions of the rules for forming domain names appear in RFC 1035, RFC 1123, RFC 2181, and RFC 5892. A domain name consists of one or more parts, technically called "labels", that are conventionally concatenated, and delimited by dots, such as example.com.
The right-most label conveys the top-level domain; for example, the domain name www.example.com belongs to the top-level domain "com".
The hierarchy of domains descends from right to left; each label to the left specifies a subdivision, or subdomain of the domain to the right. For example, the label "example" specifies a subdomain of the "com" domain, and "www" is a subdomain of example.com. This tree of subdivisions may have up to 127 levels.
A label may contain zero to 63 characters. The null label, of length zero, is reserved for the root zone. The full domain name may not exceed the length of 253 characters in its textual representation. In the internal binary representation of the DNS the maximum length requires 255 octets of storage, as it also stores the length of the name.
Although no technical limitation exists to use any character in domain name labels which are representable by an octet, hostnames use a preferred format and character set. The characters allowed in labels are a subset of the ASCII character set, consisting of characters "a" through "z", "A" through "Z", digits "0" through "9", and hyphen. This rule is known as the "LDH rule" (letters, digits, hyphen). Domain names are interpreted in case-independent manner. Labels may not start or end with a hyphen. An additional rule requires that top-level domain names should not be all-numeric.
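To make the LDH rule and the length limits concrete, here is a small validation sketch (our own, deliberately simplified: it ignores internationalized names and the rule against all-numeric top-level labels).

```python
import re

# LDH rule: 1-63 letters, digits or hyphens, with no leading or trailing hyphen.
LABEL_RE = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name: str) -> bool:
    """Check the preferred hostname syntax: LDH labels, total length <= 253 characters."""
    name = name.rstrip(".")                  # a trailing dot denotes the root and is allowed
    if not name or len(name) > 253:
        return False
    return all(LABEL_RE.match(label) for label in name.split("."))

print(is_valid_hostname("www.example.com"))   # True
print(is_valid_hostname("-bad-.example"))     # False: labels may not start or end with a hyphen
```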
The limited set of ASCII characters permitted in the DNS prevented the representation of names and words of many languages in their native alphabets or scripts. To make this possible, ICANN approved the Internationalizing Domain Names in Applications (IDNA) system, by which user applications, such as web browsers, map Unicode strings into the valid DNS character set using Punycode. In 2009 ICANN approved the installation of internationalized domain name country code top-level domains ("ccTLD"s). In addition, many registries of the existing top-level domain names ("TLD"s) have adopted the IDNA system, guided by RFC 5890, RFC 5891, RFC 5892, RFC 5893.
The Domain Name System is maintained by a distributed database system, which uses the client–server model. The nodes of this database are the name servers. Each domain has at least one authoritative DNS server that publishes information about that domain and the name servers of any domains subordinate to it. The top of the hierarchy is served by the root name servers, the servers to query when looking up ("resolving") a TLD.
An "authoritative" name server is a name server that only gives answers to DNS queries from data that has been configured by an original source, for example, the domain administrator or by dynamic DNS methods, in contrast to answers obtained via a query to another name server that only maintains a cache of data.
An authoritative name server can either be a "primary" server or a "secondary" server. Historically the terms "master/slave" and "primary/secondary" were sometimes used interchangeably but the current practice is to use the latter form. A primary server is a server that stores the original copies of all zone records. A secondary server uses a special automatic updating mechanism in the DNS protocol in communication with its primary to maintain an identical copy of the primary records.
Every DNS zone must be assigned a set of authoritative name servers. This set of servers is stored in the parent domain zone with name server (NS) records.
An authoritative server indicates its status of supplying definitive answers, deemed "authoritative", by setting a protocol flag, called the "Authoritative Answer" ("AA") bit, in its responses. This flag is usually reproduced prominently in the output of DNS administration query tools, such as dig, to indicate "that the responding name server is an authority for the domain name in question."
Domain name resolvers determine the domain name servers responsible for the domain name in question by a sequence of queries starting with the right-most (top-level) domain label.
For proper operation of its domain name resolver, a network host is configured with an initial cache ("hints") of the known addresses of the root name servers. The hints are updated periodically by an administrator by retrieving a dataset from a reliable source.
Assuming the resolver has no cached records to accelerate the process, the resolution process starts with a query to one of the root servers. In typical operation, the root servers do not answer directly, but respond with a referral to more authoritative servers, e.g., a query for "www.wikipedia.org" is referred to the "org" servers. The resolver now queries the servers referred to, and iteratively repeats this process until it receives an authoritative answer. The diagram illustrates this process for the host that is named by the fully qualified domain name "www.wikipedia.org".
This mechanism would place a large traffic burden on the root servers, if every resolution on the Internet required starting at the root. In practice caching is used in DNS servers to off-load the root servers, and as a result, root name servers actually are involved in only a relatively small fraction of all requests.
In theory, authoritative name servers are sufficient for the operation of the Internet. However, with only authoritative name servers operating, every DNS query must start with recursive queries at the root zone of the Domain Name System and each user system would have to implement resolver software capable of recursive operation.
To improve efficiency, reduce DNS traffic across the Internet, and increase performance in end-user applications, the Domain Name System supports DNS cache servers which store DNS query results for a period of time determined in the configuration ("time-to-live") of the domain name record in question.
Typically, such caching DNS servers also implement the recursive algorithm necessary to resolve a given name starting with the DNS root through to the authoritative name servers of the queried domain. With this function implemented in the name server, user applications gain efficiency in design and operation.
The combination of DNS caching and recursive functions in a name server is not mandatory; the functions can be implemented independently in servers for special purposes.
Internet service providers typically provide recursive and caching name servers for their customers. In addition, many home networking routers implement DNS caches and recursors to improve efficiency in the local network.
The client side of the DNS is called a DNS resolver. A resolver is responsible for initiating and sequencing the queries that ultimately lead to a full resolution (translation) of the resource sought, e.g., translation of a domain name into an IP address. DNS resolvers are classified by a variety of query methods, such as "recursive", "non-recursive", and "iterative". A resolution process may use a combination of these methods.
In a "non-recursive query", a DNS resolver queries a DNS server that provides a record either for which the server is authoritative, or it provides a partial result without querying other servers. In case of a caching DNS resolver, the non-recursive query of its local DNS cache delivers a result and reduces the load on upstream DNS servers by caching DNS resource records for a period of time after an initial response from upstream DNS servers.
In a "recursive query", a DNS resolver queries a single DNS server, which may in turn query other DNS servers on behalf of the requester. For example, a simple stub resolver running on a home router typically makes a recursive query to the DNS server run by the user's ISP. A recursive query is one for which the DNS server answers the query completely by querying other name servers as needed. In typical operation, a client issues a recursive query to a caching recursive DNS server, which subsequently issues non-recursive queries to determine the answer and send a single answer back to the client. The resolver, or another DNS server acting recursively on behalf of the resolver, negotiates use of recursive service using bits in the query headers. DNS servers are not required to support recursive queries.
The "iterative query" procedure is a process in which a DNS resolver queries a chain of one or more DNS servers. Each server refers the client to the next server in the chain, until the current server can fully resolve the request. For example, a possible resolution of www.example.com would query a global root server, then a "com" server, and finally an "example.com" server.
Name servers in delegations are identified by name, rather than by IP address. This means that a resolving name server must issue another DNS request to find out the IP address of the server to which it has been referred. If the name given in the delegation is a subdomain of the domain for which the delegation is being provided, there is a circular dependency.
In this case, the name server providing the delegation must also provide one or more IP addresses for the authoritative name server mentioned in the delegation. This information is called "glue". The delegating name server provides this glue in the form of records in the "additional section" of the DNS response, and provides the delegation in the "authority section" of the response. A glue record is a combination of the name server and IP address.
For example, if the authoritative name server for example.org is ns1.example.org, a computer trying to resolve www.example.org first resolves ns1.example.org. As ns1 is contained in example.org, this requires resolving example.org first, which presents a circular dependency. To break the dependency, the name server for the top level domain org includes glue along with the delegation for example.org. The glue records are address records that provide IP addresses for ns1.example.org. The resolver uses one or more of these IP addresses to query one of the domain's authoritative servers, which allows it to complete the DNS query.
A standard practice in implementing name resolution in applications is to reduce the load on the Domain Name System servers by caching results locally, or in intermediate resolver hosts. Results obtained from a DNS request are always associated with the time to live (TTL), an expiration time after which the results must be discarded or refreshed. The TTL is set by the administrator of the authoritative DNS server. The period of validity may vary from a few seconds to days or even weeks.
As a result of this distributed caching architecture, changes to DNS records do not propagate throughout the network immediately, but require all caches to expire and to be refreshed after the TTL. RFC 1912 conveys basic rules for determining appropriate TTL values.
Some resolvers may override TTL values, as the protocol supports caching for up to sixty-eight years or no caching at all. Negative caching, i.e. the caching of the fact of non-existence of a record, is determined by name servers authoritative for a zone which must include the Start of Authority (SOA) record when reporting no data of the requested type exists. The value of the "minimum" field of the SOA record and the TTL of the SOA itself is used to establish the TTL for the negative answer.
A reverse DNS lookup is a query of the DNS for domain names when the IP address is known. Multiple domain names may be associated with an IP address. The DNS stores IP addresses in the form of domain names as specially formatted names in pointer (PTR) records within the infrastructure top-level domain arpa. For IPv4, the domain is in-addr.arpa. For IPv6, the reverse lookup domain is ip6.arpa. The IP address is represented as a name in reverse-ordered octet representation for IPv4, and reverse-ordered nibble representation for IPv6.
When performing a reverse lookup, the DNS client converts the address into these formats before querying the name for a PTR record following the delegation chain as for any DNS query. For example, assuming the IPv4 address 208.80.152.2 is assigned to Wikimedia, it is represented as a DNS name in reverse order: 2.152.80.208.in-addr.arpa. When the DNS resolver gets a pointer (PTR) request, it begins by querying the root servers, which point to the servers of American Registry for Internet Numbers (ARIN) for the 208.in-addr.arpa zone. ARIN's servers delegate 152.80.208.in-addr.arpa to Wikimedia to which the resolver sends another query for 2.152.80.208.in-addr.arpa, which results in an authoritative response.
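A small illustration of this reversal in Python (added here; the address is only the article's example and its PTR record may differ today):

```python
import socket

def reverse_name(ipv4: str) -> str:
    """Build the in-addr.arpa name used for a reverse (PTR) lookup of an IPv4 address."""
    return ".".join(reversed(ipv4.split("."))) + ".in-addr.arpa"

print(reverse_name("208.80.152.2"))        # 2.152.80.208.in-addr.arpa

# The standard library performs the PTR query behind the scenes.
hostname, aliases, addresses = socket.gethostbyaddr("208.80.152.2")
print(hostname)
```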
Users generally do not communicate directly with a DNS resolver. Instead DNS resolution takes place transparently in applications such as web browsers, e-mail clients, and other Internet applications. When an application makes a request that requires a domain name lookup, such programs send a resolution request to the DNS resolver in the local operating system, which in turn handles the communications required.
The DNS resolver will almost invariably have a cache (see above) containing recent lookups. If the cache can provide the answer to the request, the resolver will return the value in the cache to the program that made the request. If the cache does not contain the answer, the resolver will send the request to one or more designated DNS servers. In the case of most home users, the Internet service provider to which the machine connects will usually supply this DNS server: such a user will either have configured that server's address manually or allowed DHCP to set it; however, where systems administrators have configured systems to use their own DNS servers, their DNS resolvers point to separately maintained name servers of the organization. In any event, the name server thus queried will follow the process outlined above, until it either successfully finds a result or does not. It then returns its results to the DNS resolver; assuming it has found a result, the resolver duly caches that result for future use, and hands the result back to the software which initiated the request.
Some large ISPs have configured their DNS servers to violate rules, such as by disobeying TTLs, or by indicating that a domain name does not exist just because one of its name servers does not respond.
Some applications such as web browsers maintain an internal DNS cache to avoid repeated lookups via the network. This practice can add extra difficulty when debugging DNS issues as it obscures the history of such data. These caches typically use very short caching times on the order of one minute.
Internet Explorer represents a notable exception: versions up to IE 3.x cache DNS records for 24 hours by default. Internet Explorer 4.x and later versions (up to IE 8) decrease the default timeout value to half an hour, which may be changed by modifying the default configuration.
When Google Chrome detects issues with the DNS server it displays a specific error message.
The Domain Name System includes several other functions and features.
Hostnames and IP addresses are not required to match in a one-to-one relationship. Multiple hostnames may correspond to a single IP address, which is useful in virtual hosting, in which many web sites are served from a single host. Alternatively, a single hostname may resolve to many IP addresses to facilitate fault tolerance and load distribution to multiple server instances across an enterprise or the global Internet.
DNS serves other purposes in addition to translating names to IP addresses. For instance, mail transfer agents use DNS to find the best mail server to deliver e-mail: An MX record provides a mapping between a domain and a mail exchanger; this can provide an additional layer of fault tolerance and load distribution.
The DNS is used for efficient storage and distribution of IP addresses of blacklisted email hosts. A common method is to place the IP address of the subject host into the sub-domain of a higher level domain name, and to resolve that name to a record that indicates a positive or a negative indication.
For example:
E-mail servers can query blacklist.example to find out if a specific host connecting to them is in the blacklist. Many of such blacklists, either subscription-based or free of cost, are available for use by email administrators and anti-spam software.
To provide resilience in the event of computer or network failure, multiple DNS servers are usually provided for coverage of each domain. At the top level of global DNS, thirteen groups of root name servers exist, with additional "copies" of them distributed worldwide via anycast addressing.
Dynamic DNS (DDNS) updates a DNS server with a client IP address on-the-fly, for example, when moving between ISPs or mobile hot spots, or when the IP address changes administratively.
The DNS protocol uses two types of DNS messages, queries and replies; both have the same format. Each message consists of a header and four sections: question, answer, authority, and an additional space. A header field ("flags") controls the content of these four sections.
The header section consists of the following fields: "Identification", "Flags", "Number of questions", "Number of answers", "Number of authority resource records" (RRs), and "Number of additional RRs". Each field is 16 bits long, and appears in the order given. The identification field is used to match responses with queries. The flag field consists of sub-fields as follows:
After the flag, the header ends with four 16-bit integers which contain the number of records in each of the sections that follow, in the same order.
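For concreteness (an added sketch, not part of the original), the fixed 12-byte header can be packed as six big-endian 16-bit integers; the identification value 0x1234 is arbitrary and the flags value 0x0100 merely sets the "recursion desired" bit of an ordinary query:

```python
import struct

# Identification, Flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT: six 16-bit fields in network byte order.
header = struct.pack("!6H", 0x1234, 0x0100, 1, 0, 0, 0)
print(header.hex())            # 123401000001000000000000

ident, flags, qd, an, ns, ar = struct.unpack("!6H", header)
print(hex(ident), hex(flags), qd, an, ns, ar)
```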
The question section has a simpler format than the resource record format used in the other sections. Each question record (there is usually just one in the section) contains the following fields:
The domain name is broken into discrete labels which are concatenated; each label is prefixed by the length of that label.
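A minimal encoder for that wire format (our illustration, without the label-compression scheme discussed later for NAME fields):

```python
def encode_qname(domain: str) -> bytes:
    """Encode a domain name as length-prefixed labels, terminated by a zero-length root label."""
    out = bytearray()
    for label in domain.rstrip(".").split("."):
        raw = label.encode("ascii")
        out.append(len(raw))      # one length byte (1-63) before each label
        out.extend(raw)
    out.append(0)                 # the empty root label ends the name
    return bytes(out)

print(encode_qname("www.example.com").hex())  # 03777777076578616d706c6503636f6d00
```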
DNS primarily uses the User Datagram Protocol (UDP) on port number 53 to serve requests. DNS queries consist of a single UDP request from the client followed by a single UDP reply from the server. When the length of the answer exceeds 512 bytes and both client and server support EDNS, larger UDP packets are used. Otherwise, the query is sent again using the Transmission Control Protocol (TCP). TCP is also used for tasks such as zone transfers. Some resolver implementations use TCP for all queries.
The Domain Name System specifies a database of information elements for network resources. The types of information elements are categorized and organized with a list of DNS record types, the resource records (RRs). Each record has a type (name and number), an expiration time (time to live), a class, and type-specific data. Resource records of the same type are described as a "resource record set" (RRset), having no special ordering. DNS resolvers return the entire set upon query, but servers may implement round-robin ordering to achieve load balancing. In contrast, the Domain Name System Security Extensions (DNSSEC) work on the complete set of resource record in canonical order.
When sent over an Internet Protocol network, all records use the common format specified in RFC 1035:
"NAME" is the fully qualified domain name of the node in the tree . On the wire, the name may be shortened using label compression where ends of domain names mentioned earlier in the packet can be substituted for the end of the current domain name. A free standing "@" is used to denote the current origin.
"TYPE" is the record type. It indicates the format of the data and it gives a hint of its intended use. For example, the "A" record is used to translate from a domain name to an IPv4 address, the "NS" record lists which name servers can answer lookups on a DNS zone, and the "MX" record specifies the mail server used to handle mail for a domain specified in an e-mail address.
"RDATA" is data of type-specific relevance, such as the IP address for address records, or the priority and hostname for MX records. Well known record types may use label compression in the RDATA field, but "unknown" record types must not (RFC 3597).
The "CLASS" of a record is set to IN (for "Internet") for common DNS records involving Internet hostnames, servers, or IP addresses. In addition, the classes Chaos (CH) and Hesiod (HS) exist. Each class is an independent name space with potentially different delegations of DNS zones.
In addition to resource records defined in a zone file, the domain name system also defines several request types that are used only in communication with other DNS nodes ("on the wire"), such as when performing zone transfers (AXFR/IXFR) or for EDNS (OPT).
The domain name system supports wildcard DNS records which specify names that start with the "asterisk label", '*', e.g., *.example. DNS records belonging to wildcard domain names specify rules for generating resource records within a single DNS zone by substituting whole labels with matching components of the query name, including any specified descendants. For example, in the following configuration, the DNS zone "x.example" specifies that all subdomains, including subdomains of subdomains, of "x.example" use the mail exchanger (MX) "a.x.example". The A record for "a.x.example" is needed to specify the mail exchanger IP address. As this has the result of excluding this domain name and its subdomains from the wildcard matches, an additional MX record for the subdomain "a.x.example", as well as a wildcarded MX record for all of its subdomains, must also be defined in the DNS zone.
x.example. MX 10 a.x.example.
a.x.example. MX 10 a.x.example.
a.x.example. AAAA 2001:db8::1
The role of wildcard records was refined in RFC 4592, because the original definition in RFC 1034 was incomplete and resulted in misinterpretations by implementers.
The original DNS protocol had limited provisions for extension with new features. In 1999, Paul Vixie published in RFC 2671 (superseded by RFC 6891) an extension mechanism, called Extension mechanisms for DNS (EDNS) that introduced optional protocol elements without increasing overhead when not in use. This was accomplished through the OPT pseudo-resource record that only exists in wire transmissions of the protocol, but not in any zone files. Initial extensions were also suggested (EDNS0), such as increasing the DNS message size in UDP datagrams.
Dynamic DNS updates use the UPDATE DNS opcode to add or remove resource records dynamically from a zone database maintained on an authoritative DNS server. The feature is described in RFC 2136. This facility is useful to register network clients into the DNS when they boot or become otherwise available on the network. As a booting client may be assigned a different IP address each time from a DHCP server, it is not possible to provide static DNS assignments for such clients.
Originally, security concerns were not major design considerations for DNS software or any software for deployment on the early Internet, as the network was not open for participation by the general public. However, the expansion of the Internet into the commercial sector in the 1990s changed the requirements for security measures to protect data integrity and user authentication.
Several vulnerability issues were discovered and exploited by malicious users. One such issue is DNS cache poisoning, in which data is distributed to caching resolvers under the pretense of being an authoritative origin server, thereby polluting the data store with potentially false information and long expiration times (time-to-live). Subsequently, legitimate application requests may be redirected to network hosts operated with malicious intent.
DNS responses traditionally do not have a cryptographic signature, leading to many attack possibilities; the Domain Name System Security Extensions (DNSSEC) modify DNS to add support for cryptographically signed responses. DNSCurve has been proposed as an alternative to DNSSEC. Other extensions, such as TSIG, add support for cryptographic authentication between trusted peers and are commonly used to authorize zone transfer or dynamic update operations.
Some domain names may be used to achieve spoofing effects. For example, paypal.com and paypa1.com are different names, yet users may be unable to distinguish them in a graphical user interface depending on the user's chosen typeface. In many fonts the letter "l" and the numeral "1" look very similar or even identical. This problem is acute in systems that support internationalized domain names, as many character codes in ISO 10646 may appear identical on typical computer screens. This vulnerability is occasionally exploited in phishing.
Techniques such as forward-confirmed reverse DNS can also be used to help validate DNS results.
DNS can also "leak" from otherwise secure or private connections, if attention is not paid to their configuration, and at times DNS has been used to bypass firewalls by malicious persons, and exfiltrate data, since it is often seen as innocuous.
Originally designed as a public, hierarchical, distributed and heavily cached database, the DNS protocol has no confidentiality controls. User queries and nameserver responses are sent unencrypted, which enables network packet sniffing, DNS hijacking, DNS cache poisoning and man-in-the-middle attacks. This deficiency is commonly used by cybercriminals and network operators for marketing purposes, user authentication on captive portals and censorship.
User privacy is further exposed by proposals to increase the level of client IP information in DNS queries (RFC 7871) for the benefit of Content Delivery Networks.
Several approaches are in use to counter privacy issues with DNS:
Solutions preventing DNS inspection by the local network operator are criticized for thwarting corporate network security policies and Internet censorship. They are also criticized from a privacy point of view, as they give away DNS resolution to a small number of companies known for monetizing user traffic and for centralizing DNS name resolution, which is generally perceived as harmful for the Internet.
The right to use a domain name is delegated by domain name registrars which are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN) or other organizations such as OpenNIC, that are charged with overseeing the name and number systems of the Internet. In addition to ICANN, each top-level domain (TLD) is maintained and serviced technically by an administrative organization, operating a registry. A "registry" is responsible for operating the database of names within its authoritative zone, although the term is most often used for TLDs. A "registrant" is a person or organization who asked for domain registration. The registry receives registration information from each domain name "registrar", which is authorized (accredited) to assign names in the corresponding zone and publishes the information using the WHOIS protocol. As of 2015, usage of RDAP is being considered.
ICANN publishes the complete list of TLDs, TLD registries, and domain name registrars. Registrant information associated with domain names is maintained in an online database accessible with the WHOIS service. For most of the more than 290 country code top-level domains (ccTLDs), the domain registries maintain the WHOIS (Registrant, name servers, expiration dates, etc.) information. For instance, DENIC, Germany NIC, holds the DE domain data. From about 2001, most Generic top-level domain (gTLD) registries have adopted this so-called "thick" registry approach, i.e. keeping the WHOIS data in central registries instead of registrar databases.
For top-level domains on COM and NET, a "thin" registry model is used. The domain registry (e.g., GoDaddy, BigRock and PDR, VeriSign, etc.) holds basic WHOIS data (i.e., registrar and name servers, etc.). Organizations, or registrants using ORG, on the other hand, are on the Public Interest Registry exclusively.
Some domain name registries, often called "network information centers" (NIC), also function as registrars to end-users, in addition to providing access to the WHOIS datasets. The top-level domain registries, such as for the domains COM, NET, and ORG use a registry-registrar model consisting of many domain name registrars. In this method of management, the registry only manages the domain name database and the relationship with the registrars. The "registrants" (users of a domain name) are customers of the registrar, in some cases through additional subcontracting of resellers.
The Domain Name System is defined by Request for Comments (RFC) documents published by the Internet Engineering Task Force (Internet standards). The following is a list of RFCs that define the DNS protocol.
These RFCs are advisory in nature, but may provide useful information despite defining neither a standard nor a BCP. (RFC 1796)
These RFCs have an official status of Unknown, but due to their age are not clearly labeled as such.
David Letterman
David Michael Letterman (born April 12, 1947) is an American television host, comedian, writer, and producer. He hosted late night television talk shows for 33 years, beginning with the February 1, 1982, debut of "Late Night with David Letterman" on NBC, and ending with the May 20, 2015, broadcast of "Late Show with David Letterman" on CBS. In total, Letterman hosted 6,080 episodes of "Late Night" and "Late Show", surpassing his friend and mentor Johnny Carson as the longest-serving late night talk show host in American television history. In 1996, Letterman was ranked 45th on "TV Guide"'s 50 Greatest TV Stars of All Time. In 2002, "The Late Show with David Letterman" was ranked seventh on "TV Guide"'s 50 Greatest TV Shows of All Time.
He is also a television and film producer. His company, Worldwide Pants, produced his shows as well as "The Late Late Show" and several prime-time comedies, the most successful of which was "Everybody Loves Raymond", now in syndication.
Several late-night hosts have cited Letterman's influence, including Conan O'Brien (his successor on "Late Night"), Stephen Colbert (his successor on "The Late Show"), Jimmy Fallon, Jimmy Kimmel, John Oliver, and Seth Meyers.
Letterman currently hosts the Netflix series "My Next Guest Needs No Introduction with David Letterman".
Letterman was born in Indianapolis, Indiana in 1947, and has two sisters, one older and one younger. His father, Harry Joseph Letterman (April 15, 1915 – February 13, 1973), was a florist. His mother, Dorothy Marie Letterman Mengering (née Hofert; July 18, 1921 – April 11, 2017), a church secretary for the Second Presbyterian Church of Indianapolis, was an occasional figure on Letterman's show, usually at holidays and birthdays.
Letterman grew up on the north side of Indianapolis, in the Broad Ripple area, about 12 miles from the Indianapolis Motor Speedway. He enjoyed collecting model cars, including racers. In 2000, he told an interviewer for "Esquire" that, while growing up, he admired his father's ability to tell jokes and be the life of the party. Harry Joseph Letterman survived a heart attack at the age of 36, when David was a young boy. The fear of losing his father was constantly with Letterman as he grew up. The elder Letterman died of a second heart attack in 1973, at the age of 57.
Letterman attended his hometown's Broad Ripple High School and worked as a stock boy at the local Atlas Supermarket. According to the "Ball State Daily News", he had originally wanted to attend Indiana University, but his grades were not good enough, so he instead attended Ball State University, in Muncie, Indiana. He is a member of the Sigma Chi fraternity, and he graduated in 1969 from what was then the Department of Radio and Television. A self-described average student, Letterman later endowed a scholarship for what he called "C students" at Ball State. Though he registered for the draft and passed his physical after graduating from college, he was not drafted for service in Vietnam because of receiving a draft lottery number of 346 (out of 366).
Letterman began his broadcasting career as an announcer and newscaster at the college's student-run radio station—WBST—a 10-watt campus station which now is part of Indiana Public Radio. He was fired for treating classical music with irreverence. He then became involved with the founding of another campus station—WAGO-AM 570 (now WWHI, 91.3).
He credits Paul Dixon, host of the "Paul Dixon Show", a Cincinnati-based talk show also shown in Indianapolis while he was growing up, for inspiring his choice of career:
I was just out of college [in 1969], and I really didn't know what I wanted to do. And then all of a sudden I saw him doing it [on TV]. And I thought: That's really what I want to do!
Soon after graduating from Ball State in 1969, Letterman began his career as a radio talk show host on WNTS (AM) and on Indianapolis television station WLWI (which changed its call sign to WTHR in 1976) as an anchor and weatherman. He received some attention for his unpredictable on-air behavior, which included congratulating a tropical storm for being upgraded to a hurricane and predicting hail stones "the size of canned hams." He would also occasionally report the weather and the day's very high and low temps for fictitious cities ("Eight inches of snow in Bingree and surrounding areas"), while on another occasion saying that a state border had been erased when a satellite map accidentally omitted the state border between Indiana and Ohio, attributing it to dirty political dealings. ("The higher-ups have removed the border between Indiana and Ohio making it one giant state. Personally, I'm against it. I don't know what to do about it.") He also starred in a local kiddie show, made wisecracks as host of a late-night TV show called "Freeze-Dried Movies" (he once acted out a scene from "Godzilla" using plastic dinosaurs), and hosted a talk show that aired early on Saturday mornings called "Clover Power", in which he interviewed 4-H members about their projects.
In 1971, Letterman appeared as a pit road reporter for ABC Sports' tape-delayed coverage of the Indianapolis 500, which was his first nationally telecast appearance (WLWI was the local ABC affiliate at the time). Letterman was initially introduced as Chris Economaki, although this was corrected at the end of the interview (Jim McKay announced his name as Dave Letterman). Letterman interviewed Mario Andretti, who had just crashed out of the race.
In 1975, encouraged by his then-wife Michelle and several of his Sigma Chi fraternity brothers, Letterman moved to Los Angeles, California, with hope of becoming a comedy writer. He and Michelle packed their belongings in his pickup truck and headed west. As of 2012, he still owned the truck. In Los Angeles, he began performing comedy at The Comedy Store. Jimmie Walker saw him on stage; with an endorsement from George Miller, Letterman joined a group of comedians whom Walker hired to write jokes for his stand-up act, a group that at various times would also include Jay Leno, Paul Mooney, Robert Schimmel, Richard Jeni, Louie Anderson, Elayne Boosler, Byron Allen, Jack Handey, and Steve Oedekerk.
By the summer of 1977, Letterman was a writer and regular on the six-week summer series "The Starland Vocal Band Show", broadcast on CBS. He hosted a 1977 pilot for a game show entitled "The Riddlers" (which was never picked up), and co-starred in the Barry Levinson-produced comedy special "Peeping Times", which aired in January 1978. Later that year, Letterman was a cast member on Mary Tyler Moore's variety show, "Mary". Letterman made a guest appearance on "Mork & Mindy" (as a parody of EST leader Werner Erhard) and appearances on game shows such as "The $20,000 Pyramid", "The Gong Show", "Hollywood Squares", "Password Plus", and "Liar's Club", as well as the Canadian cooking show "Celebrity Cooks" (November 1977), talk shows such as "90 Minutes Live" (February 24 and April 14, 1978), and "The Mike Douglas Show" (April 3, 1979 and February 7, 1980). He was also screen tested for the lead role in the 1980 film "Airplane!", a role that eventually went to Robert Hays.
Letterman's brand of dry, sarcastic humor caught the attention of scouts for "The Tonight Show Starring Johnny Carson", and he was soon a regular guest on the show. He became a favorite of Carson and was a regular guest host for the show beginning in 1978. Letterman credits Carson as the person who influenced his career the most.
On June 23, 1980, Letterman was given his own morning comedy show on NBC, "The David Letterman Show". It was originally 90 minutes long, but was shortened to 60 minutes in August 1980. The show was a critical success, winning two Emmy Awards, but was a ratings disappointment and was canceled, the last show airing October 24, 1980.
NBC kept Letterman on its payroll so it could try him in a different time slot. "Late Night with David Letterman" debuted February 1, 1982; the first guest on the first show was Bill Murray. Murray went on to become one of Letterman's most frequent guests, appearing on his later CBS show's celebration of his 30th anniversary in late-night television, which aired January 31, 2012, and on the very last CBS show, which aired May 20, 2015. The show ran Monday through Thursday at 12:30 a.m. Eastern Time, immediately following "The Tonight Show Starring Johnny Carson" (a Friday night broadcast was added in June 1987). It was seen as edgy and unpredictable, and soon developed a cult following, particularly among college students. Letterman's reputation as an acerbic interviewer was borne out in verbal sparring matches with Cher (who even called him an asshole on the show), Shirley MacLaine, Charles Grodin, and Madonna. The show also featured comedy segments and running characters, in a style heavily influenced by the 1950s and 1960s programs of Steve Allen.
The show often featured quirky, genre-mocking regular features, including "Stupid Pet Tricks" (which had its origins on Letterman's morning show), Stupid Human Tricks, dropping various objects off the roof of a five-story building, demonstrations of unorthodox clothing (such as suits made of Alka-Seltzer, Velcro and suet), a recurring Top 10 list, the Monkey-Cam (and the Audience Cam), a facetious letter-answering segment, several "Film[s] by My Dog Bob" in which a camera was mounted on Letterman's own dog (often with comic results) and Small Town News, all of which would eventually move with Letterman to CBS.
Other episodes included Letterman using a bullhorn to interrupt a live interview on "The Today Show", announcing that he was the NBC News president and that he was not wearing any pants; walking across the hall to Studio 6B, at the time the news studio for WNBC-TV, and interrupting Al Roker's weather segments during "Live at Five"; and staging "elevator races", complete with commentary by NBC Sports' Bob Costas. In one appearance, in 1982, Andy Kaufman (who was wearing a neck brace) appeared with professional wrestler Jerry Lawler, who slapped and knocked the comedian to the ground (though Lawler and Kaufman's friend Bob Zmuda later revealed that the incident was staged).
In 1992, Johnny Carson retired, and many fans believed that Letterman would become host of "The Tonight Show". When NBC instead gave the job to Jay Leno, Letterman departed NBC to host his own late-night show on CBS, opposite "The Tonight Show" at 11:30 p.m., called the "Late Show with David Letterman". The new show debuted on August 30, 1993, and was taped at the historic Ed Sullivan Theater, where Ed Sullivan broadcast his eponymous variety series from 1948 to 1971. For Letterman's arrival, CBS spent millions of dollars on renovations. In addition to that cost, CBS signed Letterman to a lucrative three-year contract worth millions per year, doubling his "Late Night" salary. The total cost of everything (renovations, negotiation rights paid to NBC, signing Letterman, announcer Bill Wendell, Paul Shaffer, the writers and the band) was higher still.
But while the expectation was that Letterman would retain his unique style and sense of humor with the move, "Late Show" was not an exact replica of his old NBC program. Recognizing the more formal mood (and wider audience) of his new time slot and studio, Letterman eschewed his trademark wardrobe combination of blazer, khaki pants and white wrestling shoes in favor of expensive shoes, tailored suits and light-colored socks. The monologue was lengthened. Paul Shaffer and the World's Most Dangerous Band followed Letterman to CBS, but they added a brass section and were rebranded the CBS Orchestra (at Shaffer's request); a small band had been mandated by Carson while Letterman occupied the 12:30 slot. Additionally, because of intellectual property disagreements, Letterman was unable to import many of his "Late Night" segments verbatim, but he sidestepped this problem by simply renaming them (the "Top Ten List" became the "Late Show Top Ten", "Viewer Mail" became the "CBS Mailbag", etc.). "Time" magazine stated that "Letterman's innovation ... gained power from its rigorous formalism"; as his biographer Jason Zinoman puts it, he was "a fascinatingly disgruntled eccentric trapped inside a more traditional talk show."
The main competitor of the "Late Show" was NBC's "The Tonight Show", which was hosted by Jay Leno for 22 years, but from June 1, 2009, to January 22, 2010, was hosted by Conan O'Brien. In 1993 and 1994, the "Late Show" consistently gained higher ratings than "The Tonight Show". But in 1995, ratings dipped and Leno's show consistently beat Letterman's in the ratings from the time that Hugh Grant came on Leno's show after Grant's arrest for soliciting a prostitute.
Leno typically attracted about five million nightly viewers between 1999 and 2009. The "Late Show" lost nearly half its audience during its competition with Leno, attracting 7.1 million viewers nightly in its 1993–94 season and about 3.8 million per night as of Leno's departure in 2009. In the final months of his first stint as host of "The Tonight Show", Leno beat Letterman in the ratings by a 1.3 million viewer margin (5.2 million to 3.9 million), and "Nightline" and the "Late Show" were virtually tied. Once O'Brien took over "Tonight", however, Letterman closed the gap in the ratings. O'Brien initially drove the median age of "Tonight Show" viewers from 55 to 45, with most older viewers opting to watch the "Late Show" instead. Following Leno's return to "The Tonight Show", however, Leno regained his lead.
Letterman's shows have garnered both critical and industry praise, receiving 67 Emmy Award nominations and winning 12 times in his first 20 years in late-night television. From 1993 to 2009, Letterman ranked higher than Leno in the annual Harris Poll of "Nation's Favorite TV Personality" 12 times. For example, in 2003 and 2004 Letterman ranked second in that poll, behind only Oprah Winfrey, with Leno ranked fifth. Leno placed higher than Letterman on that poll three times during the same period: in 1998, 2007, and 2008.
On March 27, 1995, Letterman acted as the host for the 67th Academy Awards ceremony. Critics blasted Letterman for what they deemed a poor hosting of the Oscars, noting that his irreverent style undermined the traditional importance and glamor of the event. In a joke about their unusual names (inspired by a celebrated comic essay in "The New Yorker", "Yma Dream" by Thomas Meehan), he started off by introducing Uma Thurman to Oprah Winfrey, and then both of them to Keanu Reeves: "Oprah...Uma. Uma...Oprah," "Have you kids met Keanu?" This and many of his other jokes fell flat. Although Letterman attracted the highest ratings to the annual telecast since 1983, many felt that the bad publicity garnered by Letterman's hosting caused a decline in the "Late Show"'s ratings.
Letterman recycled the apparent debacle into a long-running gag. On his first show after the Oscars, he joked, "Looking back, I had no idea that thing was being televised." He lampooned his stint two years later, during Billy Crystal's opening Oscar skit, which also parodied the plane-crashing scenes from that year's chief nominated film, "The English Patient".
For years afterward, Letterman joked about his stint hosting the Oscars, although the Academy of Motion Picture Arts and Sciences continued to hold him in high regard and invited him to host the ceremony again. On September 7, 2010, he appeared on the premiere of the 14th season of "The View" and confirmed that he had been considered for hosting again.
On January 14, 2000, a routine check-up revealed that an artery in Letterman's heart was severely obstructed. He was rushed to emergency surgery for a quintuple bypass at New York Presbyterian Hospital. During the initial weeks of his recovery, reruns of the "Late Show" were shown and introduced by friends of Letterman including Norm MacDonald, Drew Barrymore, Ray Romano, Robin Williams, Bonnie Hunt, Megan Mullally, Bill Murray, Regis Philbin, Charles Grodin, Nathan Lane, Julia Roberts, Bruce Willis, Jerry Seinfeld, Martin Short, Steven Seagal, Hillary Clinton, Danny DeVito, Steve Martin, and Sarah Jessica Parker.
Subsequently, while still recovering from surgery, Letterman revived the late-night talk show tradition of "guest hosts" that had virtually disappeared on network television during the 1990s by allowing Bill Cosby, Kathie Lee Gifford, Dana Carvey, Janeane Garofalo, and others to host new episodes of the "Late Show". Upon his return to the show on February 21, 2000, Letterman brought all but one of the doctors and nurses on stage who had participated in his surgery and recovery (with extra teasing of a nurse who had given him bed baths—"This woman has seen me naked!"), including Dr. O. Wayne Isom and physician Louis Aronne, who frequently appeared on the show. In a show of emotion, Letterman was nearly in tears as he thanked the health care team with the words "These are the people who saved my life!" The episode earned an Emmy nomination.
For a number of episodes, Letterman continued to crack jokes about his bypass, including saying, "Bypass surgery: it's when doctors surgically create new blood flow to your heart. A bypass is what happened to me when I didn't get "The Tonight Show!" It's a whole different thing." In a later running gag he lobbied his home state of Indiana to rename the freeway circling Indianapolis (I-465) "The David Letterman Bypass". He also featured a montage of faux news coverage of his bypass surgery, which included a clip of Letterman's heart for sale on the Home Shopping Network. Letterman became friends with his doctors and nurses. In 2008, a "Rolling Stone" interview stated he hosted a doctor and nurse who'd helped perform the emergency quintuple-bypass heart surgery that saved his life in 2000. 'These are people who were complete strangers when they opened my chest,' he says. 'And now, eight years later, they're among my best friends.' Additionally, Letterman invited the band Foo Fighters to play "Everlong", introducing them as "my favorite band, playing my favorite song." During Letterman's last show, on which Foo Fighters appeared, Letterman said that Foo Fighters had been in the middle of a South American tour which they canceled to come play on his comeback episode.
Letterman again handed over the reins of the show to several guest hosts (including Bill Cosby, Brad Garrett, Whoopi Goldberg, Elvis Costello, John McEnroe, Vince Vaughn, Will Ferrell, Bonnie Hunt, Luke Wilson, and bandleader Paul Shaffer) in February 2003, when he was diagnosed with a severe case of shingles. Later that year, Letterman made regular use of guest hosts—including Tom Arnold and Kelsey Grammer—for new shows broadcast on Fridays. In March 2007, Adam Sandler, who had been scheduled to be the lead guest, served as a guest host while Letterman was ill with a stomach virus.
In March 2002, as Letterman's contract with CBS neared expiration, ABC offered him the time slot of the long-running news program "Nightline" with Ted Koppel. Letterman was interested because he believed he could never match Jay Leno's ratings at CBS, blaming weaker lead-ins from the network's late local news programs, but he was reluctant to replace Koppel. Letterman addressed his decision to re-sign on the air, stating that he was content at CBS and that he had great respect for Koppel.
On December 4, 2006, CBS revealed that Letterman signed a new contract to host "Late Show with David Letterman" through the fall of 2010. "I'm thrilled to be continuing on at CBS," said Letterman. "At my age you really don't want to have to learn a new commute." Letterman further joked about the subject by pulling up his right pants leg, revealing a tattoo, presumably temporary, of the ABC logo.
"Thirteen years ago, David Letterman put CBS late night on the map and in the process became one of the defining icons of our network," said Leslie Moonves, president and CEO of CBS Corporation. "His presence on our air is an ongoing source of pride, and the creativity and imagination that the "Late Show" puts forth every night is an ongoing display of the highest quality entertainment. We are truly honored that one of the most revered and talented entertainers of our time will continue to call CBS 'home.'"
A 2007 article in "Forbes" magazine and a 2009 article in "The New York Times" offered differing multimillion-dollar estimates of Letterman's annual salary. In June 2009, Letterman's Worldwide Pants and CBS reached an agreement to continue the "Late Show" until at least August 2012. The previous contract had been set to expire in 2010, and the two-year extension was shorter than the typical three-year contract period negotiated in the past. Worldwide Pants agreed to lower its fee for the show, though it had remained a "solid moneymaker for CBS" under the previous contract.
On the February 3, 2011, edition of the "Late Show", during an interview with Howard Stern, Letterman said he would continue to do his talk show for "maybe two years, I think." In April 2012, CBS announced it had extended its contract with Letterman through 2014. His contract was subsequently extended to 2015.
During the taping of his show on April 3, 2014, Letterman announced that he had informed CBS president Leslie Moonves that he would retire from hosting "Late Show" by May 20, 2015. However, later in his retirement Letterman occasionally stated, in jest, that he was fired. It was announced soon after that comedian and political satirist Stephen Colbert would succeed Letterman. Letterman's last episode aired on May 20, 2015, and opened with a presidential send-off featuring four of the five living American presidents, George H. W. Bush, Bill Clinton, George W. Bush, and Barack Obama, each mimicking the late president Gerald Ford's statement that "Our long national nightmare is over." It also featured cameos from "The Simpsons" and "Wheel of Fortune" (the latter with a puzzle saying "Good riddance to David Letterman"), a Top Ten List of "things I wish I could have said to David Letterman" performed by regular guests including Alec Baldwin, Barbara Walters, Steve Martin, Jerry Seinfeld, Jim Carrey, Chris Rock, Julia Louis-Dreyfus, Peyton Manning, Tina Fey, and Bill Murray, and closed with a montage of scenes from both his CBS and NBC series set to a live performance of "Everlong" by Foo Fighters.
The final episode of "Late Show with David Letterman" was watched by 13.76 million viewers in the United States with an audience share of 9.3/24, earning the show its highest ratings since following the 1994 Winter Olympics on February 25, 1994, and the show's highest demo numbers (4.1 in adults 25–54 and 3.1 in adults 18–49) since Oprah Winfrey's first "Late Show" appearance following the ending of her feud with Letterman on December 1, 2005. Bill Murray, who had been his first guest on "Late Night", was his final guest on "Late Show". In a rarity for a late-night show, it was also the highest-rated program on network television that night, beating out all prime-time shows. In total, Letterman hosted 6,080 episodes of "Late Night" and "Late Show", surpassing friend and mentor Johnny Carson as the longest-serving late-night talk show host in American television history.
In the months following the end of "Late Show", Letterman was seen occasionally at sports events such as the Indianapolis 500, where he gave an interview to a local publication. He made a surprise appearance on stage in San Antonio, Texas, when he was invited up for an extended segment during Steve Martin and Martin Short's "A Very Stupid Conversation" show. "I retired, and...I have no regrets," Letterman told the crowd after walking on stage. "I was happy. I'll make actual friends. I was complacent. I was satisfied. I was content, and then a couple of days ago Donald Trump said he was running for president. I have made the biggest mistake of my life, ladies and gentlemen." He then delivered a Top Ten List roasting Trump's presidential campaign, followed by an on-stage conversation with Martin and Short. Cell phone recordings of the appearance were posted on YouTube by audience members and were widely reported in the media.
In 2016, Letterman joined the climate change documentary show "Years of Living Dangerously" as one of the show's celebrity correspondents. In season two's premiere episode, Letterman traveled to India to investigate the country's efforts to expand its inadequate energy grid, power its booming economy, and bring electricity to 300 million citizens for the first time. He also interviewed Indian Prime Minister Narendra Modi, and traveled to rural villages where power is a scarce luxury and explored the United States' role in India's energy future.
On April 7, 2017, Letterman gave the induction speech for the band Pearl Jam into the Rock & Roll Hall Of Fame at a ceremony held at the Barclays Center in Brooklyn, New York City. Also in 2017, Letterman and Alec Baldwin co-hosted "The Essentials" on Turner Classic Movies. Letterman and Baldwin introduced seven films for the series.
In 2018, Letterman began hosting a six-episode monthly series of hour-long programs on Netflix consisting of long-form interviews and field segments. The show, "My Next Guest Needs No Introduction with David Letterman", premiered January 12, 2018, featuring Barack Obama as its first guest. The second season premiered on May 31, 2019.
In spite of Johnny Carson's clear intention to pass his title to Letterman, NBC selected Jay Leno to host "The Tonight Show" after Carson's departure. Letterman maintained a close relationship with Carson through his break with NBC. Three years after he left for CBS, HBO produced a made-for-television movie called "The Late Shift", based on a book by "The New York Times" reporter Bill Carter, chronicling the battle between Letterman and Leno for the coveted "Tonight Show" hosting spot.
Carson later made a few cameo appearances as a guest on Letterman's show. Carson's final television appearance came May 13, 1994, on a "Late Show" episode taped in Los Angeles, when he made a surprise appearance during a Top 10 list segment. In early 2005, it was revealed that Carson occasionally sent jokes to Letterman, who used these jokes in his monologue; according to CBS senior vice president Peter Lassally (a one-time producer for both men), Carson got "a big kick out of it." Letterman would do a characteristic Johnny Carson golf swing after delivering one of Carson's jokes. In a tribute to Carson, all of the opening monologue jokes during the first show following Carson's death were written by Carson.
Lassally also claimed that Carson had always believed Letterman, not Leno, to be his "rightful successor". During the early years of the "Late Show"s run, Letterman occasionally used some of Carson's trademark bits, including "Carnac the Magnificent" (with Paul Shaffer as Carnac), "Stump the Band", and the "Week in Review".
Oprah Winfrey appeared on Letterman's show when he was hosting NBC's "Late Night" on May 2, 1989. Following that appearance, the two had a 16-year feud which arose, as Winfrey explained to Letterman after the feud had been resolved, as a result of the acerbic tone of their 1989 interview, of which she said that it "felt so uncomfortable to me that I didn't want to have that experience again". The feud apparently ended in 2005 when Winfrey appeared on CBS's "Late Show with David Letterman" on December 2, in an event Letterman jokingly referred to as "the Super Bowl of Love".
Winfrey and Letterman also appeared together in a "Late Show" promo that aired during CBS's coverage of Super Bowl XLI in February 2007, with the two sitting next to each other on the couch watching the game. Since the game was played between the Indianapolis Colts and Chicago Bears, the Indianapolis-born Letterman wears a Peyton Manning jersey, while Winfrey, whose show was taped in Chicago, wears a Brian Urlacher jersey. On September 10, 2007, Letterman made his first appearance on "The Oprah Winfrey Show" at Madison Square Garden in New York City.
Three years later, during CBS's coverage of Super Bowl XLIV between the Colts and the New Orleans Saints, the two appeared again in a "Late Show" promo, this time with Winfrey sitting on a couch between Letterman and Jay Leno. Letterman wore the retired number 70 jersey of Art Donovan, a member of the Colts' Hall of Fame and a regular Letterman guest. The appearance was Letterman's idea: Leno flew to New York City on an NBC corporate jet, sneaked into the Ed Sullivan Theater in disguise during the "Late Show"'s February 4 taping, and met Winfrey and Letterman on a living-room set created in the theater's balcony, where they taped the promo.
Winfrey interviewed Letterman in January 2013 on "Oprah's Next Chapter". Winfrey and Letterman discussed their feud during the interview and Winfrey revealed that she had had a "terrible experience" while appearing on Letterman's show years earlier. Letterman could not recall the incident but apologized.
"Late Show" went off air for eight weeks in 2007 during the months of November and December because of the Writers Guild of America strike. Letterman's production company, Worldwide Pants, was the first company to make an individual agreement with the WGA, thus allowing his show to come back on the air on January 2, 2008. On his first episode since returning from off air, he surprised the viewing audience with his newly grown beard, which signified solidarity with the strike. His beard was shaved off during the show on January 7, 2008.
On June 8 and 9, 2009, Letterman told two sexually themed jokes on his show about a daughter (never named) of Sarah Palin. Palin was in New York City at the time with her then fourteen-year-old daughter, Willow, and some took the jokes to be aimed at Willow, which caused a minor controversy.
In a statement posted on the Internet, Palin said, "I doubt [Letterman would] ever dare make such comments about anyone else's daughter" and that "laughter incited by sexually perverted comments made by a 62-year-old male celebrity aimed at a 14-year-old girl is disgusting." On his show of June 10, Letterman responded to the controversy, saying the jokes were meant to be about Palin's eighteen-year-old daughter, Bristol, whose pregnancy as an unmarried teenager had caused some controversy during the United States presidential election of 2008. "These are not jokes made about (Palin's) 14-year-old daughter ... I would never, never make jokes about raping or having sex of any description with a 14-year-old girl."
His remarks did not put an end to public criticism, however. The National Organization for Women (NOW) released a statement supporting Palin, noting that Letterman had made "[only] something of an apology." When the controversy failed to subside, Letterman addressed the issue again on his show of June 15, faulting himself for the error and apologizing "especially to the two daughters involved, Bristol and Willow, and also to the governor and her family and everybody else who was outraged by the joke."
On August 17, 2011, it was reported that an Islamist militant had posted a death threat against Letterman on a website frequented by Al-Qaeda supporters, calling on American Muslims to kill Letterman for making a joke about the death of Ilyas Kashmiri, an Al-Qaeda leader who was killed in a drone strike in Pakistan in June 2011. In his show on August 22, Letterman joked about the threat, saying "State Department authorities are looking into this. They're not taking this lightly. They're looking into it. They're questioning, they're interrogating, there's an electronic trail—but everybody knows it's Leno."
Letterman appeared in the pilot episode of the short-lived 1986 series "Coach Toast", and he appears with a bag over his head as a guest on Bonnie Hunt's 1990s sitcom, "The Building". He appeared in "The Simpsons" as himself in a couch gag when the Simpsons find themselves (and the couch) in "Late Night with David Letterman". He had a cameo in the feature film "Cabin Boy", with Chris Elliott, who worked as a writer on Letterman's show. In this and other appearances, Letterman is listed in the credits as "Earl Hofert", the name of Letterman's maternal grandfather. He also appeared as himself in the Howard Stern biographical film "Private Parts" as well as the 1999 Andy Kaufman biopic "Man on the Moon", in a few episodes of Garry Shandling's 1990s TV series "The Larry Sanders Show", and in "The Abstinence", a 1996 episode of the sitcom "Seinfeld".
Letterman provided vocals for the Warren Zevon song "Hit Somebody" from "My Ride's Here", and provided the voice for Butt-head's father in the 1996 animated film "Beavis and Butt-Head Do America", once again credited as Earl Hofert.
Letterman was the focus of "The Avengers on "Late Night with David Letterman"", issue 239 (January 1984) of the Marvel comic book series "The Avengers", in which the title characters (specifically Hawkeye, Wonder Man, Black Widow, Beast, and Black Panther) are guests on "Late Night". A parody of Letterman, named "David Endochrine", is gassed to death along with his bandleader named "Paul" and their audience in Frank Miller's "The Dark Knight Returns". In "", Letterman was parodied as "David Litterbin". Letterman appears in issues #13–14 and #18 of "American Splendor", the autobiographical comic book by Harvey Pekar. Those issues show Pekar's accounts of appearances on "Late Night".
In 2010, "Dying to do Letterman", a documentary directed by Joke Fincioen and Biagio Messina, was released, featuring Steve Mazan, a stand-up comic with cancer who wanted to appear on Letterman's show. The film won best documentary and jury awards at the Cinequest Film Festival. Mazan published a book of the same name (full title "Dying to Do Letterman: Turning Someday into Today") about his own saga.
Letterman appeared as a guest on CNN's "Piers Morgan Tonight" on May 29, 2012, when he was interviewed by Regis Philbin, the guest host and Letterman's long-time friend. Philbin again interviewed Letterman (and Shaffer) while guest-hosting CBS' "The Late Late Show" (between the tenures of Craig Ferguson and James Corden) on January 27, 2015. In June 2013, Letterman appeared in the second episode of season two of "Comedians in Cars Getting Coffee". On November 5, 2013, Letterman and Bruce McCall published a fiction satire book titled "This Land Was Made for You and Me (But Mostly Me)".
Letterman started his production company, Worldwide Pants Incorporated, which produced his show and several others, in 1991. The company also produces feature films and documentaries and founded its own record label, Clear Entertainment. Worldwide Pants received significant attention in December 2007, after it was announced the company had independently negotiated its own contract with the Writers Guild of America, East, thus allowing Letterman, Craig Ferguson, and their writers to return to work, while the union continued its strike against production companies, networks, and studios with whom it had not yet reached agreements.
Letterman, Bobby Rahal, and Mike Lanigan are co-owners of Rahal Letterman Lanigan Racing, an auto racing team competing in the WeatherTech SportsCar Championship and NTT IndyCar series. The team won the 2004 Indianapolis 500 with driver Buddy Rice.
The Letterman Foundation for Courtesy and Grooming is a private foundation through which Letterman has donated millions of dollars to charities and other non-profit organizations in Indiana and Montana, celebrity-affiliated organizations such as Paul Newman's Hole in the Wall Gang Camp, Ball State University, the American Cancer Society, the Salvation Army, and "Médecins Sans Frontières".
Letterman's biggest influence and his mentor was Johnny Carson. Other comedians that influenced Letterman were Paul Dixon, Steve Allen, Jonathan Winters, Garry Moore, Jack Paar, Don Rickles, and David Brenner. Although Ernie Kovacs has also been mentioned as an influence, Letterman has denied this.
Comedians influenced by Letterman include Conan O'Brien, Jon Stewart, Stephen Colbert, Ray Romano, Jimmy Kimmel, Jay Leno, Arsenio Hall, Larry Wilmore, Seth Meyers, Norm Macdonald, Jimmy Fallon, John Oliver, and James Corden.
On July 2, 1968, Letterman married his college sweetheart, Michelle Cook, in Muncie, Indiana; the marriage ended in divorce by October 1977. He also had a long-term cohabiting relationship from 1978 to 1988 with Merrill Markoe, the former head writer and producer of "Late Night". Markoe was the mind behind several "Late Night" staples, such as "Stupid Pet/Human Tricks". "Time" magazine stated that theirs was the defining relationship of Letterman's career, with Markoe also acting as his writing partner; she "put the surrealism in Letterman's comedy."
Letterman and Regina Lasko started dating in February 1986, while he was still living with Markoe. Lasko gave birth to a son in 2003. In 2005, police discovered a plot to kidnap his son and demand a ransom of US$5 million. Kelly Frank, a house painter who had worked for Letterman, was charged in the conspiracy.
Letterman and Lasko wed on March 19, 2009, during a quiet courthouse civil ceremony in Choteau, Montana, where he had purchased a ranch in 1999. Letterman announced the marriage during the taping of his show of March 23, shortly after congratulating Bruce Willis for his marriage the week before. Letterman told the audience he nearly missed the ceremony because his truck became stuck in mud two miles from their house. The family resides on an estate in North Salem, New York.
Letterman suffers from tinnitus, a symptom of hearing loss. On the "Late Show" in 1996, Letterman talked about his experience with tinnitus during an interview with William Shatner, who himself has severe tinnitus caused by an on-set explosion. Letterman has stated that initially he was unable to pinpoint the noise inside his head, and that he hears a constant ringing in his ears 24 hours a day.
Letterman no longer drinks alcohol. On more than one occasion, he has said that he was once a "horrible alcoholic" who began drinking around the age of 13 and continued until 1981, when he was 34. Recalling that period, he said, "I was drunk 80% of the time ... I loved it. I was one of those guys, I looked around, and everyone else had stopped drinking and I couldn't understand why." When he was shown drinking what appeared to be alcohol on "Late Night" or the "Late Show", it was actually apple juice substituted by the crew.
In 2015, Letterman stated about his anxiety: "For years and years and years—30, 40 years—I was anxious and hypochondriacal and an alcoholic, and many, many other things that made me different from other people." He became calmer through a combination of transcendental meditation and low doses of medication. Letterman is a Presbyterian, a religious tradition he was originally brought up in by his mother. However, he once said he is motivated by "Lutheran, Midwestern guilt".
Letterman is a car enthusiast and owns an extensive collection. In 2012, it was reported that the collection consisted of ten Ferraris, eight Porsches, four Austin-Healeys, two Honda motorcycles, a Chevy pickup, and one car each from automakers Mercedes-Benz, Jaguar, MG, Volvo, and Pontiac.
In his 2013 appearance on "Comedians in Cars Getting Coffee", part of Jerry Seinfeld's conversation with Letterman was filmed in Letterman's 1995 Volvo 960 station wagon, which is powered by a 380-horsepower racing engine. Paul Newman had the car built for Letterman.
Letterman has shared a close relationship with the rock and roll band Foo Fighters since their appearance on his first show upon his return from heart surgery. The band appeared many times on the "Late Show", including a week-long stint in October 2014. While introducing the band's performance of "Miracle" on the show of October 17, 2014, Letterman told the story of how a souvenir video of himself and his four-year-old son learning to ski used the song as background music, unbeknownst to Letterman until he saw it. He stated: "This is the second song of theirs that will always have great, great meaning for me for the rest of my life". This was the first time the band had heard the story. Worldwide Pants co-produced a television series by Dave Grohl. "Letterman was the first person to get behind this project," Grohl stated.
Beginning in May 1988, Letterman was stalked by Margaret Mary Ray, a woman suffering from schizophrenia. She stole his Porsche, camped out on his tennis court, and repeatedly broke into his house. Her exploits drew national attention, with Letterman occasionally joking about her on his show, although he never referred to her by name. After she killed herself at age 46 in October 1998, Letterman told "The New York Times" that he had great compassion for her. A spokesperson for Letterman said: "This is a sad ending to a confused life."
In 2005 another obsessed fan was able to obtain a restraining order from a New Mexico judge, prohibiting Letterman from contacting her. She claimed the New York-based Letterman had sent coded messages to her via his television program, causing her bankruptcy and emotional distress. Law professor Eugene Volokh described the case as "patently frivolous".
Letterman has admitted to having multiple affairs with different women, including his intern Holly Hester and his long-time personal assistant Stephanie Birkitt.
On October 1, 2009, Letterman announced on his show that he had been the victim of a blackmail attempt by a person threatening to reveal his sexual relationships with several of his female employees – relationships Letterman immediately confirmed to be true. He stated that someone had left a package in his car containing material that the person said he would turn into a screenplay and a book if Letterman did not pay him US$2 million. Letterman said that he had contacted the Manhattan District Attorney's office and taken part in a sting operation that involved handing a fake check to his extortionist.
Subsequently, Joe Halderman, a producer of the CBS news magazine television series "48 Hours", was arrested after trying to deposit the check. He was indicted by a Manhattan grand jury and pleaded not guilty to a charge of attempted grand larceny on October 2, 2009. Halderman pleaded guilty in March 2010 and was sentenced to six months in prison, followed by probation and community service.
A central figure in the case and one of the women with whom Letterman had had a sexual relationship was his long-time personal assistant Stephanie Birkitt, who often appeared with him on his show. She had also worked for "48 Hours". Until a month prior to the revelations, she had shared a residence with Halderman, who allegedly had copied her personal diary and used it, along with private emails, in the blackmail package.
In the days following the initial announcement of the affairs and the arrest, several prominent women, including Kathie Lee Gifford, co-host of NBC's "Today Show", and NBC news anchor Ann Curry questioned whether Letterman's affairs with subordinates created an unfair working environment. A spokesman for Worldwide Pants said that the company's sexual harassment policy did not prohibit sexual relationships between managers and employees. According to business news reporter Eve Tahmincioglu, "CBS suppliers are supposed to follow the company's business conduct policies" and the CBS 2008 Business Conduct Statement states that "If a consenting romantic or sexual relationship between a supervisor and a direct or indirect subordinate should develop, CBS requires the supervisor to disclose this information to his or her Company's Human Resources Department...".
On October 3, 2009, a former CBS employee, Holly Hester, announced that she and Letterman had engaged in a year-long secret affair in the early 1990s while she was his intern and a student at New York University. On October 5, 2009, Letterman devoted a segment of his show to a public apology to his wife and staff. Three days later, Worldwide Pants announced that Birkitt had been placed on a "paid leave of absence" from the "Late Show". On October 15, CBS News announced that the company's chief investigative correspondent, Armen Keteyian, had been assigned to conduct an "in-depth investigation" into Letterman.
On September 7, 2007, Letterman visited his "alma mater", Ball State University in Muncie, Indiana, for the dedication of a communications facility named in his honor in recognition of his support of the university. The multimillion-dollar David Letterman Communication and Media Building opened for the 2007 fall semester. Thousands of Ball State students, faculty, and local residents welcomed Letterman back to Indiana. Letterman's emotional speech touched on his struggles as a college student and his late father, and also included the "top ten good things about having your name on a building", finishing with the observation that if reasonable people could put his name on a multimillion-dollar building, anything was possible. Over many years Letterman "has provided substantial assistance to [Ball State's] Department of Telecommunications, including an annual scholarship that bears his name."
At the same time, Letterman received a Sagamore of the Wabash award given by Indiana Governor Mitch Daniels, which recognizes distinguished service to the state of Indiana.
As a performer, a producer, and a member of writing teams, Letterman is among the most nominated people in the history of the Emmy Awards, with 52 nominations, winning two Daytime Emmys and ten Primetime Emmys since 1981. He won four American Comedy Awards and in 2011 became the first recipient of the Johnny Carson Award for Comedic Excellence at The Comedy Awards.
Letterman was a recipient of the 2012 Kennedy Center Honors, where he was called "one of the most influential personalities in the history of television, entertaining an entire generation of late-night viewers with his unconventional wit and charm." On May 16, 2017, Letterman was named the next recipient of the Mark Twain Prize for American Humor, the award granted annually by the John F. Kennedy Center for the Performing Arts. He received the prize in a ceremony on October 22, 2017. | https://en.wikipedia.org/wiki?curid=8340 |
Delroy Lindo
Delroy George Lindo (born 18 November 1952) is an English actor and theatre director. He has been nominated for Tony and Screen Actors Guild awards and has won a Satellite Award.
Lindo has played prominent roles in four Spike Lee films: West Indian Archie in "Malcolm X" (1992), Woody Carmichael in "Crooklyn" (1994), Rodney Little in "Clockers" (1995), and Paul in "Da 5 Bloods" (2020). Lindo also played Catlett in "Get Shorty" (1995), Arthur Rose in "The Cider House Rules" (1999), and Detective Castlebeck in "Gone in 60 Seconds" (2000). Lindo starred as Alderman Ronin Gibbons in the TV series "The Chicago Code" (2011), as Winter on the series "Believe" (2014), and currently stars as Adrian Boseman in "The Good Fight" (2017–present).
Delroy Lindo was born in 1952 in Lewisham, south east London, the son of Jamaican parents who had emigrated to Britain. Lindo grew up in nearby Eltham, and became interested in acting as a child when he appeared in a nativity play at school. His mother was a nurse and his father worked in various jobs. As a teenager, he and his mother moved to Toronto, Ontario, Canada. When he was sixteen, they moved to San Francisco. At the age of 24, Lindo started acting studies at the American Conservatory Theater, graduating in 1979.
Lindo's film debut came in 1976 with the Canadian John Candy comedy "Find the Lady", followed by two other film roles, including an Army sergeant in "More American Graffiti" (1979).
He stopped his film career for 10 years to concentrate on theatre acting. In 1982 he debuted on Broadway in ""Master Harold"...and the Boys," directed by the play's South African author Athol Fugard. By 1988 Lindo had earned a Tony nomination for his portrayal of Herald Loomis in August Wilson's "Joe Turner's Come and Gone".
Lindo returned to film in the 1990s, acting alongside Rutger Hauer and Joan Chen in the science fiction film "Salute of the Jugger" (1990), which has become a cult classic. Although Lindo had turned down a role in Spike Lee's "Do the Right Thing", Lee cast him as Woody Carmichael in the drama "Crooklyn" (1994), which brought him notice. Together with his other roles for Lee – as West Indian Archie, a psychotic gangster, in "Malcolm X", and a starring role as a neighbourhood drug dealer in "Clockers" – he became established in his film career.
Other films in which he has had starring roles include Barry Sonnenfeld's "Get Shorty" (1995), Ron Howard's "Ransom" (1996), and "Soul of the Game" (1996), in which he played the baseball player Satchel Paige.
In 1998 Lindo co-starred as African-American explorer Matthew Henson, in the TV film "Glory & Honor", directed by Kevin Hooks. It portrayed his nearly 20-year partnership with Commander Robert Peary in Arctic exploration and their effort to find the Geographic North Pole in 1909. He received a Satellite Award for best actor. Lindo has continued to work in television and in 2006 was seen on the short-lived NBC drama "Kidnapped".
Lindo had a small, uncredited role in the 1995 film "Congo", playing the corrupt Captain Wanta. He played an angel in the comedy film "A Life Less Ordinary" (1997).
He guest-starred on "The Simpsons" in the episode "Brawl in the Family", playing a character named Gabriel.
In the British film "Wondrous Oblivion" (2003), directed by Paul Morrison, he starred as Dennis Samuels, the father of a Jamaican immigrant family in London in the 1950s, who coaches his children and the son of a neighbouring Jewish family in cricket, earning their admiration in a time of strained social relations. Lindo said he made the film in honour of his parents, who had similarly moved to London in those years.
In 2007, Lindo began an association with Berkeley Repertory Theatre in Berkeley, California, when he directed Tanya Barfield's play "The Blue Door". In the autumn of 2008, Lindo revisited August Wilson's play, "Joe Turner's Come and Gone", directing a production at the Berkeley Rep. In 2010, he played the role of elderly seer Bynum in David Lan's production of "Joe Turner" at the Young Vic Theatre in London.
As of 2015, Lindo was expected to play Marcus Garvey in a biopic of the black nationalist historical figure that had been in pre-production for several years. | https://en.wikipedia.org/wiki?curid=8341 |
David Janssen
David Janssen (born David Harold Meyer, March 27, 1931 – February 13, 1980) was an American film and television actor who is best known for his starring role as Richard Kimble in the television series "The Fugitive" (1963–1967). Janssen also had the title roles in three other series: "Richard Diamond, Private Detective"; "Harry O"; and "O'Hara, U.S. Treasury".
In 1996 "TV Guide" ranked him number 36 on its "50 Greatest TV Stars of All Time" list.
A heavy drinker and cigarette smoker with a workaholic attitude, Janssen saw his health deteriorate rapidly in the late 1970s, and he died in 1980 at the age of 48.
Janssen was born in 1931 in Naponee, a village in Franklin County in southern Nebraska, to Harold Edward Meyer, a banker (May 12, 1906 – November 4, 1990) and Berniece Graf (May 11, 1910 – November 26, 1995). Janssen was of Irish and Jewish descent. Following his parents' divorce in 1935, his mother moved with five-year-old David to Los Angeles, California, and later married Eugene Janssen (February 18, 1918 – March 30, 1996) in 1940 in Los Angeles. Young David used his stepfather's name after he entered show business as a child.
He attended Fairfax High School in Los Angeles, where he excelled on the basketball court, setting a school-scoring record that lasted over 20 years. His first film part was at the age of thirteen, and by the age of twenty-five he had appeared in twenty films and served two years as an enlisted man in the United States Army. During his Army days, Janssen became friends with fellow enlistees Martin Milner and Clint Eastwood while posted at Fort Ord, California.
Janssen appeared in many television series before he landed programs of his own. In 1956, he and Peter Breck appeared in John Bromfield's syndicated series "Sheriff of Cochise" in the episode "The Turkey Farmers". Later, he guest-starred on NBC's medical drama "The Eleventh Hour" in the role of Hal Kincaid in the 1962 episode "Make Me a Place", with series co-stars Wendell Corey and Jack Ging. He joined friend Martin Milner in a 1962 episode of "Route 66" as the character Kamo in the episode "One Tiger to a Hill."
Janssen starred in four television series of his own: "Richard Diamond, Private Detective"; "The Fugitive"; "O'Hara, U.S. Treasury"; and "Harry O".
At the time, the final episode of "The Fugitive" held the record for the greatest number of American homes with television sets to watch a series finale, at 72 percent in August 1967.
His films include "To Hell and Back", the biography of Audie Murphy, who was the most decorated American soldier of World War II; John Wayne's Vietnam war film "The Green Berets"; opposite Gregory Peck in the space story "Marooned", in which Janssen played an astronaut sent to rescue three stranded men in space, and "The Shoes of the Fisherman", as a television journalist in Rome reporting on the election of a new Pope (Anthony Quinn). He also played pilot Harry Walker in the 1973 action movie "Birds of Prey".
He starred as a Los Angeles police detective trying to clear himself in the killing of an apparently innocent doctor in the 1967 film "Warning Shot". The film was shot during a break in the spring and summer of 1966 between the third and fourth seasons of "The Fugitive."
Janssen played an alcoholic in the 1977 TV movie "A Sensitive, Passionate Man", which co-starred Angie Dickinson, and an engineer who devises an unbeatable system for blackjack in the 1978 made-for-TV movie "Nowhere to Run", co-starring Stefanie Powers and Linda Evans. Janssen's impressively husky voice was used to good effect as the narrator for the TV mini-series "Centennial" (1978–79); he also appeared in the final episode. He starred in the made-for-TV mini series "S.O.S. Titanic" as John Jacob Astor, playing opposite Beverly Ross as his wife, Madeleine, in 1979.
Though Janssen's scenes were cut from the final release, he also appeared as a journalist in the film "Inchon", a role he accepted in order to work with Laurence Olivier, who played General Douglas MacArthur. At the time of his death, Janssen had just begun filming a television movie playing the part of Father Damien, the priest who dedicated himself to the leper colony on the island of Molokai, Hawaii. The part was eventually reassigned to actor Ken Howard of the CBS series "The White Shadow".
In 1996 "TV Guide" ranked "The Fugitive" number 36 on its '50 Greatest Shows of All Time' list.
Janssen was married twice. His first marriage was to model and interior decorator Ellie Graham, whom he married in Las Vegas on August 25, 1958. They divorced in 1968. In 1975, he married actress and model Dani Crayne Greco. They remained married until Janssen's death.
A heavy drinker and a four-pack-a-day smoker, Janssen died of a heart attack in the early morning of February 13, 1980, at his home in Malibu, California at the age of 48. At the time of his death, Janssen was filming the television movie "Father Damien". Janssen was buried at the Hillside Memorial Park Cemetery in Culver City, California. A non-denominational funeral was held at the Jewish chapel of the cemetery on February 17. Suzanne Pleshette delivered the eulogy at the request of Janssen's widow. Milton Berle, Johnny Carson, Tommy Gallagher, Richard Harris, Stan Herman, Rod Stewart and Gregory Peck were among Janssen's pallbearers. Honorary pallbearers included Jack Lemmon, George Peppard, James Stewart and Danny Thomas.
For his contribution to the television industry, David Janssen has a star on the Hollywood Walk of Fame located on the 7700 block of Hollywood Boulevard. | https://en.wikipedia.org/wiki?curid=8343 |
Greek drachma
The drachma (pl. "drachmae" or "drachmas") was the currency used in Greece during several periods in its history: in antiquity, as the standard coin of the ancient Greek world, and again in the modern era, from its reintroduction in 1832 until its replacement by the euro in 2002.
It was also a small unit of weight.
The name "drachma" is derived from the verb (, "(I) grasp"). It is believed that the same word with the meaning of "handful" or "handle" is found in Linear B tablets of the Mycenean Pylos. Initially a drachma was a fistful (a "grasp") of six "oboloí" or "obeloí" (metal sticks, literally "spits") used as a form of currency as early as 1100 BC and being a form of "bullion": bronze, copper, or iron ingots denominated by weight. A hoard of over 150 rod-shaped obeloi was uncovered at Heraion of Argos in Peloponnese. Six of them are displayed at the Numismatic Museum of Athens.
It was the standard unit of silver coinage at most ancient Greek mints, and the name "obol" was used to describe a coin that was one-sixth of a drachma. The notion that "drachma" derived from the word for fistful was recorded by Herakleides of Pontos (387–312 BC) who was informed by the priests of Heraion that Pheidon, king of Argos, dedicated rod-shaped obeloi to Heraion. Similar information about Pheidon's obeloi was also recorded at the Parian Chronicle.
Ancient Greek coins normally had distinctive names in daily use. The Athenian tetradrachm was called the owl, the Aeginetic stater the chelone, the Corinthian stater the "hippos" (horse), and so on. Each city would mint its own coins and have them stamped with recognizable symbols of the city, known as badges in numismatics, along with suitable inscriptions, and they would often be referred to either by the name of the city or by the image depicted. The exact exchange value of each was determined by the quantity and quality of the metal, which reflected the reputation of each mint.
Among the Greek cities that used the drachma were: Abdera, Abydos, Alexandria, Aetna, Antioch, Athens, Chios, Cyzicus, Corinth, Ephesus, Eretria, Gela, Catana, Kos, Maronia, Naxos, Pella, Pergamum, Rhegion, Salamis, Smyrni, Sparta, Syracuse, Tarsus, Thasos, Tenedos, Troy and more.
The 5th century BC Athenian "tetradrachm" ("four drachmae") coin was perhaps the most widely used coin in the Greek world prior to the time of Alexander the Great (along with the Corinthian stater). It featured the helmeted profile bust of Athena on the obverse (front) and an owl on the reverse (back). In daily use they were called "glaukes" (owls), hence the proverb "an owl to Athens", referring to something that was in plentiful supply, like "coals to Newcastle". The reverse is featured on the national side of the modern Greek 1-euro coin.
Drachmae were minted on different weight standards at different Greek mints. The standard that came to be most commonly used was the Athenian or Attic one, which weighed a little over 4.3 grams.
After Alexander the Great's conquests, the name "drachma" was used in many of the Hellenistic kingdoms in the Middle East, including the Ptolemaic kingdom in Alexandria and the Parthian Empire based in what is modern-day Iran. The Arabic unit of currency known as "dirham", known from pre-Islamic times and afterwards, inherited its name from the drachma or didrachm (2 drachmae); the dirham is still the name of the official currencies of Morocco and the United Arab Emirates. The Armenian dram also derives its name from the drachma.
It is difficult to estimate comparative exchange rates with modern currency because the range of products produced by economies of centuries gone by were different from today, which makes purchasing power parity (PPP) calculations very difficult; however, some historians and economists have estimated that in the 5th century BC a drachma had a rough value of 25 U.S. dollars (in the year 1990 – equivalent to 46.50 USD in 2015), whereas classical historians regularly say that in the heyday of ancient Greece (the fifth and fourth centuries) the daily wage for a skilled worker or a hoplite was one drachma, and for a heliast (juror) half a drachma since 425 BC.
Modern commentators have inferred from Xenophon that half a drachma per day (360 days per year) would provide "a comfortable subsistence" for "the poor citizens" (for the head of a household in 355 BC). Earlier, in 422 BC, Aristophanes ("Wasps", lines 300–302) indicates that the daily half-drachma of a juror was just enough for the daily subsistence of a family of three.
A modern person might think of one drachma as the rough equivalent of a skilled worker's daily pay in the place where they live, which could be as low as US$1, or as high as $100, depending on the country.
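The rough figures quoted above can be combined into a simple back-of-the-envelope calculation. The Python sketch below multiplies the hedged estimates already given in this section (one drachma roughly 25 US dollars in 1990, restated as roughly 46.50 dollars in 2015; a one-drachma daily wage; a half-drachma juror's allowance; and the 360-day year used in the Xenophon-based estimate). The variable names are illustrative, and the output is an order-of-magnitude guide only, not a real exchange rate.

```python
# Back-of-the-envelope purchasing-power sketch using the estimates quoted above.
# All figures are rough scholarly estimates, not precise conversions.

DRACHMA_IN_2015_USD = 46.50    # the ~25 USD (1990) estimate restated in 2015 dollars
DAILY_WAGE_DRACHMAE = 1.0      # skilled worker / hoplite daily wage
JUROR_DAILY_DRACHMAE = 0.5     # heliast (juror) pay after 425 BC
DAYS_PER_YEAR = 360            # working year assumed in the Xenophon-based estimate

def annual_usd(daily_drachmae: float, usd_per_drachma: float = DRACHMA_IN_2015_USD) -> float:
    """Annual income implied by a daily allowance in drachmae, in modern dollars."""
    return daily_drachmae * DAYS_PER_YEAR * usd_per_drachma

print(f"Skilled worker, per year: ~${annual_usd(DAILY_WAGE_DRACHMAE):,.0f}")
print(f"Juror ('comfortable subsistence'), per year: ~${annual_usd(JUROR_DAILY_DRACHMAE):,.0f}")
```

Run as written, the sketch prints roughly $16,740 and $8,370 per year, which is only meant to illustrate why modern comparisons treat one drachma as something like a day's skilled pay.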
Fractions and multiples of the drachma were minted by many states, most notably in Ptolemaic Egypt, which minted large coins in gold, silver and bronze.
Notable Ptolemaic coins included the gold "pentadrachm" and "octadrachm", and silver "tetradrachm", "decadrachm" and "pentakaidecadrachm". This was especially noteworthy as it would not be until the introduction of the Guldengroschen in 1486 that coins of substantial size (particularly in silver) would be minted in significant quantities.
For the Roman successors of the drachma, see Roman provincial coins.
The weight of the silver drachma was approximately 4.3 grams or 0.15 ounces, although weights varied significantly from one city-state to another. It was divided into six obols of 0.72 grams, which were subdivided into four tetartemoria of 0.18 grams, one of the smallest coins ever struck, approximately 5–7 mm in diameter.
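The weight relationships in the preceding paragraph reduce to a few lines of arithmetic. The Python sketch below derives the nominal obol and tetartemorion weights from the roughly 4.3-gram Attic drachma, using only the six-obol and four-tetartemoria subdivisions stated above; actual coins varied significantly from mint to mint, so these are nominal figures and the constant names are illustrative.

```python
# Nominal Attic weight subdivisions, derived from the figures quoted above.
ATTIC_DRACHMA_GRAMS = 4.3      # approximate weight of the silver drachma
OBOLS_PER_DRACHMA = 6          # one drachma = six obols
TETARTEMORIA_PER_OBOL = 4      # one obol = four tetartemoria

obol_grams = ATTIC_DRACHMA_GRAMS / OBOLS_PER_DRACHMA
tetartemorion_grams = obol_grams / TETARTEMORIA_PER_OBOL

print(f"obol          ~ {obol_grams:.2f} g")            # ~0.72 g, as cited above
print(f"tetartemorion ~ {tetartemorion_grams:.2f} g")   # ~0.18 g, among the smallest coins struck
```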
Minae and talents were never actually minted: they represented weight measures used for commodities (e.g. grain) as well as metals like silver or gold. The New Testament mentions both didrachma and, by implication, tetradrachma in context of the Temple tax. Luke's Gospel includes a parable told by Jesus of a woman with 10 drachmae, who lost one and searched her home until she found it.
The drachma was reintroduced in May 1832, shortly before the establishment of the modern state of Greece. It replaced the "phoenix" at par. The drachma was subdivided into 100 lepta.
The first coinage consisted of copper denominations of 1, 2, 5 and 10 lepta, silver denominations of ¼, ½, 1 and 5 drachmae, and a gold coin of 20 drachmae. The drachma coin weighed 4.5 g and contained 90% silver, with the 20-drachma coin containing 5.8 g of gold.
In 1868, Greece joined the Latin Monetary Union and the drachma became equal in weight and value to the French franc. The new coinage issued consisted of copper coins of 1, 2, 5 and 10 lepta, with the 5- and 10-lepta coins bearing the names "obolos" and "diobolon", respectively; silver coins of 20 and 50 lepta and 1, 2 and 5 drachmae; and gold coins of 5, 10 and 20 drachmae. (Very small numbers of 50- and 100-drachma coins in gold were also issued.)
In 1894, cupro-nickel 5-, 10- and 20-lepta coins were introduced. No 1-lepton or 2-lepta coin had been issued since the late 1870s. Silver coins of 1 and 2 drachmae were last issued in 1911, and no coins were issued between 1912 and 1922, during which time the Latin Monetary Union collapsed due to World War I.
Between 1926 and 1930, a new coinage was introduced for the new Hellenic Republic, consisting of cupro-nickel coins in denominations of 20 lepta, 50 lepta, 1 drachma, and 2 drachmae; nickel coins of 5 drachmae; and silver coins of 10 and 20 drachmae. These were the last coins issued for the first modern drachma, and none were issued for the second.
Notes were issued by the National Bank of Greece from 1841 until 1928. The Bank of Greece issued notes from 1928 until 2001, when Greece joined the Euro. Early denominations ranged from 10 to 500 drachmae. Smaller denominations (1, 2, 3 and 5 drachmae) were issued from 1885, with the first 5-drachma notes being made by cutting 10-drachma notes in half.
When Greece finally achieved its independence from the Ottoman Empire in 1828, the phoenix was introduced as the monetary unit; its use was short-lived, however, and in 1832 the phoenix was replaced by the drachma, adorned with the image of King Otto of Greece, who reigned as modern Greece's first king from 1832 to 1862. The drachma was divided into 100 lepta. In 2002 the drachma ceased to be legal tender after the euro, the monetary unit of the European Union, became Greece's sole currency.
From 1917 to 1920, the Greek government took control of issuing small change notes under Law 991/1917. During that time, the government issued denominations of 10 & 50 lepta, and 1, 2 & 5 drachmae. The National Bank of Greece introduced 1000-drachma notes in 1901, and the Bank of Greece introduced 5,000-drachma notes in 1928. The economic depression of the 1920s affected many nations around the globe, including Greece. In 1922, the Greek government issued a forced loan in order to finance their growing budget deficit. On April 1, 1922, the government decreed that half of all bank notes had to be surrendered and exchanged for 6.5% bonds. The notes were then cut in half, with the portion bearing the Greek crown standing in for the bonds while the other half was exchanged for a new issue of central bank notes at half the original value. The Greek government again issued notes between 1940 and 1944, in denominations ranging from 50 lepta to 20 drachmae.
During the German–Italian occupation of Greece from 1941 to 1944, catastrophic hyperinflation and Nazi looting of the Greek treasury caused much higher denominations to be issued, culminating in 100,000,000,000-drachma notes in 1944. The Italian occupation authorities in the Ionian Islands printed their own currency (Ionian drachma).
On 11 November 1944, following the liberation of Greece from Nazi Germany, old drachmae were exchanged for new ones at the rate of 50,000,000,000 to 1. Only paper money was issued for the second drachma. The government issued notes of 1, 5, 10 and 20 drachmae, with the Bank of Greece issuing 50-, 100-, 500-, 1000-, 5000- and 10,000-drachma notes. This drachma also suffered from high inflation. The government later issued 100-, 500- and 1000-drachma notes, and the Bank of Greece issued 20,000- and 50,000-drachma notes.
On 9 April 1953, in an effort to halt inflation, Greece joined the Bretton Woods system. On 1 May 1954, the drachma was revalued at a rate of 1000 to 1, and small change notes were abolished for the last time. The third drachma assumed a fixed exchange rate of 30 drachmae per dollar until 20 October 1973: over the next 25 years, the official exchange rate gradually declined, reaching 400 drachmae per dollar. On 1 January 2002, the Greek drachma was officially replaced as the circulating currency by the euro, and it has not been legal tender since 1 March 2002.
The first issue of coins minted in 1954 consisted of holed aluminium 5-, 10- and 20-lepton pieces, with 50-lepton, 1-, 2-, 5- and 10-drachma pieces in cupro-nickel. A silver 20-drachma piece was issued in 1960, replacing the 20-drachma banknote, and also minted only in collector sets in 1965. Coins in denominations from 50 lepta to 20 drachmae carried a portrait of King Paul (1947–1964). New coins were introduced in 1966, ranging from 50 lepta to 10 drachmae, depicting King Constantine II (1964–1974). A silver 30 drachma coin for the centennial of Greece's royal dynasty was minted in 1963. The following year a non-circulating coin of this value was produced to commemorate the royal wedding. The reverse of all coins was altered in 1971 to reflect the military junta which was in power from 1967 to 1974. This design included a soldier standing in front of the flames of the rising phoenix.
A 20-drachma coin in cupro-nickel with an image of Europa on the obverse was issued in 1973. In late 1973, several new coin types were introduced: unholed aluminium (10 and 20 lepta), nickel-brass (50 lepta, 1 drachma, and 2 drachmae) and cupro-nickel (5, 10, and 20 drachmae). These provisional coins carried the design of the phoenix rising from the flames on the obverse and used the country's new designation as the "Hellenic Republic", replacing the coins issued earlier in 1973 under the Kingdom of Greece, which had carried King Constantine II's portrait. A new series of all 8 denominations was introduced in 1976, carrying images of early national heroes on the smaller values.
Cupro-nickel 50-drachma coins were introduced in 1980. In 1986, nickel-brass 50-drachma coins were introduced, followed by copper 1- and 2-drachma pieces in 1988 and nickel-brass coins of 20 and 100 drachmae in 1990. In 2000, a set of 6 themed 500-drachma coins was issued to commemorate the 2004 Athens Olympic Games.
Coins in circulation at the time of the adoption of the euro were
The first issues of banknotes were in denominations of 10, 20 and 50 drachmae, soon followed by 100, 500 and 1000 drachmae by 1956. 5000-drachma notes were introduced in 1984, followed by 10,000-drachma notes in 1995 and 200-drachma notes in 1997.
Banknotes in circulation at the time of the adoption of the euro were
In Unicode, the currency symbol is ₯ (U+20AF, DRACHMA SIGN). There is a special Attic numeral, 𐅂 (U+10142), for the value of one drachma, but it fails to render in most browsers.
The Drachmi Greek Democratic Movement Five Stars, which was founded in 2013, aims to restore the drachma as Greece's currency. | https://en.wikipedia.org/wiki?curid=8347 |
Denarius
The denarius (plural "dēnāriī") was the standard Roman silver coin from its introduction in the Second Punic War to the reign of Gordian III (AD 238–244), when it was gradually replaced by the Antoninianus. It continued to be minted in very small quantities, likely for ceremonial purposes, until and through the tetrarchy (293–313).
The word "dēnārius" is derived from the Latin "dēnī" "containing ten", as its value was originally 10 assēs. The word for "money" descends from it in Italian ("denaro"), Slovene ("denar"), Portuguese ("dinheiro"), and Spanish ("dinero"). Its name also survives in the dinar currency.
Its symbol is represented in Unicode as 𐆖 (U+10196); however, it can also be represented as X̶ (capital letter X with combining long stroke overlay).
A predecessor of the "denarius" was first struck in 269 or 268 BC, five years before the First Punic War, with an average weight of 6.81 grams, or 1/48 of a Roman pound. Contact with the Greeks had prompted a need for silver coinage in addition to the bronze currency that the Romans were using at that time. This predecessor of the "denarius" was a Greek-styled silver coin of "didrachm" weight, which was struck in Neapolis and other Greek cities in southern Italy. These coins were inscribed with a legend that indicated that they were struck for Rome, but in style they closely resembled their Greek counterparts. They were rarely seen at Rome, to judge from finds and hoards, and were probably used either to buy supplies or pay soldiers.
The first distinctively Roman silver coin appeared around 226 BC. Classical historians have sometimes called these coins "heavy denarii", but they are classified by modern numismatists as "quadrigati", a term which survives in one or two ancient texts and is derived from the quadriga, or four-horse chariot, on the reverse. This, with a two-horse chariot or "biga" which was used as a reverse type for some early denarii, was the prototype for the most common designs used on Roman silver coins for a number of years.
Rome overhauled its coinage shortly before 211 BC, and introduced the denarius alongside a short-lived denomination called the victoriatus. The denarius contained an average of 4.5 grams, or 1/72 of a Roman pound, of silver, and was at first tariffed at ten asses, hence its name, which means 'tenner'. It formed the backbone of Roman currency throughout the Roman republic and the early empire.
The denarius began to undergo slow debasement toward the end of the republican period. Under the rule of Augustus (27 BC to AD 14) its weight fell to 3.9 grams (a theoretical weight of 1/84 of a Roman pound). It remained at nearly this weight until the time of Nero (AD 37–68), when it was reduced to 1/96 of a pound, or 3.4 grams. Debasement of the coin's silver content continued after Nero, and later Roman emperors reduced its weight further, to about 3 grams by the late 3rd century.
The value at its introduction was 10 asses, giving the denarius its name, which translates as "containing ten". In about 141 BC, it was re-tariffed at 16 asses, to reflect the decrease in weight of the as. The denarius continued to be the main coin of the Roman Empire until it was replaced by the so-called antoninianus in the early 3rd century AD. The coin was last issued, in bronze, under Aurelian between AD 270 and 275, and in the first years of the reign of Diocletian. ('Denarius', in "A Dictionary of Ancient Roman Coins", by John R. Melville-Jones (1990)).
It is difficult to give even rough comparative values for money from before the 20th century, as the range of products and services available for purchase was so different. Classical historians often say that in the late Roman Republic and early Roman Empire () the daily wage for an unskilled laborer and common soldier was 1 denarius (with no tax deductions) or about US$20 in bread. During the republic (509 BC–27 BC), legionary pay was 112.5 denarii per year (0.3 per day), later doubled by Julius Caesar to 225 denarii (0.6 per day), with soldiers having to pay for their own food and arms. Centurions received considerably higher pay: under Augustus, the lowest rank of centurion was paid 3,750 denarii per year, and the highest rank, 15,000 denarii.
The silver content of the denarius under the Roman Empire (after Nero) was about 50 grains, or 3.24 grams (0.105 troy ounces). On June 6, 2011, this was about US$3.62 in value if the silver were 0.999 pure.
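A minimal sketch of that melt-value calculation; the silver price below is a hypothetical assumption chosen only to reproduce the quoted US$3.62, not a sourced quotation for that date:

```python
# Silver content of the post-Nero denarius, from the figures above.
GRAMS_PER_TROY_OUNCE = 31.1035

silver_grams = 3.24                                  # ~50 grains
silver_ozt = silver_grams / GRAMS_PER_TROY_OUNCE     # ~0.104 ozt

# Hypothetical spot price (illustrative assumption, not a quote for June 2011).
assumed_price_usd_per_ozt = 34.75

melt_value = silver_ozt * assumed_price_usd_per_ozt
print(f"{silver_ozt:.3f} ozt of 0.999 silver ~ ${melt_value:.2f}")
```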
The fineness of the silver content varied with political and economic circumstances. From a purity of greater than 90% silver in the 1st century AD, the denarius fell to under 60% purity by AD 200, and plummeted to 5% purity by AD 300. By the reign of Gallienus, the "antoninianus" was a copper coin with a thin silver wash.
By comparison, a laborer earning the minimum wage in the United States in January 2014 made US$58 for an 8-hour day, before taxes (based on the mode value of $7.25 per hour, which was true then in 20 states) and an employee earning the minimum wage in the United Kingdom in 2014 made £52 for an 8-hour day, before taxes.
In the final years of the 1st century BC Tincomarus, a local ruler in southern Britain, started issuing coins that appear to have been made from melted down "denarii". The coins of Eppillus, issued around Calleva Atrebatum around the same time, appear to have derived design elements from various "denarii" such as those of Augustus and M. Volteius.
Even after the "denarius" was no longer regularly issued, it continued to be used as a unit of account, and the name was applied to later Roman coins in a way that is not understood. The Arabs who conquered large parts of the land that once belonged to the Eastern Roman Empire issued their own gold dinar. The lasting legacy of the "denarius" can be seen in the use of "d" as the abbreviation for the British penny until 1971. It also survived in France as the name of a coin, the denier. The denarius also survives in the common Arabic name for a currency unit, the "dinar" used from pre-Islamic times, and still used in several modern Arab nations. The major currency unit in the former Principality of Serbia, Kingdom of Serbia and former Yugoslavia was the "dinar", and it is still used in present-day Serbia. The Macedonian currency "denar" is also derived from the Roman denarius. The Italian word "denaro", the Spanish word "dinero", the Portuguese word "dinheiro", and the Slovene word "denar", all meaning money, are also derived from Latin "denarius".
1 gold aureus = 2 gold quinarii = 25 silver denarii = 50 silver quinarii = 100 bronze sestertii = 200 bronze dupondii = 400 copper asses = 800 copper semisses = 1,600 copper quadrantes
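Read as exchange rates against the as, the chain above can be turned into a small conversion table; the sketch below assumes nothing beyond the ratios stated in that line (denomination names are simplified to the singular):

```python
# Value of each denomination expressed in copper asses, per the chain above.
IN_ASSES = {
    "aureus": 400, "gold quinarius": 200, "denarius": 16, "silver quinarius": 8,
    "sestertius": 4, "dupondius": 2, "as": 1, "semis": 0.5, "quadrans": 0.25,
}

def convert(amount: float, src: str, dst: str) -> float:
    """Convert an amount of one denomination into another via its value in asses."""
    return amount * IN_ASSES[src] / IN_ASSES[dst]

print(convert(1, "aureus", "denarius"))     # 25.0
print(convert(1, "denarius", "sestertius")) # 4.0
print(convert(3, "dupondius", "as"))        # 6.0
```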
At the height of the Roman Empire a sextarius (546 ml) of ordinary wine cost roughly one dupondius (⅛ of a denarius); after Diocletian's Edict on Maximum Prices was issued in AD 301, the same quantity cost 8 debased common denarii, roughly an increase from £1 to £3.20 in modern equivalent terms.
In the New Testament, the gospels refer to the denarius as a day's wage for a common laborer (Matthew 20:2, John 12:5). In the Book of Revelation, during the Third Seal: Black Horse, a choinix (or quart) of wheat and three quarts of barley were each valued at one denarius. Bible scholar Robert H. Mounce says the price of the wheat and barley as described in the vision appears to be ten to twelve times their normal cost in ancient times. Revelation describes a condition where basic goods are sold at greatly inflated prices. Thus, the black horse rider depicts times of deep scarcity or famine but not of starvation. The English word "quart" translates "choinix". Apparently, a choinix of wheat was the daily ration of one adult. Thus, in the conditions pictured by Revelation 6 the normal income for a working-class family would buy enough food for only one person. The less costly barley would feed three people for one day's wages.
The denarius is also mentioned in the Parable of the Good Samaritan (Luke 10:25–37). The Render unto Caesar passage in Matthew 22:15–22 and Mark 12:13–17 uses the word δηνάριον to describe the coin held up by Jesus, translated in the King James Bible as "tribute penny". It is commonly thought to be a denarius with the head of Tiberius. | https://en.wikipedia.org/wiki?curid=8349 |
David Mamet
David Alan Mamet (born November 30, 1947) is an American playwright, film director, screenwriter and author. He won a Pulitzer Prize and received Tony nominations for his plays "Glengarry Glen Ross" (1984) and "Speed-the-Plow" (1988). He first gained critical acclaim for a trio of off-Broadway 1970s plays: "The Duck Variations," "Sexual Perversity in Chicago," and "American Buffalo." His play "Race" opened on Broadway in 2009, and "The Penitent" previewed off-Broadway in 2017.
Feature films that Mamet both wrote and directed include "House of Games" (1987), "Homicide" (1991), "The Spanish Prisoner" (1997) and his biggest commercial success "Heist" (2001). His screenwriting credits include "The Postman Always Rings Twice" (1981), "The Verdict" (1982), "The Untouchables" (1987), "Hoffa" (1992), "Wag the Dog" (1997), and "Hannibal" (2001). Mamet himself wrote the screenplay for the 1992 adaptation of "Glengarry Glen Ross", and wrote and directed the 1994 adaptation of his play "Oleanna" (1992). He was the executive producer and frequent writer for the TV show "The Unit" (2006–2009).
Mamet's books include: "On Directing Film" (1991), a commentary and dialogue about film-making; "The Old Religion" (1997), a novel about the lynching of Leo Frank; "Five Cities of Refuge: Weekly Reflections on Genesis, Exodus, Leviticus, Numbers and Deuteronomy" (2004), a Torah commentary with Rabbi Lawrence Kushner; "The Wicked Son" (2006), a study of Jewish self-hatred and antisemitism; "Bambi vs. Godzilla", a commentary on the movie business; "The Secret Knowledge: On the Dismantling of American Culture" (2011), a commentary on cultural and political issues; and "Three War Stories" (2013), a trio of novellas about the physical and psychological effects of war.
Mamet was born in 1947 in Chicago to Lenore June (née Silver), a teacher, and Bernard Morris Mamet, a labor attorney. His family was Jewish. His paternal grandparents were from Poland. One of Mamet's earliest jobs was as a busboy at Chicago's London House and The Second City. He also worked as an actor, editor for "Oui" magazine and as a cab-driver. He was educated at the progressive Francis W. Parker School and at Goddard College in Plainfield, Vermont. At the Chicago Public Library Foundation 20th anniversary fundraiser in 2006, though, Mamet announced "My alma mater is the Chicago Public Library. I got what little educational foundation I got in the third-floor reading room, under the tutelage of a Coca-Cola sign".
After a move to Chicago's North Side, Mamet encountered theater director Robert Sickinger, and began to work occasionally at Sickinger's Hull House Theatre. This represented the beginning of Mamet's lifelong involvement with the theater.
Mamet is a founding member of the Atlantic Theater Company; he first gained acclaim for a trio of off-Broadway plays in 1976, "The Duck Variations," "Sexual Perversity in Chicago," and "American Buffalo." He was awarded the Pulitzer Prize in 1984 for "Glengarry Glen Ross," which received its first Broadway revival in the summer of 2005. His play "Race", which opened on Broadway on December 6, 2009 and featured James Spader, David Alan Grier, Kerry Washington, and Richard Thomas in the cast, received mixed reviews. His play "The Anarchist", starring Patti LuPone and Debra Winger, in her Broadway debut, opened on Broadway on November 13, 2012 in previews and was scheduled to close on December 16, 2012. His 2017 play "The Penitent" previewed off-Broadway on February 8, 2017.
In 2002, Mamet was inducted into the American Theater Hall of Fame. Mamet later received the PEN/Laura Pels Theater Award for Grand Master of American Theater in 2010.
In 2017, Mamet released an online class for writers entitled "David Mamet teaches dramatic writing".
It was announced in 2019 that David Mamet would return to the London West End with his new play "Bitter Wheat", starring John Malkovich.
Mamet's first film work was as a screenwriter, later directing his own scripts.
Mamet's first produced screenplay was the 1981 production of "The Postman Always Rings Twice", based on James M. Cain's novel. He received an Academy Award nomination one year later for "The Verdict", written in the late 1970s. He also wrote the screenplays for "The Untouchables" (1987), "Hoffa" (1992), "The Edge" (1997), "Wag the Dog" (1997), "Ronin" (1998), and "Hannibal" (2001). He received a second Academy Award nomination for "Wag the Dog".
In 1987, Mamet made his film directing debut with his screenplay "House of Games", which won Best Film and Best Screenplay awards at the 1987 Venice Film Festival and the Film of the Year in 1989 from the London Film Critics' Circle Awards. The film starred his then-wife, Lindsay Crouse, and many longtime stage associates and friends, including fellow Goddard College graduates. Mamet was quoted as saying, "It was my first film as a director and I needed support, so I stacked the deck." After "House of Games", Mamet later wrote and directed two more films focusing on the world of con artists, "The Spanish Prisoner" (1997) and "Heist" (2001). Among those films, "Heist" enjoyed the biggest commercial success.
Other films that Mamet both wrote and directed include: "Things Change" (1988), "Homicide" (1991) (nominated for the Palme d'Or at 1991 Cannes Film Festival and won a "Screenwriter of the Year" award for Mamet from the London Film Critics' Circle Awards), "Oleanna" (1994), "The Winslow Boy" (1999), "State and Main" (2000), "Spartan" (2004), "Redbelt" (2008), and the 2013 bio-pic TV movie "Phil Spector".
A feature-length film, a thriller titled "Blackbird", was intended for release in 2015, but is still in development.
When Mamet adapted his play for the 1992 film "Glengarry Glen Ross", he wrote an additional part (including the monologue "Coffee's for closers") for Alec Baldwin.
Mamet continues to work with an informal repertory company for his films, including Crouse, William H. Macy, Joe Mantegna, and Rebecca Pidgeon, as well as the aforementioned school friends.
Mamet did a rewrite of the script for "Ronin" under the pseudonym "Richard Weisz" and turned in an early version of a script for "Malcolm X", which was rejected by director Spike Lee. In 2000, Mamet directed a film version of "Catastrophe," a one-act play by Samuel Beckett, featuring Harold Pinter and John Gielgud (in his final screen performance). In 2008, he directed and wrote the mixed martial arts movie "Redbelt," about a martial arts instructor tricked into fighting in a professional bout.
In "On Directing Film," Mamet asserts that directors should focus on getting the point of a scene across, rather than simply following a protagonist, or adding visually beautiful or intriguing shots. Films should create order from disorder in search of the objective.
In 1990 Mamet published "The Hero Pony", a 55-page collection of poetry. He has also published a series of short plays, monologues and four novels, "The Village" (1994), "The Old Religion" (1997), "Wilson: A Consideration of the Sources" (2000), and "Chicago" (2018). He has written several non-fiction texts and children's stories, including "True and False: Heresy and Common Sense for the Actor" (1997). In 2004 he published a lauded version of the classical Faust story, "Faustus"; however, when the play was staged in San Francisco during the spring of 2004, it was not well received by critics. On May 1, 2010, Mamet released a graphic novel, "The Trials of Roderick Spode (The Human Ant)".
On June 2, 2011, "The Secret Knowledge: On the Dismantling of American Culture", Mamet's book detailing his conversion from modern liberalism to "a reformed liberal" was released.
Mamet published "Three War Stories", a collection of novellas, on November 11, 2013. In an interview with Newsmax TV, Mamet said he wanted to write about war, despite never having served. Moreover, the book allowed Mamet to free characters that had occupied his mind for years. On the subject of characters as a reason for writing, Mamet told the host, "You want to get these guys out of your head. You just want them to stop talking to you."
On December 3, 2019, Mamet is set to publish a novel, "The Diary of a Porn Star by Priscilla Wriston-Ranger: As Told to David Mamet With an Afterword by Mr. Mamet."
Mamet wrote one episode of "Hill Street Blues", "A Wasted Weekend", that aired in 1987. His then-wife, Lindsay Crouse, appeared in numerous episodes (including that one) as Officer McBride. Mamet is also the creator, producer and frequent writer of the television series "The Unit", where he wrote a well-circulated memo to the writing staff. He directed a third-season episode of "The Shield" with Shawn Ryan. In 2007, Mamet directed two television commercials for Ford Motor Company. The two 30-second ads featured the Ford Edge and were filmed in Mamet's signature style of fast-paced dialogue and clear, simple imagery. Mamet's sister, Lynn, is a producer and writer for television shows, such as "The Unit" and "Law & Order".
Mamet has contributed several dramas to BBC Radio through Jarvis & Ayres Productions, including an adaptation of "Glengarry Glen Ross" for BBC Radio 3 and new dramas for BBC Radio 4. The comedy "Keep Your Pantheon (or On the Whole I'd Rather Be in Mesopotamia)" was aired in 2007.
Since May 2005 he has been a contributing blogger at "The Huffington Post", drawing satirical cartoons with themes including political strife in Israel. In a 2008 essay in "The Village Voice" titled "Why I Am No Longer a 'Brain-Dead Liberal'" he revealed that he had gradually rejected so-called political correctness and progressivism and embraced conservatism. Mamet has spoken in interviews of changes in his views, highlighting his agreement with free market theorists such as Friedrich Hayek, the historian Paul Johnson, and the economist Thomas Sowell, whom Mamet called "one of our greatest minds".
During promotion of a book, Mamet said British people had "a taint of anti-semitism," claiming they "want to give [Israel] away to some people whose claim is rather dubious." In the same interview, Mamet went on to say that "there are famous dramatists and novelists [in the UK] whose works are full of anti-Semitic filth." He refused to give examples because of British libel laws (the interview was conducted in New York City for the "Financial Times"). He is known for his pro-Israel positions; in his book "The Secret Knowledge" he claimed that "Israelis would like to live in peace within their borders; the Arabs would like to kill them all."
Mamet wrote an article for the November 2012 issue of "The Jewish Journal of Greater Los Angeles" imploring fellow Jewish Americans to vote for Republican nominee Mitt Romney.
In an essay for "Newsweek", published on January 29, 2013, Mamet argued against gun control laws: "It was intended to guard us against this inevitable decay of government that the Constitution was written. Its purpose was and is not to enthrone a Government superior to an imperfect and confused electorate, but to protect us from such a government."
Mamet has described the NFL anthem protests as "absolutely fucking despicable".
Mamet's style of writing dialogue, marked by a cynical, street-smart edge, precisely crafted for effect, is so distinctive that it has come to be called "Mamet speak." Mamet himself has criticized his (and other writers') tendency to write "pretty" at the expense of sound, logical plots. When asked how he developed his style for writing dialogue, Mamet said, "In my family, in the days prior to television, we liked to while away the evenings by making ourselves miserable, based solely on our ability to speak the language viciously. That's probably where my ability was honed."
One instance of Mamet's dialogue style can be found in "Glengarry Glen Ross", in which two down-on-their-luck real estate salesmen are considering stealing from their employer's office. George Aaronow and Dave Moss equivocate on the meaning of "talk" and "speak", turning language and meaning to deceptive purposes:
Mamet dedicated "Glengarry Glen Ross" to Harold Pinter, who was instrumental in its being first staged at the Royal National Theatre, (London) in 1983, and whom Mamet has acknowledged as an influence on its success, and on his other work.
Mamet's plays have frequently sparked debate and controversy. During a staging of "Oleanna" in 1992, in which a college student falsely accuses her professor of trying to rape her, a critic reported that the play divided the audience by gender and recounted "couples emerged screaming at each other".
Arthur Holmberg in his 2014 book "David Mamet and Male Friendship", examines Mamet's portrayal of male friendships, especially focusing on the contradictions and ambiguities of male bonding as dramatized in Mamet's plays and films.
Mamet and actress Lindsay Crouse married in 1977 and divorced in 1990. The couple have two children, Willa and Zosia. Willa was a professional photographer and is now a singer/songwriter; Zosia is an actress. Mamet has been married to actress and singer-songwriter Rebecca Pidgeon since 1991. They live together in Santa Monica, California. They have two children, Clara and Noah.
Mamet is a Reform Jew and strongly pro-Israel. In a 2020 interview, he described Donald Trump as a "great president" and supported his re-election.
The papers of David Mamet were sold to the Harry Ransom Center at the University of Texas at Austin in 2007 and first opened for research in 2009. The growing collection consists mainly of manuscripts and related production materials for most of his plays, films, and other writings, but also includes his personal journals from 1966 to 2005. In 2015, the Ransom Center secured a second major addition to Mamet's papers that include more recent works. Additional materials relating to Mamet and his career can be found in the Ransom Center's collections of Robert De Niro, Mel Gussow, Tom Stoppard, Sam Shepard, Paul Schrader, Don DeLillo, and John Russell Brown.
Mamet is credited as writer of these works except where noted. Credits in addition to writer also noted. | https://en.wikipedia.org/wiki?curid=8351 |
Definable real number
Informally, a definable real number is a real number that can be uniquely specified by its description. The description may be expressed as a construction or as a formula of a formal language. For example, the positive square root of 2, √2, can be defined as the unique positive solution to the equation x² = 2, and it can be constructed with a compass and straightedge.
Different choices of a formal language or its interpretation can give rise to different notions of definability. Specific varieties of definable numbers include the constructible numbers of geometry, the algebraic numbers, and the computable numbers.
One way of specifying a real number uses geometric techniques. A real number "r" is a constructible number if there is a method to construct a line segment of length "r" using a compass and straightedge, beginning with a fixed line segment of length 1.
Each positive integer, and each positive rational number, is constructible. The positive square root of 2 is constructible. However, the cube root of 2 is not constructible; this is related to the impossibility of doubling the cube.
A real number "r" is called a real algebraic number if there is a polynomial "p"("x"), with only integer coefficients, so that "r" is a root of "p", that is, "p"("r")=0.
Each real algebraic number can be defined individually using the order relation on the reals. For example, if a polynomial "q"("x") has 5 roots, the third one can be defined as the unique "r" such that "q"("r") = 0 and such that there are two distinct numbers less than "r" for which "q" is zero.
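As an illustration (one possible formalization, not a quotation from the source), the "third root" description can be written out using only the polynomial and the order relation:

```latex
% r is the third-smallest real root of q: it is a root, and exactly two
% distinct roots of q lie strictly below it.
q(r) = 0 \;\wedge\; \exists a\, \exists b\, \bigl( a < b < r
  \;\wedge\; q(a) = 0 \;\wedge\; q(b) = 0
  \;\wedge\; \forall x\, \bigl( q(x) = 0 \wedge x < r \rightarrow (x = a \vee x = b) \bigr) \bigr)
```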
All rational numbers are algebraic, and all constructible numbers are algebraic. There are numbers such as the cube root of 2 which are algebraic but not constructible.
The real algebraic numbers form a subfield of the real numbers. This means that 0 and 1 are algebraic numbers and, moreover, if "a" and "b" are algebraic numbers, then so are "a"+"b", "a"−"b", "ab" and, if "b" is nonzero, "a"/"b".
The real algebraic numbers also have the property, which goes beyond being a subfield of the reals, that for each positive integer "n" and each real algebraic number "a", all of the "n"th roots of "a" that are real numbers are also algebraic.
There are only countably many algebraic numbers, but there are uncountably many real numbers, so in the sense of cardinality most real numbers are not algebraic. This nonconstructive proof that not all real numbers are algebraic was first published by Georg Cantor in his 1874 paper "On a Property of the Collection of All Real Algebraic Numbers".
Non-algebraic numbers are called transcendental numbers. Specific examples of transcendental numbers include π and Euler's number "e".
A real number is a computable number if there is an algorithm that, given a natural number "n", produces a decimal expansion for the number accurate to "n" decimal places. This notion was introduced by Alan Turing in 1936.
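A minimal sketch of this notion, assuming √2 as the example (the choice of number and the helper name are illustrative; only integer arithmetic is used, so every printed digit is exact):

```python
from math import isqrt

def sqrt2_digits(n: int) -> str:
    """Return the decimal expansion of sqrt(2) accurate to n decimal places.

    We compute floor(sqrt(2 * 10**(2n))) with exact integer arithmetic,
    which gives the first n digits after the decimal point without rounding error.
    """
    scaled = isqrt(2 * 10 ** (2 * n))   # floor(sqrt(2) * 10**n)
    s = str(scaled)
    return s[0] + "." + s[1:].rjust(n, "0")

print(sqrt2_digits(30))  # 1.414213562373095048801688724209
```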
The computable numbers include the algebraic numbers along with many transcendental numbers including π and "e". Like the algebraic numbers, the computable numbers also form a subfield of the real numbers, and the positive computable numbers are closed under taking "n"th roots for each positive "n".
Not all real numbers are computable. The entire set of computable numbers is countable, so most reals are not computable. Specific examples of noncomputable real numbers include the limits of Specker sequences, and algorithmically random real numbers such as Chaitin's Ω numbers.
Another notion of definability comes from the formal theories of arithmetic, such as Peano arithmetic. The language of arithmetic has symbols for 0, 1, the successor operation, addition, and multiplication, intended to be interpreted in the usual way over the natural numbers. Because no variables of this language range over the real numbers, a different sort of definability is needed to refer to real numbers. A real number "a" is "definable in the language of arithmetic" (or "arithmetical") if its Dedekind cut can be defined as a predicate in that language; that is, if there is a first-order formula "φ" in the language of arithmetic, with three free variables, such that
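The defining condition itself is omitted in the text above. One common way to state it (an assumed formulation given for illustration, not a quotation) is that "φ" picks out exactly the rationals below "a", with each rational coded by a sign bit s and naturals p, q:

```latex
% For all natural numbers s, p, q with q > 0:
\varphi(s, p, q) \iff (-1)^{s} \, \frac{p}{q} < a
```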
A real number "a" is first-order definable in the language of set theory, without parameters, if there is a formula "φ" in the language of set theory, with one free variable, such that "a" is the unique real number such that "φ"("a") holds. This notion cannot be expressed as a formula in the language of set theory.
All analytical numbers, and in particular all computable numbers, are definable in the language of set theory. Thus the real numbers definable in the language of set theory include all familiar real numbers such as 0, 1, π, "e", et cetera, along with all algebraic numbers. Assuming that they form a set in the model, the real numbers definable in the language of set theory over a particular model of ZFC form a field.
Each set model "M" of ZFC set theory that contains uncountably many real numbers must contain real numbers that are not definable within "M" (without parameters). This follows from the fact that there are only countably many formulas, and so only countably many elements of "M" can be definable over "M". Thus, if "M" has uncountably many real numbers, we can prove from "outside" "M" that not every real number of "M" is definable over "M".
This argument becomes more problematic if it is applied to class models of ZFC, such as the von Neumann universe V. The argument that applies to set models cannot be directly generalized to class models in ZFC because the property "the real number "x" is definable over the class model "N"" cannot be expressed as a formula of ZFC. Similarly, the question whether the von Neumann universe contains real numbers that it cannot define cannot be expressed as a sentence in the language of ZFC. Moreover, there are countable models of ZFC in which all real numbers, all sets of real numbers, functions on the reals, etc. are definable. | https://en.wikipedia.org/wiki?curid=8361 |
Diego de Almagro
Diego de Almagro (1475 – July 8, 1538), also known as El Adelantado and El Viejo, was a Spanish conquistador known for his exploits in western South America. He participated with Francisco Pizarro in the Spanish conquest of Peru. While subduing the Inca Empire he laid the foundation for Quito and Trujillo as Spanish cities in present-day Ecuador and Peru respectively. From Peru Almagro led the first Spanish military expedition to central Chile. Back in Peru, a longstanding conflict with Pizarro over the control of the former Inca capital of Cuzco erupted into a civil war between the two bands of conquistadores. In the Battle of Las Salinas in 1538 Almagro was defeated by the Pizarro brothers and months later he was executed.
The origins of Diego de Almagro remain obscure. He was born in 1475 in the village of Almagro, in Ciudad Real, from which he took his surname; he was the illegitimate son of Juan de Montenegro and Elvira Gutiérrez. To protect his mother's honor, her relatives took the infant to the nearby town of Bolaños de Calatrava, and he was raised there and in Aldea del Rey under the care of Sancha López del Peral.
When he turned four he returned to Almagro and lived under the tutelage of an uncle named Hernán Gutiérrez until he was fifteen, when his uncle's harshness drove him to run away from home. He went to the home of his mother, who was by then living with her new husband, told her what had happened and that he was going to travel the world, and asked her for some bread despite the misery in which she lived. His mother, anguished, gave him a piece of bread and some coins and said: ""Take, son, and do not give me more pressure, and go, and God help in your adventure.""
He went to Seville, where, after probably stealing to survive, the boy became a "criado", or servant, of Don Luis de Polanco, one of the four mayors of the Catholic Monarchs and later their counselor, who was mayor of that city. While performing his duties as a servant, Almagro stabbed another servant over certain differences, leaving him with injuries so serious that a trial was brought against him.
As Almagro was now wanted by the law, Don Luis de Polanco, making use of his influence, got Don Pedro Arias de Avila to allow him to embark on one of the ships that would sail to the New World from the port of Sanlucar de Barrameda. The Casa de Contratacion required that men crossing to the Indies carry their own weapons, clothes and farming tools, which Don Polanco provided to his servant.
Diego de Almagro arrived in the New World on June 30, 1514, under the expedition that Ferdinand II of Aragon had sent under the guidance of Pedrarias Dávila. The expedition had landed in the city of Santa María la Antigua del Darién, Panama, where many other future conquistadors had already arrived, among them Francisco Pizarro.
There are not many details of Almagro's activities during this period, but it is known that he accompanied various sailors who departed from the city of Darien between 1514 and 1515. De Almagro eventually returned and settled in Darien, where he was granted an encomienda. He built a house and made a living from agriculture.
De Almagro undertook his first conquest on November 1515, commanding 260 men as he founded Villa del Acla, named after the Indian place. Due to illness he had to leave behind this mission to the licenciate Gaspar de Espinosa.
Espinosa decided to undertake a new expedition, which departed in December 1515 with 200 men, including De Almagro and Francisco Pizarro, who for the first time was designated as a captain. During this expedition, which lasted 14 months, De Almagro, Pizarro and Hernando de Luque became close friends.
Also during this time De Almagro established a friendship with Vasco Núñez de Balboa, who was in charge of Acla. De Almagro wanted to have a ship built with the remaining materials of the Espinosa expedition, to be finished on the coast of the "Great South Sea", as the Pacific Ocean was first called by the Spanish. Current historians do not believe that De Almagro was expected to participate in Balboa's expedition and probably returned to Darien.
De Almagro took part in the various expeditions that took place in the Gulf of Panama, taking part again in Espinosa's parties. Espinosa was supported by using Balboa's ships. De Almagro was recorded as a witness on the lists of natives whom Espinosa ordered to be carried. De Almagro remained as an early settler in the newly founded city of Panama. For four years he stayed there, working at the management of his properties and those of Pizarro. He took Ana Martínez, an indigenous woman, as a common-law wife. In this period, his first son, el "Mozo", was born to them.
By 1524 an association of conquest regarding South America was formalized among Almagro, Pizarro and Luque. By the beginning of August 1524, they had received the requisite permission to discover and conquer lands further south. De Almagro would remain in Panama to recruit men and gather supplies for the expeditions led by Pizarro.
After several expeditions to South America, Pizarro secured his stay in Peru with the "Capitulation" on 6 July 1529. During Pizarro's continued exploration of Incan territory, he and his men succeeded in defeating the Inca army under Emperor Atahualpa during the Battle of Cajamarca in 1532. De Almagro joined Pizarro soon afterward, bringing more men and arms.
After Peru fell to the Spanish, both Pizarro and De Almagro initially worked together in the founding of new cities to consolidate their dominions. As such, Pizarro dispatched De Almagro to pursue Quizquiz, fleeing to the Inca Empire's northern city of Quito. Their fellow conquistador Sebastián de Belalcázar, who had gone forth without Pizarro's approval, had already reached Quito and witnessed the destruction of the city by Inca general Rumiñawi. The Inca warrior had ordered the city to be burned and its gold to be buried at an undisclosed location where the Spanish could never find it. The arrival of Pedro de Alvarado from Guatemala, in search of Inca gold further complicated the situation for Almagro and Belalcázar. Alvarado's presence, however, did not last long as he left South America in exchange for monetary compensation from Pizarro.
In an attempt to claim Quito ahead of Belalcázar, in August 1534 De Almagro founded a city on the shores of Laguna de Colta (Colta Lake) in the foothills of Chimborazo, some distance south of present-day Quito, and named it "Santiago de Quito." Four months later came the foundation of the Peruvian city of Trujillo, which Almagro named "Villa Trujillo de Nueva Castilla" (the Village of Trujillo in New Castille) in honor of Francisco Pizarro's birthplace, Trujillo in Extremadura, Spain. Historians describe these events as the height of the Pizarro-Almagro friendship and among the last moments of their cooperation; their friendship soon faded and gave way to a period of turmoil over control of the Incan capital of Cuzco.
After splitting the treasure of Inca emperor Atahualpa, both Pizarro and Almagro left towards Cuzco and took the city in 1533. However, De Almagro's friendship with Pizarro showed signs of deterioration in 1526 when Pizarro, in the name of the rest of the conquistadors, called forth the "Capitulacion de Toledo" law in which King Charles I of Spain had laid out his authorization for the conquest of Peru and the awards every conquistador would receive from it. Long before, however, each conquistador had promised to equally split the benefits. Pizarro managed to have a larger stake and awards for himself. Despite this, De Almagro still obtained an important fortune for his services, and the King awarded him in November 1532 the noble title of "Don" and he was assigned a personal coat of arms.
Although by this time Diego de Almagro had already acquired sufficient wealth in the conquest of Peru and was living a luxurious life in Cuzco, the prospect of conquering the lands further south was very attractive to him. Given that the dispute with Pizarro over Cuzco had kept intensifying, Almagro spent a great deal of time and money equipping a company of 500 men for a new exploration south of Peru.
By 1534 the Spanish crown had determined to split the region in two parallel lines, forming the governorship of "Nueva Castilla" (from the 1° to the 14° latitude, close to Pisco), and that of "Nueva Toledo" (from the 14° to the 25° latitude, in Taltal, Chile), assigning the first to Francisco Pizarro and the second to Diego de Almagro. The crown had previously assigned Almagro the governorship of Cuzco, and as such De Almagro was heading there when Charles V divided the territory between Nueva Castilla and Nuevo Toledo. This might have been the reason why Almagro did not immediately confront Pizarro for Cuzco, and promptly decided to embark on his new quest for the discovery of the riches of Chile.
Charles V had given Diego a grant extending two hundred leagues south of Francisco Pizarro's. Francisco and Diego concluded a new contract on 12 June 1535, in which they agreed to share future discoveries equally. Diego raised an expedition for Chile, expecting it "would lead to even greater riches than they had found in Peru." Almagro prepared the way by sending ahead three of his Spanish soldiers, the religious chief of the Inca empire, Willaq Umu, and Paullo Topa, brother of Manco Inca Yupanqui. De Almagro sent Juan de Saavedra forward with one hundred and fifty men, and soon followed them with additional forces. Saavedra established on January 23, 1535 the first Spanish settlement in Bolivia near the Inca regional capital of Paria.
Almagro left Cuzco on July 3, 1535 with his supporters and stopped at Moina until the 20th of that month. Meanwhile, Francisco Pizarro's brother, Juan Pizarro, had arrested Inca Manco Inca Yupanqui, further complicating De Almagro's plans, as it heavily increased the dissatisfaction of the Indians under Spanish rule. De Almagro had not been formally appointed governor of any territory in the Capitulation of Toledo in 1528, however, forcing him to declare himself "adelantado" (governor) of Nueva Toledo, that is, of southern Peru and present-day Chile. Some sources suggest Almagro received such an appointment in 1534 from the Spanish king and was officially declared governor of New Toledo.
Once he left Moina, De Almagro followed the Inca trail, accompanied by 750 Spaniards who had decided to join him in the quest for the gold lost in the ransom of Atahualpa, which had mainly benefited the Pizarro brothers and their supporters. After crossing the Bolivian mountain range and traveling past Lake Titicaca, Almagro arrived on the shores of the Desaguadero River and finally set up camp in Tupiza. From there, the expedition stopped at Chicoana and then turned to the southeast to cross the Andes mountains.
The expedition turned out to be a difficult and exhausting endeavor. The hardest phase was the crossing of the Andean cordilleras: the cold, hunger and tiredness meant the death of various Spanish and natives, but mainly slaves who were not accustomed to such rigorous climate.
Upon this point, De Almagro determined everything was a failure. He ordered a small group under Rodrigo Orgonez on a reconnaissance of the country to the south.
By luck, these men found the Valley of Copiapó, where Gonzalo Calvo Barrientos, a Spanish soldier whom Pizarro had expelled from Peru for stealing objects the Inca had offered for his ransom, had already established a friendship with the local natives. There, in the valley of the river Copiapó, Almagro took official possession of Chile and claimed it in the name of King Charles V.
De Almagro promptly initiated the exploration of the new territory, starting up the valley of the Aconcagua River, where he was well received by the natives. However, the intrigues of his interpreter, Felipillo, who had previously helped Pizarro in dealing with Atahualpa, almost thwarted De Almagro's efforts. Felipillo had secretly urged the local natives to attack the Spanish, but they desisted, not understanding the danger the Spaniards posed. De Almagro directed Gómez de Alvarado, along with 100 horsemen and 100 foot soldiers, to continue the exploration, which ended at the confluence of the Ñuble and Itata rivers. The Battle of Reinohuelén between the Spanish and the Mapuche forced the explorers to return to the north.
De Almagro's own reconnaissance of the land and the bad news of Gómez de Alvarado's encounter with the fierce Mapuche, along with the bitter cold winter that settled ferociously upon them, only served to confirm that everything had failed. He never found gold or the cities which Incan scouts had told him lay ahead, only communities of the indigenous population who lived from subsistence agriculture. Local tribes put up fierce resistance to the Spanish forces. The exploration of the territories of Nueva Toledo, which lasted 2 years, was marked by a complete failure for De Almagro. Despite this, at first he thought staying and founding a city would serve well for his honor. The initial optimism that led Almagro to bring his son he had with the indigenous Panamanian Ana Martínez to Chile had faded.
Some historians have suggested that, but for the urging of his senior explorers, De Almagro would probably have stayed permanently in Chile. He was urged to return to Peru and this time take definitive possession of Cuzco, so as to consolidate an inheritance for his son. Dismayed with his experience in the south, Almagro made plans of return to Peru. He never officially founded a city in the territory of what is now Chile.
The withdrawal of the Spanish from valleys of Chile was violent: Almagro authorized his soldiers to ransack the natives' properties, leaving their soil desolate. In addition, the Spanish soldiers took natives captive to serve as slaves. The locals were captured, tied together, and forced to carry the heavy loads belonging to the conquistadors.
After the exhausting crossing of the Atacama Desert, mainly due to the harsh weather conditions, Almagro finally reached Cuzco, Peru, in 1537. According to some authors, it was during this time that the Spanish term ""roto"" (torn), used by Peruvians to refer to Chileans, was first coined. De Almagro's disappointed troops returned to Cuzco with their "torn clothes" due to the extensive and laborious passage on foot by the Atacama Desert.
After his return, De Almagro was surprised to learn of the Inca Manco's rebellion. Diego de Almagro sent an embassy to the Inca, but they mistrusted all of the Spaniards by this time. Hernando Pizarro's men formed an uneasy truce with De Almagro's men, surveying to determine the boundaries of their leaders' royal grants. They needed to determine in which portion the city of Cuzco was located. However, De Almagro's troops quickly took the city and imprisoned the Pizarro brothers, Hernando and Gonzalo, on the night of 8 April 1537.
After occupying Cuzco, De Almagro confronted an army sent by Francisco Pizarro to liberate his brothers. Alonso de Alvarado commanded it and was defeated during the Battle of Abancay on July 12, 1537. He and some of his men were imprisoned. Later, Gonzalo Pizarro and De Alvarado escaped prison. Subsequent negotiations between Francisco Pizarro and De Almagro concluded with the liberation of Hernando, the third Pizarro brother, in return for conceding control and administration of Cuzco to De Almagro. Pizarro never intended to give up the city permanently, but was buying time to organize an army strong enough to defeat Almagro's troops.
During this time De Almagro fell ill, and Pizarro and his brothers grabbed the opportunity to defeat him and his followers. The Almagristas were defeated at Las Salinas in April 1538, with Orgóñez being killed on the field of battle. De Almagro fled to Cuzco, still in the hands of his loyal supporters, but found only temporary refuge; the forces of the Pizarro brothers entered the city without resistance. Once captured, Almagro was humiliated by Hernando Pizarro and his requests to appeal to the King were ignored.
When Diego de Almagro begged for his life, Hernando responded:
"-he was surprised to see Almagro demean himself in a manner so unbecoming a brave cavalier, that his fate was no worse than had befallen many a soldier before him; and that, since God had given him the grace to be a Christian, he should employ his remaining moments in making up his account with Heaven!"
Almagro was condemned to death and executed by "garrote" in his dungeon, and then decapitated, on July 8, 1538. His corpse was taken to the public Plaza Mayor of Cuzco, where a herald proclaimed his crimes. Hernán Ponce de León took his body and buried him in the church of Our Lady of Mercy in Cuzco.
Diego de Almagro II (1520–1542), known as "El Mozo" (The Lad), son of Diego de Almagro I, whose mother was an Indian girl of Panama, became the foil of the conspirators who had put Pizarro to the sword. Pizarro was murdered on June 26, 1541; the conspirators promptly proclaimed the lad De Almagro Governor of Peru. From various causes, all of the conspirators either died or were killed except for one, who was executed after the lad Almagro gave an order. The lad De Almagro fought the desperate battle of Chupas on September 16, 1542, escaped to Cuzco, but was arrested, immediately condemned to death, and executed in the great square of the city. | https://en.wikipedia.org/wiki?curid=8362 |