ACF Fiorentina
ACF Fiorentina, commonly referred to as Fiorentina , is an Italian professional football club based in Florence, Tuscany. Founded by a merger in August 1926, and refounded in August 2002 following bankruptcy, Fiorentina have played at the top level of Italian football for the majority of their existence; only four clubs have played in more Serie A seasons.
Fiorentina have won two Italian Championships, in 1955–56 and 1968–69, as well as six Coppa Italia trophies and one Supercoppa Italiana. On the European stage, Fiorentina won the UEFA Cup Winners' Cup in 1960–61 and lost the final a year later. They finished runners-up in the 1956–57 European Cup, losing to Real Madrid, and came close to winning the 1989–90 UEFA Cup, finishing as runners-up to Juventus after losing the first leg in Turin and drawing the second in Avellino.
Fiorentina is one of fifteen European teams to have reached the final of all three major continental competitions: the European Cup/Champions League (1956–57, as the first Italian team to reach the final of the top continental competition), the UEFA Cup Winners' Cup (1960–61 and 1961–62) and the UEFA Cup (1989–90).
Since 1931, the club have played at the Stadio Artemio Franchi, which currently has a capacity of 43,147. The stadium has used several names over the years and has undergone several renovations. Fiorentina are known widely by the nickname "Viola", a reference to their distinctive purple colours.
Associazione Calcio Fiorentina was founded in the autumn of 1926 by local noble and National Fascist Party member Luigi Ridolfi, who initiated the merger of two older Florentine clubs, CS Firenze and PG Libertas. The aim of the merger was to give Florence a strong club to rival those of the more dominant Italian Football Championship sides of the time from Northwest Italy. Also influential was the cultural revival and rediscovery of "Calcio Fiorentino", an ancestor of modern football that was played by members of the Medici family.
After a rough start and three seasons in lower leagues, Fiorentina reached the Serie A in 1931. That same year saw the opening of the new stadium, originally named after Giovanni Berta, a prominent fascist, but now known as Stadio Artemio Franchi. At the time, the stadium was a masterpiece of engineering, and its inauguration was monumental. To be able to compete with the best teams in Italy, Fiorentina strengthened their team with some new players, notably the Uruguayan Pedro Petrone, nicknamed "el Artillero". Despite enjoying a good season and finishing in fourth place, Fiorentina were relegated the following year, although they would return quickly to Serie A. In 1941, they won their first Coppa Italia, but the team were unable to build on their success during the 1940s due to World War II and other troubles.
In 1950, Fiorentina started to achieve consistent top-five finishes in the domestic league. The team consisted of great players such as well-known goalkeeper Giuliano Sarti, Sergio Cervato, Francesco Rosella, Guido Gratton, Giuseppe Chiappella and Aldo Scaramucci but above all, the attacking duo of Brazilian Julinho and Argentinian Miguel Montuori. This team won Fiorentina's first "scudetto" (Italian championship) in 1955–56, 12 points ahead of second-place Milan. Milan beat Fiorentina to top spot the following year, but more significantly Fiorentina became the first Italian team to play in a European Cup final, when a disputed penalty led to a 2–0 defeat at the hands of Alfredo Di Stéfano's Real Madrid.
Fiorentina were runners-up again in the three subsequent seasons. In the 1960–61 season, the club won the Coppa Italia again and was also successful in Europe, winning the first Cup Winners' Cup against Scottish side Rangers.
After several years of runner-up finishes, Fiorentina dropped away slightly in the 1960s, bouncing from fourth to sixth place, although the club won the Coppa Italia and the Mitropa Cup in 1966.
While the 1960s did result in some trophies and good Serie A finishes for Fiorentina, nobody believed that the club could challenge for the title. The 1968–69 season started with Milan as frontrunners, but on matchday 7, they lost to Bologna and were overtaken by Gigi Riva's Cagliari. Fiorentina, after an unimpressive start, then moved to the top of the Serie A, but the first half of their season finished with a 2–2 draw against Varese, leaving Cagliari as outright league leader. The second half of the season was a three-way battle between the three contending teams, Milan, Cagliari and Fiorentina. Milan fell away, instead focusing their efforts on the European Cup, and it seemed that Cagliari would retain top spot. After Cagliari lost against Juventus, however, Fiorentina took over at the top. The team then won all of their remaining matches, beating rivals Juve in Turin on the penultimate matchday to seal their second, and last, national title. In the European Cup competition the following year, Fiorentina had some good results, including a win in the Soviet Union against Dynamo Kyiv, but they were eventually knocked out in the quarter-finals after a 3–0 defeat in Glasgow to Celtic.
"Viola" players began the 1970s with the "Scudetto" sewn on their shirts, but the decade was not especially fruitful for the team. After a fifth-place finish in 1971, they finished mid-table almost every year, even flirting with relegation in 1972 and 1978. The "Viola" did win the Anglo-Italian League Cup in 1974 and the Coppa Italia again in 1975. The team was built on young talents like Vincenzo Guerini and Moreno Roggi, who had the misfortune to suffer bad injuries, and above all Giancarlo Antognoni, who would later become an idol to Fiorentina's fans. The young average age of the players led to the team being nicknamed "Fiorentina Ye-Ye".
In 1980, Fiorentina was bought by Flavio Pontello, who came from a rich house-building family. He quickly changed the team's anthem and logo, leading to some complaints by the fans, but he started to bring in high-quality players such as Francesco Graziani and Eraldo Pecci from Torino; Daniel Bertoni from Sevilla; Daniele Massaro from Monza; and a young Pietro Vierchowod from Como. The team was built around Giancarlo Antognoni, and in 1982, Fiorentina were involved in an exciting duel with rivals Juventus. After a bad injury to Antognoni, the league title was decided on the final day of the season when Fiorentina were denied a goal against Cagliari and were unable to win. Juventus won the title with a disputed penalty and the rivalry between the two teams erupted.
The following years were strange for Fiorentina, who vacillated between high finishes and relegation battles. Fiorentina also bought two interesting players, "El Puntero" Ramón Díaz and, most significantly, the young Roberto Baggio.
In 1990, Fiorentina fought to avoid relegation right up until the final day of the season, but did reach the UEFA Cup final, where they again faced Juventus. The Turin team won the trophy, but Fiorentina's "tifosi" once again had real cause for complaint: the second leg of the final was played in Avellino (Fiorentina's home ground was suspended), a city with many Juventus fans, and emerging star Roberto Baggio was sold to the rival team on the day of the final. Pontello, suffering from economic difficulties, was selling all the players and was forced to leave the club after serious riots in Florence's streets. The club was then acquired by the famous filmmaker Mario Cecchi Gori.
The first season under Cecchi Gori's ownership was one of stabilisation, after which the new chairman started to sign some good players like Brian Laudrup, Stefan Effenberg, Francesco Baiano and, most importantly, Gabriel Batistuta, who became an iconic player for the team during the 1990s. In 1993, however, Cecchi Gori died and was succeeded as chairman by his son, Vittorio. Despite a good start to the season, Cecchi Gori fired the coach, Luigi Radice, after a defeat against Atalanta, and replaced him with Aldo Agroppi. The results were dreadful: Fiorentina fell into the bottom half of the standings and were relegated on the last day of the season.
Claudio Ranieri was brought in as coach for the 1993–94 season, and that year Fiorentina dominated Serie B, Italy's second division. Upon their return to Serie A, Ranieri put together a good team centred on new top scorer Batistuta, signing the young talent Rui Costa from Benfica and the newly crowned world champion Brazilian defender Márcio Santos. The former became an idol to Fiorentina fans, while the latter disappointed and was sold after only a season. The "Viola" finished the season in tenth place.
The following season, Cecchi Gori bought other important players, including Swedish midfielder Stefan Schwarz. The club again proved its mettle in cup competitions, winning the Coppa Italia against Atalanta and finishing joint-third in Serie A. In the summer, Fiorentina became the first team other than the national champions to win the Supercoppa Italiana, defeating Milan 2–1 at the San Siro.
Fiorentina's 1996–97 season was disappointing in the league, but they did reach the Cup Winners' Cup semi-final by beating Gloria Bistrița, Sparta Prague and Benfica. The team lost the semi-final to Barcelona, the eventual winners of the competition (1–1 away, 0–2 at home). The season's main signings were Luís Oliveira and Andrei Kanchelskis, the latter of whom suffered many injuries.
At the end of the season, Ranieri left Fiorentina for Valencia in Spain, with Cecchi Gori appointing Alberto Malesani as his replacement. Fiorentina played well but struggled against smaller teams, although they managed to qualify for the UEFA Cup. Malesani left after only a season and was succeeded by Giovanni Trapattoni. With Trapattoni's expert guidance and Batistuta's goals, Fiorentina challenged for the title in 1998–99 before finishing third, earning qualification for the Champions League. The following year was disappointing in Serie A, but the "Viola" played some historic matches in the Champions League, beating Arsenal 1–0 at the old Wembley Stadium and Manchester United 2–0 in Florence; they were ultimately eliminated in the second group stage.
At the end of the season, Trapattoni left the club and was replaced by Turkish coach Fatih Terim. More significantly, however, Batistuta was sold to Roma, who eventually won the title the following year. Fiorentina played well in 2000–01 and stayed in the top half of Serie A, despite the resignation of Terim and the arrival of Roberto Mancini. They also won the Coppa Italia for the sixth and last time.
The year 2001 heralded major changes for Fiorentina, as the terrible state of the club's finances was revealed: they were unable to pay wages and had debts of around US$50 million. Owner Vittorio Cecchi Gori was able to raise some more money, but even this soon proved insufficient to sustain the club. Fiorentina were relegated at the end of the 2001–02 season and went into judicially controlled administration in June 2002. This form of bankruptcy (sports companies cannot fail outright in this way in Italy, but they can undergo a similar procedure) meant that the club was refused a place in Serie B for the 2002–03 season and, as a result, effectively ceased to exist.
The club was promptly re-established in August 2002 as Florentia Viola, with shoe and leather entrepreneur Diego Della Valle as its new owner, and was admitted into Serie C2, the fourth tier of Italian football. The only player to remain at the club in its new incarnation was Angelo Di Livio, whose commitment to the club's cause further endeared him to the fans. Helped by Di Livio and 30-goal striker Christian Riganò, the club won its Serie C2 group with considerable ease, which would normally have led to promotion to Serie C1. Due to the bizarre "Caso Catania" (Catania Case), however, the club skipped Serie C1 and was admitted into Serie B, made possible by the Italian Football Federation (FIGC)'s decision to resolve the Catania situation by increasing the number of teams in Serie B from 20 to 24 and promoting Fiorentina for "sports merits". In the 2003 off-season, the club bought back the right to use the Fiorentina name and the famous shirt design, and re-incorporated itself as ACF Fiorentina. The club finished the 2003–04 season in sixth place and won the playoff against Perugia to return to top-flight football.
In their first season back in Serie A, the club struggled to avoid relegation, only securing survival on the last day of the season on head-to-head record against Bologna and Parma. In 2005, Della Valle appointed Pantaleo Corvino as new sporting director, followed by the appointment of Cesare Prandelli as head coach the following season. The club made several signings during the summer transfer market, most notably Luca Toni and Sébastien Frey, and these moves earned Fiorentina a fourth-place finish with 74 points and a place in the Champions League qualifying round. Toni scored 31 goals in 38 appearances, the first player to pass the 30-goal mark since Antonio Valentín Angelillo in the 1958–59 season, for which he was awarded the European Golden Boot. On 14 July 2006, however, Fiorentina were relegated to Serie B for their involvement in the 2006 Serie A match-fixing scandal and given a 12-point penalty. The team was reinstated to Serie A on appeal, but with a 19-point penalty for the 2006–07 season, and its 2006–07 Champions League place was revoked. After the start of the season, the penalty was reduced from 19 points to 15 on appeal to the Italian courts. In spite of this handicap, Fiorentina managed to secure a place in the UEFA Cup.
Despite Toni's departure to Bayern Munich, Fiorentina had a strong start to the 2007–08 season and were tipped by Italian national team head coach Marcello Lippi, among others, as a surprise challenger for the "Scudetto"; although this form tailed off towards the middle of the season, the "Viola" managed to qualify for the Champions League. In Europe, the club reached the semi-final of the UEFA Cup, where they were defeated by Rangers on penalties. The 2008–09 season continued this success, a fourth-place finish assuring Fiorentina of a spot in the 2009–10 Champions League play-offs. Their European campaign was similar to the previous run: dropping from the Champions League into the 2008–09 UEFA Cup, they were eventually eliminated by Ajax.
In the 2009–10 season, Fiorentina started their domestic campaign strongly before steadily losing momentum and slipping to mid-table in the latter half of the season. In Europe, the team proved a surprise dark horse: after losing their opening away fixture against Lyon, they won all five of their remaining group matches (including beating Liverpool home and away) to qualify as group winners, but eventually succumbed to Bayern Munich on the away goals rule. The tie was controversial due to a mistaken refereeing decision by Tom Henning Øvrebø, who allowed a clearly offside goal for Bayern in the first leg; Bayern went on to finish the tournament as runners-up. The incident drew attention to the possible introduction of video replays in football. Despite the good European run and a Coppa Italia semi-final appearance, Fiorentina failed to qualify for Europe.
During this period, on 24 September 2009, Andrea Della Valle resigned as chairman of Fiorentina and announced that his duties would be temporarily transferred to vice-president Mario Cognini until a permanent replacement could be found.
In June 2010, the "Viola" bade farewell to long-time manager Cesare Prandelli, by then the longest-serving coach in the team's history, who departed to coach the Italian national team. Catania manager Siniša Mihajlović was appointed to replace him. The club spent much of the early 2010–11 season in last place, but their form improved and Fiorentina ultimately finished ninth. Following a 1–0 defeat to Chievo in November 2011, Mihajlović was sacked and replaced by Delio Rossi. After a brief period of improvement, the "Viola" were again fighting relegation, prompting the sacking of sporting director Pantaleo Corvino in early 2012 following a 0–5 home defeat to Juventus. Their bid for survival was kept alive by a number of upset victories away from home, notably at Roma and Milan. During a home game against Novara, with the team trailing 0–2 within half an hour, Rossi decided to substitute midfielder Adem Ljajić early; Ljajić sarcastically applauded him in frustration, whereupon Rossi physically assaulted his player, an action that ultimately prompted his dismissal by the club. His replacement, caretaker manager Vincenzo Guerini, guided the team away from the relegation zone to a 13th-place finish to end a turbulent year.
To engineer a resurrection of the club after the disappointing season, the Della Valle family invested heavily in the middle of 2012, buying 17 new players and appointing Vincenzo Montella as head coach. The team began the season well, finishing the calendar year in joint third place and eventually finishing the 2012–13 season in fourth, enough for a position in the 2013–14 Europa League.
The club lost fan favourite Stevan Jovetić in mid-2013, selling him to English Premier League club Manchester City for a €30 million transfer fee. They also sold Adem Ljajić to Roma and Alessio Cerci to Torino, using the funds to bring in Mario Gómez, Josip Iličić and Ante Rebić, among others. Fiorentina topped their Europa League group and beat Danish side Esbjerg fB 4–2 on aggregate in the round of 32, but in the round of 16 they lost 2–1 on aggregate to Italian rivals Juventus and were eliminated. The team finished fourth in the league again and were Coppa Italia runners-up after losing 3–1 to Napoli in the final.
During the 2015 winter transfer window of the 2014–15 season, the club sold star winger Juan Cuadrado to Chelsea for €30 million but secured the loan of Mohamed Salah in exchange; Salah was a revelation in the second half of the season. Their 2014–15 Europa League campaign saw them progress to the semi-finals, where they were knocked out by Spanish side Sevilla, the eventual champions. Domestically, Fiorentina once again finished fourth, qualifying for the 2015–16 Europa League. In June 2015, Vincenzo Montella was sacked as manager after the club grew impatient with his perceived lack of commitment, and he was replaced by Paulo Sousa, who lasted until June 2017 and the appointment of Stefano Pioli. Club captain Davide Astori died suddenly at the age of 31 in March 2018; the club subsequently retired his number 13 shirt.
On 9 April 2019, Pioli resigned as manager.
On 6 June 2019, the club was sold to Italian-American billionaire Rocco Commisso for around 160 million euros. The sale marked the end of the Della Valle family's seventeen-year association with the club. Vincenzo Montella was confirmed as coach for the first season of the new era despite the team's poor end to the previous campaign, which saw them finish only three points clear of the relegation zone.
Fiorentina have had many managers and head coaches throughout their history. Below is a chronological list from the club's foundation in 1926 to the present day.
The official emblem of the city of Florence, a red fleur-de-lis on a white field, has been pivotal in the all-round symbolism of the club.
Over the course of the club's history, Fiorentina have had several badge changes, all of which incorporated Florence's fleur-de-lis in some way. The first was simply the city's coat of arms, a white shield with the red fleur-de-lis inside. It was soon changed to a highly stylised fleur-de-lis, always red, sometimes even without the white field. The most common symbol, used for about 20 years, was a white lozenge with the flower inside. During the season in which they were reigning Italian champions, the lozenge disappeared and the flower was overlaid on the "scudetto".
The logo introduced by owner Flavio Pontello in 1980 was particularly distinct, consisting of one-half of the city of Florence's emblem and one-half of the letter "F", for Fiorentina. People disliked it when it was introduced, believing it was a commercial decision and, above all, because the symbol bore more of a resemblance to a halberd than a fleur-de-lis.
Today's logo is a kite-shaped double lozenge bordered in gold. The outer lozenge has a purple background with the letters "AC" in white and the letter "F" in red, standing for the club's name; the inner lozenge is white with a gold border and the red fleur-de-lis of Florence. This logo had been in use from 1992 to 2002, but after the club's bankruptcy and re-foundation, the new company could not use it, so Florence's "comune" instead granted Florentia Viola use of the stylised coat of arms used in other city documents. Diego Della Valle acquired the old logo the following year in a judicial auction for a fee of €2.5 million, making it the most expensive logo in Italian football.
When Fiorentina was founded in 1926, the players wore red and white halved shirts derived from the colour of the city emblem. The more well-known and highly distinctive purple kit was adopted in 1928 and has been used ever since, giving rise to the nickname "La Viola" ("The Purple (team)"). Tradition has it that Fiorentina got their purple kit by mistake after an accident washing the old red and white coloured kits in the river.
The away kit has always been predominantly white, sometimes with purple and red elements and sometimes all-white; the away shorts were purple when the home kit was worn with white shorts. Fiorentina's third kit was first worn in the 1995–96 season and was all-red with purple trim and two lilies on the shoulders. Red has been the most common third-shirt colour, although the club also wore rare yellow shirts ('97–'98, '99–'00 and '10–'11) and a sterling version, mostly in the Coppa Italia, in 2000–01.
In the 2017–18 season, for the first time in its history, the club used five kits: one home kit (all-purple) and four away kits, each representing a historic "quartiere" of the city of Florence: all-blue (Santa Croce), all-white (Santo Spirito), all-green (San Giovanni) and all-red (Santa Maria Novella).
Serie A: Winners (2): 1955–56, 1968–69
Serie B: Winners: 1993–94
Coppa Italia: Winners (6)
Supercoppa Italiana: Winners (1): 1996
Serie C2: Winners: 2002–03 (as "Florentia Viola")
European Cup / UEFA Champions League: Runners-up: 1956–57
UEFA Cup: Runners-up: 1989–90
UEFA Cup Winners' Cup: Winners: 1960–61; Runners-up: 1961–62
Coppa Grasshoppers
Mitropa Cup: Winners: 1966
Anglo-Italian League Cup: Winners: 1974
This is the club's UEFA coefficient as of 17 March 2019:
A.C. Fiorentina S.p.A. was unable to register for 2002–03 Serie B due to financial difficulties; its sports title was transferred to a new company under Article 52 of the N.O.I.F., while the old company was liquidated. At the time, the club relied heavily on windfall profits from selling players, especially in pure player swaps or cash-plus-player swaps, which potentially increased costs through higher amortisation of player contracts (an intangible asset). For example, Marco Rossi joined Fiorentina for 17 billion lire in 2000, while at the same time Lorenzo Collacchioni moved to Salernitana for 1 billion lire, meaning the club booked a player profit of 997 million lire and an extra 1 billion lire to be amortised over 5 years. In 1999, Emiliano Bigica was swapped for Giuseppe Taglialatela, with the latter valued at 10 billion lire. The operating income (excluding windfall profits from player trading) for the 2000–01 season was minus 113,271,475,933 Italian lire (minus €58,499,835), boosted only by the sales of Francesco Toldo and Rui Costa in June 2001 (a profit of 134.883 billion lire; €69.661 million). It was alleged the two were to transfer to Parma for a reported 140 billion lire; they eventually joined Inter Milan and A.C. Milan in the 2001–02 financial year instead, for undisclosed fees. Without financial support from owner Vittorio Cecchi Gori, the club was forced to wind up due to its huge operating deficit.
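The amortisation mechanics described above can be illustrated with a hedged sketch (the function and figures below are illustrative, not taken from the club's actual accounts): an incoming transfer fee is capitalised as an intangible asset and charged to costs in equal annual instalments over the contract length, while the profit on an outgoing player is booked immediately.

```python
# Illustrative sketch of straight-line player-contract amortisation,
# as described in the paragraph above; not the club's actual accounting.

def annual_amortisation(fee: float, contract_years: int) -> float:
    """Yearly charge when a transfer fee is spread evenly over a contract."""
    return fee / contract_years

# A purchase booked at 17 billion lire on a 5-year contract adds
# 3.4 billion lire to operating costs every year, while the sale side
# of a swap deal is recognised as profit at once.
charge = annual_amortisation(17e9, 5)
print(f"annual charge: {charge:,.0f} lire")
```

This asymmetry, immediate profit on sales against deferred cost on purchases, is why heavy swap trading could flatter a single year's accounts while raising amortisation in later ones.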
Since being re-established in 2002, ACF Fiorentina S.p.A. has yet to become self-sustaining while keeping the team in the top division and in European competitions. In the 2005 financial year, which covered the club's first season back in Serie A, it made a net loss of €9,159,356, followed by a net loss of €19,519,789 in 2006 (spanning the 2005–06 and 2006–07 Serie A seasons). Fiorentina invested heavily in players during that period, so the amortisation of intangible assets (player contracts) increased from €17.7 million to €24 million. However, the club was caught up in the 2006 Italian football scandal, which kept it out of Europe. In 2007, Fiorentina almost broke even, with a net loss of just €3,704,953, as TV revenue increased after qualification for the 2007–08 UEFA Cup. Despite qualifying for the 2008–09 UEFA Champions League, Fiorentina made a net loss of €9,179,484 in the 2008 financial year, as the increase in TV revenue was outweighed by the increase in wages. In the 2009 financial year, Fiorentina made a net profit of €4,442,803, largely due to profits on player sales (€33,631,489, from players such as Felipe Melo, Giampaolo Pazzini and Zdravko Kuzmanović, up from about €3.5 million in 2008), although this was partly offset by write-downs on players sold (€6,062,545, from players such as Manuel da Costa, Arturo Lupoli and Davide Carcuro).
After the club failed to qualify for Europe at the end of the 2009–10 Serie A season, and lacking player-sale profits, Fiorentina's turnover fell from €140,040,713 in 2009 to just €79,854,928; although the wage bill also fell, "la Viola" still made a net loss of €9,604,353. In the 2011 financial year, turnover slipped further to €67,076,953, as the club again lacked capital gains from player sales and the 2010 figure had still included UEFA instalments for participation in the 2009–10 UEFA Europa League. Furthermore, gate income dropped from €11,070,385 to €7,541,260. The wage bill did not fall much, and the amortisation of transfer fees slightly increased due to new signings. "La Viola" made savings in other costs, but these were counter-weighted by a huge €11,747,668 write-down for departing players (D'Agostino, Frey and Mutu), partly offset by co-ownership financial income, leaving operating costs as high as the year before. Moreover, the 2010 result had been boosted by acquiring an asset from a subsidiary (related to AC Fiorentina) and the revaluation of its value in the separate balance sheet; excluding that income (€14,737,855), the 2010 financial year showed a net loss of €24,342,208, against a loss of €8,131,876 in 2011 in the separate balance sheet. In 2012, the club benefited from the sales of Matija Nastasić and Valon Behrami, followed by Stevan Jovetić and Adem Ljajić in 2013. In 2014, a €28.4 million drop in windfall profit from player sales led the club to record its worst financial result since re-foundation. Moreover, Fiorentina revealed a relevant football net income of minus €19.5 million in the first assessment period of the UEFA Financial Fair Play Regulations, the 2013–14 season (assessed in May 2014; aggregate of the 2012 and 2013 results), which was within the limit of minus €45 million, as well as minus €25.5 million in the 2014–15 assessment period (aggregate of the 2012, 2013 and 2014 results). However, as the limit was reduced to minus €30 million for the 2015–16, 2016–17 and 2017–18 assessment periods, the club needed a relevant net income of positive €5.6 million in the 2015 financial year. "La Viola" sold Juan Cuadrado to Chelsea in January 2015 for a €30 million fee, making the club eligible for the 2016–17 edition of UEFA competitions.
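The break-even arithmetic in the preceding paragraph can be sketched as a simple check (a hedged illustration of the rule as described here, not UEFA's actual implementation; figures are in millions of euros): the aggregate relevant result over an assessment period must not fall below the acceptable deviation.

```python
# Hedged sketch of the FFP break-even test described above; the function
# name and shape are illustrative, not UEFA's actual implementation.

def passes_break_even(relevant_results_m: list, limit_m: float) -> bool:
    """True if the aggregate result (in EUR million) stays within -limit_m."""
    return sum(relevant_results_m) >= -limit_m

# 2013-14 assessment: aggregate of 2012 and 2013 relevant results was
# -19.5m against a 45m acceptable deviation, so the club was within limits.
ok_2014 = passes_break_even([-19.5], 45.0)
# With the deviation cut to 30m, an aggregate of -25.5m plus a 2015 result
# of +5.6m stays inside the tighter limit.
ok_2016 = passes_break_even([-25.5, 5.6], 30.0)
```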
Afrobeat
Afrobeat (not to be confused with afrobeats) is a music genre which involves the combination of elements of West African musical styles such as fuji music and highlife with American funk and jazz influences, with a focus on chanted vocals, complex intersecting rhythms, and percussion.
The term was coined in the 1960s by Nigerian multi-instrumentalist and bandleader Fela Kuti, who is responsible for pioneering and popularizing the style both within and outside Nigeria.
Distinct from Afrobeat is Afrobeats – a sound originating in West Africa in the 21st century, one which takes in diverse influences and is an eclectic combination of genres such as British house music, hiplife, hip hop, dancehall, soca, Jùjú music, highlife, R&B, Ndombolo, Naija beats, Azonto, and Palm-wine music. The two genres, though often conflated, are not the same.
Afrobeat has its roots in Ghana in the early 1920s, when Ghanaian musicians incorporated foreign influences like the foxtrot and calypso with Ghanaian rhythms like "osibisaba" (Fante), creating highlife. Highlife was associated with the local African aristocracy during the colonial period and was played by numerous bands, including the Jazz Kings, Cape Coast Sugar Babies and Accra Orchestra, along the country's coast. Nigeria joined the wave in the late 1960s, led by Fela Kuti, who experimented with the different contemporary music of the time. Upon returning to Nigeria, Kuti changed the name of his group to Africa '70. The new sound emanated from a club he established called the Afrika Shrine, where the band maintained a five-year residency from 1970 to 1975 while afrobeat thrived among Nigerian youth.
Although the term Afrobeat was coined as early as 1968, Kuti was not yet truly making music in that style; it only took full shape after his trip to the United States. The name "Afrobeat" underlines the significance of groove to the music, as opposed to "Afrofunk".
In 1969, Kuti and his band went on a trip to the U.S., where they met Sandra Smith, a singer and former Black Panther. Smith (now known as Sandra Isadore) introduced Kuti to the writings of activists such as Martin Luther King Jr., Angela Davis, Jesse Jackson and, his biggest influence of all, Malcolm X.
As Kuti was interested in African American politics, Smith would inform him of current events. In return, Kuti would fill her in on African culture. Since Kuti stayed at Smith's house and was spending so much time with her, he started to re-evaluate his music. That was when Fela Kuti noticed that he was not playing African music. From that day forward, Kuti changed his sound and the message behind his music.
The name was partially borne out of an attempt to distinguish Fela Kuti's music from the soul music of American artists such as James Brown.
Native Nigerian harmonies and rhythms are prevalent in the music of Kuti and of fellow Nigerian artist Lagbaja, who take different elements and combine, modernize, and improvise upon them. Politics are essential to Afrobeat, since founder Kuti used social criticism to pave the way for social change. His message can be described as confrontational and controversial, which can be related to the political climate of most African countries in the 1970s, many of which were dealing with political injustice and military corruption while recovering from the transition from colonial governments to self-determination. As the genre spread throughout the African continent, many bands took up the style. The recordings of these bands and their songs were rarely heard or exported outside the originating countries, but many can now be found on compilation albums and CDs from specialist record shops.
Fela-era afrobeat was characterized by big bands (15 to 30 pieces) and energetic performances.
Fela Kuti included the traditional Gbedu drum in his ensemble, with a percussionist pounding out a thunderous rhythm from a drum lying on its side.
Many jazz musicians have been attracted to Afrobeat. From Roy Ayers in the 1970s to Randy Weston in the 1990s, there have been collaborations that resulted in albums such as Ayers' "Africa: Centre of the World", released on the Polydor label in 1981. In 1994, Branford Marsalis, the American jazz saxophonist, included samples of Fela's "Beasts of No Nation" on his "Buckshot LeFonque" album. The new generation of DJs and musicians of the 2000s who have fallen in love with both Kuti's material and other rare releases have made compilations and remixes of these recordings, thus re-introducing the genre to new generations of listeners and fans of afropop and groove.
Afrobeat has also profoundly influenced important contemporary producers and musicians like Brian Eno and David Byrne, who credit Fela Kuti as an essential influence. Both worked on Talking Heads' highly acclaimed 1980 album "Remain In Light", which brought polyrhythmic afrobeat influences to Western music.
The horn section of Antibalas have been guest musicians on TV On The Radio's highly acclaimed 2008 album "Dear Science", as well as on British band Foals' 2008 album, "Antidotes". Some Afrobeat influence can also be found in the music of Vampire Weekend and Paul Simon.
In 2009, the music label Knitting Factory Records (KFR) produced the Broadway musical "FELA!" According to the musical's website, the show presented the story of Fela Kuti's life along with his "courage and incredible musical mastery". It received 11 Tony nominations, winning three: Best Costumes, Best Sound, and Best Choreography. "FELA!" ran on Broadway for fifteen months and was produced by notables such as Shawn "Jay-Z" Carter and Will and Jada Pinkett Smith. Many celebrities attended the show, including Denzel Washington, Madonna, Sting, Spike Lee (who saw it eight times), Kofi Annan, and Michelle Obama. Michelle Williams, former singer of the girl group Destiny's Child, was cast in the role of Sandra Isadore. | https://en.wikipedia.org/wiki?curid=3168 |
Arithmetic function
In number theory, an arithmetic, arithmetical, or number-theoretic function is for most authors any function "f"("n") whose domain is the positive integers and whose range is a subset of the complex numbers. Hardy & Wright include in their definition the requirement that an arithmetical function "expresses some arithmetical property of "n"".
An example of an arithmetic function is the divisor function whose value at a positive integer "n" is equal to the number of divisors of "n".
There is a larger class of number-theoretic functions that do not fit the above definition, for example, the prime-counting functions. This article provides links to functions of both classes.
Many of the functions mentioned in this article have expansions as series involving these sums; see the article Ramanujan's sum for examples.
An arithmetic function "a" is
Two whole numbers "m" and "n" are called coprime if their greatest common divisor is 1, that is, if there is no prime number that divides both of them.
Then an arithmetic function "a" is
formula_1 and formula_2 mean that the sum or product is over all prime numbers:
Similarly, formula_5 and formula_6 mean that the sum or product is over all prime powers with strictly positive exponent (so "k" = 0 is not included):
formula_8 and formula_9 mean that the sum or product is over all positive divisors of "n", including 1 and "n". For example, if "n" = 12,
The notations can be combined: formula_11 and formula_12 mean that the sum or product is over all prime divisors of "n". For example, if "n" = 18,
and similarly formula_14 and formula_15 mean that the sum or product is over all prime powers dividing "n". For example, if "n" = 24,
The fundamental theorem of arithmetic states that any positive integer "n" can be represented uniquely as a product of powers of primes: formula_17 where "p"1 < "p"2 < ... < "p""k" are primes and the "aj" are positive integers. (1 is given by the empty product.)
It is often convenient to write this as an infinite product over all the primes, where all but a finite number have a zero exponent. Define the "p"-adic valuation ν"p"("n") to be the exponent of the highest power of the prime "p" that divides "n". That is, if "p" is one of the "p""i" then "ν""p"("n") = "a""i", otherwise it is zero. Then
In terms of the above the prime omega functions ω and Ω are defined by
To avoid repetition, whenever possible formulas for the functions listed in this article are given in terms of "n" and the corresponding "p""i", "a""i", ω, and Ω.
σ"k"("n") is the sum of the "k"th powers of the positive divisors of "n", including 1 and "n", where "k" is a complex number.
σ1("n"), the sum of the (positive) divisors of "n", is usually denoted by σ("n").
Since a positive number to the zero power is one, σ0("n") is therefore the number of (positive) divisors of "n"; it is usually denoted by "d"("n") or τ("n") (for the German "Teiler" = divisors).
Setting "k" = 0 in the second product gives
φ("n"), the Euler totient function, is the number of positive integers not greater than "n" that are coprime to "n".
J"k"("n"), the Jordan totient function, is the number of "k"-tuples of positive integers all less than or equal to "n" that form a coprime ("k" + 1)-tuple together with "n". It is a generalization of Euler's totient, .
μ("n"), the Möbius function, is important because of the Möbius inversion formula. See Dirichlet convolution, below.
This implies that μ(1) = 1. (Because Ω(1) = ω(1) = 0.)
τ("n"), the Ramanujan tau function, is defined by its generating function identity:
Although it is hard to say exactly what "arithmetical property of "n"" it "expresses", ("τ"("n") is (2π)−12 times the "n"th Fourier coefficient in the q-expansion of the modular discriminant function) it is included among the arithmetical functions because it is multiplicative and it occurs in identities involving certain σ"k"("n") and "r""k"("n") functions (because these are also coefficients in the expansion of modular forms).
"c""q"("n"), Ramanujan's sum, is the sum of the "n"th powers of the primitive "q"th roots of unity:
Even though it is defined as a sum of complex numbers (irrational for most values of "q"), it is an integer. For a fixed value of "n" it is multiplicative in "q":
The Dedekind psi function, used in the theory of modular functions, is defined by the formula
"λ"("n"), the Liouville function, is defined by
All Dirichlet characters "χ"("n") are completely multiplicative. Two characters have special notations:
The principal character (mod "n") is denoted by "χ"0("a") (or "χ"1("a")). It is defined as
The quadratic character (mod "n") is denoted by the Jacobi symbol for odd "n" (it is not defined for even "n"):
In this formula formula_31 is the Legendre symbol, defined for all integers "a" and all odd primes "p" by
Following the normal convention for the empty product, formula_33
ω("n"), defined above as the number of distinct primes dividing "n", is additive (see Prime omega function).
Ω("n"), defined above as the number of prime factors of "n" counted with multiplicities, is completely additive (see Prime omega function).
For a fixed prime "p", "ν""p"("n"), defined above as the exponent of the largest power of "p" dividing "n", is completely additive.
These important functions (which are not arithmetic functions) are defined for non-negative real arguments, and are used in the various statements and proofs of the prime number theorem. They are summation functions (see the main section just below) of arithmetic functions which are neither multiplicative nor additive.
π("x"), the prime-counting function, is the number of primes not exceeding "x". It is the summation function of the characteristic function of the prime numbers.
A related function counts prime powers with weight 1 for primes, 1/2 for their squares, 1/3 for cubes, ... It is the summation function of the arithmetic function which takes the value 1/"k" on integers which are the k-th power of some prime number, and the value 0 on other integers.
"θ"("x") and "ψ"("x"), the Chebyshev functions,
are defined as sums of the natural logarithms of the primes not exceeding "x".
The Chebyshev function "ψ"("x") is the summation function of the von Mangoldt function just below.
Λ("n"), the von Mangoldt function, is 0 unless the argument "n" is a prime power, in which case it is the natural log of the prime "p":
"p"("n"), the partition function, is the number of ways of representing "n" as a sum of positive integers, where two representations with the same summands in a different order are not counted as being different:
"λ"("n"), the Carmichael function, is the smallest positive number such that formula_40 for all "a" coprime to "n". Equivalently, it is the least common multiple of the orders of the elements of the multiplicative group of integers modulo "n".
For powers of odd primes and for 2 and 4, "λ"("n") is equal to the Euler totient function of "n"; for powers of 2 greater than 4 it is equal to one half of the Euler totient function of "n":
and for general "n" it is the least common multiple of λ of each of the prime power factors of "n":
"h"("n"), the class number function, is the order of the ideal class group of an algebraic extension of the rationals with discriminant "n". The notation is ambiguous, as there are in general many extensions with the same discriminant. See quadratic field and cyclotomic field for classical examples.
"r""k"("n") is the number of ways "n" can be represented as the sum of "k" squares, where representations that differ only in the order of the summands or in the signs of the square roots are counted as different.
Using the Heaviside notation for the derivative, "D"("n") is a function such that
Given an arithmetic function "a"("n"), its summation function "A"("x") is defined by
"A" can be regarded as a function of a real variable. Given a positive integer "m", "A" is constant along open intervals "m" < "x" < "m" + 1, and has a jump discontinuity at each integer for which "a"("m") ≠ 0.
Since such functions are often represented by series and integrals, to achieve pointwise convergence it is usual to define the value at the discontinuities as the average of the values to the left and right:
Individual values of arithmetic functions may fluctuate wildly – as in most of the above examples. Summation functions "smooth out" these fluctuations. In some cases it may be possible to find asymptotic behaviour for the summation function for large "x".
A classical example of this phenomenon is given by the divisor summatory function, the summation function of "d"("n"), the number of divisors of "n":
An average order of an arithmetic function is some simpler or better-understood function which has the same summation function asymptotically, and hence takes the same values "on average". We say that "g" is an "average order" of "f" if
as "x" tends to infinity. The example above shows that "d"("n") has the average order log("n").
Given an arithmetic function "a"("n"), let "F""a"("s"), for complex "s", be the function defined by the corresponding Dirichlet series (where it converges):
"F""a"("s") is called a generating function of "a"("n"). The simplest such series, corresponding to the constant function "a"("n") = 1 for all "n", is "ζ"("s"), the Riemann zeta function.
The generating function of the Möbius function is the inverse of the zeta function:
Consider two arithmetic functions "a" and "b" and their respective generating functions "F""a"("s") and "F""b"("s"). The product "F""a"("s")"F""b"("s") can be computed as follows:
It is a straightforward exercise to show that if "c"("n") is defined by
then
This function "c" is called the Dirichlet convolution of "a" and "b", and is denoted by formula_57.
A particularly important case is convolution with the constant function "a"("n") = 1 for all "n", corresponding to multiplying the generating function by the zeta function:
Multiplying by the inverse of the zeta function gives the Möbius inversion formula:
If "f" is multiplicative, then so is "g". If "f" is completely multiplicative, then "g" is multiplicative, but may or may not be completely multiplicative.
There are a great many formulas connecting arithmetical functions with each other and with the functions of analysis, especially powers, roots, and the exponential and log functions. The page divisor sum identities contains many more generalized and related examples of identities involving arithmetic functions.
Here are a few examples:
For all formula_77 (Lagrange's four-square theorem).
where the Kronecker symbol has the values
There is a formula for r3 in the section on class numbers below.
where "ν" = "ν"2("n").
where formula_82
Define the function σ"k"*("n") as
That is, if "n" is odd, σ"k"*("n") is the sum of the "k"th powers of the divisors of "n", that is, σ"k"("n"), and if "n" is even it is the sum of the "k"th powers of the even divisors of "n" minus the sum of the "k"th powers of the odd divisors of "n".
Adopt the convention that Ramanujan's "τ"("x") = 0 if "x" is not an integer.
Here "convolution" does not mean "Dirichlet convolution" but instead refers to the formula for the coefficients of the product of two power series:
The sequence formula_87 is called the convolution or the Cauchy product of the sequences "a""n" and "b""n".
See Eisenstein series for a discussion of the series and functional identities involved in these formulas.
Since "σ""k"("n") (for natural number "k") and "τ"("n") are integers, the above formulas can be used to prove congruences for the functions. See Ramanujan tau function for some examples.
Extend the domain of the partition function by setting "p"(0) = 1.
Peter Gustav Lejeune Dirichlet discovered formulas that relate the class number "h" of quadratic number fields to the Jacobi symbol.
An integer "D" is called a fundamental discriminant if it is the discriminant of a quadratic number field. This is equivalent to "D" ≠ 1 and either a) "D" is squarefree and "D" ≡ 1 (mod 4) or b) "D" ≡ 0 (mod 4), "D"/4 is squarefree, and "D"/4 ≡ 2 or 3 (mod 4).
Extend the Jacobi symbol to accept even numbers in the "denominator" by defining the Kronecker symbol:
Then if "D" < −4 is a fundamental discriminant
h(D) = \frac{1}{2-\left(\frac{D}{2}\right)} \sum_{r=1}^{|D|/2}\left(\frac{D}{r}\right) | https://en.wikipedia.org/wiki?curid=3170 |
ANSI C
ANSI C, ISO C and Standard C are successive standards for the C programming language published by the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO). Historically, the names referred specifically to the original and best-supported version of the standard (known as C89 or C90). Software developers writing in C are encouraged to conform to the standards, as doing so helps portability between compilers.
The first standard for C was published by ANSI. Although this document was subsequently adopted by International Organization for Standardization (ISO) and subsequent revisions published by ISO have been adopted by ANSI, "ANSI C" is still used to refer to the standard. While some software developers use the term ISO C, others are standards-body neutral and use Standard C.
In 1983, the American National Standards Institute formed a committee, X3J11, to establish a standard specification of C. The standard was completed in 1989 and ratified as ANSI X3.159-1989 "Programming Language C." This version of the language is often referred to as "ANSI C". The label "C89" is also sometimes used, by analogy with the year-based names of later revisions, to distinguish it from C90.
The same standard as C89 was ratified by the International Organization for Standardization as ISO/IEC 9899:1990, with only formatting changes, which is sometimes referred to as C90. Therefore, the terms "C89" and "C90" refer to essentially the same language.
This standard has been withdrawn by both ANSI/INCITS and ISO/IEC.
In 1995, the ISO published an extension, called Amendment 1, to the ANSI C standard. Its full name was "ISO/IEC 9899:1990/AMD1:1995", nicknamed "C95". Aside from error corrections, there were further changes to the language capabilities, such as:
In addition to the amendment, two technical corrigenda were published by ISO for C90:
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199409L
/* C95 compatible source code. */
#elif defined(__STDC__)
/* C89 compatible source code. */
#endif
In March 2000, ANSI adopted the ISO/IEC 9899:1999 standard. This standard is commonly referred to as C99. Some notable additions to the previous standard include:
Three technical corrigenda were published by ISO for C99:
This standard has been withdrawn by both ANSI/INCITS and ISO/IEC in favour of C11.
"C11" is the previous standard for the C programming language. Notable features introduced over the previous revision include improved Unicode support, type-generic expressions using the new codice_17 keyword, a cross-platform multi-threading API (codice_18) and atomic types support in both core language and the library (codice_19).
One technical corrigendum has been published by ISO for C11:
"C18" is the current standard for the C programming language.
As part of the standardization process, ISO also publishes technical reports and specifications related to the C language:
More technical specifications are in development and pending approval, including the fifth and final part of TS 18661, a software transactional memory specification, and parallel library extensions.
ANSI C is now supported by almost all the widely used compilers. GCC and Clang, two major C compilers popular today, are both based on C11 with updates incorporating changes from later specifications such as C17 and C18. Any program written "only" in standard C and without any hardware-dependent assumptions is virtually guaranteed to compile correctly on any platform with a conforming C implementation. Without such precautions, most programs may compile only on a certain platform or with a particular compiler, due, for example, to the use of non-standard libraries, such as GUI libraries, or to reliance on compiler- or platform-specific attributes such as the exact size of certain data types and byte endianness.
To mitigate the differences between K&R C and the ANSI C standard, the codice_20 ("standard c") macro can be used to split code into ANSI and K&R sections.
In the above example, a prototype is used in a function declaration for ANSI compliant implementations, while an obsolescent non-prototype declaration is used otherwise. Those are still ANSI-compliant as of C99. Note how this code checks both definition and evaluation: this is because some implementations may set codice_20 to zero to indicate non-ANSI compliance. | https://en.wikipedia.org/wiki?curid=3172 |
Alien and Sedition Acts
The Alien and Sedition Acts were four laws passed by the Federalist-dominated 5th United States Congress and signed into law by President John Adams in 1798. They made it harder for an immigrant to become a citizen (Naturalization Act), allowed the president to imprison and deport non-citizens who were deemed dangerous ("An Act Concerning Aliens", also known as the Alien Friends Act of 1798) or who were from a hostile nation (Alien Enemy Act of 1798), and criminalized making false statements that were critical of the federal government (Sedition Act of 1798). The Alien Friends Act expired two years after its passage, and the Sedition Act expired on 3 March 1801, while the Naturalization Act and Alien Enemies Act had no expiration clause.
The Federalists argued that the bills strengthened national security during the Quasi-War, an undeclared naval war with France from 1798 to 1800. Critics argued that they were primarily an attempt to suppress voters who disagreed with the Federalist party and its teachings, and violated the right of freedom of speech in the First Amendment.
The Naturalization Act increased the residency requirement for American citizenship from five to fourteen years. At the time, the majority of immigrants supported Thomas Jefferson and the Democratic-Republicans, the political opponents of the Federalists. The Alien Friends Act allowed the president to imprison or deport aliens considered "dangerous to the peace and safety of the United States" at any time, while the Alien Enemies Act authorized the president to do the same to any male citizen of a hostile nation above the age of fourteen during times of war. Lastly, the controversial Sedition Act restricted speech that was critical of the federal government. Under the Sedition Act, the Federalists allowed people who were accused of violating the sedition laws to use truth as a defense. The Sedition Act resulted in the prosecution and conviction of many Jeffersonian newspaper owners who disagreed with the government.
The acts were denounced by Democratic-Republicans and ultimately helped them to victory in the 1800 election, when Thomas Jefferson defeated the incumbent, President Adams. The Sedition Act and the Alien Friends Act were allowed to expire in 1800 and 1801, respectively. The Alien Enemies Act, however, remains in effect as Chapter 3; Sections 21–24 of Title 50 of the United States Code. It was used by the government to identify and imprison allegedly "dangerous enemy" aliens from Germany, Japan, and Italy in World War II. (This was separate from the Japanese internment camps used to remove people of Japanese descent from the West Coast.) After the war they were deported to their home countries. In 1948 the Supreme Court determined that presidential powers under the acts continued after cessation of hostilities until there was a peace treaty with the hostile nation. The revised Alien Enemies Act remains in effect today.
The Federalists' fear of the opposing Democratic-Republican Party reached new heights with the Democratic-Republicans' support of France in the midst of the French Revolution. Some appeared to desire a similar revolution in the United States to overthrow the government and social structure. Newspapers sympathizing with each side exacerbated the tensions by accusing the other side's leaders of corruption, incompetence, and treason. As the unrest sweeping Europe threatened to spread to the United States, calls for secession started to rise, and the fledgling nation seemed ready to tear itself apart. Some of this agitation was seen by Federalists as having been caused by French and French-sympathizing immigrants. The Alien Act and the Sedition Act were meant to guard against this perceived threat of anarchy.
The Acts were highly controversial at the time, especially the Sedition Act. The Sedition Act was hotly debated in the Federalist-controlled Congress and passed only after multiple amendments softening its terms, such as enabling defendants to argue in their defense that their statements had been true. Still, it passed the House only after three votes and another amendment causing it to automatically expire in March 1801. They continued to be loudly protested and were a major political issue in the election of 1800. Opposition to them resulted in the also-controversial Virginia and Kentucky Resolutions, authored by James Madison and Thomas Jefferson.
Prominent prosecutions under the Sedition Act include:
After the passage of the highly unpopular Alien and Sedition Acts, protests occurred across the country, with some of the largest being seen in Kentucky, where the crowds were so large they filled the streets and the entire town square. Noting the outrage among the populace, the Democratic-Republicans made the Alien and Sedition Acts an important issue in the 1800 election campaign. Upon assuming the Presidency, Thomas Jefferson pardoned those still serving sentences under the Sedition Act, and Congress soon repaid their fines. It has been said that the Alien Acts were aimed at Albert Gallatin, and the Sedition Act aimed at Benjamin Bache's "Aurora". While government authorities prepared lists of aliens for deportation, many aliens fled the country during the debate over the Alien and Sedition Acts, and Adams never signed a deportation order.
The Virginia and Kentucky state legislatures also passed the Kentucky and Virginia Resolutions denouncing the federal legislation, secretly authored by Thomas Jefferson and James Madison. While the eventual resolutions followed Madison in advocating "interposition", Jefferson's initial draft would have nullified the Acts and even threatened secession. Jefferson's biographer Dumas Malone argued that this might have gotten Jefferson impeached for treason, had his actions become known at the time. In writing the Kentucky Resolutions, Jefferson warned that, "unless arrested at the threshold", the Alien and Sedition Acts would "necessarily drive these states into revolution and blood".
The Alien and Sedition Acts were never appealed to the Supreme Court, whose power of judicial review was not clearly established until "Marbury v. Madison" in 1803. Subsequent mentions in Supreme Court opinions beginning in the mid-20th century have assumed that the Sedition Act would today be found unconstitutional.
The Alien Enemies Act remained in effect at the outset of World War I and remains U.S. law today. It was recodified as part of the US war and national defense statutes (50 USC 21–24).
On December 7, 1941, responding to the bombing of Pearl Harbor, President Franklin Delano Roosevelt used the authority of the revised Alien Enemies Act to issue presidential proclamations 2525 (Alien Enemies – Japanese), 2526 (Alien Enemies – German), and 2527 (Alien Enemies – Italian), to apprehend, restrain, secure and remove Japanese, German, and Italian non-citizens. On February 19, 1942, citing authority of the wartime powers of the president and commander in chief, Roosevelt made Executive Order 9066, authorizing the Secretary of War to prescribe military areas and giving him authority that superseded the authority of other executives under Proclamations 2525–7. EO 9066 led to the internment of Japanese Americans, whereby over 110,000 people of Japanese ancestry living on the Pacific coast were forcibly relocated and forced to live in camps in the interior of the country, 62% of whom were United States citizens, not aliens.
Hostilities with Germany and Italy ended in May 1945, and with Japan in August. Alien enemies, and U.S. citizens, continued to be interned. On July 14, 1945, President Harry S. Truman issued Presidential Proclamation 2655, titled "Removal of Alien Enemies". The proclamation gave the Attorney General authority regarding aliens enemies within the continental United States, to decide whether they are "dangerous to the public peace and safety of the United States", to order them removed, and to create regulations governing their removal. The proclamation cited the revised Alien Enemies Act (50 U.S.C. 21–24) as to powers of the President to make public proclamation regarding "subjects of the hostile nation" more than fourteen years old and living inside the United States but not naturalized, to remove them as alien enemies, and to determine the means of removal.
On September 8, 1945, Truman issued Presidential Proclamation 2662, titled "Removal of Alien Enemies". The revised Alien Enemies Act (50 U.S.C. 21–24) was cited as to removal of alien enemies in the interest of the public safety. The United States had agreed, at a conference in Rio de Janeiro in 1942, to assume responsibility for the restraint and repatriation of dangerous alien enemies to be sent to the United States from Latin American republics. In another inter-American conference in Mexico City on March 8, 1945, North and South American governments resolved to recommend the adoption of measures to prevent aliens of hostile nations who were deemed to be security threats or threats to welfare from remaining in North or South America. Truman gave authority to the Secretary of State to determine if alien enemies in the United States who were sent to the United States from Latin America, or who were in the United States illegally, endangered the welfare or security of the country. The Secretary of State was given power to remove them "to destinations outside the limits of the Western Hemisphere", to the former enemy territory of the governments to whose "principles of which (the alien enemies) have adhered". The Department of Justice was directed to assist the Secretary of State in their prompt removal.
On April 10, 1946, Truman issued Presidential Proclamation 2685, titled "Removal of Alien Enemies", citing the revised Alien Enemies Act (50 U.S.C. 21–24) as to its provision for the "removal from the United States of alien enemies in the interest of the public safety". Truman proclaimed regulations that were in addition to and supplemented other "regulations affecting the restraint and removal of alien enemies". As to alien enemies who had been brought into the continental United States from Latin America after December 1941, the proclamation gave the Secretary of State authority to decide if their presence was "prejudicial to the future security or welfare of the Americas", and to make regulations for their removal. 30 days was set as the reasonable time for them to "effect the recovery, disposal, and removal of (their) goods and effects, and for (their) departure".
In 1947 New York's Ellis Island continued to incarcerate hundreds of ethnic Germans. Fort Lincoln was a large internment camp still holding internees in North Dakota. North Dakota was represented by controversial Senator William "Wild Bill" Langer. Langer introduced a bill (S. 1749) "for the relief of all persons detained as enemy aliens", and directing the US Attorney General to cancel "outstanding warrants of arrest, removal, or deportation" for many German aliens still interned, listing many by name, and all of those detained by the Immigration and Naturalization Service (INS), which was under the Department of Justice (DOJ). It directed the INS not to issue any more warrants or orders, if their only basis was the original warrants of arrest. The bill never passed. The Attorney General gave up plenary jurisdiction over the last internee on Ellis Island late in 1948.
In Ludecke v. Watkins (1948), the Supreme Court interpreted the time of release under the Alien Enemies Act. German alien Kurt G. W. Ludecke was detained in 1941 under Proclamation 2526, and continued to be held after the cessation of hostilities. In 1947, Ludecke petitioned for a writ of habeas corpus to order his release, after the Attorney General ordered him deported. The court ruled 5–4 against releasing Ludecke, finding that the Alien Enemies Act allowed for detainment beyond the time hostilities ceased, until an actual treaty was signed with the hostile nation or government.
In 1988, President Reagan and the 100th Congress introduced the Civil Liberties Act of 1988, whose purpose amongst others was to acknowledge and apologize for actions of the US against individuals of Japanese ancestry during World War II. The statement from Congress agreed with the Commission on Wartime Relocation and Internment of Civilians, that "a grave injustice was done to both citizens and permanent resident aliens of Japanese ... without adequate security reasons and without any acts of espionage or sabotage documented by the Commission, and were motivated largely by racial prejudice, wartime hysteria, and a failure of political leadership".
In 2015, presidential candidate Donald Trump made a proposal to ban foreign Muslims from entering the United States (as part of the War on Terror); Roosevelt's application of the Alien Enemies Act was cited as a possible justification. The proposal created international controversy, drawing criticism from foreign heads of state that have historically remained uninvolved in United States presidential elections. A former Reagan Administration aide noted that, despite criticism of Trump's proposal to invoke the law, "the Alien Enemies Act ... is still on the books ... (and people) in Congress for many decades (haven't) repealed the law ... (nor has) Barack Obama". Other critics claimed that the proposal violated founding principles, and was unconstitutional for singling out a religion, and not a hostile nation. They included the Pentagon and others, who argued that the proposal (and its citation of the Alien Enemies proclamations as authority) played into the ISIL narrative that the United States was at war with the entire Muslim religion (not just with ISIL and other terrorist entities). On June 26, 2018, in the 5–4 decision "Trump v. Hawaii", the U.S. Supreme Court upheld Presidential Proclamation 9645, the third version of President Trump's travel ban, with the majority opinion being written by Chief Justice John Roberts. | https://en.wikipedia.org/wiki?curid=3173 |
Antinomy
Antinomy (Greek ἀντί, "antí", "against, in opposition to", and νόμος, "nómos", "law") refers to a real or apparent mutual incompatibility of two laws. It is a term used in logic and epistemology, particularly in the philosophy of Kant.
There are many examples of antinomy. A self-contradictory phrase such as "There is no absolute truth" can be considered an antinomy because the statement itself claims to be an absolute truth, and therefore denies itself any claim to truth. A paradox such as "this sentence is false" can also be considered an antinomy: for the sentence to be true, it must be false, and vice versa.
The term acquired a special significance in the philosophy of Immanuel Kant (1724–1804), who used it to describe the equally rational but contradictory results of applying to the universe of pure thought the categories or criteria of reason that are proper to the universe of sensible perception or experience (phenomena). Empirical reason cannot here play the role of establishing rational truths because it goes beyond possible experience and is applied to the sphere of that which transcends it.
For Kant there are four antinomies, connected with: (1) the limitation of the universe in respect of space and time; (2) the theory that the whole consists of indivisible atoms; (3) the problem of free will in its relation to universal causality; and (4) the existence of a necessary being.
In each antinomy, a thesis is contradicted by an antithesis. For example: in the first antinomy, Kant proves the thesis that time must have a beginning by showing that if time had no beginning, then an infinity would have elapsed up until the present moment. This is a manifest contradiction because infinity cannot, by definition, be completed by "successive synthesis"—yet just such a finalizing synthesis would be required by the view that time is infinite; so the thesis is proven. Then he proves the antithesis, that time has no beginning, by showing that if time had a beginning, then there must have been "empty time" out of which time arose. This is incoherent (for Kant) for the following reason: Since, necessarily, no time elapses in this pretemporal void, then there could be no alteration, and therefore nothing (including time) would ever come to be: so the antithesis is proven. Reason makes equal claim to each proof, since they are both correct, so the question of the limits of time must be regarded as meaningless.
This was part of Kant's critical program of determining limits to science and philosophical inquiry. These contradictions are inherent in reason when it is applied to the world as it is in itself, independently of any perception of it (this has to do with the distinction between phenomena and noumena). Kant's goal in his critical philosophy was to identify what claims are and are not justified, and the antinomies are a particularly illustrative example of his larger project.
Kant is not the only philosopher to employ the term, however. Another famous use of antinomy is by Karl Marx, in "Capital" Volume One, in the chapter entitled "The Working Day". On Marx's account, capitalist production sustains "the assertion of a right to an unlimited working day, and the assertion of a right to a limited working day, both with equal justification". Furner emphasizes that the thesis and antithesis of this antinomy are not contradictory opposites, but rather "consist in the assertion of rights to states of affairs that are contradictory opposites". | https://en.wikipedia.org/wiki?curid=3175 |
Ascending chain condition
In mathematics, the ascending chain condition (ACC) and descending chain condition (DCC) are finiteness properties satisfied by some algebraic structures, most importantly ideals in certain commutative rings. These conditions played an important role in the development of the structure theory of commutative rings in the works of David Hilbert, Emmy Noether, and Emil Artin.
The conditions themselves can be stated in an abstract form, so that they make sense for any partially ordered set. This point of view is useful in abstract algebraic dimension theory due to Gabriel and Rentschler.
A partially ordered set (poset) "P" is said to satisfy the ascending chain condition (ACC) if no infinite strictly ascending sequence a_1 < a_2 < a_3 < ... of elements of "P" exists. Equivalently, given any weakly ascending sequence a_1 ≤ a_2 ≤ a_3 ≤ ... of elements of "P", there exists a positive integer "n" such that a_n = a_(n+1) = a_(n+2) = ....
Similarly, "P" is said to satisfy the descending chain condition (DCC) if there is no infinite strictly descending chain a_1 > a_2 > a_3 > ... of elements of "P". Equivalently, every weakly descending sequence a_1 ≥ a_2 ≥ a_3 ≥ ... of elements of "P" eventually stabilizes.
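The contrast between the two conditions can be illustrated with the nonnegative integers under their usual order: they satisfy the DCC (any strictly descending chain must eventually hit 0 and stop) but not the ACC (0 < 1 < 2 < ... never stabilizes). A minimal Python sketch, with hypothetical helper names, follows a strictly descending chain until it can descend no further:

```python
def strictly_descending_chain(start, step_fn):
    """Follow step_fn downward from `start` through the nonnegative
    integers.  The DCC for (N, <=) guarantees termination: a strictly
    descending chain of nonnegative integers is always finite.
    step_fn returns the next (smaller) element, or None to stop."""
    chain = [start]
    while True:
        nxt = step_fn(chain[-1])
        if nxt is None or nxt >= chain[-1] or nxt < 0:
            break  # refuse to continue unless the chain strictly descends
        chain.append(nxt)
    return chain

# Halving descends strictly until it reaches 0, so the chain is finite.
print(strictly_descending_chain(100, lambda n: n // 2 if n > 0 else None))
# [100, 50, 25, 12, 6, 3, 1, 0]
```

No analogous guard is possible in the ascending direction: a step function such as `lambda n: n + 1` would ascend forever, which is exactly the failure of the ACC for the naturals.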
Consider the ring Z of integers. Each ideal of Z consists of all multiples of some number n. For example, the ideal 6Z = {..., -12, -6, 0, 6, 12, ...} consists of all multiples of 6. Let 2Z be the ideal consisting of all multiples of 2. The ideal 6Z is contained inside the ideal 2Z, since every multiple of 6 is also a multiple of 2. In turn, the ideal 2Z is contained in the ideal Z itself, since every multiple of 2 is a multiple of 1. However, at this point there is no larger ideal; we have "topped out" at Z.
In general, if I_1, I_2, I_3, ... are ideals of Z such that I_1 is contained in I_2, I_2 is contained in I_3, and so on, then there is some n for which I_n = I_(n+1) = I_(n+2) = .... That is, after some point all the ideals are equal to each other. Therefore the ideals of Z satisfy the ascending chain condition, where ideals are ordered by set inclusion. Hence Z is a Noetherian ring. | https://en.wikipedia.org/wiki?curid=3189
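The ideal chain in Z can be sketched computationally. In the following Python sketch (helper names are illustrative, not from any library), an ideal nZ is represented by its nonnegative generator n, and the ascending chain I_k = (a_1, ..., a_k) = gcd(a_1, ..., a_k)Z is built by accumulating gcds; the ACC shows up as the chain of generators stabilizing:

```python
from math import gcd

def ascending_ideal_chain(generators):
    """Represent the ideal nZ of Z by its nonnegative generator n
    (0 stands for the zero ideal).  I_k = gcd(a_1, ..., a_k)Z, and
    adding a generator can only shrink the gcd, so I_1 ⊆ I_2 ⊆ ...."""
    chain, g = [], 0
    for a in generators:
        g = gcd(g, a)
        chain.append(g)
    return chain

def stabilization_index(chain):
    """Smallest 1-based n with chain[n-1] == chain[n] == ... == chain[-1];
    the ACC guarantees such an n exists for any ascending chain in Z."""
    n = len(chain)
    while n > 1 and chain[n - 1] == chain[n - 2]:
        n -= 1
    return n

chain = ascending_ideal_chain([24, 36, 30, 42, 54])
print(chain)                       # [24, 12, 6, 6, 6]
print(stabilization_index(chain))  # 3: the chain tops out at the ideal 6Z
```

Here 24Z ⊆ 12Z ⊆ 6Z ⊆ 6Z ⊆ 6Z: after the third step all the ideals are equal, mirroring the stabilization argument above.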
Adin Steinsaltz
Rabbi Adin Even-Israel Steinsaltz () (born 11 July 1937) is an Israeli Chabad Chasidic rabbi, teacher, philosopher, social critic, and publisher.
His "" was originally published in modern Hebrew, with a running commentary to facilitate learning, and has also been translated into English, French, Russian, and Spanish. Beginning in 1989, Steinsaltz published several tractates in Hebrew and English of the Babylonian (Bavli) Talmud in an English-Hebrew edition. The first volume of a new English-Hebrew edition, the Koren Talmud Bavli, was released in May 2012, and has since been brought to completion.
Steinsaltz is a recipient of the Israel Prize for Jewish Studies (1988), the President's Medal (2012), and the Yakir Yerushalayim prize (2017).
Adin Steinsaltz was born in Jerusalem in 1937 to secular parents, Avraham Steinsaltz and Leah (née Krokovitz). His father was a great-grandson of the first Slonimer Rebbe, Avrohom Weinberg, and was a student of Hillel Zeitlin. Avraham and Leah Steinsaltz met through Zeitlin. They immigrated to Israel in 1924. Avraham Steinsaltz, a devoted communist and member of Lehi, went to Spain in 1936 to fight with the International Brigades in the Spanish Civil War. Adin was born the following year.
Steinsaltz became a baal teshuva during his teenage years and learned from rabbi Shmuel Elazar Heilprin (Rosh yeshiva of Yeshivas Toras Emes Chabad). He studied mathematics, physics, and chemistry at the Hebrew University, in addition to rabbinical studies at Yeshivas Tomchei Tmimim in Lod and with rabbis Dov Ber Eliezrov and Shmaryahu Noach Sasonkin. Following graduation, he established several experimental schools after an unsuccessful attempt to start a neo-Hassidic community in the Negev desert, and, at the age of 24, became Israel's youngest school principal.
In 1965, he founded the Israel Institute for Talmudic Publications, and began his monumental work on the Talmud, including translation into Hebrew, English, Russian, and various other languages. The Steinsaltz editions of the Talmud include translation from the original Aramaic and a comprehensive commentary. Steinsaltz completed his Hebrew edition of the entire Babylonian Talmud in November 2010, at which time Koren Publishers Jerusalem became the publisher of all of his works, including the Talmud. While not without criticism (such as by Jacob Neusner, 1998), the Steinsaltz edition is widely used throughout Israel, the United States, and the world.
Steinsaltz's classic work of Kabbalah, "The Thirteen Petalled Rose", was first published in 1980, and now appears in eight languages. In all, Steinsaltz has authored some 60 books and hundreds of articles on subjects including Talmud, Jewish mysticism, Jewish philosophy, sociology, historical biography, and philosophy. Many of these works have been translated into English by his close personal friend, now deceased, Yehuda Hanegbi. His memoir-biography on the Lubavitcher Rebbe, rabbi Menachem Mendel Schneerson, was published by Maggid Books (2014).
Continuing his work as a teacher and spiritual mentor, Steinsaltz established Yeshivat Mekor Chaim alongside rabbis Menachem Froman and Shagar in 1984, and Yeshivat Tekoa in 1999. He also serves as president of the Shefa Middle and High Schools. He has served as scholar in residence at the Woodrow Wilson International Center for Scholars in Washington, D. C., and the Institute for Advanced Study in Princeton. His honorary degrees include doctorates from Yeshiva University, Ben Gurion University of the Negev, Bar Ilan University, Brandeis University, and Florida International University. Steinsaltz is also Rosh Yeshiva of Yeshivat Hesder Tekoa.
A follower of rabbi Menachem Mendel Schneerson of Chabad-Lubavitch, Steinsaltz went to help Jews in the Soviet Union, assisting Chabad's "shluchim" (emissaries) network. Steinsaltz serves as the region's "Duchovny Ravin" (Spiritual Rabbi), a historic Russian title which indicates that he is the spiritual mentor of Russian Jewry. In this capacity, Steinsaltz travelled to Russia and the Republics once each month from his home in Jerusalem. During his time in the former Soviet Union, he founded the Jewish University, both in Moscow and Saint Petersburg. The Jewish University is the first degree-granting institution of Jewish studies in the former Soviet Union. In 1991, on Schneerson's advice, he changed his family name from Steinsaltz to Even-Israel. Besides Chabad, Steinsaltz is also inspired by the teachings of the Kotzker Rebbe. He was in close contact with the fifth Gerrer Rebbe, Yisroel Alter, and his brother and successor, Simcha Bunim Alter.
Steinsaltz has taken a cautious approach to interfaith dialogues. During a visit of a delegation of Roman Catholic cardinals in Manhattan in January 2004, he said that, "You do not have to raise over-expectations of a meeting, as it doesn't signify in itself a breakthrough; however, the opportunity for cardinals and rabbis to speak face to face is valuable. It's part of a process in which we can talk to each other in a friendly way", and called for "a theological dialogue that asks the tough questions, such as whether Catholicism allows for Jews to enter eternal paradise".
Steinsaltz and his wife live in Jerusalem, and have three children and more than ten grandchildren. In 2016, Rav Steinsaltz suffered a stroke, leaving him unable to speak. His son, Rabbi Menachem ("Meni") Even-Israel, is the Executive Director of the Steinsaltz Center, Rabbi Steinsaltz's umbrella organization, located in the Nachlaot neighborhood of Jerusalem.
Steinsaltz accepted the position as Nasi (President) of the 2004 attempt to revive the Sanhedrin. In 2008, he resigned from this position due to differences of opinion.
Steinsaltz is a prolific author and commentator, having written numerous books on Jewish knowledge, tradition and culture, and produced original commentaries on the entirety of Jewish canon: Tanakh (Torah, Prophets, and Writings), the Babylonian Talmud, the Mishna, the Mishneh Torah, and Tanya.
His published works include:
Steinsaltz was invited to speak at the Aspen Institute for Humanistic Studies at Yale University in 1979.
Prior to his stroke, he gave evening seminars in Jerusalem, which, according to Newsweek, usually lasted until 2:00 in the morning, and attracted prominent politicians, such as the former Prime Minister Levi Eshkol and former Finance Minister Pinhas Sapir.
On 21 April 1988, Steinsaltz received the Israel Prize for Jewish Studies.
On 9 February 2012, Steinsaltz was honored by Israeli President Shimon Peres with Israel's first President's Prize, alongside Zubin Mehta, Uri Slonim, Henry Kissinger, Judy Feld Carr, and the Rashi Foundation. Steinsaltz was presented with this award for his contribution to the study of Talmud, making it more accessible to Jews worldwide.
Steinsaltz was also presented with the 2012 National Jewish Book Award in the category of Modern Jewish Thought & Experience by the Jewish Book Council for his commentary, translation, and notes in the Koren Babylonian Talmud. The Modern Jewish Thought & Experience award was awarded on 15 January 2013 in memory of Joy Ungerleider Mayerson by the Dorot Foundation.
On 22 May 2017, Jerusalem Mayor Nir Barkat visited Steinsaltz at his home to present him with the Yakir Yerushalayim ("Worthy Citizen of Jerusalem") medal. This medal of achievement was awarded to Steinsaltz for his writing and translating work.
On 10 June 2018, Steinsaltz was honored at a Gala Dinner at the Orient Hotel in Jerusalem for his pedagogical achievements throughout a lifetime dedicated to Jewish education. A limited-edition version of "The Steinsaltz Humash" was presented to the attendees of this event.
Jacob Neusner's "How Adin Steinsaltz Misrepresents the Talmud. Four False Propositions from his "Reference Guide"" (1998) displays strong disagreement. Dr. Jeremy Brown criticized the edition as having inaccurate scientific information, such as identifying Ursa Major as a star and describing polycythemia vera as a disease causing excessive bleeding from the gums and from ordinary cuts.
Steinsaltz's works were fiercely opposed by parts of the Orthodox Jewish world, with many leading rabbis such as Elazar Shach, Yosef Shalom Eliashiv, and Eliezer Waldenberg harshly condemning his Talmud and other books. Much of the criticism was not focused on the Hebrew Talmud translation per se but stemmed from other works of Steinsaltz and, by extension, Steinsaltz's general worldview. Waldenberg wrote that when "The Essential Talmud" and "Biblical Images" (Hebrew: "דמויות מן המקרא" ו"תלמוד לכל") were brought before him, he was shocked to see the way in which Steinsaltz described the Patriarchs and Talmudic sages, as well as his approach to the Oral Torah. Waldenberg further wrote that these works had the power to "poison the souls" of those who read them.
Aharon Feldman has penned a lengthy critical review of the Steinsaltz Talmud. Among many criticisms, he writes, "Specifically, the work is marred by an extraordinary number of inaccuracies stemming primarily from misreadings of the sources; it fails to explain those difficult passages which the reader would expect it to explain; and it confuses him with notes which are often irrelevant, incomprehensible, and contradictory." Feldman says he fears that, "An intelligent student utilizing the Steinsaltz Talmud as his personal instructor might in fact conclude that Talmud in general is not supposed to make sense." Furthermore, writes Feldman, the Steinsaltz Talmud gives off the impression that the Talmud is intellectually flabby, inconsistent, and often trivial.
While certain members of the Haredi community may have opposition to Steinsaltz's works, other Jewish leaders, rabbis, and authors have spoken or written about their appreciation for Steinsaltz's unique educational approach. Rabbi John Rosove of Temple Israel of Hollywood featured "Opening The Tanya", "Learning the Tanya", and "Understanding the Tanya" on his list of the top ten recommended Jewish books. These volumes are written by rabbi Shneur Zalman of Liadi, the founder of the Chabad Lubavitch movement, and include commentary by Steinsaltz. Through reading the Tanya, readers can explore all aspects of the central text of the Chabad movement. Rabbi Elie Kaunfer, a rosh yeshiva and the CEO of Mechon Hadar Yeshiva, discussed his gratitude for Steinsaltz's Global Day of Jewish Learning and the opportunity created by this online platform for learning and creating a deeper connection to Torah, other Jewish text, and Jews worldwide. Rabbi Pinchas Allouche, who studied under Steinsaltz, notes that Steinsaltz "is a world scholar" who "revolutionized the Jewish landscape" through his commentary, other writings, and educational organizations. In 1988, secular Israeli historian Zeev Katz compared Steinsaltz's importance to that of Rashi and Maimonides, two Jewish scholars of medieval times. In addition, Ilana Kurshan, an American-Israeli author, wrote that Steinsaltz's ability to bring "the historical world of the Talmudic sages to life" created an enjoyable Jewish learning experience for her when she was intensely studying Talmud. | https://en.wikipedia.org/wiki?curid=3192
American Revolutionary War
The American Revolutionary War (1775–1783), also known as the American War of Independence, was initiated by the thirteen original colonies against the Kingdom of Great Britain over their objection to Parliament's direct taxation and its lack of colonial representation. The overthrow of British rule established the United States of America as the first republic in history extending over a large territory.
Early British policy for empire in North America was one of salutary neglect. It largely left the settlers there alone to govern themselves. After 1763 Britain gained a new expanded Empire, and Parliament turned to the Navigation Acts to increase revenues. That provoked unrest among the Thirteen Colonies that continued into the next decade. To punish the 1773 Boston Tea Party, Parliament’s Intolerable Acts closed the port of Boston and suspended their colonial legislature, as Royal Governors then did elsewhere. Twelve colonial house assemblies sent delegates to the First Continental Congress. It coordinated a systematic boycott of British goods, then called for a second congress. The Second Continental Congress appointed George Washington in June 1775 as its commander in chief to create a Continental Army and to oversee the Siege of Boston. Their July 1775 Olive Branch Petition was answered by King George III with a Proclamation of Rebellion. Congress then passed the Declaration of Independence in July 1776.
Congress sponsored an attack on British Quebec in the winter of 1775–76, but it failed; the British were then evicted from Boston in March 1776. The British commander in chief, General Sir William Howe, launched a counter-offensive, capturing New York City. Washington retaliated with harassing attacks at Trenton and Princeton. In 1777, the British launched an invasion from Quebec to isolate New England. Howe's 1777–78 Philadelphia campaign captured the city, but the British lost an army at Saratoga in October 1777. At Valley Forge that winter, Washington built a professional army. The American victory at Saratoga convinced the French to enter into treaties for trade and to defend US independence from Britain in 1778.
Spanish Luisiana Governor Bernardo de Gálvez cleared British forces from Spanish territory. This opened a route for supplies, moving north from the Spanish and from American privateers, for the 1779 Virginia militia conquest of Western Quebec (later the US Northwest Territory). Gálvez then expelled British forces from Mobile and Pensacola, cutting off British military assistance to their Indian allies in the interior South. Howe's replacement, General Sir Henry Clinton, mounted a 1778 "Southern strategy" from Charleston. After initial success in taking Savannah, British losses at King's Mountain and Cowpens led the British southern army to retreat to Yorktown. A decisive French naval victory there brought the October 1781 surrender of the second British army lost in the American Revolution.
In early 1782, Parliament voted to end all offensive operations in America, and in December 1782 George III spoke from the British throne for US independence. In April 1783, Congress accepted the British-proposed treaty that met its peace demands including independence and sovereignty west to the Mississippi River. On September 3, 1783, the Treaty of Paris was signed between Great Britain and the United States, recognizing the United States, making peace between the two nations, and formally ending the American Revolution.
Following Cromwell's Interregnum in England after 1649, and then under the restored Stuart kings after 1660, early British North American imperial policy was one of benign neglect. Unlike the Spanish colonial government of the same period, the British allowed native-born gentry to become Royal Council members in each colonial legislature. Following the accession of George III to the throne of Great Britain at the end of the Seven Years' War, Parliament increased revenues through the Navigation Acts to fund the war debt. They also were used to pay for greater administrative costs in the expanded British Empire brought by the territorial settlements at the Peace of Paris (1763). American colonists resented the additional levies from Parliament because their local colonial assemblies had taxed them to contribute to the North American part of the British war, and their county militias had fought against the French and their Indian allies on the frontier.
Parliament passed the Stamp Act as a revenue measure in 1765. That began a new direct internal tax without consent of the local colonial assemblies. American colonial legislatures argued that they had exclusive right to impose taxes within their own jurisdictions. Colonists condemned the tax because their rights as Englishmen protected them from any tax from a Parliament in which they had no elected representatives. Parliament argued that the colonies were "virtually represented", an argument that was criticized throughout the British Empire. The act was repealed in 1766, but Parliament also affirmed its right to pass laws that were binding on the colonies. From 1767, Parliament began passing the Townshend Acts to raise additional revenue for new royal officials to enforce a mercantile policy. It enacted new taxes on tea, lead, glass, and paper.
Collecting revenues proved difficult, even with new writs of assistance that gave the Crown's enforcement officers the power to make unlimited searches of a suspect without a warrant until the next king took the throne. When the British seized the sloop "Liberty" in 1768 on suspicions of smuggling, it triggered a riot. British troops were landed to occupy Boston and restore order. Seven years after the 1763 Peace, four consecutive Whig governments had overseen continuously declining relations in the American colonies. In 1770, the Tory Lord North gained office; he would be Prime Minister to George III through to the end of the Revolution, when the British defeat at Yorktown forced North's resignation. Enacting Lord North's tougher policy in America, Parliament threatened to extradite colonists to face trial in England as traitors, and tensions escalated when British garrison troops subsequently fired on rock-throwing civilians at the Boston Massacre.
In 1772, Rhode Islanders boarded and burned a customs schooner. In an effort to quiet colonial unrest, Parliament then repealed all taxes except the one on tea, passing the Tea Act in 1773. Colonial objections continued, and the landing of tea for sale was blocked in all of the licensed ports of Charleston, Philadelphia, New York City and Boston. But Boston was singled out as the instigator for the others.
Faced with widespread colonial resistance to its supremacy, Parliament reacted with punitive legislation. It closed Boston Harbor, and the Royal Governor dissolved the independent colonial legislature. Other Royal Governors followed suit. Further measures allowed the extradition of American colonial officials for trial elsewhere in the British Empire. The Quartering Act allowed the quartering of British troops in private homes without the owner's permission. The colonists referred to the measures as the "Intolerable Acts", and they argued that the constitutional rights of their English Charters and their natural rights as free men were being violated. The acts were widely opposed among most colonial legislatures, where Patriots gained support from among neutrals, while Loyalist supporters were quieted.
The elected members in the Royal colonial legislatures, those who represented the smaller landowners in the lower-house assemblies, responded by establishing ad hoc provincial legislatures, variously called Congresses, Conventions and Conferences. They effectively removed Crown control within their respective colonies. Twelve sent representatives to the First Continental Congress to develop a joint American response to the crisis. It passed a compact, the Continental Association, declaring a trade boycott against Britain.
While the Congress also affirmed that Parliament had no authority over internal American matters, they also acquiesced to trade regulations for the benefit of the empire. Awaiting some measure of reconciliation from Parliament and the King's Tory government, Congress authorized the extralegal committees and conventions of the colonial legislatures to enforce the Congressional boycott. In the event, the boycott was effective, as imports from Britain dropped by 97% in 1775 compared to 1774.
Parliament refused to yield to Congressional proposals. In 1775, it declared Massachusetts to be in a state of rebellion and enforced a blockade of the colony. It then passed the Restraining Acts of 1775, aimed at limiting colonial trade to the British West Indies and the British Isles. New England ships were barred from the Newfoundland cod fisheries. These increasing tensions led to a mutual scramble for ordnance between royal governors and the elected assemblies, and British raids on colonial powder magazines pushed the assemblies towards open war. Each assembly was required by law to maintain such magazines to provide arms and ammunition for frontier defense. Thomas Gage was appointed the British Commander-in-Chief for North America; as military governor of Massachusetts, he was ordered on April 14, 1775, to disarm the local militias. Congress unanimously appointed George Washington as commander-in-chief of its newly authorized Continental Army on June 15, 1775.
Well known for his accomplishments in the French and Indian War, George Washington was unanimously elected by Congress to command the army. He designed the overall military strategy of the war in cooperation with Congress, established the principle of civilian supremacy in military affairs, personally recruited his senior office corps and kept the states all pointed toward a common goal. For the first three years until after Valley Forge, the Continental Army was largely supplemented by local state militias. At Washington's discretion, the inexperienced officers and untrained troops were employed in a Fabian strategy rather than resorting to frontal assaults against Britain's professional army. During the war Washington lost more battles than he won, but he maintained a fighting force in the face of British field armies and never surrendered his troops.
Acting on intelligence, Gage planned to destroy stores of militia ordnance at Concord and, on the way through Lexington, to capture John Hancock and Samuel Adams, the two principal provocateurs of the rebellion. The operation was to commence before midnight, with the objectives completed and the troops back in Boston before the multitudes of Patriot militia could respond. However, the Patriots also had a good intelligence network, which Paul Revere had helped organize, and they learned of Gage's intentions before he could act. Revere quickly dispatched this information, alerting Captain John Parker and the Patriot forces in Concord.
Fighting broke out during the Battles of Lexington and Concord on April 19, forcing the British troops to conduct a fighting withdrawal to Boston. Overnight, the local militia converged on and laid siege to Boston. On May 25, 4,500 British reinforcements arrived with generals William Howe, John Burgoyne, and Henry Clinton. During the Battle of Bunker Hill the British seized the Charlestown Peninsula on June 17 with a frontal assault costing many officer casualties to American rifle snipers. Surviving British commanders were dismayed at the costly attack which had gained them little, and Gage appealed to London stressing the need for a large army to suppress the revolt. Total British losses killed and wounded exceeded 1,000, leading Howe to replace Gage.
Congressional leader John Adams of Massachusetts nominated Virginia delegate George Washington for commander-in-chief of the Continental Army in June 1775. He had previously commanded Virginia militia regiments in British combat commands during the French and Indian War. Washington proceeded to Boston to assume field command of the ongoing Siege of Boston on July 3. Howe made no effort to attack in a standoff with Washington, who made no plan to assault the city. Instead, the Americans fortified Dorchester Heights. In early March 1776, Colonel Henry Knox arrived with heavy artillery captured from a raid on Fort Ticonderoga. Under cover of darkness Washington placed his artillery atop Dorchester Heights March 5, threatening Boston and the British ships in the harbor. Howe did not want another battle like Gage's Bunker Hill, so he evacuated Boston. The British were permitted to withdraw without further casualties on March 17, and they sailed to Halifax, Nova Scotia. Washington then moved his army south to New York.
Beginning in August 1775, American Privateers had begun to raid villages in Nova Scotia, first at Saint John, then Charlottetown and Yarmouth. They continued in 1776 at Canso and then a land assault on Fort Cumberland.
Meanwhile, British officials in Quebec began negotiating with Indian tribes to support them, while the Americans urged them to maintain neutrality. In April 1775, Congress feared an Anglo-Indian attack from Canada and authorized an invasion of Quebec. Quebec had a largely Francophone population and had been under British rule for only 12 years. A Massachusetts-sponsored uprising in Nova Scotia had been dispersed in November, but the Americans expected that Quebec would welcome liberation from the British. The second American expedition into the former French territory was defeated at the Battle of Quebec on December 31. After a loose siege, the Americans withdrew on May 6, 1776. A failed American counter-attack on June 8 ended their operations in Quebec. However, British pursuit was blocked by American ships on Lake Champlain until they were cleared on October 11 at the Battle of Valcour Island. The American troops were forced to withdraw to Ticonderoga, ending the campaign. The invasion cost the Patriots their support in British public opinion, and their aggressive anti-Loyalist policies had diluted Canadian support. No further Patriot attempts to invade were subsequently made.
In Virginia, Royal Governor Lord Dunmore had attempted to disarm the militia as tensions increased, although no fighting broke out. He issued a proclamation on November 7, 1775, promising freedom for slaves who fled their Patriot masters to fight for the Crown. Dunmore's troops were repulsed at the Battle of Great Bridge, and Dunmore fled to British ships anchored off the nearby port at Norfolk. The Third Virginia Convention refused to disband its militia or accept martial law. Speaker Peyton Randolph in the last Royal Virginia Assembly session did not make a response to Lord Dunmore concerning Parliament's Conciliatory Resolution. Negotiations failed in part because Randolph was also President of the Virginia Conventions, and he deferred to Congress, where he was also President. Dunmore ordered the ship's crews to burn Norfolk on January 1, 1776.
Fighting broke out on November 19 in South Carolina between Loyalist and Patriot militias, and the Loyalists were subsequently driven out of the colony. Loyalists were recruited in North Carolina to reassert colonial rule in the South, but they were decisively defeated and Loyalist sentiment was subdued. A troop of British regulars set out to reconquer South Carolina and launched an attack on Charleston during the Battle of Sullivan's Island, on June 28, 1776, but it failed and left the South in Patriot control until 1780.
Shortages in Patriot gunpowder led Congress to authorize an expedition against the Bahamas colony in the British West Indies to secure additional ordnance there. On March 3, 1776, the Americans landed at the Battle of Nassau, where the local militia offered no resistance. The expedition confiscated what supplies it could and sailed for home on March 17. After a brief skirmish with a Royal Navy frigate at the Battle of Block Island on April 6, the squadron reached New London, Connecticut, on April 8.
After fighting began, Congress sent George III the Olive Branch Petition in a further attempt to avert war. The King rejected the offer as insincere because Congress was simultaneously making contingency plans for muskets and gunpowder. He answered militia resistance at Bunker Hill with a Proclamation of Rebellion, which further provoked the Patriot faction in Congress. Parliament rejected coercive measures on the colonies by 170 votes; the tentative Whig majority there feared that an aggressive policy would drive the Americans towards independence. Tories stiffened their resistance to compromise, and the King himself began micromanaging the war effort. The Irish Parliament pledged to send troops to America, and Irish Catholics were allowed to enlist in the army for the first time.
The initial hostilities in Boston caused a pause in British activity; their forces remained in New York City awaiting more troops. That inactive response gave the Patriots a political advantage in the colonial assemblies, and the British lost control over every former colony. The army in the British Isles had been deliberately kept small since 1688 to prevent abuses of power by the King. To prepare for war overseas, Parliament concluded subsidy treaties with small German states for additional troops, and within a year it had sent an army of 32,000 men to America.
Thomas Paine's pamphlet "Common Sense" boosted public support for independence throughout the thirteen colonies, and it was widely reprinted. After the rejection of the Olive Branch Petition, Congress appointed the Committee of Five, consisting of Thomas Jefferson, John Adams, Benjamin Franklin, Roger Sherman and Robert Livingston, to draft a Declaration of Independence politically separating the United States from Britain. The document argued for government by consent of the governed on the authority of the people of the thirteen colonies as "one people", along with a long list of grievances indicting George III for violating English rights. On July 2, Congress voted for independence, and it published the declaration on July 4. Tories held that any subjects of the King who presumed to remove their ruler, for whatever reason, were committing treason, and George III was encouraged to punish those responsible with death. George Washington had the Declaration read to assembled troops in New York City on July 9. Later that evening a mob tore down a lead statue of the King, which was later melted down into musket balls.
Patriots in each state then passed Test Laws that required all residents to swear allegiance to their state. These were meant to identify neutrals or to drive opponents of independence into self-exile. Failure to take the oath meant possible imprisonment, forced exile, or even death. American Tories were barred from public office, forbidden from practicing medicine and law, or forced to pay increased taxes. Some could not execute wills or become guardians. Congress enabled states to confiscate Loyalist property to fund the war, and some Quakers who remained neutral had their property confiscated. States later prevented some Loyalists from collecting any debts that they were owed.
After regrouping at Halifax, William Howe determined to take the fight to the Americans. He set sail in June 1776 and began landing troops on Staten Island near the entrance to New York Harbor on July 2. The Americans rejected Howe's informal attempt to negotiate peace. Washington split his army between positions on Manhattan Island and western Long Island across the East River. On August 27, at the Battle of Long Island, Howe outflanked Washington and forced him back to Brooklyn Heights, but he did not attempt to encircle Washington's forces.
Through the night of August 28, General Henry Knox bombarded the British. On August 29, an American council of war unanimously agreed to retreat to Manhattan. Washington quickly assembled his troops and ferried them across the East River to Manhattan on flat-bottomed freight boats, without any losses in men or ordnance, with General Thomas Mifflin's regiments forming the rear guard.
The Staten Island Peace Conference failed to negotiate peace because the British delegates lacked the authority to recognize independence, the rebels' central demand. Howe seized control of New York City on September 15 and unsuccessfully engaged the Americans the following day. He failed to encircle the Americans at the Battle of Pell's Point, and they withdrew successfully. Howe declined to close with Washington's army on October 28 at the Battle of White Plains, instead concentrating his efforts on a hill that was of no strategic value.
Washington's retreat had left his remaining forces isolated, and the British captured Fort Washington on November 16. The British victory there took 3,000 prisoners and amounted to Washington's most disastrous defeat. His remaining forces fell back four days later. Henry Clinton wanted to pursue Washington's disorganized army, but he was first required to commit 6,000 troops to capture Newport, Rhode Island, in an operation that he had opposed. The American prisoners were subsequently sent to the infamous prison ships, where more American soldiers and sailors died of disease and neglect than in every battle of the war combined. Charles Cornwallis pursued Washington, but Howe ordered him to halt, and Washington marched away unmolested.
The outlook was bleak for the American cause; the reduced army had dwindled to fewer than 5,000 men and that number would be reduced further when enlistments expired at the end of the year. Popular support wavered, morale ebbed away, and Congress abandoned Philadelphia. Loyalist activity surged in the wake of the American defeat, especially in New York.
News of the campaign was well received in Britain: festivities were held in London, public support reached a peak, and the King awarded the Order of the Bath to Howe. The successes led to predictions that the British could win within a year. Strategic deficiencies among Patriot forces were evident: Washington had divided a numerically weaker army in the face of a stronger one, his inexperienced staff had misread the situation, and his troops had fled in the face of enemy fire. In the meantime, the British entered winter quarters, well placed to resume campaigning.
On the night of December 25–26, 1776, Washington crossed the ice-choked Delaware River and surprised and overwhelmed the Hessian garrison at Trenton, New Jersey, taking 900 prisoners. The decisive victory rescued the army's flagging morale and gave new hope to the Patriot cause. Cornwallis marched to retake Trenton, but his efforts were repulsed on January 2. Washington outmaneuvered Cornwallis that night and defeated his rearguard the following day. The two victories helped convince the French that the Americans were worthwhile allies. Washington entered winter quarters at Morristown, New Jersey, on January 6, though a prolonged guerrilla conflict continued. Howe made no attempt to attack, much to Washington's amazement.
In December 1776, John Burgoyne returned to London to set strategy with Lord George Germain. Burgoyne's plan was to isolate New England by establishing control of the Great Lakes from New York to Quebec. Efforts could then concentrate on the southern colonies, where it was believed that Loyalist support was widespread and substantial.
Burgoyne's plan was to maneuver two armies by different routes and rendezvous at Albany, New York. Burgoyne set out along Lake Champlain on June 14, 1777, quickly capturing Ticonderoga on July 5. From there the pace slowed; the Americans blocked roads, destroyed bridges, dammed streams, and stripped the area of food. Meanwhile, Barry St. Leger's diversionary column along the Mohawk River laid siege to Fort Stanwix. St. Leger withdrew to Quebec on August 22 after his Indian support abandoned him. On August 16, a Brunswick foraging expedition was soundly defeated at Bennington, and more than 700 troops were captured. The vast majority of Burgoyne's Indian support then abandoned him in the field, and General Howe informed him that he would still launch their planned campaign against Philadelphia, without supporting Burgoyne from New York.
Burgoyne continued the advance and attempted to flank the American position at Freeman's Farm on September 19 in the First Battle of Saratoga. The British won, but at the cost of 600 casualties. Burgoyne then dug in, but he suffered a constant hemorrhage of deserters, and critical supplies ran low. On October 7, in the Second Battle of Saratoga, the Americans repulsed a British reconnaissance in force against their lines, inflicting heavy British losses. Burgoyne then withdrew in the face of American pursuit, but he was surrounded by October 13. With supplies exhausted and no hope of relief, Burgoyne surrendered his army on October 17, and the Americans took 6,222 soldiers as prisoners of war.
Meanwhile, Howe took command of a New York-based campaign against Washington. Early feints failed to bring Washington to battle in June 1777. Howe then declined to attack towards Philadelphia, either overland via New Jersey or by sea via the Delaware Bay, leaving Burgoyne's initiative launched from the interior unsupported.
Later in the fall, with additional supplies, Howe recommenced the Philadelphia campaign. Advancing this time, he outflanked and defeated Washington on September 11, but twice failed to pursue and destroy the defeated Americans: once after the Battle of Brandywine, and again after the Battle of Germantown. A British victory at Willistown left Philadelphia defenseless, and Howe captured the city unopposed on September 26. He then moved 9,000 men to Germantown, north of Philadelphia. Washington launched a surprise attack on Howe's garrison there on October 4, but he was eventually repulsed. Once again, Howe did not follow up on his victory.
Howe inexplicably ordered a retreat to Philadelphia after several days of probing American defenses at White Marsh, astonishing both sides. He ignored the vulnerable American rear, where an attack might have deprived Washington of his baggage and supplies. On December 19, Washington's army entered winter quarters at Valley Forge. Poor conditions and supply problems there resulted in the deaths of some 2,500 American troops. During the winter encampment, Baron von Steuben introduced the latest Prussian methods of drilling and infantry tactics to the entire Continental Army.
While the Americans wintered only twenty miles away, Howe made no effort to attack their camp, which critics argue could have ended the war. Following the conclusion of the campaign, Howe resigned his commission, and was replaced by Henry Clinton on May 24, 1778. Clinton received orders to abandon Philadelphia and fortify New York following France's entry into the war. On June 18, the British departed Philadelphia, with the reinvigorated Americans in pursuit. The two armies fought at Monmouth Court House on June 28, with the Americans holding the field, greatly boosting Patriot morale and confidence. By July, both armies were back in the same positions they had been two years prior.
Early in the war, it became clear to Congress that help from France was imperative. First, the British blockade of Atlantic seaports against military assistance could not be challenged. Second, the army's troop strength was attrited by death, disease and desertion, and the states failed to meet recruitment quotas. Third, the British had a continuing resupply of German auxiliaries to compensate for their losses.
French foreign minister the Comte de Vergennes was strongly anti-British, and he had sought a pretext for going to war with Britain ever since the conquest of Canada in 1763. The French public favored war, but Vergennes and King Louis XVI were hesitant, owing to the military and financial risk.
France, however, would not feel compelled to intervene if the colonies were still considering reconciliation with Britain, as France would have nothing to gain in that event. To assure assistance from France, independence would have to be declared, which Congress effected in July 1776. The Americans, who had been covertly supplied by French merchants through neutral Dutch ports since the onset of the war, were now also supplied directly by the French government. These supplies proved invaluable in the American Saratoga campaign of 1777.
The British defeat at Saratoga caused British anxiety over possible foreign intervention. The North ministry sought reconciliation with the colonies by consenting to their original demands, but without independence. However, the Americans, now bolstered by their French trade, would settle for no terms short of complete independence from Britain. The American victory at Saratoga had convinced the French that supporting the Patriots was worthwhile, but intervening also brought major concerns: King Louis XVI feared that Britain's concessions would be accepted and bring reconciliation with the colonies, leaving Britain free to strike at French Caribbean possessions. To prevent this, France formally recognized the United States in a trade treaty on February 6, 1778, and followed that with a defensive military alliance guaranteeing American independence. Spain was wary of recognizing a republic of former European colonies, and also of provoking war with Britain before it was well prepared, so it opted to covertly supply the Patriots, mainly from Havana in Cuba and New Orleans in Spanish Luisiana.
To encourage French participation in the American struggle for independence, diplomat Silas Deane promised promotions and command positions to any French officer who joined the American war effort. However, many of the French officer-adventurers were completely unfit for command. In one outstanding exception, Congress recognized Lafayette's "great zeal to the cause of liberty" and commissioned him a major general. He was immediately instrumental in reconciling some of Washington's rival officers, and he rallied some of the delegates in Philadelphia to support Washington in an otherwise indifferent Congress.
Congress also hoped to persuade Spain into an open alliance, as formally extended in the French treaty. The American Commissioners met with the Count of Aranda in 1776. But Spain was still reluctant to make an early commitment due to its Great Power concerns on the Continent. Nevertheless, the following year, Spain affirmed its desire to support the Americans so as to weaken Britain's empire.
Since the outbreak of the conflict, Britain had appealed to its former ally, the neutral Dutch Republic, to lend the use of the Scots Brigade for service in America, but pro-American sentiment there forced its elected representatives to deny the request. Consequently, the British attempted to invoke treaties for outright Dutch military support, but the Republic still refused. At the same time, American troops were being supplied with ordnance by Dutch merchants via their West Indies colonies, and French supplies bound for America were also transshipped through Dutch ports. The Republic traded with France following France's declaration of war on Britain, citing a prior concession by Britain on this issue. Despite standing international agreements, Britain responded by confiscating Dutch shipping, and even firing upon it. The Republic joined the First League of Armed Neutrality with Austria, Prussia and Russia to enforce its neutral status. But the Republic had further assisted the rebelling Patriot cause: it had given sanctuary to American privateers and had drafted a treaty of commerce with the Americans. Britain argued that these actions contravened the Republic's neutral stance and declared war in December 1780.
Meanwhile, George III had given up on subduing America while Britain had a European war to fight. He did not welcome war with France, but he believed that Britain had taken all necessary steps to avoid it, and he cited the British victories over France in the Seven Years' War as a reason for optimism in the event of war. Britain tried in vain to find a powerful ally to engage France; it was isolated among the Great Powers, and French strength was not drawn off into Europe as it had been in the Seven Years' War. Britain subsequently abandoned a single-theater focus and diverted major military resources away from America. Despite these developments, George III remained determined never to recognize American independence and to make war on the American colonies indefinitely, or until they pleaded to return as his subjects.
Following the British defeat at Saratoga in October 1777 and French entry into the war, Clinton withdrew from Philadelphia to consolidate his forces in New York. French admiral the Comte d'Estaing had been dispatched to America in April 1778 to assist Washington. The Franco-American forces felt that New York's defenses were too formidable for the French fleet, so in August 1778 they launched an attack on Newport, at the Battle of Rhode Island, under the command of General John Sullivan. The effort failed when the French opted to withdraw, disappointing the Americans. The war then stalemated; most actions were fought as large skirmishes, such as those at Chestnut Neck and Little Egg Harbor. In the summer of 1779, the Americans captured British posts at the Battles of Stony Point and Paulus Hook. In July, Clinton unsuccessfully attempted to coax Washington into a decisive engagement by making a major raid into Connecticut. That month, a large American naval operation attempted to retake Maine, but it resulted in a humiliating defeat. The high frequency of Iroquois raids compelled Washington to mount a punitive expedition which destroyed a large number of Iroquois settlements, but the effort ultimately failed to stop the raids. During the winter of 1779–80, the Continental Army suffered greater hardships than at Valley Forge: morale was poor, public support fell away during the long war, the national currency was virtually worthless, the army was plagued with supply problems, desertion was common, and whole regiments mutinied over conditions in early 1780.
In 1780, Clinton launched an attempt to retake New Jersey. On June 7, 6,000 men invaded under Hessian general Wilhelm von Knyphausen, but they met stiff resistance from the local militia at the Battle of Connecticut Farms. The British held the field, but Knyphausen feared a general engagement with Washington's main army and withdrew. A second attempt two weeks later was soundly defeated at Springfield, effectively ending British ambitions in New Jersey. Meanwhile, American general Benedict Arnold turned traitor, joined the British army and attempted to surrender the American West Point fortress. The plot was foiled when British spy-master John André was captured. Arnold fled to British lines in New York where he justified his betrayal by appealing to Loyalist public opinion, but the Patriots strongly condemned him as a coward and turncoat.
The war to the west of the Appalachians was largely confined to skirmishing and raids. In February 1778, an expedition of militia to destroy British military supplies in settlements along the Cuyahoga River was halted by adverse weather. Later in the year, a second campaign was undertaken to seize the Illinois Country from the British. Virginia militia, francophone settlers and Indian allies commanded by Colonel George Rogers Clark captured Kaskaskia on July 4 and then secured Vincennes, although Vincennes was recaptured by Quebec Governor Henry Hamilton. In early 1779, the Americans counter-attacked and retook Vincennes, taking Hamilton prisoner.
On May 25, 1780, the British launched an expedition into Kentucky as part of a wider operation to clear rebel resistance from Quebec to the Gulf coast. Hundreds were killed or captured, but the initiative met with only limited success. The Americans responded with a major offensive along the Mad River in August which met with some success, but it did little to abate the Indian raids on the frontier. French militia attempted to capture Detroit, but it ended in disaster when Miami Indians ambushed and defeated the gathered troops on November 5. The war in the west had become a stalemate; the Americans did not have the manpower to simultaneously defeat the hostile Indian tribes and occupy the land.
The British turned their attention to conquering the South in 1778 after Loyalists in London assured them of a strong Loyalist base there. Squadrons of the Royal Navy would also be closer to the British Caribbean colonies to defend against attacking Franco-Spanish fleets. On December 29, 1778, an expeditionary corps from New York captured Savannah, and British troops then moved inland to recruit Loyalist support.
The initial Loyalist recruitment was promising in early 1779, but then a large Loyalist-only militia was defeated by Patriot militia at Kettle Creek on February 14. That demonstrated Loyalist need for the support of British regulars in major engagements. But the British in turn defeated Patriot militia at Brier Creek on March 3. In June they launched an abortive assault on Charleston, South Carolina. The operation became notorious for its widespread looting by British troops that enraged both Loyalists and Patriots in the Carolinas. In October, a combined Franco-American siege by Admiral d'Estaing and General Benjamin Lincoln failed to recapture Savannah.
The primary British strategy for the following year hinged on a Loyalist uprising in the south. In May 1780, Henry Clinton captured Charleston, taking over 5,000 prisoners and effectively destroying the Continental Army in the south. Organized Patriot resistance in the region collapsed when Banastre Tarleton defeated the withdrawing Americans at Waxhaws on May 29.
British commander-in-chief Clinton returned to New York, leaving General Lord Cornwallis at Charleston to oversee the southern war effort. Few Loyalists joined him there. The initiative was seized by Patriot militias, who won July victories at Fairfield County, Lincolnton, Huck's Defeat, Stanly County, and Lancaster County, effectively suppressing Loyalist support.
In July, Congress appointed General Horatio Gates to a new command to lead the American effort in the south. On August 16, 1780, he lost the Battle of Camden, and Cornwallis was poised to invade North Carolina. The British attempted to subjugate the countryside, but Patriot militia continued their attacks. Cornwallis dispatched Major Patrick Ferguson to raise Loyalist forces to cover his left flank as he moved north, but they ranged beyond mutual support. In early October the Tory regulars and militias were defeated at the Battle of Kings Mountain, destroying any significant Loyalist support in the region. Cornwallis advanced into North Carolina despite the setbacks, gambling that he would receive substantial Loyalist support there.
Washington replaced General Gates with General Nathanael Greene at the beginning of December 1780. Greene was unable to confront the British directly, so he dispatched a force under Daniel Morgan to recruit additional troops. Morgan then defeated the renowned British Legion on January 17, 1781, at Cowpens. Cornwallis subsequently aborted his advance and retreated back into South Carolina.
The British launched a surprise offensive in Virginia in January 1781, with Benedict Arnold invading Richmond, Virginia. It met little resistance. Governor Thomas Jefferson escaped Richmond just ahead of the British forces, and the British burned the city to the ground. Although later accused by his enemies of inaction and cowardice, Jefferson sent an emergency dispatch to nearby Colonel Sampson Mathews to check Arnold's advance.
By March, Greene's army had increased in size enough that he felt confident in facing Cornwallis. The two armies engaged at Guilford Courthouse on March 15; Greene was beaten, but Cornwallis's army suffered irreplaceable casualties. The Americans further reduced his army in a war of attrition, and far fewer Loyalists were joining than the British had previously expected. Cornwallis's casualties were such that he was compelled to retreat to Wilmington for reinforcement, leaving the Patriots in control of the interior of the Carolinas and Georgia.
Greene then proceeded to reclaim the South. On April 25 the American troops suffered a reversal at Hobkirk's Hill due to poor tactical control, but they nonetheless marched 160 miles in eight days, continually dislodging strategic British posts in the area. They recaptured Fort Watson and Fort Motte. During the Siege of Augusta on June 6, Brigadier General Andrew Pickens reclaimed possession of the last British outpost beyond Charleston and Savannah.
The last British effort to stop Greene occurred at Eutaw Springs on September 8, but the British casualties were so high that they withdrew to Charleston. By the end of 1781, the Americans had effectively confined the British to the Carolina coasts, undoing any progress they had made in the previous year. Minor skirmishes continued there until the end of the war.
The initial civil war between Great Britain and its rebelling thirteen colonies in Congress was broadened into a worldwide conflict between Britain and other European Great Powers. The English colonial insurgents in North America became the "centerpiece of an international coalition" to check and then compromise British preeminence in the North Atlantic. The rebelling Americans needed outside help if they were to be successful. Their "Continental" forces suffered repeated reverses early on. The cause of US independence flickered as its major port cities were either occupied or blockaded, its naval forces proved ineffectual, and its armies suffered repeated defeats in pitched battles at the hands of British regulars and their allied auxiliaries from German principalities.
Initially the Continental Congress persevered with financial contributions from the personal fortunes of the richest colonials to compensate for smaller states refusing to pay their full requisitions. It kept an army in the field through the leadership of George Washington and a flow of recruits from the largest states, especially Virginia and Pennsylvania. Then came assistance from Dutch financiers and French covert military aid, and French Enlightenment freebooters and European soldier-adventurers came to the aid of the embattled revolutionary forces. In 1778, the French Crown recognized the United States in a trade treaty, followed by a defensive treaty that would become operative if Britain made war on France to interrupt its American trade. Article II of the military treaty included a French guarantee of US independence and its sovereign territory, including any territory conquered in Canada, Quebec, or Bermuda. Were Britain to initiate war with France for trading with the US, the US in turn would aid France in protecting its West Indian possessions against British attack.
Britain responded by making war on any nation providing military assistance to the US Congress, reasoning that such assistance violated Parliament's mercantile trade restrictions for its Thirteen Colonies. For Britain, the colonial civil war with Congress, which had begun formally with the 1776 Declaration of Independence, now expanded into a worldwide war when Britain initiated hostilities with France in 1778. By April 1779, Spain had joined France against Britain through the secret Treaty of Aranjuez, seeking to reclaim the portion of its empire lost to Britain at the last peace, including Gibraltar, Minorca, and the Floridas.
In signing it, France broke its military treaty of alliance with the US in several respects. First, the French-US treaty guaranteed US independence, but Spain neither joined their alliance as formally invited in Article X, nor guaranteed US independence in the Franco-Spanish treaty. Second, the French-US treaty pledged France in Article XII to war with Britain until US independence, but its treaty with Spain pledged France to war with Britain until Spain gained Gibraltar, regardless of whether Britain agreed to US independence beforehand. The terms of this secret Aranjuez treaty to gain Gibraltar were made without the knowledge or consent of the US as a party to the Franco-Spanish alliance against Britain, directly violating Article IV of the French-US treaty. Third, in Article VI of the French-US treaty, the French renounced all territory belonging to Great Britain. That provision allowed Britain to cede fishing rights at Newfoundland to the US, which it did at the conclusive Anglo-US peace; but the Franco-Spanish treaty stipulated that the two powers would conquer Newfoundland from the British and then share it only between themselves.
In 1780, to deter further British aggression at sea, neutral Continental powers with continuing trade with Britain's thirteen rebelling colonies formed the First League of Armed Neutrality, including Austria, Russia, and Prussia. They insisted that the Anglo-Russian Treaty of 1766 provided for free trade among British dominions, excepting only military contraband or ports under a belligerent's stationary blockade.
These additional conflicts with France and Spain around the globe strained Britain's resources for the war in America. The French blockaded Barbados and Jamaica to damage British trade. The British defeated a French naval force on December 15 and captured St. Lucia, but in 1779 the French began capturing British territories, seizing St. Vincent and Grenada, and Britain was badly beaten at the Battle of Grenada. France and Spain failed to invade England, but a Franco-Spanish fleet decisively defeated a large British convoy bound for the West Indies off the Azores, a catastrophic defeat for Britain. Spain failed to capture the British naval station at Gibraltar, but the British blockade of Spain and France proved ineffective.
In America east of the Mississippi River, though Spanish Luisiana Territory ran west of it, Governor General Gálvez had been allowing covert aid to reach George Washington's forces at Pittsburgh via New Orleans. In 1777 Oliver Pollock, a successful merchant in Havana and New Orleans, was appointed US "commercial agent". He personally underwrote an American campaign against the British among the francophone settlements of western Quebec. In the Virginia militia campaign of 1778, George Rogers Clark founded Louisville and cleared British forts in the region. Clark's conquest resulted in the creation of Illinois County, Virginia, organized with the consent of French-speaking colonials who had been guaranteed protection of the Catholic Church. Voters at their courthouse in Kaskaskia were represented for three years in the Virginia General Assembly until the territory was ceded to the US Congress.
At the Spanish declaration of war against Britain in 1779, Governor Gálvez raised an army in Spanish Luisiana to initiate offensive operations against British outposts. First, he cleared British garrisons at Baton Rouge, Fort Bute and Natchez, capturing five forts. In this first maneuver Gálvez opened navigation on the Mississippi River north to US settlement at Pittsburgh. His Spanish military assistance to Oliver Pollock for transport up the Mississippi River became an alternative supply route to Washington's Continental Army, bypassing the British-blockaded Atlantic Coast. In 1781, Gálvez and Pollock campaigned east along the Gulf Coast to secure West Florida, including British-held Mobile and Pensacola. The Spanish operations crippled the British supply of armaments to their Indian allies, effectively suspending the military alliance to attack settlers between the Mississippi River and the Appalachian Mountains.
In April 1782 at the Battle of the Saintes, the British parried the French-Spanish invasion of Jamaica, then dominated the Caribbean Sea. In February 1783 the Spanish lifted their siege of Gibraltar. A Spanish-US fleet captured the Bahamas, which were returned to Britain at the peace. The belligerents had all lost heart for continued warfare. After George III announced for US independence in a Speech from the Throne before a joint session of Parliament in December 1782, the British proffered terms to the Americans in Paris, which were then approved by Congress in April 1783. The British "American settlement" allowed US fishing rights in Newfoundland and the Gulf of St. Lawrence, along with "perpetual access" to the Mississippi River. The two British treaties with France and Spain settled their three-way swaps of imperial territory in September. The British settled their Fourth Anglo-Dutch War the next year.
In 1781, the British commander-in-chief in America was General Clinton, garrisoned in New York City. He had failed to construct a coherent strategy for British operations that year, owing to his difficult relationship with his naval counterpart, Admiral Marriot Arbuthnot. Arbuthnot in turn had failed to detect the arrival of French naval forces in July. In Charleston, Cornwallis independently developed a plan for a campaign in Virginia to cut supply to Greene's army in the Carolinas, expecting that Patriot resistance in the South would then collapse. Lord Germain, the Cabinet Secretary of State for America in London, agreed, but neither official informed Clinton.
Cornwallis maneuvered to Yorktown to establish a fortified sea-base of supply, while at the same time Lafayette was maneuvering south with a Franco-American army. The British dug in at Yorktown and awaited the Royal Navy. As Lafayette's army closed with him, Cornwallis made no early attempt to sally out and engage the Americans before siege lines could be dug, despite the repeated urging of his subordinate officers.
Expecting naval relief shortly to facilitate his withdrawal from the Virginia Peninsula, Cornwallis prematurely abandoned his outer defenses. These were promptly occupied by the besiegers, hastening the British defeat.
The British had dispatched a fleet from New York under Thomas Graves to rendezvous with Cornwallis. As they approached the entry to the Chesapeake Bay on September 5, the French fleet commanded by Admiral de Grasse decisively defeated Graves at the Battle of the Chesapeake, giving the French control of the seas around Yorktown and cutting off Cornwallis from further reinforcements or relief. On the unexpected arrival of the French fleet, Cornwallis failed in an attempt to break out of the siege across the York River at Gloucester Point when a storm hit. Under heavy bombardment and facing dwindling supplies, Cornwallis and his subordinates agreed that their situation was untenable. On October 17, 1781, after twelve hours of negotiations, the terms of surrender were finalized.
Lord North had been the King's Prime Minister in Parliament since 1770. By the end of 1777, with the loss of the first British army, King George III had determined that in the event France initiated a separate war, he would have to redeploy most of the British and German troops in America to threaten French and Spanish Caribbean settlements. In the King's judgment, Britain could not possibly fight on all three fronts without becoming weak everywhere. When news of the French-US treaties for trade and defense arrived in London, British negotiators proposed a second peace settlement to Congress.
The Carlisle Peace Commission was sent across the Atlantic to make a formal presentation to Congress. Firstly, virtual self-government by a kind of "home rule" was contemplated: Britain would recognize Congress, suspend all objectionable acts of Parliament, surrender Parliament's right to taxation, and perhaps allow American representatives in the House of Commons. But secondly, all property would be restored to loyal subjects, their debts honored, martial law locally enforced, Parliament left to regulate trade, and the Declaration of Independence withdrawn. Parliament's commission was rebuffed by a Congress which knew the British were about to evacuate Philadelphia. Before it returned to London in November 1778, the commission directed a change in British war policy. Sir Henry Clinton, the new British Commander-in-Chief in America, was to stop treating rebels as subjects whose loyalty might be regained; they were now to be fought as enemies, with ruthless hate. Those standing orders remained in effect for three years until Clinton was relieved.
Prior to the surrender of Cornwallis's army at Yorktown, George III still hoped for victory in the South. He believed a majority of American colonists supported him, especially in the South and among thousands of black slaves. But after Valley Forge, the Continental Army was an efficient fighting force. After a two-week siege at Yorktown by Washington's army, supported by a successful French fleet, French regulars and local reinforcements, the British surrendered on October 19, 1781. Lord North exclaimed, "Oh God! It is all over." Nevertheless, Lord North rebutted the Whig resolution in the Commons to end offensive operations in America; his speech postponed the inevitable by several weeks.
But the mood of Great Britain had changed since the 1770s. Member of Parliament Edward Gibbon had believed the King's cause in America to be just, and the British and German soldiers there to have fought bravely. But after Yorktown, he concluded, "It is better to be humbled than ruined." There was no point in spending more money on Britain's most expensive war with no hope of success. The Whig William Pitt argued that the war on American colonists had brought nothing but ineffective victories or severe defeats. He condemned the effort to retain the Americans as a "most accursed, wicked, barbarous, cruel, unjust and diabolical war." Lord North resigned; George III never forgave him.
From the time London learned of the surrender of a second British army, it was only two weeks before the Whig Opposition moved to end offensive war in America, a motion defeated by only one vote. Three days later, on December 15, George III made a Speech from the Throne to a joint session of Parliament announcing for American independence, peace and trade. Less than two months later, on February 27, 1782, the Commons carried the motion by 19 votes. After a vote of no confidence against Lord North, the Rockingham Whigs came to power and opened negotiations for peace with the Americans. Rockingham died and was succeeded by the Earl of Shelburne. The British troops remaining in America were garrisoned in the three port cities of New York, Charleston, and Savannah. General Clinton was recalled and replaced by Guy Carleton, who was ordered to suspend offensive operations.
At the outset of hostilities in the American Revolutionary War, most advantages lay with the British Empire and its Tory Parliament, which sought to regulate trade within the empire against thirteen colonies. Patriots asserted both an old English right to local self-government in their states and a new right to constitutional revolution, declaring independence in their Congress as the United States of America. But Congress had no national government, army or navy. It had no financial system, banks, or established credit, and no national administration. Delegates from thirteen states in Congress sought to handle its affairs through legislative committees that proved largely inefficient. The colonies relied heavily on ocean travel and shipping, but that was now shut down by the British blockade, and the Americans had to rely on slow overland travel.
However, Congress had multiple advantages. The prosperous state populations depended on local production for food and supplies rather than on imports from a Mother Country that lay six to twelve weeks away by sail. They were spread across most of the North American Atlantic seaboard, stretching over 1,000 miles. Most farms were remote from the seaports; control of four or five major ports did not give British armies control over the inland areas. Each state had established internal distribution systems.
Each state had a long-established system of local militia, combat-tested thirteen years earlier in support of British regulars to secure an expanded British Empire, when together they had removed French claims in North America west to the Mississippi River. The state legislatures independently funded and controlled their local militias. They would train and provide Continental Line regiments to the regular army, each with its own state officer corps. Motivation was also a major asset: each colonial capital had its own newspapers and printers, and the Patriots enjoyed more popular support than the Loyalists. The British hoped the Loyalists would do much of the fighting, but they did much less than expected.
When the war began, Congress lacked a professional army or navy, and each colony maintained only local militias. Militiamen were lightly armed, had little training, and usually had no uniforms. Their units served for only a few weeks or months at a time and lacked the training and discipline of more experienced soldiers. Local county militias were reluctant to travel far from home and were unavailable for extended operations. However, if properly employed, their numbers could help the Continental armies overwhelm smaller British forces, as at Concord, Boston, Bennington, and Saratoga. Both sides used partisan warfare, but the state militias effectively suppressed Loyalist activity when British regulars were not in the area. Congress established a regular army on June 14, 1775, and appointed George Washington as commander-in-chief. The development of the Continental Army was always a work in progress, and Washington used both his regulars and state militia throughout the war.
Though Congress had responsibility for the war effort and getting supplies to the troops, Washington took it upon himself to pressure the Congress and state legislatures to provide the essentials. There was never nearly enough. Congress evolved in its committee oversight, establishing the Board of War which included members of the military. But the Board of War was also a committee ensnared with its own internal procedures, so Congress created the post of Secretary of War, appointing Major General Benjamin Lincoln in February, 1781. Washington worked closely with Lincoln in coordinating civilian and military authorities and took charge of training and supplying the army.
The new Continental Army suffered significantly from a lack of an effective training program and from largely inexperienced officers and sergeants. The inexperience of its officers was somewhat offset by a few senior officers. Each state legislature appointed officers for both county and state militias and their regimental Continental Line officers, but Washington was permitted to choose and command his own generals, although sometimes he was required to accept Congressional appointments.
Eventually, the Continental Army found capable officers such as Nathanael Greene, Daniel Morgan, Henry Knox (chief of artillery), and Alexander Hamilton (chief of staff). One of Washington's most successful recruits to general officer was Baron Friedrich Wilhelm von Steuben, a veteran of the Prussian general staff who wrote the Revolutionary War Drill Manual. Over the winter of 1777–78 at Valley Forge, von Steuben was instrumental in training the Continental Army in the essentials of infantry field maneuvers with military discipline, drills, tactics, and strategy.
The American armies were small by European standards of the era, largely attributable to limitations such as lack of powder and other logistical capabilities. At the beginning of 1776, Washington commanded 20,000 men, with two-thirds enlisted in the Continental Army and the other third in the various state militias. About 250,000 men served as regulars or as militiamen for the Revolutionary cause in the eight years of the war, but there were never more than 90,000 men under arms at one time.
Over the entire course of the war, American officers as a whole never equaled their opponents in tactics and maneuver, and they lost most of the pitched battles. The great successes at Boston (1776), Saratoga (1777), and Yorktown (1781) came from trapping the British far from base with much larger numbers of troops. Nevertheless, after 1778, Washington's army was transformed into a more disciplined and effective force. Immediately after emerging from Valley Forge, the Army proved its ability to match the British troops in action at the Battle of Monmouth, where for the first time in Washington's army a black Rhode Island regiment fended off a British bayonet attack and then counter-charged.
During the first summer of the war, Washington began outfitting schooners and other small sea-going vessels to prey on ships supplying the British in Boston.
Congress established the Continental Navy on October 13, 1775, and appointed Esek Hopkins as the Navy's first commander. The following month, the Marines were organized on November 10, 1775. The Continental Navy remained, for the most part, a handful of small frigates and sloops throughout the Revolution. Congress primarily commissioned privateers as a cost-saving measure and to take advantage of the large proportion of colonial sailors in the British Empire. Overall, the privateers numbered some 1,700 ships and captured 2,283 enemy vessels, damaging the British effort and enriching their crews with the proceeds from the sale of cargo and the ships themselves.
John Paul Jones became the first great American naval hero, capturing HMS "Drake" on April 24, 1778, the first victory for any American military vessel in British waters. The last was by the frigate USS "Alliance" commanded by Captain John Barry. On March 10, 1783, the "Alliance" outgunned HMS "Sybil" in a 45-minute duel while escorting Spanish gold from Havana to Congress.
For example, in what was known as the Whaleboat War, American privateers, mainly from New Jersey, Brooklyn and Connecticut, attacked and robbed British merchant ships and raided coastal communities of Long Island reputed to have Loyalist sympathies.
About 55,000 sailors served aboard American privateers during the war. After Yorktown, all US Navy ships were sold or given away. For the first time in America's history she had no fighting forces on the high seas.
At the onset of the war, the Second Continental Congress realized that they would need foreign alliances and intelligence-gathering capability to defeat a world power like Britain. To this end, they formed the Committee of Secret Correspondence which operated from 1775 to 1776. The Committee shared information and forged alliances with persons in France, England and throughout America. It employed secret agents in Europe to gather foreign intelligence, conduct undercover operations, analyze foreign publications, and initiate American propaganda campaigns to gain Patriot support. Members included Thomas Paine, the committee's secretary, and Silas Deane who was instrumental in securing French aid in Paris.
Facing off against the British at New York City, Washington realized that he needed advance information to deal with disciplined British regular troops. On August 12, 1776, Thomas Knowlton was given orders to form an elite group for reconnaissance and secret missions. Knowlton's Rangers became the Army's first intelligence unit. Among the Rangers was Nathan Hale. When the British landed on Long Island with overwhelming force, the American army retreated to New York City on Manhattan Island. Washington directed the volunteer Hale to spy on enemy activity behind their lines in Brooklyn. After the British attack on September 15, Hale was captured with sketches of British fortifications and troop positions. Howe ordered Hale summarily hanged without trial the next day, September 22.
After Washington was driven out of New York, he sought to professionalize military intelligence with the aid of Benjamin Tallmadge. They created the Culper spy ring of six men. Washington promised members of the ring that their identities and activities would never be revealed. All name references were number-coded, and the spies used invisible ink for their messages. Among the more notable achievements of the ring was exposing Benedict Arnold's plan to surrender West Point, along with his collaborator John André, Britain's head spymaster; later the ring intercepted and deciphered coded messages between Cornwallis and Clinton during the Siege of Yorktown. During the war, Washington spent more than 10 percent of military funds on intelligence operations. Some historians maintain that, without the efforts of Washington and the Culper Spy Ring, the British would never have been defeated.
The population of Great Britain and Ireland in 1780 was approximately 12.6 million, while the Thirteen Colonies held a population of some 2.8 million, including some 500,000 slaves. Theoretically, Britain had the advantage; however, many factors inhibited the procurement of a large army for an unpopular war at home.
Suppressing a rebellion in America presented the British with major problems. The key issue was distance; it could take up to three months to cross the Atlantic, and orders from London were often outdated by the time that they arrived. The colonies had never been formally united prior to the conflict and there was no centralized area of ultimate strategic importance. Traditionally, the fall of a capital city often signaled the end of a conflict, yet the war continued unabated even after the fall of major settlements such as New York, Philadelphia (which was the Patriot capital), and Charleston. Britain's ability to project its power overseas lay chiefly in the power of the Royal Navy, allowing her to control major coastal settlements with relative ease and to enforce a strong blockade of colonial ports. However, the overwhelming majority of the American population was agrarian, not urban, and the American economy proved resilient enough to withstand the blockade's effects.
The vastness of the American countryside and the limited manpower available meant that the British could never simultaneously defeat the Americans and occupy captured territory. One British statesman described the attempt as "like trying to conquer a map".
In 1775, the standing British Army, exclusive of militia, comprised 45,123 men worldwide, made up of 38,254 infantry and 6,869 cavalry. The Army had approximately eighteen regiments of foot, some 8,500 men, stationed in North America.
The British army at home had been deliberately kept small in peacetime to prevent abuses of power by the King. Despite this, eighteenth century armies were not welcome guests among British civilian populations, and were regarded with scorn and contempt by the press and public of the New and Old World alike, derided as enemies of liberty. The idle peacetime Army fell into corruption and inefficiency, resulting in many administrative difficulties once campaigning began.
By the end of hostilities in America at the close of 1781, the British Army numbered approximately 121,000 men globally, 48,000 of whom were stationed throughout the Americas. Of the 171,000 sailors who served in the Royal Navy throughout the conflict, around a quarter were pressed. This same proportion, approximately 42,000 men, deserted during the conflict. At its height, the Navy had 94 ships-of-the-line, 104 frigates and 37 sloops in service.
Britain had three commanders-in-chief over the course of the war. First was General Sir William Howe, commanding British forces in North America from 1775 to 1778 following the Battle of Bunker Hill, during the London policy of "soft war" that sought to reconcile the American colonists to pre-1776 King-in-Parliament rule. After the loss of an army at Saratoga, France's entry into the war, and Congress's rejection of the Carlisle Commission peace offer, Howe was replaced as British commander-in-chief in 1778 by General Sir Henry Clinton for the duration of the fighting. London changed its war policy with orders to ruthlessly pursue victory against the colonists as enemies. Clinton's tenure ended at the loss of a second British army at Yorktown, and he was replaced by Sir Guy Carleton in early 1782 after the British-American armistice. Carleton then successfully managed the British evacuation of the American port cities of Savannah, Charleston and New York City.
Howe made several strategic errors that cost the British opportunities for a complete victory early on. After securing control of New York, Howe dispatched Henry Clinton to capture Newport against Clinton's judgement that his command could have been put to better use pursuing Washington's retreating army. Despite the bleak outlook for the revolutionary cause and the surge of Loyalist activity in the wake of Washington's defeats, Howe made no attempt to mount an attack upon Washington while the Americans settled down into winter quarters, much to their surprise.
During his planning for the Saratoga campaign, Howe had to choose between committing his army to support Burgoyne or capturing Philadelphia, the rebel capital. Howe decided upon the latter, determining that Washington was the greater threat. When Howe launched his campaign, he approached Philadelphia round-about through the Chesapeake Bay, rather than directly overland through New Jersey or by sea through the nearby Delaware Bay. The passage via the Virginia Capes left him unable to assist Burgoyne even if it were required of him. The decision so angered Tories on both sides of the Atlantic that Howe was accused in Parliament of treason.
At the Battle of White Marsh, Howe failed to exploit the vulnerable American rear, and then he inexplicably ordered a retreat to Philadelphia after only minor skirmishes. His withdrawal astonished both sides. However, there were strategic factors at play which compromised any aggressive action. Howe may have been dissuaded from direct assaults by the memory of the grievous losses the British suffered at Bunker Hill. During the major campaigns in New York and Philadelphia, Howe often wrote of the scarcity of adequate provisions available by local foraging, which hampered his ability to mount effective campaigns. Howe's tardiness in launching the New York campaign while awaiting supplies, and his reluctance to allow Cornwallis to vigorously pursue Washington's beaten army, have both been attributed to food shortages.
During the winter of 1776–1777, Howe split his army into scattered cantonments. This decision dangerously exposed the individual forces to defeat in detail, as the distance between them was such that they could not mutually support each other. But the quantity of available food supplies in New York City warehouses was so low that Howe had been compelled to take such a decision. The garrisons were widely spaced so their respective foraging parties would not interfere with each other's efforts. This strategic failure allowed the Americans to achieve victory at the Battle of Trenton, and the concurrent Battle of Princeton. Howe's difficulties during the Philadelphia campaign were also greatly exacerbated by the poor quality and quantity of resupply directly from Britain.
Like Howe before him, Clinton's efforts to campaign suffered from chronic supply issues. In 1778, Clinton wrote to Germain complaining of the lack of supplies, even after the arrival of a convoy from Ireland. That winter, the supply issue had deteriorated so badly that Clinton expressed considerable anxiety over how the troops were going to be properly fed. Clinton was largely inactive in the North throughout 1779, launching few major campaigns. This inactivity was partially due to the shortage of food. By 1780, the situation had not improved. Clinton wrote a frustrated correspondence to Germain, voicing concern that a "fatal consequence will ensue" if matters did not improve. By October that year, Clinton again wrote to Germain, angered that the troops in New York had not received "an ounce" of that year's allotted stores from Britain.
To emphasize his disappointment, Clinton asked London that Admiral Marriot Arbuthnot be recalled. Arbuthnot's relief was meant to be Admiral Sir George Rodney from the Leeward Islands command in late 1780, but Arbuthnot appealed to the Admiralty. The replacement was upheld and Rodney took command in New York, but not before Arbuthnot narrowly turned back a French naval attempt in March 1781 to reinforce Lafayette in Virginia at the Battle of Cape Henry.
Anticipating the 1781 campaign year, General Lord Cornwallis, commanding the British army's Southern command from Charleston, had written both to Clinton, his Commander-in-Chief for America, and to Lord Germain in London. Cornwallis proposed an invasion of Virginia from Charleston to force a collapse of Patriot support throughout the South. Clinton objected, counter-proposing either that Cornwallis send reinforcements to New York City or that he campaign farther north in the Chesapeake Bay region. Lord Germain wrote to Cornwallis approving the General's plan, but neglected to include Clinton in the decision-making, even though Clinton was Cornwallis's superior officer.
Cornwallis then decided to move into Virginia without informing Clinton. On learning that Cornwallis had chosen Yorktown as a fortified forward base, Clinton delayed sending reinforcements, because his intelligence led him to believe that the bulk of Washington's army was still outside New York City. Admiral Graves was sent with a fleet to cover Cornwallis's redeployment to Yorktown, but the British were turned away by a French fleet at the Battle of the Chesapeake on September 5, and French resupply for the besieging Washington and Rochambeau was landed successfully. Although scheduled to depart from New York with a relief force on October 5, Clinton was delayed; it was not until two weeks later, on the day of the surrender, October 19, 1781, that 6,000 troops under Clinton sailed to relieve Yorktown.
Logistical organization of eighteenth century armies was chaotic at best, and the British Army was no exception. No logistical corps existed in the modern sense; while on campaign in foreign territories such as America, horses, wagons, and drivers were frequently requisitioned from the locals, often by impressment or by hire. No centrally organized medical corps existed. It was common for surgeons to have no formal medical education, and no diploma or entry examination was required. Nurses sometimes were apprentices to surgeons, but many were drafted from the women who followed the army. Army surgeons and doctors were poorly paid and were regarded as social inferiors to other officers.
The heavy personal equipment and wool uniform of the regular infantrymen were wholly unsuitable for combat in America, and the outfit was especially ill-suited to comfort and agile movement. During the Battle of Monmouth in late June 1778, the temperature exceeded 100°F (37.8°C), and heat stroke claimed more lives than actual combat. The standard-issue firearm of the British Army was the Land Pattern Musket. Some officers preferred their troops to fire careful, measured shots (around two per minute), rather than rapid firing. A fixed bayonet made firing difficult, as its cumbersome shape hampered ramming the charge down the barrel. British troops had a tendency to fire impetuously, resulting in inaccurate fire, a trait for which John Burgoyne criticized them during the Saratoga campaign. Burgoyne instead encouraged bayonet charges to break up enemy formations, which was a preferred tactic in most European armies at the time.
Every battalion in America had organized its own rifle company by the end of the war, although rifles were not formally issued to the army until the Baker Rifle in 1801. Flintlocks were heavily dependent on the weather; high winds could blow the gunpowder from the flash pan, while heavy rain could soak the paper cartridge, ruining the powder and rendering the musket unable to fire. Furthermore, flints used in British muskets were of notoriously poor quality; they could only be fired around six times before requiring resharpening, while American flints could fire sixty. This led to a common expression among the British: "Yankee flint was as good as a glass of grog".
Provisioning troops and sailors proved to be an immense challenge, as the majority of food stores had to be shipped overseas from Britain. The need to maintain Loyalist support prevented the Army from living off the land. Other factors also impeded this option; the countryside was too sparsely populated and the inhabitants were largely hostile or indifferent, the network of roads and bridges was poorly developed, and the area which the British controlled was so limited that foraging parties were frequently in danger of being ambushed. After France entered the war, the threat of the French navy increased the difficulty of transporting supplies to America. Food supplies were frequently in bad condition. The climate was also against the British in the southern colonies and the Caribbean, where the intense summer heat caused food supplies to sour and spoil.
Life at sea was little better. Sailors and passengers were issued a daily food ration, largely consisting of hardtack and beer. The hardtack was often infested by weevils and was so tough that it earned the nicknames "molar breakers" and "worm castles", and it sometimes had to be broken up with cannon shot. Meat supplies often spoiled on long voyages. The lack of fresh fruit and vegetables gave rise to scurvy, one of the biggest killers at sea.
Parliament suffered chronic difficulties in obtaining sufficient manpower and found it impossible to fill the quotas it had set. The Army was a deeply unpopular profession, one contentious issue being pay. The rate of pay in the army was insufficient to meet the rising costs of living, deterring potential recruits, as service was nominally for life.
To entice voluntary enrollment, Parliament offered a bounty of £1.10s for every recruit. As the war dragged on, Parliament became desperate for manpower; criminals were offered military service to escape legal penalties, and deserters were pardoned if they re-joined their units.
Impressment, essentially conscription by the "press gang", was a favored recruiting method, though it was unpopular with the public, leading many to enlist in local militias to avoid regular service. Attempts were made to draft such levies, much to the chagrin of the militia commanders. Competition between naval and army press gangs, and even between rival ships or regiments, frequently resulted in brawls between the gangs in order to secure recruits for their unit. Men would maim themselves to avoid the press gangs, while many deserted at the first opportunity. Pressed men were militarily unreliable; regiments with large numbers of such men were deployed to remote garrisons such as Gibraltar or the West Indies, to make it harder to desert.
Discipline was harsh in the armed forces, and the lash was used to punish even trivial offences—and not used sparingly. For instance, two redcoats received 1,000 lashes each for robbery during the Saratoga campaign, while another received 800 lashes for striking a superior officer. Flogging was a common punishment in the Royal Navy and came to be associated with the stereotypical hardiness of sailors.
Despite the harsh discipline, a distinct lack of self-discipline pervaded all ranks of the British forces. Soldiers had an intense passion for gambling, reaching such excesses that troops would often wager their own uniforms. Many drank heavily, and this was not exclusive to the lower ranks. Even so, some reports indicated that British troops were generally scrupulous in their treatment of non-combatants.
Britain had a difficult time appointing a determined senior military leadership in America. Thomas Gage, Commander-in-Chief of North America at the outbreak of the war, was criticized for being too lenient on the rebellious colonists. Jeffrey Amherst was appointed Commander-in-Chief of the Forces in 1778, but he refused a direct command in America because he was unwilling to take sides in the war. Admiral Augustus Keppel similarly opposed a command: "I cannot draw the sword in such a cause". The Earl of Effingham resigned his commission when his regiment was posted to America, while William Howe and John Burgoyne were opposed to military solutions to the crisis. Howe and Henry Clinton both stated that they were unwilling participants and were only following orders.
Officers in British service could purchase commissions to ascend the ranks, and the practice was common in the Army. Values of commissions varied but were usually in line with social and military prestige; for example, regiments such as the Guards commanded the highest prices. Wealthy individuals lacking any formal military education or practical experience often found their way into positions of high responsibility, diluting the effectiveness of a regiment.
Heavy drinking among senior British officers is well documented. William Howe was said to have seen many "crapulous mornings" while campaigning in New York. John Burgoyne drank heavily on a nightly basis towards the end of the Saratoga campaign. The two generals were also reported to have found solace with the wives of subordinate officers to ease the stressful burdens of command. During the Philadelphia campaign, British officers deeply offended local Quakers by entertaining their mistresses in the houses where they had been quartered.
In 1775, without sufficient popular support at home to supply enlistments for the British Army overseas, London had to look elsewhere to find the number of troops required to put down an expanding revolt in the Thirteen Colonies. Britain unsuccessfully attempted to secure 20,000 mercenaries from Russia, and then it was denied use of the Scots Brigade from the Dutch Republic. Parliament finally managed to negotiate treaties of subsidy with certain mercenary German princes in exchange for auxiliary troops to serve in America. In total, 29,875 troops were hired for British service from six German states.
The presence of foreign soldiers who spoke only German caused considerable anxiety among the colonists, both Patriot and Loyalist. Newspaper accounts portrayed them as brutal mercenaries. At the same time, the diaries of Hessian soldiers voiced objections to the occasionally bad treatment of colonists at the hands of the British Army; some officers had ordered property destruction and prisoner execution.
British soldiers were themselves often contemptuous in their treatment of Hessian troops, despite orders from General Howe that "the English should treat the Germans as brothers". The order only began to have any real effect when the Hessians learned to speak a minimal degree of English, which was seen as a prerequisite for the British troops to accord them any respect.
Wealthy Loyalists wielded great influence in London and they were successful in convincing the British government that the majority view in the colonies was sympathetic toward the Crown. Consequently, British military planners pinned the success of their strategies on popular uprisings of Loyalists that never materialized.
Recruiting adequate numbers of Loyalist militia to support British military plans in America was made difficult by intensive local Patriot opposition nearly everywhere. To bolster Loyalist militia numbers in the South, the British promised freedom and grants of land to slaves who fought for them. Approximately 25,000 Loyalists fought for the British throughout the war.
From early on, the British were faced with a major dilemma. Any significant level of organized Loyalist activity required a continued presence of British regulars. The available manpower that the British commands had in America was insufficient to protect Loyalist territory while at the same time countering American offensives. The Loyalist militias in the South were vulnerable to strings of defeats by their Patriot militia neighbors. The most critical combat between the two partisan militias was at Kings Mountain. The Patriot victory there irreversibly crippled any further Loyalist militia capability in the South.
During the early war policy administered by General Howe, the need to maintain Loyalist support prevented the British from using the harsh methods of suppressing revolts that they had used in Scotland and Ireland. The Crown's cause suffered when British troops looted and pillaged the locals during an aborted attack on Charleston in 1779, enraging both Patriots and Loyalists. After Congress rejected the Carlisle Commission's settlement offer in 1778 and London turned to "hard war" under General Clinton's command, neutral colonists in the Carolinas were often driven into the ranks of the Patriots whenever brutal combat broke out between Tories and Whigs. Conversely, Loyalists were often emboldened when Patriots resorted to intimidating suspected Tories by destroying property or tarring and feathering.
One outstanding Loyalist militia unit provided some of the best troops in the British service. The British Legion was a mixed regiment of 250 dragoons and 200 infantry, supported by batteries of flying artillery. Under the command of Banastre Tarleton in the South, it gained a fearsome reputation in the colonies for "brutality and needless slaughter". Nevertheless, in May 1779 the Loyalist British Legion was one of five regiments taken into British Army regular service as the American Establishment. After the Battle of Cowpens in January 1781, the British Legion's survivors, amounting to 14 percent of those engaged, were consolidated into the British garrison at Charleston.
Through 1775, the British leadership discovered it had overestimated the capabilities of its own troops, while underestimating those of the colonists. Strategic and tactical reassessments began in London and British America. The immediate replacement of General Gage with General Howe followed the large casualties suffered in a frontal assault against shallow entrenchments at Bunker Hill. Both British military and civil officials soon acknowledged that their initial responses to the rebellion had allowed the initiative to shift to the Patriots, as British authorities rapidly lost control over every colony.
As a part of the Anglo-French Second Hundred Years' War, France and Spain again declared war on Britain in 1778 and 1779 respectively. The British were forced to severely limit the number of troops and warships that they sent to America in order to defend the British homeland and key overseas territories. The immediate strategic focus of the three greatest European colonial powers, Britain, France, and Spain, shifted to Jamaica. King George abandoned any hope of subduing America militarily while simultaneously contending with two European Great Powers alone.
The small size of Britain's army left it unable to concentrate its resources in one theater of war alongside a Great Power ally, as it had done with Prussia in the Seven Years' War. That left Britain at a critical disadvantage. London was compelled to disperse troops from America to Europe and the East Indies. These forces were unable to mutually support one another, exposing them to defeat worldwide.
Nevertheless, the British secured a preliminary peace settlement in America, which Congress agreed to in April 1783. British military successes worldwide from 1782 to 1784 enabled them to dictate the terms of the Treaty of Versailles (1783) with France, the Treaty of Versailles (1783) with Spain, and the Treaty of Paris (1784) with the Dutch Republic. Following the end of British engagement in conflicts worldwide from 1775 to 1784, the Empire had lost some of its most populous colonies in the short term, but in the long term the economic effects were negligible. With expanding trade with the US and expanding colonial territory worldwide, Britain became a global superpower within 32 years of the end of its many conflicts throughout the American Revolutionary and Napoleonic eras.
Debate persists over whether a British defeat in America was a guaranteed outcome. Ferling argues that long odds made the defeat of Britain nothing short of a miracle. Ellis, however, considers that the odds always favored the Americans. He holds that the British squandered their only opportunities for a decisive success in 1777 because William Howe's strategic decisions relied on local Tory militias while underestimating Patriot capabilities. Ellis concludes that once Howe failed, the opportunity for a British victory "would never come again". Conversely, the US military history published by the US Army argues that an additional British commitment of 10,000 fresh troops in 1780 would have placed British victory within the realm of possibility.
To begin with, the Americans had no major international allies. Battles such as the Battle of Bennington, the Battles of Saratoga, and even defeats such as the Battle of Germantown proved decisive in gaining the attention and support of powerful European nations such as France and Spain, who moved from covertly supplying the Americans with weapons and supplies to overtly supporting them.
The decisive American victory at Saratoga spurred France to offer a defensive treaty of alliance with the United States to guarantee its independence from Britain. It was conditioned on Britain initiating a war on France to stop it from trading with the US. Spain and the Netherlands were invited to join by both France and the United States in the treaty, but neither made a formal reply.
On June 13, 1778, France declared war on Great Britain and invoked its military alliance with the US, which ensured additional US privateer support for French possessions in the Caribbean. King George III feared that the war's prospects made it unlikely he could reclaim the North American colonies. During the later years of the Revolution, the British were drawn into numerous other conflicts around the globe.
Washington worked closely with the soldiers and navy that France sent to America, primarily through Lafayette on his staff. French assistance made critical contributions required to defeat Cornwallis at Yorktown in 1781. US victory over Britain and US independence were assured by direct military intervention from France, as well as ongoing French supply and commercial trade over the final three years of the war.
African Americans—slave and free—served on both sides during the war. The British recruited slaves belonging to Patriot masters and promised freedom to those who served by act of Lord Dunmore's Proclamation. Because of manpower shortages, George Washington lifted the ban on black enlistment in the Continental Army in January 1776. Small all-black units were formed in Rhode Island and Massachusetts; many slaves were promised freedom for serving. Some of the men promised freedom were sent back to their masters after the war out of political convenience. Another all-black unit came from Saint-Domingue with French colonial forces. At least 5,000 black soldiers fought for the Revolutionary cause.
Tens of thousands of slaves escaped during the war and joined British lines; others simply moved off in the chaos. For instance, in South Carolina, nearly 25,000 slaves (30% of the enslaved population) fled, migrated or died during the disruption of the war. This greatly disrupted plantation production during and after the war. When they withdrew their forces from Savannah and Charleston, the British also evacuated 10,000 slaves belonging to Loyalists. Altogether, the British evacuated nearly 20,000 blacks at the end of the war. More than 3,000 of them were freedmen and most of these were resettled in Nova Scotia; other blacks were sold in the West Indies. About 8,000 to 10,000 slaves gained freedom. About 4,000 freed slaves went to Nova Scotia and 1,200 blacks remained slaves.
Most American Indians east of the Mississippi River were affected by the war, and many tribes were divided over the question of how to respond to the conflict. A few tribes were on friendly terms with the Americans, but most Indians opposed the union of the Colonies as a potential threat to their territory. Approximately 13,000 Indians fought on the British side, with the largest group coming from the Iroquois tribes, who fielded around 1,500 men. The powerful Iroquois Confederacy was shattered as a result of the conflict, whatever side they took; the Seneca, Onondaga, and Cayuga tribes sided with the British. Members of the Mohawks fought on both sides. Many Tuscarora and Oneida sided with the Americans. The Continental Army sent the Sullivan Expedition on raids throughout New York to cripple the Iroquois tribes that had sided with the British. Mohawk leaders Joseph Louis Cook and Joseph Brant sided with the Americans and the British respectively, and this further exacerbated the split.
Farther west, conflicts between settlers and Indians led to lasting distrust. In the Treaty of Paris, Great Britain ceded control of the disputed lands between the Great Lakes and the Ohio River, but the Indian inhabitants were not a part of the peace negotiations. Tribes in the Northwest Territory banded together and allied with the British to resist American settlement; their conflict continued after the Revolutionary War as the Northwest Indian War.
Early in July 1776, Cherokee allies of Britain attacked the western frontier areas of North Carolina. Their defeat resulted in a splintering of the Cherokee settlements and people and was directly responsible for the rise of the Chickamauga Cherokee, bitter enemies of the American settlers who carried on a frontier war for decades following the end of hostilities with Britain. Creek and Seminole allies of Britain fought against Americans in Georgia and South Carolina. In 1778, a force of 800 Creeks destroyed American settlements along the Broad River in Georgia. Creek warriors also joined Thomas Brown's raids into South Carolina and assisted Britain during the Siege of Savannah. Many Indians were involved in the fighting between Britain and Spain on the Gulf Coast and up the Mississippi River, mostly on the British side. Thousands of Creeks, Chickasaws, and Choctaws fought in major battles such as the Battle of Fort Charlotte, the Battle of Mobile, and the Siege of Pensacola.
Women played various roles during the Revolutionary War. Some women accompanied their husbands when permitted. Martha Washington was known to visit the American camp, for example, and Frederika Charlotte Riedesel documented the Saratoga campaign. Women also acted as spies on both sides of the Revolutionary War. In some cases women served in the American Army in the war, some of them disguised as men. Deborah Sampson fought until her sex was discovered and she was discharged, and Sally St. Clare died in the war. Anna Maria Lane joined her husband in the Army, and she was wearing men's clothes by the time of the Battle of Germantown. According to the Virginia General Assembly, Lane "performed extraordinary military services, and received a severe wound at the battle of Germantown", fighting dressed as a man and "with the courage of a soldier". Other women fought or directly supported fighting while dressed as women, such as the legendary or mythical Molly Pitcher. On April 26, 1777, Sybil Ludington rode to alert militia forces of Putnam County, New York and Danbury, Connecticut, warning of the approach of the British regular forces. She is referred to as the female Paul Revere. Other women also accompanied armies as camp followers, selling goods and performing necessary services. They were a necessary part of 18th century armies, and they numbered in the thousands during the war.
After the surrender at Yorktown Washington expressed astonishment that the Americans had won a war against a leading world power, referring to the American victory as "little short of a standing miracle". On April 9, 1783, Washington issued orders that he had long waited to give, that "all acts of hostility" were to cease immediately. That same day, by arrangement with Washington, General Carleton issued a similar order to British troops. British troops, however, were not to disband until a prisoner of war exchange occurred, an effort that involved much negotiation and would take some seven months to effect.
From 1782 to 1784 there were many diplomats in and out of Paris who were directly involved with international peace negotiations. They deliberated over three wars among four principal belligerents: first, the American Revolutionary War among Britain, the US and their French allies; second, the Anglo-French War (1778) among Britain, the French and their Spanish allies; and third, the Fourth Anglo-Dutch War (1780) between Britain and the Netherlands. In addition, the diplomats of Great Power nations among the First League of Armed Neutrality consulted one another and exchanged various proposals from their respective governments, especially those of Russia and Austria, which Britain had invited to be mediators among the Great Powers.
The British-Thirteen Colony conflict had lasted over six years, from Lexington in 1775 to Yorktown in 1781. About three years into the conflict, France and the US struck an agreement in the Treaty of Alliance (1778) that promised the two would consult before concluding peace with Britain for US independence. The next year, France and Spain secretly agreed in the Treaty of Aranjuez (1779) that the two would fight until Spain gained Gibraltar, at the choke-point passage between the Mediterranean and the Atlantic. After the Yorktown defeat and Parliament's resolution to end American fighting, British Prime Minister Shelburne sought to separate the US from warring France by strengthening the American peace settlement so that in the future, the US would not depend militarily on France. He also sought to strengthen Britain through continued trade with the future US. French Foreign Minister Vergennes sought to influence the "American Settlement" for the long-term interests of France. He wanted to weaken the US militarily to ensure its future dependence on France in a perpetual military alliance against Britain.
In Paris, the three Great Power belligerents in the Anglo-French War floated distinctly different proposals for a mutual "American Settlement" apportioning territory for the United States. The first, French proposal was the most restrictive of the US, with a western boundary at the Appalachian Mountains to match the British 1763 Proclamation Line, an item used to indict George III in the US Declaration of Independence. The second, Spanish proposal allowed the US additional Mississippi River Basin upland just west of the Appalachians, but it also required that Britain cede its colony of Georgia to Spain, in violation of the Franco-American alliance of 1778 and contrary to the announcement of US independence by George III in December 1782. The third, British proposal, with US territory west to the middle of the Mississippi River, was accepted by Congress in April 1783 as a preliminary agreement. Congress interpreted its national interest to lie in the peace treaty that ceded the most expansive territory considered by the European Great Powers. It trusted British treaty guarantees, with bonds of history, family and trade, over remonstrances from the ministers of France and Spain, who were motivated by a secret treaty to which the US had not agreed.
The "definitive" British-US Treaty of Paris was signed on September 3, 1783, just before the out-maneuvered French and Spanish ended the Anglo-French War of 1778 in their respective treaties with Great Britain at Versailles Palace. The US ministers negotiating the British-US peace were John Adams, Benjamin Franklin, John Jay, and for Britain, David Hartley of Parliament and Richard Oswald, Britain's Peace Commissioner. Adams, who was a leading participant drafting the treaty, maintained that its negotiations represented "one of the most important political events that ever happened on the globe".
Following the British defeat at Yorktown, British public support for continuing the government's war to suppress the Thirteen Colony rebellion evaporated. Three months later, on February 2, 1782, the House of Commons voted against further offensive war against the US. Six weeks after that, American General George Washington and British General Sir Guy Carleton agreed to an end of hostilities between the belligerents at New York City.
Britain was under attack around the world from the navies of France, Spain and the Netherlands. Prime Minister Lord Shelburne sought to bring an early end to the American Revolutionary War by accepting American independence, and in hopes of separating the US from France, he met American demands for territory west to the Mississippi River. The British government could then commit the British garrisons at New York and Charleston to attack the French and Spanish West Indies. To speed the US negotiators, Britain offered Newfoundland fishing rights to the US, denying France exclusive rights; France and Spain would now sign their treaties after the Anglo-American fait accompli. The two separately negotiated treaties of Versailles, with France and with Spain, addressed issues of mutual concern, such as a European "continental balance of power", reciprocal colonial territory swaps, and trade agreements among their respective worldwide colonial empires.
As for Britain's Indian allies in America, Britain never consulted them at any time prior to treaty negotiations, then forced them to reluctantly accept the treaty. The following year, however, Britain underwrote its formerly allied Indians for attacks against US settlers west of the Appalachians, on territory Britain had ceded by treaty. The largest sustained Indian war of this period involving Britain's former allies was the Northwest Indian War of 1785–1795. Britain's extended war policy against the US continued in its attempts to establish an Indian buffer state below the Great Lakes as late as 1814, during the War of 1812. The last uniformed British troops departed east coast port cities on November 25, 1783, marking the end of British occupation in the new United States.
The US armies had been furloughed home and disbanded as of Washington's General Orders of Monday, June 2, 1783.
The total loss of life throughout the conflict is largely unknown. As was typical in wars of the era, diseases such as smallpox claimed more lives than battle. Between 1775 and 1782, a smallpox epidemic broke out throughout North America, killing 40 people in Boston alone. Historian Joseph Ellis suggests that Washington's decision to have his troops inoculated against the disease was one of his most important decisions.
Between 25,000 and 70,000 American Patriots died during active military service. Of these, approximately 6,800 were killed in battle, while at least 17,000 died from disease. The majority of the latter died while prisoners of war of the British, mostly in the prison ships in New York Harbor. The number of Patriots seriously wounded or disabled by the war has been estimated from 8,500 to 25,000.
The French suffered 2,112 killed in combat in America. The Spanish lost a total of just 124 killed and 247 wounded in West Florida.
A British report in 1781 puts their total Army deaths at 6,046 in North America (1775–1779). Approximately 7,774 Germans died in British service in addition to 4,888 deserters; of the former, it is estimated 1,800 were killed in combat.
Around 171,000 sailors served in the Royal Navy during British conflicts 1775–1784; approximately a quarter of whom had been pressed into service. Around 1,240 were killed in battle, while an estimated 18,500 died from disease (1776–1780). The greatest killer at sea was scurvy, a disease caused by vitamin C deficiency. Around 42,000 sailors deserted worldwide during the era. The impact on merchant shipping was substantial; 2,283 were taken by American privateers.
Congress had immense difficulties financing its war effort. As the circulation of hard currency declined, the Americans had to rely on loans from France, Spain, and the Netherlands, saddling the young nation and its states with crippling debts. Congress attempted to remedy this by printing vast amounts of paper money and bills of credit to raise revenue, but the effect was disastrous: inflation skyrocketed and the paper money became virtually worthless. The inflation spawned a popular phrase that anything of little value was "not worth a continental".
At the start of the war, the economy was flourishing in the colonies in spite of the British blockade. By 1779, however, the economy had almost collapsed. By 1791, the United States had accumulated a national debt of approximately $75.5 million. The French spent approximately 1.3 billion livres aiding the Americans, equivalent to 100 million pounds sterling (13.33 livres to the pound).
Britain spent around £80 million and ended with a national debt of £250 million, generating interest of £9.5 million annually. The debts piled upon those it had already accumulated from the Seven Years' War.
Ampere
The ampere (symbol: A), often shortened to "amp", is the base unit of electric current in the International System of Units (SI). It is named after André-Marie Ampère (1775–1836), a French mathematician and physicist considered the father of electrodynamics.
The International System of Units defines the ampere in terms of other base units by measuring the electromagnetic force between electrical conductors carrying electric current. The earlier CGS measurement system had two different definitions of current, one essentially the same as the SI's and the other using electric charge as the base unit, with the unit of charge defined by measuring the force between two charged metal plates. The ampere was then defined as one coulomb of charge per second. In SI, the unit of charge, the coulomb, is defined as the charge carried by one ampere during one second.
New definitions, in terms of invariant constants of nature, specifically the elementary charge, took effect on 20 May 2019.
The ampere is defined by taking the fixed numerical value of the elementary charge to be 1.602176634 × 10⁻¹⁹ when expressed in the unit C, which is equal to A⋅s, where the second is defined in terms of Δν_Cs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom.
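One consequence of fixing the elementary charge is that one ampere corresponds to roughly 6.24 × 10¹⁸ elementary charges passing a given point per second. A minimal Python sketch illustrating this arithmetic (the script is illustrative, not part of any standard implementation):

```python
# The elementary charge in coulombs, fixed exactly by the 2019 SI redefinition.
E_CHARGE = 1.602176634e-19

# One ampere is one coulomb per second, so a steady current of 1 A carries
# 1 / e elementary charges past a point each second.
charges_per_second = 1.0 / E_CHARGE
print(f"{charges_per_second:.4e}")  # ≈ 6.2415e+18
```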
The SI unit of charge, the coulomb, "is the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second:
In general, charge "Q" is determined by steady current "I" flowing for a time "t" as "Q" = "It".
Constant, instantaneous and average current are expressed in amperes (as in "the charging current is 1.2 A"), and the charge accumulated or passed through a circuit over a period of time is expressed in coulombs. The relation of the ampere (C/s) to the coulomb is the same as that of the watt (J/s) to the joule.
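The relation between steady current, time and accumulated charge ("Q" = "It") can be sketched in a few lines of Python; the current and duration below are arbitrary illustrative values:

```python
def charge_from_current(current_amperes: float, time_seconds: float) -> float:
    """Charge in coulombs accumulated by a steady current over a time interval (Q = I * t)."""
    return current_amperes * time_seconds

# A steady charging current of 1.2 A flowing for one minute transfers 72 C.
print(charge_from_current(1.2, 60))  # 72.0
```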
The ampere is named for French physicist and mathematician André-Marie Ampère (1775–1836), who studied electromagnetism and laid the foundation of electrodynamics. In recognition of Ampère's contributions to the creation of modern electrical science, an international convention, signed at the 1881 International Exposition of Electricity, established the ampere as a standard unit of electrical measurement for electric current.
The ampere was originally defined as one tenth of the unit of electric current in the centimetre–gram–second system of units. That unit, now known as the abampere, was defined as the amount of current that generates a force of two dynes per centimetre of length between two wires one centimetre apart. The size of the unit was chosen so that the units derived from it in the MKSA system would be conveniently sized.
The "international ampere" was an early realization of the ampere, defined as the current that would deposit of silver per second from a silver nitrate solution. Later, more accurate measurements revealed that this current is .
Since power is defined as the product of current and voltage, the ampere can alternatively be expressed in terms of the other units using the relationship , and thus 1 A = 1 W/V. Current can be measured by a multimeter, a device that can measure electrical voltage, current, and resistance.
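The identity 1 A = 1 W/V from the paragraph above can be checked with a short sketch; the 60 W load on a 120 V supply is a hypothetical example chosen for illustration:

```python
def current_from_power(power_watts: float, voltage_volts: float) -> float:
    """Current in amperes drawn by a load, from P = I * V rearranged to I = P / V."""
    return power_watts / voltage_volts

# A 60 W load on a 120 V supply draws 0.5 A.
print(current_from_power(60.0, 120.0))  # 0.5
```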
Until 2019, the SI defined the ampere as follows:
The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed one metre apart in vacuum, would produce between these conductors a force equal to 2 × 10⁻⁷ newtons per metre of length.
Ampère's force law states that there is an attractive or repulsive force between two parallel wires carrying an electric current. This force is used in the formal definition of the ampere.
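Ampère's force law gives the force per unit length between two long parallel wires as F/L = μ₀I₁I₂/(2πd). The sketch below is a numerical check rather than a definitive implementation; it uses the pre-2019 exact value μ₀ = 4π × 10⁻⁷ N/A² to recover the 2 × 10⁻⁷ N/m figure used in the former definition:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, exact in the pre-2019 SI (N/A^2)

def force_per_metre(i1_amperes: float, i2_amperes: float, distance_metres: float) -> float:
    """Force per unit length between two long parallel wires: F/L = mu0 * I1 * I2 / (2 * pi * d)."""
    return MU_0 * i1_amperes * i2_amperes / (2 * math.pi * distance_metres)

# Two wires one metre apart, each carrying one ampere: approximately 2e-7 N per metre,
# matching the force stated in the pre-2019 definition of the ampere.
print(force_per_metre(1.0, 1.0, 1.0))
```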
The SI unit of charge, the coulomb, was then defined as "the quantity of electricity carried in 1 second by a current of 1 ampere". Conversely, a current of one ampere is one coulomb of charge going past a given point per second: 1 A = 1 C/s.
In general, charge "Q" was determined by steady current "I" flowing for a time "t" as .
The standard ampere is most accurately realized using a Kibble balance, but is in practice maintained via Ohm's law from the units of electromotive force and resistance, the volt and the ohm, since the latter two can be tied to physical phenomena that are relatively easy to reproduce, the Josephson junction and the quantum Hall effect, respectively.
At present, techniques to establish the realization of an ampere have a relative uncertainty of approximately a few parts in 10⁷, and involve realizations of the watt, the ohm and the volt.
Algorithm
In mathematics and computer science, an algorithm () is a finite sequence of well-defined, computer-implementable instructions, typically to solve a class of problems or to perform a computation. Algorithms are always unambiguous and are used as specifications for performing calculations, data processing, automated reasoning, and other tasks.
As an effective method, an algorithm can be expressed within a finite amount of space and time, and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
The concept of algorithm has existed since antiquity. Arithmetic algorithms, such as a division algorithm, were used by ancient Babylonian mathematicians c. 2500 BC and Egyptian mathematicians c. 1550 BC. Greek mathematicians later used algorithms in the sieve of Eratosthenes for finding prime numbers, and the Euclidean algorithm for finding the greatest common divisor of two numbers. Arabic mathematicians such as al-Kindi in the 9th century used cryptographic algorithms for code-breaking, based on frequency analysis.
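The Euclidean algorithm mentioned above remains a standard first example of an algorithm; a minimal sketch in Python:

```python
def gcd(a: int, b: int) -> int:
    """Greatest common divisor via the Euclidean algorithm: repeatedly
    replace the pair (a, b) with (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # 12
```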
The word "algorithm" itself is derived from the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized "Algoritmi". A partial formalization of what would become the modern concept of algorithm began with attempts to solve the "Entscheidungsproblem " (decision problem) posed by David Hilbert in 1928. Later formalizations were framed as attempts to define "effective calculability" or "effective method". Those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939.
The word 'algorithm' has its roots in Latinizing the name of mathematician Muhammad ibn Musa al-Khwarizmi in the first steps to "algorismus". Al-Khwārizmī (, c. 780–850) was a mathematician, astronomer, geographer, and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan.
About 825, al-Khwarizmi wrote an Arabic language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title "Algoritmi de numero Indorum". This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name. Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, "algorismus", English 'algorism', the corruption of his name, simply meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός ("arithmos"), 'number' ("cf." 'arithmetic'), the Latin word was altered to "algorithmus", and the corresponding English term 'algorithm' is first attested in the 17th century; the modern sense was introduced in the 19th century.
In English, it was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English.
Another early use of the word is from 1240, in a manual titled "Carmen de Algorismo" composed by Alexandre de Villedieu. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice ("Talibus Indorum"), that is, Hindu numerals.
An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations, and (for example) any prescribed bureaucratic procedure.
In general, a program is only an algorithm if it stops eventually.
A prototypical example of an algorithm is the Euclidean algorithm, which is used to determine the greatest common divisor of two integers; an example (there are others) is described by the flowchart above and as an example in a later section.
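As a concrete sketch (a modern rendering in Python, not the flowchart itself), the remainder form of the Euclidean algorithm fits in a few lines:

```python
def gcd(m, n):
    """Euclid's algorithm: greatest common divisor of two non-negative integers."""
    while n != 0:
        m, n = n, m % n  # replace (m, n) by (n, m mod n) until the remainder is 0
    return m

print(gcd(1071, 462))  # 21
```

A subtraction-only variant, closer to Euclid's original "measuring" procedure, is examined in detail in the later section on Euclid's algorithm.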
No human being can write fast enough, or long enough, or small enough† ( †"smaller and smaller without limit … you'd be trying to write on molecules, on atoms, on electrons") to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something equally useful, in the case of certain enumerably infinite sets: They can give "explicit instructions for determining the nth member of the set", for arbitrary finite "n". Such instructions are to be given quite explicitly, in a form in which "they could be followed by a computing machine", or by a "human who is capable of carrying out only very elementary operations on symbols."
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus, Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an "arbitrary" "input" integer or integers that, in theory, can be arbitrarily large. For example, an algorithm can be an algebraic equation such as "y = m + n" (i.e., two arbitrary "input variables" "m" and "n" that produce an output "y"), but various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of (for the addition example):
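To illustrate the distinction (an illustrative sketch, not an example from the cited authors): the bare equation y = m + n names a function, while an algorithm for it spells out elementary steps a machine could follow, such as repeated incrementing:

```python
def add(m, n):
    """Compute m + n using only the elementary operations
    "add one" and "subtract one" -- an algorithm, not merely an equation."""
    y = m
    while n > 0:
        y = y + 1  # successor: one elementary step
        n = n - 1  # one fewer step remaining
    return y
```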
The concept of "algorithm" is also used to define the notion of decidability—a notion that is central for explaining how formal systems come into being starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to the customary physical dimension. From such uncertainties, that characterize ongoing work, stems the unavailability of a definition of "algorithm" that suits both concrete (in some sense) and abstract usage of the term.
Algorithms are essential to the way computers process data. Many computer programs contain algorithms that detail the specific instructions a computer should perform—in a specific order—to carry out a specified task, such as calculating employees' paychecks or printing students' report cards. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Authors who assert this thesis include Minsky (1967), Savage (1987) and Gurevich (2000):
Turing machines can define computational processes that do not terminate. The informal definitions of algorithms generally require that the algorithm always terminates. This requirement renders the task of deciding whether a formal procedure is an algorithm impossible in the general case—due to a major theorem of computability theory known as the halting problem.
Typically, when an algorithm is associated with processing information, data can be read from an input source, written to an output device and stored for further processing. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures.
For some of these computational processes, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. This means that any conditional steps must be systematically dealt with, case-by-case; the criteria for each case must be clear (and computable).
Because an algorithm is a precise list of precise steps, the order of computation is always crucial to the functioning of the algorithm. Instructions are usually assumed to be listed explicitly, and are described as starting "from the top" and going "down to the bottom"—an idea that is described more formally by "flow of control".
So far, the discussion on the formalization of an algorithm has assumed the premises of imperative programming. This is the most common conception—one which attempts to describe a task in discrete, "mechanical" means. Unique to this conception of formalized algorithms is the assignment operation, which sets the value of a variable. It derives from the intuition of "memory" as a scratchpad. An example of such an assignment can be found below.
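As a minimal illustration of assignment as "memory as a scratchpad" (Python is used here for concreteness; the variable name is arbitrary):

```python
# Each assignment overwrites a named memory location with a new value.
total = 0          # initialize the scratchpad location "total"
total = total + 5  # read the current value, add 5, store the result back
total = total * 2  # the location now holds 10
```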
For some alternate conceptions of what constitutes an algorithm, see functional programming and logic programming.
Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, drakon-charts, programming languages or control tables (processed by interpreters). Natural language expressions of algorithms tend to be verbose and ambiguous, and are rarely used for complex or technical algorithms. Pseudocode, flowcharts, drakon-charts and control tables are structured ways to express algorithms that avoid many of the ambiguities common in the statements based on natural language. Programming languages are primarily intended for expressing algorithms in a form that can be executed by a computer, but are also often used as a way to define or document algorithms.
There is a wide variety of representations possible and one can express a given Turing machine program as a sequence of machine tables (see finite-state machine, state transition table and control table for more), as flowcharts and drakon-charts (see state diagram for more), or as a form of rudimentary machine code or assembly code called "sets of quadruples" (see Turing machine for more).
Representations of algorithms can be classed into three accepted levels of Turing machine description, as follows:
For an example of the simple algorithm "Add m+n" described in all three levels, see Algorithm#Examples.
Algorithm design refers to a method or a mathematical process for problem-solving and engineering algorithms. The design of algorithms is part of many solution theories of operations research, such as dynamic programming and divide-and-conquer. Techniques for designing and implementing algorithm designs are also called algorithm design patterns, with examples including the template method pattern and the decorator pattern.
One of the most important aspects of algorithm design lies in the creation of an algorithm that has an efficient run-time, commonly characterized using big O notation.
Typical steps in the development of algorithms:
Most algorithms are intended to be implemented as computer programs. However, algorithms are also implemented by other means, such as in a biological neural network (for example, the human brain implementing arithmetic or an insect looking for food), in an electrical circuit, or in a mechanical device.
In computer systems, an algorithm is basically an instance of logic written in software by software developers, to be effective for the intended "target" computer(s) to produce "output" from given (perhaps null) "input". An optimal algorithm, even running in old hardware, would produce faster results than a non-optimal (higher time complexity) algorithm for the same purpose, running in more efficient hardware; that is why algorithms, like computer hardware, are considered technology.
""Elegant" (compact) programs, "good" (fast) programs ": The notion of "simplicity and elegance" appears informally in Knuth and precisely in Chaitin:
Chaitin prefaces his definition with: "I'll show you can't prove that a program is 'elegant'"—such a proof would solve the Halting problem (ibid).
"Algorithm versus function computable by an algorithm": For a given function, multiple algorithms may exist. This is true even without expanding the instruction set available to the programmer. Rogers observes that "It is ... important to distinguish between the notion of "algorithm", i.e. procedure, and the notion of "function computable by algorithm", i.e. mapping yielded by procedure. The same function may have several different algorithms".
Unfortunately, there may be a tradeoff between goodness (speed) and elegance (compactness)—an elegant program may take more steps to complete a computation than one less elegant. An example that uses Euclid's algorithm appears below.
"Computers (and computors), models of computation": A computer (or human "computor") is a restricted type of machine, a "discrete deterministic mechanical device" that blindly follows its instructions. Melzak's and Lambek's primitive models reduced this notion to four elements: (i) discrete, distinguishable "locations", (ii) discrete, indistinguishable "counters" (iii) an agent, and (iv) a list of instructions that are "effective" relative to the capability of the agent.
Minsky describes a more congenial variation of Lambek's "abacus" model in his "Very Simple Bases for Computability". Minsky's machine proceeds sequentially through its five (or six, depending on how one counts) instructions, unless either a conditional IF–THEN GOTO or an unconditional GOTO changes program flow out of sequence. Besides HALT, Minsky's machine includes three "assignment" (replacement, substitution) operations: ZERO (e.g. the contents of location replaced by 0: L ← 0), SUCCESSOR (e.g. L ← L+1), and DECREMENT (e.g. L ← L − 1). Rarely must a programmer write "code" with such a limited instruction set. But Minsky shows (as do Melzak and Lambek) that his machine is Turing complete with only four general "types" of instructions: conditional GOTO, unconditional GOTO, assignment/replacement/substitution, and HALT. However, a few different assignment instructions (e.g. DECREMENT, INCREMENT, and ZERO/CLEAR/EMPTY for a Minsky machine) are also required for Turing-completeness; their exact specification is somewhat up to the designer. The unconditional GOTO is a convenience; it can be constructed by initializing a dedicated location to zero e.g. the instruction " Z ← 0 "; thereafter the instruction IF Z=0 THEN GOTO xxx is unconditional.
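A hedged sketch of such a machine in Python (the instruction names and encoding are illustrative, not Minsky's own notation): a handful of instruction types plus HALT suffice, and the "Z ← 0, then IF Z=0 THEN GOTO" trick supplies the unconditional jump.

```python
def run(program, regs):
    """Interpret a tiny Minsky-style register machine.

    program: list of instruction tuples; regs: dict of location -> value.
    Instructions: ('ZERO', r), ('INC', r), ('DEC', r),
                  ('JZ', r, addr)  -- IF regs[r] == 0 THEN GOTO addr,
                  ('HALT',).
    """
    pc = 0
    while True:
        op = program[pc]
        if op[0] == 'HALT':
            return regs
        if op[0] == 'ZERO':
            regs[op[1]] = 0
        elif op[0] == 'INC':
            regs[op[1]] += 1
        elif op[0] == 'DEC':
            regs[op[1]] = max(0, regs[op[1]] - 1)
        elif op[0] == 'JZ':
            if regs[op[1]] == 0:
                pc = op[2]
                continue
        pc += 1

# Addition: move the contents of B into A, one unit at a time.
add_program = [
    ('JZ', 'B', 4),   # 0: if B == 0, halt
    ('DEC', 'B'),     # 1: B <- B - 1
    ('INC', 'A'),     # 2: A <- A + 1
    ('JZ', 'Z', 0),   # 3: unconditional GOTO 0 (Z is pinned to zero)
    ('HALT',),        # 4
]
result = run(add_program, {'A': 3, 'B': 2, 'Z': 0})
print(result['A'])  # 5
```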
"Simulation of an algorithm: computer (computor) language": Knuth advises the reader that "the best way to learn an algorithm is to try it . . . immediately take pen and paper and work through an example". But what about a simulation or execution of the real thing? The programmer must translate the algorithm into a language that the simulator/computer/computor can "effectively" execute. Stone gives an example of this: when computing the roots of a quadratic equation the computor must know how to take a square root. If they don't, then the algorithm, to be effective, must provide a set of rules for extracting a square root.
This means that the programmer must know a "language" that is effective relative to the target computing agent (computer/computor).
But what model should be used for the simulation? Van Emde Boas observes "even if we base complexity theory on abstract instead of concrete machines, arbitrariness of the choice of a model remains. It is at this point that the notion of "simulation" enters". When speed is being measured, the instruction set matters. For example, the subprogram in Euclid's algorithm to compute the remainder would execute much faster if the programmer had a "modulus" instruction available rather than just subtraction (or worse: just Minsky's "decrement").
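The point is easy to demonstrate (an illustrative Python sketch, not Van Emde Boas's own example): computing a remainder by repeated subtraction costs a number of steps proportional to the quotient, where a single "modulus" instruction would do.

```python
def remainder_by_subtraction(r, s):
    """Compute r mod s using only subtraction, counting the steps taken."""
    steps = 0
    while r >= s:
        r -= s
        steps += 1
    return r, steps

rem, steps = remainder_by_subtraction(1071, 5)
print(rem, steps)  # 1 214 -- versus a single step for the one instruction 1071 % 5
```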
"Structured programming, canonical structures": Per the Church–Turing thesis, any algorithm can be computed by a model known to be Turing complete, and per Minsky's demonstrations, Turing completeness requires only four instruction types—conditional GOTO, unconditional GOTO, assignment, HALT. Kemeny and Kurtz observe that, while "undisciplined" use of unconditional GOTOs and conditional IF-THEN GOTOs can result in "spaghetti code", a programmer can write structured programs using only these instructions; on the other hand "it is also possible, and not too hard, to write badly structured programs in a structured language". Tausworthe augments the three Böhm-Jacopini canonical structures: SEQUENCE, IF-THEN-ELSE, and WHILE-DO, with two more: DO-WHILE and CASE. An additional benefit of a structured program is that it lends itself to proofs of correctness using mathematical induction.
"Canonical flowchart symbols": The graphical aid known as a flowchart offers a way to describe and document an algorithm (and a computer program corresponding to it). Like the program flow of a Minsky machine, a flowchart always starts at the top of a page and proceeds down. Its primary symbols are only four: the directed arrow showing program flow, the rectangle (SEQUENCE, GOTO), the diamond (IF-THEN-ELSE), and the dot (OR-tie). The Böhm–Jacopini canonical structures are made of these primitive shapes. Sub-structures can "nest" in rectangles, but only if a single exit occurs from the superstructure. The symbols, and their use to build the canonical structures, are shown in the diagram.
One of the simplest algorithms is to find the largest number in a list of numbers of random order. Finding the solution requires looking at every number in the list. From this follows a simple algorithm, which can be stated in a high-level description in English prose, as:
"High-level description:"
"(Quasi-)formal description:"
Written in prose but much closer to the high-level language of a computer program, the following is the more formal coding of the algorithm in pseudocode or pidgin code:
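An executable rendering of the same algorithm (a Python sketch following the high-level description; the names are illustrative):

```python
def largest(numbers):
    """Return the largest number in a non-empty list, examining each item once."""
    largest_so_far = numbers[0]      # provisionally, the first item is largest
    for item in numbers[1:]:         # look at every remaining number in the list
        if item > largest_so_far:    # a bigger one has been found:
            largest_so_far = item    #   remember it instead
    return largest_so_far

print(largest([7, 3, 1, 9, 4]))  # 9
```

Note that the algorithm remembers only the largest value seen so far and its position in the scan, a fact used in the analysis of its space requirement later in the article.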
Euclid's algorithm to compute the greatest common divisor (GCD) to two numbers appears as Proposition II in Book VII ("Elementary Number Theory") of his "Elements". Euclid poses the problem thus: "Given two numbers not prime to one another, to find their greatest common measure". He defines "A number [to be] a multitude composed of units": a counting number, a positive integer not including zero. To "measure" is to place a shorter measuring length "s" successively ("q" times) along longer length "l" until the remaining portion "r" is less than the shorter length "s". In modern words, remainder "r" = "l" − "q"×"s", "q" being the quotient, or remainder "r" is the "modulus", the integer-fractional part left over after the division.
For Euclid's method to succeed, the starting lengths must satisfy two requirements: (i) the lengths must not be zero, AND (ii) the subtraction must be “proper”; i.e., a test must guarantee that the smaller of the two numbers is subtracted from the larger (or the two can be equal so their subtraction yields zero).
Euclid's original proof adds a third requirement: the two lengths must not be prime to one another. Euclid stipulated this so that he could construct a reductio ad absurdum proof that the two numbers' common measure is in fact the "greatest". While Nicomachus' algorithm is the same as Euclid's, when the numbers are prime to one another, it yields the number "1" for their common measure. So, to be precise, the following is really Nicomachus' algorithm.
Only a few instruction "types" are required to execute Euclid's algorithm—some logical tests (conditional GOTO), unconditional GOTO, assignment (replacement), and subtraction.
The following algorithm is framed as Knuth's four-step version of Euclid's and Nicomachus', but, rather than using division to find the remainder, it uses successive subtractions of the shorter length "s" from the remaining length "r" until "r" is less than "s". The high-level description, shown in boldface, is adapted from Knuth 1973:2–4:
INPUT:
E0: [Ensure "r" ≥ "s".]
E1: [Find remainder]: Until the remaining length "r" in R is less than the shorter length "s" in S, repeatedly subtract the measuring number "s" in S from the remaining length "r" in R.
E2: [Is the remainder zero?]: EITHER (i) the last measure was exact, the remainder in R is zero, and the program can halt, OR (ii) the algorithm must continue: the last measure left a remainder in R less than measuring number in S.
E3: [Interchange "s" and "r"]: The nut of Euclid's algorithm. Use remainder "r" to measure what was previously smaller number "s"; L serves as a temporary location.
OUTPUT:
DONE:
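Steps E0 through E3 translate directly into Python (a hedged transcription of the description above, not Knuth's own code):

```python
def gcd_by_subtraction(r, s):
    """Euclid's/Nicomachus' algorithm, using repeated subtraction to find the remainder."""
    if r < s:
        r, s = s, r        # E0: ensure r >= s
    while True:
        while r >= s:      # E1: find remainder by repeated subtraction
            r = r - s
        if r == 0:         # E2: is the remainder zero? If so, s measures exactly
            return s
        r, s = s, r        # E3: interchange s and r; now measure with the remainder

print(gcd_by_subtraction(3009, 884))  # 17
```

As the later discussion of testing notes, zero inputs make such a program loop forever; the domain must be restricted to positive integers.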
The flowchart of "Elegant" can be found at the top of this article. In the (unstructured) Basic language, the steps are numbered, and the instruction LET [] = [] is the assignment instruction symbolized by ←.
"How "Elegant" works": In place of an outer "Euclid loop", "Elegant" shifts back and forth between two "co-loops", an A > B loop that computes A ← A − B, and a B ≤ A loop that computes B ← B − A. This works because, when at last the minuend M is less than or equal to the subtrahend S (Difference = Minuend − Subtrahend), the minuend can become "s" (the new measuring length) and the subtrahend can become the new "r" (the length to be measured); in other words the "sense" of the subtraction reverses.
The following version can be used with object-oriented languages:
// Euclid's algorithm for greatest common divisor
int euclidAlgorithm (int A, int B) {
    A = Math.abs(A);
    B = Math.abs(B);
    while (B != 0) {
        while (A > B) A = A - B;  // subtract B from A while A exceeds B
        B = B - A;                // then subtract A from B
    }
    return A;
}
Does an algorithm do what its author wants it to do? A few test cases usually give some confidence in the core functionality. But tests are not enough. For test cases, one source uses 3009 and 884. Knuth suggested 40902, 24140. Another interesting case is the two relatively prime numbers 14157 and 5950.
But "exceptional cases" must be identified and tested. Will "Inelegant" perform properly when R > S, S > R, R = S? Ditto for "Elegant": B > A, A > B, A = B? (Yes to all). What happens when one number is zero, both numbers are zero? ("Inelegant" computes forever in all cases; "Elegant" computes forever when A = 0.) What happens if "negative" numbers are entered? Fractional numbers? If the input numbers, i.e. the domain of the function computed by the algorithm/program, is to include only positive integers including zero, then the failures at zero indicate that the algorithm (and the program that instantiates it) is a partial function rather than a total function. A notable failure due to exceptions is the Ariane 5 Flight 501 rocket failure (June 4, 1996).
"Proof of program correctness by use of mathematical induction": Knuth demonstrates the application of mathematical induction to an "extended" version of Euclid's algorithm, and he proposes "a general method applicable to proving the validity of any algorithm". Tausworthe proposes that a measure of the complexity of a program be the length of its correctness proof.
"Elegance (compactness) versus goodness (speed)": With only six core instructions, "Elegant" is the clear winner, compared to "Inelegant" at thirteen instructions. However, "Inelegant" is "faster" (it arrives at HALT in fewer steps). Algorithm analysis indicates why this is the case: "Elegant" does "two" conditional tests in every subtraction loop, whereas "Inelegant" only does one. As the algorithm (usually) requires many loop-throughs, "on average" much time is wasted doing a "B = 0?" test that is needed only after the remainder is computed.
"Can the algorithms be improved?": Once the programmer judges a program "fit" and "effective"—that is, it computes the function intended by its author—then the question becomes, can it be improved?
The compactness of "Inelegant" can be improved by the elimination of five steps. But Chaitin proved that compacting an algorithm cannot be automated by a generalized algorithm; rather, it can only be done heuristically; i.e., by exhaustive search (examples to be found at Busy beaver), trial and error, cleverness, insight, application of inductive reasoning, etc. Observe that steps 4, 5 and 6 are repeated in steps 11, 12 and 13. Comparison with "Elegant" provides a hint that these steps, together with steps 2 and 3, can be eliminated. This reduces the number of core instructions from thirteen to eight, which makes it "more elegant" than "Elegant", at nine steps.
The speed of "Elegant" can be improved by moving the "B=0?" test outside of the two subtraction loops. This change calls for the addition of three instructions (B = 0?, A = 0?, GOTO). Now "Elegant" computes the example-numbers faster; whether this is always the case for any given A, B, and R, S would require a detailed analysis.
It is frequently important to know how much of a particular resource (such as time or storage) is theoretically required for a given algorithm. Methods have been developed for the analysis of algorithms to obtain such quantitative answers (estimates); for example, the largest-number algorithm above has a time requirement of O("n"), using the big O notation with "n" as the length of the list. At all times the algorithm only needs to remember two values: the largest number found so far, and its current position in the input list. Therefore, it is said to have a space requirement of O(1), if the space required to store the input numbers is not counted, or O("n") if it is counted.
Different algorithms may complete the same task with a different set of instructions in less or more time, space, or 'effort' than others. For example, a binary search algorithm (with cost O(log n) ) outperforms a sequential search (cost O(n) ) when used for table lookups on sorted lists or arrays.
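The comparison can be made concrete (an illustrative Python sketch; `bisect_left` is the standard library's binary-search primitive):

```python
from bisect import bisect_left

def sequential_search(items, target):
    """Cost O(n): examine items one at a time until the target appears."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """Cost O(log n): repeatedly halve the interval; requires sorted input."""
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = [2, 3, 5, 7, 11, 13, 17]
print(sequential_search(data, 11), binary_search(data, 11))  # 4 4
```

On a list of a million sorted items, the binary search inspects about 20 elements where the sequential search may inspect all one million.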
The analysis and study of algorithms is a discipline of computer science, and is often practiced abstractly without the use of a specific programming language or implementation. In this sense, algorithm analysis resembles other mathematical disciplines in that it focuses on the underlying properties of the algorithm and not on the specifics of any particular implementation. Usually pseudocode is used for analysis as it is the simplest and most general representation. However, most algorithms are ultimately implemented on particular hardware/software platforms, and their algorithmic efficiency is eventually put to the test using real code. For the solution of a "one-off" problem, the efficiency of a particular algorithm may not have significant consequences (unless n is extremely large), but for algorithms designed for fast interactive, commercial or long-life scientific usage it may be critical. Scaling from small n to large n frequently exposes inefficient algorithms that are otherwise benign.
Empirical testing is useful because it may uncover unexpected interactions that affect performance. Benchmarks may be used to compare before/after potential improvements to an algorithm after program optimization.
Empirical tests cannot replace formal analysis, though, and are not trivial to perform in a fair manner.
To illustrate the potential improvements possible even in well-established algorithms, a recent significant innovation, relating to FFT algorithms (used heavily in the field of image processing), can decrease processing time up to 1,000 times for applications like medical imaging. In general, speed improvements depend on special properties of the problem, which are very common in practical applications. Speedups of this magnitude enable computing devices that make extensive use of image processing (like digital cameras and medical equipment) to consume less power.
There are various ways to classify algorithms, each with its own merits.
One way to classify algorithms is by implementation means.
Another way of classifying algorithms is by their design methodology or paradigm. There are a number of paradigms, each different from the others. Furthermore, each of these categories includes many different types of algorithms. Some common paradigms are:
For optimization problems there is a more specific classification of algorithms; an algorithm for such problems may fall into one or more of the general categories described above as well as into one of the following:
Problems that can be solved with linear programming include the maximum flow problem for directed graphs (see George B. Dantzig and Mukund N. Thapa, "Linear Programming 2: Theory and Extensions", Springer-Verlag, 2003). If a problem additionally requires that one or more of the unknowns must be an integer, then it is classified in integer programming. A linear programming algorithm can solve such a problem if it can be proved that all restrictions for integer values are superficial, i.e., the solutions satisfy these restrictions anyway. In the general case, a specialized algorithm or an algorithm that finds approximate solutions is used, depending on the difficulty of the problem.
Every field of science has its own problems and needs efficient algorithms. Related problems in one field are often studied together. Some example classes are search algorithms, sorting algorithms, merge algorithms, numerical algorithms, graph algorithms, string algorithms, computational geometric algorithms, combinatorial algorithms, medical algorithms, machine learning, cryptography, data compression algorithms and parsing techniques.
Fields tend to overlap with each other, and algorithm advances in one field may improve those of other, sometimes completely unrelated, fields. For example, dynamic programming was invented for optimization of resource consumption in industry but is now used in solving a broad range of problems in many fields.
Algorithms can be classified by the amount of time they need to complete compared to their input size:
Some problems may have multiple algorithms of differing complexity, while other problems might have no algorithms or no known efficient algorithms. There are also mappings from some problems to other problems. Owing to this, it was found to be more suitable to classify the problems themselves instead of the algorithms into equivalence classes based on the complexity of the best possible algorithms for them.
The adjective "continuous" when applied to the word "algorithm" can mean:
Algorithms, by themselves, are not usually patentable. In the United States, a claim consisting solely of simple manipulations of abstract concepts, numbers, or signals does not constitute "processes" (USPTO 2006), and hence algorithms are not patentable (as in Gottschalk v. Benson). However practical applications of algorithms are sometimes patentable. For example, in Diamond v. Diehr, the application of a simple feedback algorithm to aid in the curing of synthetic rubber was deemed patentable. The patenting of software is highly controversial, and there are highly criticized patents involving algorithms, especially data compression algorithms, such as Unisys' LZW patent.
Additionally, some cryptographic algorithms have export restrictions (see export of cryptography).
The earliest evidence of algorithms is found in the Babylonian mathematics of ancient Mesopotamia (modern Iraq). A Sumerian clay tablet found in Shuruppak near Baghdad and dated to c. 2500 BC described the earliest division algorithm. During the Hammurabi dynasty, c. 1800–1600 BC, Babylonian clay tablets described algorithms for computing formulas. Algorithms were also used in Babylonian astronomy: Babylonian clay tablets describe and employ algorithmic procedures to compute the time and place of significant astronomical events.
Algorithms for arithmetic are also found in ancient Egyptian mathematics, dating back to the Rhind Mathematical Papyrus circa 1550 BC. Algorithms were later used in ancient Hellenistic mathematics. Two examples are the Sieve of Eratosthenes, which was described in the "Introduction to Arithmetic" by Nicomachus, and the Euclidean algorithm, which was first described in "Euclid's Elements" (c. 300 BC).
Tally-marks: To keep track of their flocks, their sacks of grain and their money the ancients used tallying: accumulating stones or marks scratched on sticks or making discrete symbols in clay. Through the Babylonian and Egyptian use of marks and symbols, eventually Roman numerals and the abacus evolved (Dilson, p. 16–41). Tally marks appear prominently in unary numeral system arithmetic used in Turing machine and Post–Turing machine computations.
Muhammad ibn Mūsā al-Khwārizmī, a Persian mathematician, wrote the "Al-jabr" in the 9th century. The terms "algorism" and "algorithm" are derived from the name al-Khwārizmī, while the term "algebra" is derived from the book "Al-jabr". In Europe, the word "algorithm" was originally used to refer to the sets of rules and techniques used by Al-Khwarizmi to solve algebraic equations, before later being generalized to refer to any set of rules or techniques. This eventually culminated in Leibniz's notion of the calculus ratiocinator (ca 1680):
The first cryptographic algorithm for deciphering encrypted code was developed by Al-Kindi, a 9th-century Arab mathematician, in "A Manuscript On Deciphering Cryptographic Messages". He gave the first description of cryptanalysis by frequency analysis, the earliest codebreaking algorithm.
"The clock": Bolter credits the invention of the weight-driven clock as "The key invention [of Europe in the Middle Ages]", in particular, the verge escapement that provides us with the tick and tock of a mechanical clock. "The accurate automatic machine" led immediately to "mechanical automata" beginning in the 13th century and finally to "computational machines"—the difference engine and analytical engines of Charles Babbage and Countess Ada Lovelace, mid-19th century. Lovelace is credited with the first creation of an algorithm intended for processing on a computer—Babbage's analytical engine, the first device considered a real Turing-complete computer instead of just a calculator—and is sometimes called "history's first programmer" as a result, though a full implementation of Babbage's second device would not be realized until decades after her lifetime.
"Logical machines 1870 – Stanley Jevons' "logical abacus" and "logical machine"": The technical problem was to reduce Boolean equations when presented in a form similar to what is now known as Karnaugh maps. Jevons (1880) describes first a simple "abacus" of "slips of wood furnished with pins, contrived so that any part or class of the [logical] combinations can be picked out mechanically ... More recently, however, I have reduced the system to a completely mechanical form, and have thus embodied the whole of the indirect process of inference in what may be called a "Logical Machine"" His machine came equipped with "certain moveable wooden rods" and "at the foot are 21 keys like those of a piano [etc] ...". With this machine he could analyze a "syllogism or any other simple logical argument".
This machine he displayed in 1870 before the Fellows of the Royal Society. Another logician, John Venn, however, in his 1881 "Symbolic Logic", turned a jaundiced eye to this effort: "I have no high estimate myself of the interest or importance of what are sometimes called logical machines ... it does not seem to me that any contrivances at present known or likely to be discovered really deserve the name of logical machines"; see more at Algorithm characterizations. But, not to be outdone, he too presented "a plan somewhat analogous, I apprehend, to Prof. Jevon's "abacus" ... [And] [a]gain, corresponding to Prof. Jevons's logical machine, the following contrivance may be described. I prefer to call it merely a logical-diagram machine ... but I suppose that it could do very completely all that can be rationally expected of any logical machine".
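Jevons's "indirect process of inference" can be sketched in a few lines: enumerate every combination of truth values for the terms, then mechanically discard the combinations that contradict a premise. This is an illustrative modern reconstruction, not Jevons's own notation; the function and the example premise are my own.

```python
from itertools import product

def surviving_combinations(terms, premises):
    """Jevons-style indirect inference: list every combination of
    truth values for the given terms, then strike out those that
    contradict any premise.  What survives is everything the
    'logical machine' can tell us about the terms."""
    survivors = []
    for values in product([True, False], repeat=len(terms)):
        row = dict(zip(terms, values))
        if all(p(row) for p in premises):
            survivors.append(row)
    return survivors

# Example premise: "All A are B" (A implies B).  Of the four
# combinations of A and B, only (A true, B false) is struck out.
rows = surviving_combinations(
    ["A", "B"],
    [lambda r: (not r["A"]) or r["B"]],
)
```

The surviving rows play the same role as the slips left standing on Jevons's abacus: they are the only states of the terms consistent with the premises.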
"Jacquard loom, Hollerith punch cards, telegraphy and telephony – the electromechanical relay": Bell and Newell (1971) indicate that the Jacquard loom (1801), precursor to Hollerith cards (punch cards, 1887), and "telephone switching technologies" were the roots of a tree leading to the development of the first computers. By the mid-19th century the telegraph, the precursor of the telephone, was in use throughout the world, its discrete and distinguishable encoding of letters as "dots and dashes" a common sound. By the late 19th century the ticker tape (ca 1870s) was in use, as was the use of Hollerith cards in the 1890 U.S. census. Then came the teleprinter (ca. 1910) with its punched-paper use of Baudot code on tape.
"Telephone-switching networks" of electromechanical relays (invented 1835) were behind the work of George Stibitz (1937), the inventor of the digital adding device. As he worked at Bell Laboratories, he observed the "burdensome" use of mechanical calculators with gears. "He went home one evening in 1937 intending to test his idea... When the tinkering was over, Stibitz had constructed a binary adding device".
Davis (2000) observes the particular importance of the electromechanical relay (with its two "binary states" "open" and "closed"):
"Symbols and rules": In rapid succession, the mathematics of George Boole (1847, 1854), Gottlob Frege (1879), and Giuseppe Peano (1888–1889) reduced arithmetic to a sequence of symbols manipulated by rules. Peano's "The principles of arithmetic, presented by a new method" (1888) was "the first attempt at an axiomatization of mathematics in a symbolic language".
But Heijenoort gives Frege (1879) this kudos: Frege's is "perhaps the most important single work ever written in logic. ... in which we see a "formula language", that is a "lingua characterica", a language written with special symbols, "for pure thought", that is, free from rhetorical embellishments ... constructed from specific symbols that are manipulated according to definite rules". The work of Frege was further simplified and amplified by Alfred North Whitehead and Bertrand Russell in their Principia Mathematica (1910–1913).
"The paradoxes": At the same time a number of disturbing paradoxes appeared in the literature, in particular, the Burali-Forti paradox (1897), the Russell paradox (1902–03), and the Richard Paradox. The resultant considerations led to Kurt Gödel's paper (1931)—he specifically cites the paradox of the liar—that completely reduces rules of recursion to numbers.
"Effective calculability": In an effort to solve the Entscheidungsproblem, defined precisely by Hilbert in 1928, mathematicians first set about to define what was meant by an "effective method" or "effective calculation" or "effective calculability" (i.e., a calculation that would succeed). In rapid succession the following appeared: Alonzo Church, Stephen Kleene and J.B. Rosser's λ-calculus; a finely honed definition of "general recursion" from the work of Gödel acting on suggestions of Jacques Herbrand (cf. Gödel's Princeton lectures of 1934) and subsequent simplifications by Kleene; Church's proof that the Entscheidungsproblem was unsolvable; Emil Post's definition of effective calculability as a worker mindlessly following a list of instructions to move left or right through a sequence of rooms and while there either mark or erase a paper or observe the paper and make a yes-no decision about the next instruction; Alan Turing's proof that the Entscheidungsproblem was unsolvable by use of his "a- [automatic-] machine", in effect almost identical to Post's "formulation"; J. Barkley Rosser's definition of "effective method" in terms of "a machine"; and S.C. Kleene's proposal of a precursor to "Church's thesis" that he called "Thesis I", followed a few years later by Kleene's renaming of his Thesis "Church's Thesis" and proposing "Turing's Thesis".
Emil Post (1936) described the actions of a "computer" (human being) as follows:
His symbol space would be
Alan Turing's work preceded that of Stibitz (1937); it is unknown whether Stibitz knew of the work of Turing. Turing's biographer believed that Turing's use of a typewriter-like model derived from a youthful interest: "Alan had dreamt of inventing typewriters as a boy; Mrs. Turing had a typewriter, and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'". Given the prevalence of Morse code and telegraphy, ticker tape machines, and teletypewriters we might conjecture that all were influences.
Turing—his model of computation is now called a Turing machine—begins, as did Post, with an analysis of a human computer that he whittles down to a simple set of basic motions and "states of mind". But he continues a step further and creates a machine as a model of computation of numbers.
Turing's reduction yields the following:
"It may be that some of these change necessarily invoke a change of state of mind. The most general single operation must, therefore, be taken to be one of the following:
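A machine of this reduced kind, a finite table of moves over states and tape symbols, is easy to simulate. The sketch below is illustrative; the transition table implements a binary incrementer, which is my example rather than one of Turing's.

```python
def run_turing_machine(table, tape, state="start", head=0, max_steps=1000):
    """Simulate a one-tape Turing machine.

    `table` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left) or +1 (right).  The machine halts when it
    enters the state 'halt'.  Blank cells read as ' '.
    """
    cells = dict(enumerate(tape))  # sparse tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, " ")
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

# Illustrative program: add 1 to a binary number, with the head
# starting on the leftmost digit.
increment = {
    ("start", "0"): ("0", +1, "start"),  # scan right past the digits
    ("start", "1"): ("1", +1, "start"),
    ("start", " "): (" ", -1, "carry"),  # step back onto the last digit
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry -> 0, carry left
    ("carry", "0"): ("1", -1, "halt"),   # 0 + carry -> 1, done
    ("carry", " "): ("1", -1, "halt"),   # overflow: new leading 1
}
```

For example, `run_turing_machine(increment, "1011")` returns `"1100"`. The point of the model is exactly the one Turing argued: the whole computation is driven by a finite table of state/symbol rules, nothing more.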
A few years later, Turing expanded his analysis (thesis, definition) with this forceful expression of it:
J. Barkley Rosser defined an 'effective [mathematical] method' in the following manner (italicization added):
Rosser's footnote No. 5 references the work of (1) Church and Kleene and their definition of λ-definability, in particular Church's use of it in his "An Unsolvable Problem of Elementary Number Theory" (1936); (2) Herbrand and Gödel and their use of recursion in particular Gödel's use in his famous paper "On Formally Undecidable Propositions of Principia Mathematica and Related Systems I" (1931); and (3) Post (1936) and Turing (1936–37) in their mechanism-models of computation.
Stephen C. Kleene defined his now-famous "Thesis I", known as the Church–Turing thesis. But he did this in the following context (boldface in original):
A number of efforts have been directed toward further refinement of the definition of "algorithm", and activity is ongoing because of issues surrounding, in particular, foundations of mathematics (especially the Church–Turing thesis) and philosophy of mind (especially arguments about artificial intelligence). For more, see Algorithm characterizations.
Anthophyta
The anthophytes were thought to be a clade comprising plants bearing flower-like structures. The group contained the angiosperms - the extant flowering plants, such as roses and grasses - as well as the Gnetales and the extinct Bennettitales.
Detailed morphological and molecular studies have shown that the group is not actually monophyletic, with proposed floral homologies of the gnetophytes and the angiosperms having evolved in parallel. This makes it easier to reconcile molecular clock data that suggests that the angiosperms diverged from the gymnosperms around .
Some more recent studies have used the word anthophyte to describe a group which includes the angiosperms and a variety of fossils (glossopterids, "Pentoxylon", Bennettitales, and "Caytonia"), but not the Gnetales.
Mouthwash
Mouthwash, mouth rinse, oral rinse, or mouth bath is a liquid which is held in the mouth passively or swilled around the mouth by contraction of the perioral muscles and/or movement of the head, and may be gargled, where the head is tilted back and the liquid bubbled at the back of the mouth.
Usually mouthwashes are antiseptic solutions intended to reduce the microbial load in the oral cavity, although other mouthwashes might be given for other reasons such as for their analgesic, anti-inflammatory or anti-fungal action. Additionally, some rinses act as saliva substitutes to neutralize acid and keep the mouth moist in xerostomia (dry mouth). Cosmetic mouthrinses temporarily control or reduce bad breath and leave the mouth with a pleasant taste.
Rinsing with water or mouthwash after brushing with a fluoride toothpaste can reduce the availability of salivary fluoride. This can lower the anti-cavity re-mineralization and antibacterial effects of fluoride. Fluoridated mouthwash may mitigate this effect or in high concentrations increase available fluoride. A group of experts discussing post brushing rinsing in 2012 found that although there was clear guidance given in many public health advice publications to "spit, avoid rinsing with water/excessive rinsing with water" they believed there was a limited evidence base for best practice.
Common use involves rinsing the mouth with about 20-50 ml (2/3 fl oz) of mouthwash. The wash is typically swished or gargled for about half a minute and then spat out. Most companies suggest not drinking water immediately after using mouthwash. In some brands, the expectorate is stained, so that one can see the bacteria and debris.
Mouthwash should not be used immediately after brushing the teeth so as not to wash away the beneficial fluoride residue left from the toothpaste. Similarly, the mouth should not be rinsed out with water after brushing. Patients were told to "spit don't rinse" after toothbrushing as part of a National Health Service campaign in the UK.
Gargling is where the head is tilted back, allowing the mouthwash to sit in the back of the mouth while exhaling, causing the liquid to bubble. Gargling is practiced in Japan for perceived prevention of viral infection. One commonly used way is with infusions or tea. In some cultures, gargling is usually done in private, typically in a bathroom at a sink so the liquid can be rinsed away.
The most common use of mouthwash is commercial antiseptics, which are used at home as part of an oral hygiene routine. Examples of commercial mouthwash brands include Cēpacol, Colgate, Corsodyl, Dentyl pH, Listerine, Oral-B, Sarakan, Scope, Tantum verde, and Biotene. Mouthwashes combine ingredients to treat a variety of oral conditions. Variations are common, and mouthwash has no standard formulation, so its use and recommendation involves concerns about patient safety. Some manufacturers of mouthwash state that antiseptic and anti-plaque mouth rinses kill the bacterial plaque that causes cavities, gingivitis, and bad breath. It is, however, generally agreed that the use of mouthwash does not eliminate the need for both brushing and flossing. The American Dental Association asserts that regular brushing and proper flossing are enough in most cases, in addition to regular dental check-ups, although they approve many mouthwashes.
For many patients, however, the mechanical methods could be tedious and time-consuming and additionally some local conditions may render them especially difficult. Chemotherapeutic agents, including mouthrinses, could have a key role as adjuncts to daily home care, preventing and controlling supragingival plaque, gingivitis and oral malodor.
Minor and transient side effects of mouthwashes are very common, such as taste disturbance, tooth staining, sensation of a dry mouth, etc. Alcohol-containing mouthwashes may make dry mouth and halitosis worse since they dry out the mouth. Soreness, ulceration and redness may sometimes occur (e.g. aphthous stomatitis, allergic contact stomatitis) if the person is allergic or sensitive to mouthwash ingredients such as preservatives, coloring, flavors and fragrances. Such effects might be reduced or eliminated by diluting the mouthwash with water, using a different mouthwash (e.g. salt water), or foregoing mouthwash entirely.
Prescription mouthwashes are used prior to and after oral surgery procedures such as tooth extraction or to treat the pain associated with mucositis caused by radiation therapy or chemotherapy. They are also prescribed for aphthous ulcers, other oral ulcers, and other mouth pain. Magic mouthwashes are prescription mouthwashes compounded in a pharmacy from a list of ingredients specified by a doctor. Despite a lack of evidence that prescription mouthwashes are more effective in decreasing the pain of oral lesions, many patients and prescribers continue to use them. There has been only one controlled study to evaluate the efficacy of magic mouthwash; it shows no difference in efficacy between the most common formulation and commercial mouthwashes such as chlorhexidine or a saline/baking soda solution. Current guidelines suggest that saline solution is just as effective as magic mouthwash in pain relief or shortening of healing time of oral mucositis from cancer therapies.
The first known references to mouth rinsing are in Ayurveda, for the treatment of gingivitis. Later, in the Greek and Roman periods, mouth rinsing following mechanical cleansing became common among the upper classes, and Hippocrates recommended a mixture of salt, alum, and vinegar. The Jewish Talmud, dating back about 1,800 years, suggests a cure for gum ailments containing "dough water" and olive oil.
Before Europeans came to the Americas, Native North American and Mesoamerican cultures used mouthwashes, often made from plants such as "Coptis trifolia". Indeed, Aztec dentistry was more advanced than European dentistry of the age. Peoples of the Americas used salt water mouthwashes for sore throats, and other mouthwashes for problems such as teething and mouth ulcers.
Anton van Leeuwenhoek, the famous 17th century microscopist, discovered living organisms (living, because they were mobile) in deposits on the teeth (what we now call dental plaque). He also found organisms in water from the canal next to his home in Delft. He experimented with samples by adding vinegar or brandy and found that this resulted in the immediate immobilization or killing of the organisms suspended in water. Next he tried rinsing the mouth of himself and somebody else with a mouthwash containing vinegar or brandy and found that living organisms remained in the dental plaque. He concluded—correctly—that the mouthwash either did not reach, or was not present long enough, to kill the plaque organisms.
In 1892, German Richard Seifert invented mouthwash product Odol, which was produced by company founder Karl August Lingner (1861–1916) in Dresden.
That remained the state of affairs until the late 1960s when Harald Loe (at the time a professor at the Royal Dental College in Aarhus, Denmark) demonstrated that a chlorhexidine compound could prevent the build-up of dental plaque. The reason for chlorhexidine's effectiveness is that it strongly adheres to surfaces in the mouth and thus remains present in effective concentrations for many hours.
Since then commercial interest in mouthwashes has been intense and several newer products claim effectiveness in reducing the build-up in dental plaque and the associated severity of gingivitis, in addition to fighting bad breath. Many of these solutions aim to control the Volatile Sulfur Compound (VSC)-creating anaerobic bacteria that live in the mouth and excrete substances that lead to bad breath and unpleasant mouth taste. For example, the number of mouthwash variants in the United States of America has grown from 15 (1970) to 66 (1998) to 113 (2012).
Research in the field of microbiotas shows that only a limited set of microbes cause tooth decay, with most of the bacteria in the human mouth being harmless. Focused attention on cavity-causing bacteria such as "Streptococcus mutans" has led research into new mouthwash treatments that prevent these bacteria from initially growing. While current mouthwash treatments must be used with a degree of frequency to prevent this bacteria from regrowing, future treatments could provide a viable long-term solution.
Alcohol is added to mouthwash not to destroy bacteria but to act as a carrier agent for essential active ingredients such as menthol, eucalyptol and thymol which help to penetrate plaque. Sometimes a significant amount of alcohol (up to 27% vol) is added, as a carrier for the flavor, to provide "bite". Because of the alcohol content, it is possible to fail a breathalyzer test after rinsing although breath alcohol levels return to normal after 10 minutes. In addition, alcohol is a drying agent, which encourages bacterial activity in the mouth, releasing more malodorous volatile sulfur compounds. Therefore, alcohol-containing mouthwash may temporarily worsen halitosis in those who already have it, or indeed be the sole cause of halitosis in other individuals.
It is hypothesized that alcohol-containing mouthwashes act as a carcinogen (cancer-inducing agent). Generally, there is no scientific consensus on this. One review stated:
The same researchers also state that the risk of acquiring oral cancer rises almost five times for users of alcohol-containing mouthwash who neither smoke nor drink (with a higher rate of increase for those who do). In addition, the authors highlight side effects from several mainstream mouthwashes that included dental erosion and accidental poisoning of children. The review garnered media attention and conflicting opinions from other researchers. Yinka Ebo of Cancer Research UK disputed the findings, concluding that "there is still not enough evidence to suggest that using mouthwash that contains alcohol will increase the risk of mouth cancer". Studies conducted in 1985, 1995, 2003, and 2012 did not support an association between alcohol-containing mouth rinses and oral cancer. Andrew Penman, chief executive of The Cancer Council New South Wales, called for further research on the matter. In a March 2009 brief, the American Dental Association said "the available evidence does not support a connection between oral cancer and alcohol-containing mouthrinse". Many newer brands of mouthwash are alcohol free, not just in response to consumer concerns about oral cancer, but also to cater for religious groups who abstain from alcohol consumption.
In painful oral conditions such as aphthous stomatitis, analgesic mouthrinses (e.g. benzydamine mouthwash, or "Difflam") are sometimes used to ease pain, commonly used before meals to reduce discomfort while eating.
Acts as a buffer
Betamethasone is sometimes used as an anti-inflammatory, corticosteroid mouthwash. It may be used for severe inflammatory conditions of the oral mucosa such as the severe forms of aphthous stomatitis.
Cetylpyridinium chloride containing mouthwash (e.g. 0.05%) is used in some specialized mouthwashes for halitosis. Cetylpyridinium chloride mouthwash has less anti-plaque effect than chlorhexidine and may cause staining of teeth, or sometimes an oral burning sensation or ulceration.
Chlorhexidine digluconate is a chemical antiseptic and is used in a 0.12–0.2% solution as a mouthwash. However, there is no evidence to support that higher concentrations are more effective in controlling dental plaque and gingivitis. It has anti-plaque action, but also some anti-fungal action. It is especially effective against Gram-negative rods. The proportion of Gram-negative rods increase as gingivitis develops so it is also used to reduce gingivitis. It is sometimes used as an adjunct to prevent dental caries and to treat gingivitis periodontal disease, although it does not penetrate into periodontal pockets well. Chlorhexidine mouthwash alone is unable to prevent plaque, so it is not a substitute for regular toothbrushing and flossing. Instead, chlorhexidine is more effective used as an adjunctive treatment with tooth brushing and flossing. In the short term, if toothbrushing is impossible due to pain, as may occur in primary herpetic gingivostomatitis, chlorhexidine is used as temporary substitute for other oral hygiene measures. It is not suited for use in acute necrotizing ulcerative gingivitis, however. Rinsing with chlorhexidine mouthwash before a tooth extraction reduces the risk of dry socket, a painful condition where the blood clot is lost from an extraction socket and bone is exposed to the oral cavity. Other uses of chlorhexidine mouthwash include prevention of oral candidiasis in immunocompromised persons, treatment of denture-related stomatitis, mucosal ulceration/erosions and oral mucosal lesions, general burning sensation and many other uses.
Chlorhexidine has good "substantivity" (the ability of a mouthwash to bind to hard and soft tissues in the mouth). However, chlorhexidine binds to tannins, meaning that prolonged use in persons who consume coffee, tea or red wine is associated with extrinsic staining (i.e. removable staining) of teeth. Chlorhexidine mouthwash can also cause taste disturbance or alteration. Chlorhexidine is rarely associated with other issues like overgrowth of enterobacteria in persons with leukemia, desquamation and irritation of oral mucosa, salivary gland pain and swelling, and hypersensitivity reactions including anaphylaxis. A randomized clinical trial conducted in Rabat university in Morocco found better results in plaque inhibition when chlorohexidine with alcohol base 0.12% was used, when compared to an alcohol free 0.1% chlorhexidine mouthrinse. Chlorhexidine mouthrinses increase staining score of teeth over a period of time.
Hexetidine also has anti-plaque, analgesic, astringent and anti-malodor properties, but is considered an inferior alternative to chlorhexidine.
In traditional Ayurvedic medicine, the use of oil mouthwashes is called "Kavala" ("oil swishing") or "Gandusha", and this practice has more recently been re-marketed by the complementary and alternative medicine industry as "oil pulling". Its promoters claim it works by "pulling out" "toxins", which are known as ama in Ayurvedic medicine, and thereby reducing inflammation. Ayurvedic literature suggests oil pulling is capable of improving oral and systemic health, including a benefit in conditions such as headaches, migraines, diabetes mellitus, asthma, and acne, as well as whitening teeth.
Oil pulling has received little study and there is little evidence to support claims made by the technique's advocates. When compared with chlorhexidine in one small study, it was found to be less effective at reducing oral bacterial load, otherwise the health claims of oil pulling have failed scientific verification or have not been investigated. There is a report of lipid pneumonia caused by accidental inhalation of the oil during oil pulling.
The mouth is rinsed with approximately one tablespoon of oil for 10–20 minutes then spat out. Sesame oil, coconut oil and ghee are traditionally used, but newer oils such as sunflower oil are also used.
Phenolic compounds include essential oil constituents that have some antibacterial properties, like phenol, thymol, eugenol, or eucalyptol.
Essential oils are oils which have been extracted from plants. Mouthwashes based on essential oils could be more effective than traditional mouthcare for anti-gingivitis treatment. They have been found effective in reducing halitosis, and are used in several commercial mouthwashes.
Anti-cavity mouth rinses use fluoride to protect against tooth decay. Most people using fluoridated toothpastes do not require fluoride-containing mouth rinses, rather fluoride mouthwashes are sometimes used in individuals who are at high risk of dental decay, due to dental caries ("cavities") or people with xerostomia.
Flavoring agents include sweeteners such as sorbitol, sucralose, sodium saccharin, and xylitol, which stimulate salivary function due to their sweetness and taste and helps restore the mouth to a neutral level of acidity.
Xylitol rinses double as a bacterial inhibitor and have been used as a substitute for alcohol, avoiding the dryness of mouth associated with alcohol.
Hydrogen peroxide can be used as an oxidizing mouthwash (e.g. Peroxyl, 1.5%). It kills anaerobic bacteria, and also has a mechanical cleansing action when it froths as it comes into contact with debris in mouth. It is often used in the short term to treat acute necrotising ulcerative gingivitis. Side effects with prolonged use might occur, including hypertrophy of the lingual papillae.
Enzymes and proteins such as Lactoperoxidase, Lysozyme, Lactoferrin have been used in mouthrinses (e.g. Biotene) to reduce oral bacteria and hence the acid produced by bacteria.
Oral lidocaine is useful for the treatment of mucositis symptoms (inflammation of mucous membranes) that is induced by radiation or chemotherapy. There is evidence that lidocaine anesthetic mouthwash has the potential to be systemically absorbed when it was tested in patients with oral mucositis who underwent a bone marrow transplant.
Methyl salicylate functions as an anti-septic, anti-inflammatory, analgesic, flavoring, and fragrance. Methyl salicylate has some anti-plaque action, but less than chlorhexidine. Methyl salicylate does not stain teeth.
Nystatin suspension is an antifungal ingredient used for the treatment of oral candidiasis.
A randomized clinical trial found promising results in controlling and reducing dentine hypersensitivity when potassium oxalate mouthrinse was used in conjunction with toothbrushing.
A 2005 study found that gargling three times a day with simple water or with a Povidone-iodine solution was effective in preventing upper respiratory infection and decreasing the severity of symptoms if contracted. Other sources attribute the benefit to a simple placebo effect.
Sanguinarine-containing mouthwashes are marketed as anti-plaque and anti-malodor. It is a toxic alkaloid herbal extract, obtained from plants such as "Sanguinaria canadensis" (Bloodroot), "Argemone mexicana" (Mexican Prickly Poppy) and others. However, its use is strongly associated with development of leukoplakia (a white patch in the mouth), usually in the buccal sulcus. This type of leukoplakia has been termed "sanguinaria-associated keratosis" and more than 80% of people with leukoplakia in the vestibule of the mouth have used this substance. Upon stopping contact with the causative substance, the lesions may persist for years. Although this type of leukoplakia may show dysplasia, the potential for malignant transformation is unknown. Ironically, elements within the complementary and alternative medicine industry promote the use of sanguinaria as a therapy for cancer.
Sodium bicarbonate is sometimes combined with salt to make a simple homemade mouthwash, indicated for any of the reasons that a salt water mouthwash might be used. Pre-mixed mouthwashes of 1% sodium bicarbonate and 1.5% sodium chloride in aqueous solution are marketed, although pharmacists will easily be able to produce such a formulation from the base ingredients when required. Sodium bicarbonate mouthwash is sometimes used to remove viscous saliva and to aid visualization of the oral tissues during examination of the mouth.
Salt water mouth wash is made by dissolving 0.5–1 teaspoon of table salt into a cup of water, which is as hot as possible without causing discomfort in the mouth. Saline has a mechanical cleansing action and an antiseptic action as it is a hypertonic solution in relation to bacteria, which undergo lysis. The heat of the solution produces a therapeutic increase in blood flow (hyperemia) to the surgical site, promoting healing. Hot salt water mouthwashes also encourage the draining of pus from dental abscesses. Conversely, if heat is applied on the side of the face (e.g., hot water bottle) rather than inside the mouth, it may cause a dental abscess to drain extra-orally, which is later associated with an area of fibrosis on the face (see cutaneous sinus of dental origin). Gargling with salt water is said to reduce the symptoms of a sore throat.
Hot salt water mouth baths (or hot salt water mouth washes, sometimes abbreviated to "HSWMW") are also routinely used after oral surgery, to keep food debris out of healing wounds and to prevent infection. Some oral surgeons consider salt water mouthwashes the mainstay of wound cleanliness after surgery. In dental extractions, hot salt water mouthbaths should start about 24 hours after a dental extraction. The term "mouth bath" implies that the liquid is passively held in the mouth rather than vigorously swilled around, which could dislodge a blood clot. Once the blood clot has stabilized, the mouth wash can be used more vigorously. These mouthwashes tend to be advised about 6 times per day, especially after meals to remove food from the socket.
Sodium lauryl sulfate (SLS) is used as a foaming agent in many oral hygiene products, including many mouthwashes. It may be advisable to use mouthwash at least an hour after brushing with a toothpaste that contains SLS, since the anionic compounds in the SLS toothpaste can deactivate cationic agents present in the mouthrinse.
Sucralfate is a mucosal coating agent, composed of an aluminum salt of sulfated sucrose. It is not recommended for use in the prevention of oral mucositis in head and neck cancer patients receiving radiotherapy or chemoradiation due to a lack of efficacy found in a well-designed, randomized controlled trial.
Tetracycline is an antibiotic which may sometimes be used as a mouthwash in adults (it causes red staining of teeth in children). It is sometimes used for herpetiform ulceration (an uncommon type of aphthous stomatitis), but prolonged use may lead to oral candidiasis, as the fungal population of the mouth overgrows in the absence of enough competing bacteria. Similarly, minocycline mouthwashes of 0.5% concentration can relieve symptoms of recurrent aphthous stomatitis. Erythromycin is similar.
4.8% tranexamic acid solution is sometimes used as an antifibrinolytic mouthwash to prevent bleeding during and after oral surgery in persons with coagulopathies (clotting disorders) or who are taking anticoagulants (blood thinners such as warfarin).
Triclosan is a non-ionic chlorinated bisphenol antiseptic found in some mouthwashes. When used in mouthwash (e.g. 0.03%), there is moderate substantivity, broad-spectrum anti-bacterial action, some anti-fungal action and significant anti-plaque effect, especially when combined with copolymer or zinc citrate. Triclosan does not cause staining of the teeth. The safety of triclosan has been questioned.
Astringents like zinc chloride provide a pleasant-tasting sensation and shrink tissues. Zinc, when used in combination with other antiseptic agents, can limit the build-up of tartar.
While mouthwash is traditionally mint flavored, Listerine sells citrus flavored mouthwash as well, and Oxyfresh sells mouthwash flavored as lemon mint and unflavored. The company Closys sells unflavored mouthwash with flavor control, with a special mouthwash for seniors.
Alexander the Great
Alexander III of Macedon (20/21 July 356 BC – 10/11 June 323 BC), commonly known as Alexander the Great, was a king ("basileus") of the ancient Greek kingdom of Macedon and a member of the Argead dynasty. He was born in Pella in 356 BC and succeeded his father Philip II to the throne at the age of 20. He spent most of his ruling years on an unprecedented military campaign through western Asia and northeast Africa, and by the age of thirty, he had created one of the largest empires of the ancient world, stretching from Greece to northwestern India. He was undefeated in battle and is widely considered one of history's most successful military commanders.
During his youth, Alexander was tutored by Aristotle until age 16. After Philip's assassination in 336 BC, he succeeded his father to the throne and inherited a strong kingdom and an experienced army. Alexander was awarded the generalship of Greece and used this authority to launch his father's pan-Hellenic project to lead the Greeks in the conquest of Persia. In 334 BC, he invaded the Achaemenid Empire (Persian Empire) and began a series of campaigns that lasted 10 years. Following the conquest of Anatolia, Alexander broke the power of Persia in a series of decisive battles, most notably the battles of Issus and Gaugamela. He subsequently overthrew Persian King Darius III and conquered the Achaemenid Empire in its entirety. At that point, his empire stretched from the Adriatic Sea to the Beas River.
Alexander endeavoured to reach the "ends of the world and the Great Outer Sea" and invaded India in 326 BC, winning an important victory over the Pauravas at the Battle of the Hydaspes. He eventually turned back at the demand of his homesick troops, dying in Babylon in 323 BC, the city that he planned to establish as his capital, without executing a series of planned campaigns that would have begun with an invasion of Arabia. In the years following his death, a series of civil wars tore his empire apart, resulting in the establishment of several states ruled by the Diadochi, Alexander's surviving generals and heirs.
Alexander's legacy includes the cultural diffusion and syncretism which his conquests engendered, such as Greco-Buddhism. He founded some twenty cities that bore his name, most notably Alexandria in Egypt. Alexander's settlement of Greek colonists and the resulting spread of Greek culture in the east resulted in a new Hellenistic civilization, aspects of which were still evident in the traditions of the Byzantine Empire in the mid-15th century AD and the presence of Greek speakers in central and far eastern Anatolia until the Greek genocide of the 1920s. Alexander became legendary as a classical hero in the mould of Achilles, and he features prominently in the history and mythic traditions of both Greek and non-Greek cultures. He was undefeated in battle and became the measure against which military leaders compared themselves. Military academies throughout the world still teach his tactics. He is often ranked among the most influential people in history.
Alexander was born in Pella, the capital of the Kingdom of Macedon, on the sixth day of the ancient Greek month of Hekatombaion, which probably corresponds to 20 July 356 BC, although the exact date is uncertain. He was the son of the king of Macedon, Philip II, and his fourth wife, Olympias, the daughter of Neoptolemus I, king of Epirus. Although Philip had seven or eight wives, Olympias was his principal wife for some time, likely because she gave birth to Alexander.
Several legends surround Alexander's birth and childhood. According to the ancient Greek biographer Plutarch, on the eve of the consummation of her marriage to Philip, Olympias dreamed that her womb was struck by a thunderbolt that caused a flame to spread "far and wide" before dying away. Sometime after the wedding, Philip is said to have seen himself, in a dream, securing his wife's womb with a seal engraved with a lion's image. Plutarch offered a variety of interpretations of these dreams: that Olympias was pregnant before her marriage, indicated by the sealing of her womb; or that Alexander's father was Zeus. Ancient commentators were divided about whether the ambitious Olympias promulgated the story of Alexander's divine parentage, variously claiming that she had told Alexander, or that she dismissed the suggestion as impious.
On the day Alexander was born, Philip was preparing a siege on the city of Potidaea on the peninsula of Chalcidice. That same day, Philip received news that his general Parmenion had defeated the combined Illyrian and Paeonian armies and that his horses had won at the Olympic Games. It was also said that on this day, the Temple of Artemis in Ephesus, one of the Seven Wonders of the World, burnt down. This led Hegesias of Magnesia to say that it had burnt down because Artemis was away, attending the birth of Alexander. Such legends may have emerged when Alexander was king, and possibly at his instigation, to show that he was superhuman and destined for greatness from conception.
In his early years, Alexander was raised by a nurse, Lanike, sister of Alexander's future general Cleitus the Black. Later in his childhood, Alexander was tutored by the strict Leonidas, a relative of his mother, and by Lysimachus of Acarnania. Alexander was raised in the manner of noble Macedonian youths, learning to read, play the lyre, ride, fight, and hunt.
When Alexander was ten years old, a trader from Thessaly brought Philip a horse, which he offered to sell for thirteen talents. The horse refused to be mounted, and Philip ordered it away. Alexander, however, detecting the horse's fear of its own shadow, asked to tame the horse, which he eventually managed. Plutarch stated that Philip, overjoyed at this display of courage and ambition, kissed his son tearfully, declaring: "My boy, you must find a kingdom big enough for your ambitions. Macedon is too small for you", and bought the horse for him. Alexander named it Bucephalas, meaning "ox-head". Bucephalas carried Alexander as far as India. When the animal died (because of old age, according to Plutarch, at age thirty), Alexander named a city after him, Bucephala.
When Alexander was 13, Philip began to search for a tutor, and considered such academics as Isocrates and Speusippus, the latter offering to resign from his stewardship of the Academy to take up the post. In the end, Philip chose Aristotle and provided the Temple of the Nymphs at Mieza as a classroom. In return for teaching Alexander, Philip agreed to rebuild Aristotle's hometown of Stageira, which Philip had razed, and to repopulate it by buying and freeing the ex-citizens who were slaves, or pardoning those who were in exile.
Mieza was like a boarding school for Alexander and the children of Macedonian nobles, such as Ptolemy, Hephaestion, and Cassander. Many of these students would become his friends and future generals, and are often known as the 'Companions'. Aristotle taught Alexander and his companions about medicine, philosophy, morals, religion, logic, and art. Under Aristotle's tutelage, Alexander developed a passion for the works of Homer, and in particular the "Iliad"; Aristotle gave him an annotated copy, which Alexander later carried on his campaigns.
During his youth, Alexander was also acquainted with Persian exiles at the Macedonian court, who received the protection of Philip II for several years as they opposed Artaxerxes III. Among them were Artabazos II and his daughter Barsine, future mistress of Alexander, who resided at the Macedonian court from 352 to 342 BC, as well as Amminapes, future satrap of Alexander, or a Persian nobleman named Sisines. This gave the Macedonian court a good knowledge of Persian issues, and may even have influenced some of the innovations in the management of the Macedonian state.
The Suda writes that Anaximenes of Lampsacus was also one of his teachers, and that Anaximenes accompanied him on his campaigns.
At age 16, Alexander's education under Aristotle ended. Philip waged war against Byzantion, leaving Alexander in charge as regent and heir apparent. During Philip's absence, the Thracian Maedi revolted against Macedonia. Alexander responded quickly, driving them from their territory. He colonized it with Greeks, and founded a city named Alexandropolis.
Upon Philip's return, he dispatched Alexander with a small force to subdue revolts in southern Thrace. Campaigning against the Greek city of Perinthus, Alexander is reported to have saved his father's life. Meanwhile, the city of Amphissa began to work lands that were sacred to Apollo near Delphi, a sacrilege that gave Philip the opportunity to further intervene in Greek affairs. Still occupied in Thrace, he ordered Alexander to muster an army for a campaign in southern Greece. Concerned that other Greek states might intervene, Alexander made it look as though he was preparing to attack Illyria instead. During this turmoil, the Illyrians invaded Macedonia, only to be repelled by Alexander.
Philip and his army joined his son in 338 BC, and they marched south through Thermopylae, taking it after stubborn resistance from its Theban garrison. They went on to occupy the city of Elatea, only a few days' march from both Athens and Thebes. The Athenians, led by Demosthenes, voted to seek alliance with Thebes against Macedonia. Both Athens and Philip sent embassies to win Thebes' favour, but Athens won the contest. Philip marched on Amphissa (ostensibly acting on the request of the Amphictyonic League), capturing the mercenaries sent there by Demosthenes and accepting the city's surrender. Philip then returned to Elatea, sending a final offer of peace to Athens and Thebes, who both rejected it.
As Philip marched south, his opponents blocked him near Chaeronea, Boeotia. During the ensuing Battle of Chaeronea, Philip commanded the right wing and Alexander the left, accompanied by a group of Philip's trusted generals. According to the ancient sources, the two sides fought bitterly for some time. Philip deliberately commanded his troops to retreat, counting on the untested Athenian hoplites to follow, thus breaking their line. Alexander was the first to break the Theban lines, followed by Philip's generals. Having damaged the enemy's cohesion, Philip ordered his troops to press forward and quickly routed them. With the Athenians lost, the Thebans were surrounded. Left to fight alone, they were defeated.
After the victory at Chaeronea, Philip and Alexander marched unopposed into the Peloponnese, welcomed by all cities; however, when they reached Sparta, they were refused, but did not resort to war. At Corinth, Philip established a "Hellenic Alliance" (modelled on the old anti-Persian alliance of the Greco-Persian Wars), which included most Greek city-states except Sparta. Philip was then named "Hegemon" (often translated as "Supreme Commander") of this league (known by modern scholars as the League of Corinth), and announced his plans to attack the Persian Empire.
When Philip returned to Pella, he fell in love with and married Cleopatra Eurydice, the niece of his general Attalus, in 338 BC. The marriage made Alexander's position as heir less secure, since any son of Cleopatra Eurydice would be a fully Macedonian heir, while Alexander was only half-Macedonian. During the wedding banquet, a drunken Attalus publicly prayed to the gods that the union would produce a legitimate heir.
In 337 BC, Alexander fled Macedon with his mother, dropping her off with her brother, King Alexander I of Epirus in Dodona, capital of the Molossians. He continued to Illyria, where he sought refuge with one or more Illyrian kings, perhaps with Glaukias, and was treated as a guest, despite having defeated them in battle a few years before. However, it appears Philip never intended to disown his politically and militarily trained son. Accordingly, Alexander returned to Macedon after six months due to the efforts of a family friend, Demaratus, who mediated between the two parties.
In the following year, the Persian satrap (governor) of Caria, Pixodarus, offered his eldest daughter to Alexander's half-brother, Philip Arrhidaeus. Olympias and several of Alexander's friends suggested this showed Philip intended to make Arrhidaeus his heir. Alexander reacted by sending an actor, Thessalus of Corinth, to tell Pixodarus that he should not offer his daughter's hand to an illegitimate son, but instead to Alexander. When Philip heard of this, he stopped the negotiations and scolded Alexander for wishing to marry the daughter of a Carian, explaining that he wanted a better bride for him. Philip exiled four of Alexander's friends, Harpalus, Nearchus, Ptolemy and Erigyius, and had the Corinthians bring Thessalus to him in chains.
In summer 336 BC, while at Aegae attending the wedding of his daughter Cleopatra to Olympias's brother, Alexander I of Epirus, Philip was assassinated by the captain of his bodyguards, Pausanias. As Pausanias tried to escape, he tripped over a vine and was killed by his pursuers, including two of Alexander's companions, Perdiccas and Leonnatus. Alexander was proclaimed king on the spot by the nobles and army at the age of 20.
Alexander began his reign by eliminating potential rivals to the throne. He had his cousin, the former Amyntas IV, executed. He also had two Macedonian princes from the region of Lyncestis killed, but spared a third, Alexander Lyncestes. Olympias had Cleopatra Eurydice and Europa, her daughter by Philip, burned alive. When Alexander learned about this, he was furious. Alexander also ordered the murder of Attalus, who was in command of the advance guard of the army in Asia Minor and Cleopatra's uncle.
Attalus was at that time corresponding with Demosthenes, regarding the possibility of defecting to Athens. Attalus also had severely insulted Alexander, and following Cleopatra's murder, Alexander may have considered him too dangerous to leave alive. Alexander spared Arrhidaeus, who was by all accounts mentally disabled, possibly as a result of poisoning by Olympias.
News of Philip's death roused many states into revolt, including Thebes, Athens, Thessaly, and the Thracian tribes north of Macedon. When news of the revolts reached Alexander, he responded quickly. Though advised to use diplomacy, Alexander mustered 3,000 Macedonian cavalry and rode south towards Thessaly. He found the Thessalian army occupying the pass between Mount Olympus and Mount Ossa, and ordered his men to ride over Mount Ossa. When the Thessalians awoke the next day, they found Alexander in their rear and promptly surrendered, adding their cavalry to Alexander's force. He then continued south towards the Peloponnese.
Alexander stopped at Thermopylae, where he was recognized as the leader of the Amphictyonic League before heading south to Corinth. Athens sued for peace and Alexander pardoned the rebels. The famous encounter between Alexander and Diogenes the Cynic occurred during Alexander's stay in Corinth. When Alexander asked Diogenes what he could do for him, the philosopher disdainfully asked Alexander to stand a little to the side, as he was blocking the sunlight. This reply apparently delighted Alexander, who is reported to have said "But verily, if I were not Alexander, I would like to be Diogenes." At Corinth, Alexander took the title of "Hegemon" ("leader") and, like Philip, was appointed commander for the coming war against Persia. He also received news of a Thracian uprising.
Before crossing to Asia, Alexander wanted to safeguard his northern borders. In the spring of 335 BC, he advanced to suppress several revolts. Starting from Amphipolis, he travelled east into the country of the "Independent Thracians"; and at Mount Haemus, the Macedonian army attacked and defeated the Thracian forces manning the heights. The Macedonians marched into the country of the Triballi, and defeated their army near the Lyginus river (a tributary of the Danube). Alexander then marched for three days to the Danube, encountering the Getae tribe on the opposite shore. Crossing the river at night, he surprised them and forced their army to retreat after the first cavalry skirmish.
News then reached Alexander that Cleitus, King of Illyria, and King Glaukias of the Taulantii were in open revolt against his authority. Marching west into Illyria, Alexander defeated each in turn, forcing the two rulers to flee with their troops. With these victories, he secured his northern frontier.
While Alexander campaigned north, the Thebans and Athenians rebelled once again. Alexander immediately headed south. While the other cities again hesitated, Thebes decided to fight. The Theban resistance was ineffective, and Alexander razed the city and divided its territory between the other Boeotian cities. The end of Thebes cowed Athens, leaving all of Greece temporarily at peace. Alexander then set out on his Asian campaign, leaving Antipater as regent.
According to ancient writers, Demosthenes called Alexander "Margites" and a boy. Greeks used the word "Margites" to describe foolish and useless people, on account of the poem "Margites".
In 336 BC Philip II had already sent Parmenion, with Amyntas, Andromenes and Attalus, and an army of 10,000 men into Anatolia to make preparations for an invasion to free the Greeks living on the western coast and islands from Achaemenid rule. At first, all went well. The Greek cities on the western coast of Anatolia revolted until the news arrived that Philip had been murdered and had been succeeded by his young son Alexander. The Macedonians were demoralized by Philip's death and were subsequently defeated near Magnesia by the Achaemenids under the command of the mercenary Memnon of Rhodes.
Taking over the invasion project of Philip II, Alexander's army crossed the Hellespont in 334 BC with approximately 48,100 soldiers, 6,100 cavalry and a fleet of 120 ships with crews numbering 38,000, drawn from Macedon and various Greek city-states, mercenaries, and feudally raised soldiers from Thrace, Paionia, and Illyria. He showed his intent to conquer the entirety of the Persian Empire by throwing a spear into Asian soil and saying he accepted Asia as a gift from the gods. This also showed Alexander's eagerness to fight, in contrast to his father's preference for diplomacy.
After an initial victory against Persian forces at the Battle of the Granicus, Alexander accepted the surrender of the Persian provincial capital and treasury of Sardis; he then proceeded along the Ionian coast, granting autonomy and democracy to the cities. Miletus, held by Achaemenid forces, required a delicate siege operation, with Persian naval forces nearby. Further south, at Halicarnassus, in Caria, Alexander successfully waged his first large-scale siege, eventually forcing his opponents, the mercenary captain Memnon of Rhodes and the Persian satrap of Caria, Orontobates, to withdraw by sea. Alexander left the government of Caria to a member of the Hecatomnid dynasty, Ada, who adopted Alexander.
From Halicarnassus, Alexander proceeded into mountainous Lycia and the Pamphylian plain, asserting control over all coastal cities to deny the Persians naval bases. From Pamphylia onwards the coast held no major ports and Alexander moved inland. At Termessos, Alexander humbled but did not storm the Pisidian city. At the ancient Phrygian capital of Gordium, Alexander "undid" the hitherto unsolvable Gordian Knot, a feat said to await the future "king of Asia". According to the story, Alexander proclaimed that it did not matter how the knot was undone and hacked it apart with his sword.
In spring 333 BC, Alexander crossed the Taurus into Cilicia. After a long pause due to an illness, he marched on towards Syria. Though outmanoeuvred by Darius' significantly larger army, he marched back to Cilicia, where he defeated Darius at Issus. Darius fled the battle, causing his army to collapse, and left behind his wife, his two daughters, his mother Sisygambis, and a fabulous treasure. He offered a peace treaty that included the lands he had already lost, and a ransom of 10,000 talents for his family. Alexander replied that since he was now king of Asia, it was he alone who decided territorial divisions.
Alexander proceeded to take possession of Syria, and most of the coast of the Levant. In the following year, 332 BC, he was forced to attack Tyre, which he captured after a long and difficult siege. The men of military age were massacred and the women and children sold into slavery.
When Alexander destroyed Tyre, most of the towns on the route to Egypt quickly capitulated. However, Alexander met with resistance at Gaza. The stronghold was heavily fortified and built on a hill, requiring a siege. When "his engineers pointed out to him that because of the height of the mound it would be impossible... this encouraged Alexander all the more to make the attempt". After three unsuccessful assaults, the stronghold fell, but not before Alexander had received a serious shoulder wound. As in Tyre, men of military age were put to the sword and the women and children were sold into slavery.
Alexander advanced on Egypt in late 332 BC, where he was regarded as a liberator. He was pronounced son of the deity Amun at the Oracle of Siwa Oasis in the Libyan desert. Henceforth, Alexander often referred to Zeus-Ammon as his true father, and after his death, currency depicted him adorned with the Horns of Ammon as a symbol of his divinity. During his stay in Egypt, he founded Alexandria-by-Egypt, which would become the prosperous capital of the Ptolemaic Kingdom after his death.
Leaving Egypt in 331 BC, Alexander marched eastward into Achaemenid Assyria in Upper Mesopotamia (now northern Iraq) and defeated Darius again at the Battle of Gaugamela. Darius once more fled the field, and Alexander chased him as far as Arbela. Gaugamela would be the final and decisive encounter between the two. Darius fled over the mountains to Ecbatana (modern Hamadan) while Alexander captured Babylon.
From Babylon, Alexander went to Susa, one of the Achaemenid capitals, and captured its treasury. He sent the bulk of his army to the Persian ceremonial capital of Persepolis via the Persian Royal Road. Alexander himself took selected troops on the direct route to the city. He then stormed the pass of the Persian Gates (in the modern Zagros Mountains) which had been blocked by a Persian army under Ariobarzanes and then hurried to Persepolis before its garrison could loot the treasury.
On entering Persepolis, Alexander allowed his troops to loot the city for several days. Alexander stayed in Persepolis for five months. During his stay a fire broke out in the eastern palace of Xerxes I and spread to the rest of the city. Possible causes include a drunken accident or deliberate revenge for the burning of the Acropolis of Athens during the Second Persian War by Xerxes; Plutarch and Diodorus allege that Alexander's companion, the hetaera Thaïs, instigated and started the fire. Even as he watched the city burn, Alexander immediately began to regret his decision. Plutarch claims that he ordered his men to put out the fires, but that the flames had already spread to most of the city. Curtius claims that Alexander did not regret his decision until the next morning. Plutarch recounts an anecdote in which Alexander pauses and talks to a fallen statue of Xerxes as if it were a live person:
Alexander then chased Darius, first into Media, and then Parthia. The Persian king no longer controlled his own destiny, and was taken prisoner by Bessus, his Bactrian satrap and kinsman. As Alexander approached, Bessus had his men fatally stab the Great King and then declared himself Darius' successor as Artaxerxes V, before retreating into Central Asia to launch a guerrilla campaign against Alexander. Alexander buried Darius' remains next to his Achaemenid predecessors in a regal funeral. He claimed that, while dying, Darius had named him as his successor to the Achaemenid throne. The Achaemenid Empire is normally considered to have fallen with Darius.
Alexander viewed Bessus as a usurper and set out to defeat him. This campaign, initially against Bessus, turned into a grand tour of central Asia. Alexander founded a series of new cities, all called Alexandria, including modern Kandahar in Afghanistan, and Alexandria Eschate ("The Furthest") in modern Tajikistan. The campaign took Alexander through Media, Parthia, Aria (West Afghanistan), Drangiana, Arachosia (South and Central Afghanistan), Bactria (North and Central Afghanistan), and Scythia.
In 329 BC, Spitamenes, who held an undefined position in the satrapy of Sogdiana, betrayed Bessus to Ptolemy, one of Alexander's trusted companions, and Bessus was executed. However, when, at some point later, Alexander was on the Jaxartes dealing with an incursion by a horse nomad army, Spitamenes raised Sogdiana in revolt. Alexander personally defeated the Scythians at the Battle of Jaxartes and immediately launched a campaign against Spitamenes, defeating him in the Battle of Gabai. After the defeat, Spitamenes was killed by his own men, who then sued for peace.
During this time, Alexander adopted some elements of Persian dress and customs at his court, notably the custom of "proskynesis", either a symbolic kissing of the hand, or prostration on the ground, that Persians showed to their social superiors. The Greeks regarded the gesture as the province of deities and believed that Alexander meant to deify himself by requiring it. This cost him the sympathies of many of his countrymen, and he eventually abandoned it.
A plot against his life was revealed, and one of his officers, Philotas, was executed for failing to alert Alexander. The death of the son necessitated the death of the father, and thus Parmenion, who had been charged with guarding the treasury at Ecbatana, was assassinated at Alexander's command, to prevent attempts at vengeance. Most infamously, Alexander personally killed the man who had saved his life at Granicus, Cleitus the Black, during a violent drunken altercation at Maracanda (modern day Samarkand in Uzbekistan), in which Cleitus accused Alexander of several judgmental mistakes and most especially, of having forgotten the Macedonian ways in favour of a corrupt oriental lifestyle.
Later, in the Central Asian campaign, a second plot against his life was revealed, this one instigated by his own royal pages. His official historian, Callisthenes of Olynthus, was implicated in the plot, and in the "Anabasis of Alexander", Arrian states that Callisthenes and the pages were then tortured on the rack as punishment, and likely died soon after. It remains unclear if Callisthenes was actually involved in the plot, for prior to his accusation he had fallen out of favour by leading the opposition to the attempt to introduce proskynesis.
When Alexander set out for Asia, he left his general Antipater, an experienced military and political leader and part of Philip II's "Old Guard", in charge of Macedon. Alexander's sacking of Thebes ensured that Greece remained quiet during his absence. The one exception was a call to arms by Spartan king Agis III in 331 BC, whom Antipater defeated and killed in the battle of Megalopolis. Antipater referred the Spartans' punishment to the League of Corinth, which then deferred to Alexander, who chose to pardon them. There was also considerable friction between Antipater and Olympias, and each complained to Alexander about the other.
In general, Greece enjoyed a period of peace and prosperity during Alexander's campaign in Asia. Alexander sent back vast sums from his conquest, which stimulated the economy and increased trade across his empire. However, Alexander's constant demands for troops and the migration of Macedonians throughout his empire depleted Macedon's strength, greatly weakening it in the years after Alexander, and ultimately led to its subjugation by Rome after the Third Macedonian War (171–168 BC).
After the death of Spitamenes and his marriage to Roxana (Raoxshna in Old Iranian) to cement relations with his new satrapies, Alexander turned to the Indian subcontinent. He invited the chieftains of the former satrapy of Gandhara (a region presently straddling eastern Afghanistan and northern Pakistan), to come to him and submit to his authority. Omphis (Indian name Ambhi), the ruler of Taxila, whose kingdom extended from the Indus to the Hydaspes (Jhelum), complied, but the chieftains of some hill clans, including the Aspasioi and Assakenoi sections of the Kambojas (known in Indian texts also as Ashvayanas and Ashvakayanas), refused to submit. Ambhi hastened to relieve Alexander of his apprehension and met him with valuable presents, placing himself and all his forces at his disposal. Alexander not only returned Ambhi his title and the gifts but he also presented him with a wardrobe of "Persian robes, gold and silver ornaments, 30 horses and 1,000 talents in gold". Alexander was emboldened to divide his forces, and Ambhi assisted Hephaestion and Perdiccas in constructing a bridge over the Indus where it bends at Hund, supplied their troops with provisions, and received Alexander himself, and his whole army, in his capital city of Taxila, with every demonstration of friendship and the most liberal hospitality.
On the subsequent advance of the Macedonian king, Taxiles accompanied him with a force of 5,000 men and took part in the battle of the Hydaspes River. After that victory he was sent by Alexander in pursuit of Porus (Indian name Puru), to whom he was charged to offer favourable terms, but narrowly escaped losing his life at the hands of his old enemy. Subsequently, however, the two rivals were reconciled by the personal mediation of Alexander; and Taxiles, after having contributed zealously to the equipment of the fleet on the Hydaspes, was entrusted by the king with the government of the whole territory between that river and the Indus. A considerable accession of power was granted him after the death of Philip, son of Machatas; and he was allowed to retain his authority at the death of Alexander himself (323 BC), as well as in the subsequent partition of the provinces at Triparadisus, 321 BC.
In the winter of 327/326 BC, Alexander personally led a campaign against the Aspasioi of Kunar valleys, the Guraeans of the Guraeus valley, and the Assakenoi of the Swat and Buner valleys. A fierce contest ensued with the Aspasioi in which Alexander was wounded in the shoulder by a dart, but eventually the Aspasioi lost. Alexander then faced the Assakenoi, who fought against him from the strongholds of Massaga, Ora and Aornos.
The fort of Massaga was reduced only after days of bloody fighting, in which Alexander was wounded seriously in the ankle. According to Curtius, "Not only did Alexander slaughter the entire population of Massaga, but also did he reduce its buildings to rubble." A similar slaughter followed at Ora. In the aftermath of Massaga and Ora, numerous Assakenians fled to the fortress of Aornos. Alexander followed close behind and captured the strategic hill-fort after four bloody days.
After Aornos, Alexander crossed the Indus and fought and won an epic battle against King Porus, who ruled a region lying between the Hydaspes and the Acesines (Chenab), in what is now the Punjab, in the Battle of the Hydaspes in 326 BC. Alexander was impressed by Porus' bravery, and made him an ally. He appointed Porus as satrap, and added to Porus' territory land that he did not previously own, towards the south-east, up to the Hyphasis (Beas). Choosing a local helped him control these lands so distant from Greece. Alexander founded two cities on opposite sides of the Hydaspes river, naming one Bucephala, in honour of his horse, who died around this time. The other was Nicaea (Victory), thought to be located at the site of modern-day Mong, Punjab. Philostratus the Elder, in the "Life of Apollonius of Tyana", writes that in the army of Porus there was an elephant who fought bravely against Alexander's army, and that Alexander dedicated it to Helios (the Sun) and named it Ajax, because he thought that so great an animal deserved a great name. The elephant had gold rings around its tusks and on them an inscription written in Greek: "Alexander the son of Zeus dedicates Ajax to Helios" (ΑΛΕΞΑΝΔΡΟΣ Ο ΔΙΟΣ ΤΟΝ ΑΙΑΝΤΑ ΤΩΙ ΗΛΙΩΙ).
East of Porus' kingdom, near the Ganges River, was the Nanda Empire of Magadha, and further east, the Gangaridai Empire of Bengal region of the Indian subcontinent. Fearing the prospect of facing other large armies and exhausted by years of campaigning, Alexander's army mutinied at the Hyphasis River (Beas), refusing to march farther east. This river thus marks the easternmost extent of Alexander's conquests.
Alexander tried to persuade his soldiers to march farther, but his general Coenus pleaded with him to change his mind and return; the men, he said, "longed to again see their parents, their wives and children, their homeland". Alexander eventually agreed and turned south, marching along the Indus. Along the way his army conquered the Malhi (in modern-day Multan) and other Indian tribes, and Alexander sustained an injury during the siege.
Alexander sent much of his army to Carmania (modern southern Iran) with general Craterus, and commissioned a fleet to explore the Persian Gulf shore under his admiral Nearchus, while he led the rest back to Persia through the more difficult southern route along the Gedrosian Desert and Makran. Alexander reached Susa in 324 BC, but not before losing many men to the harsh desert.
Discovering that many of his satraps and military governors had misbehaved in his absence, Alexander executed several of them as examples on his way to Susa. As a gesture of thanks, he paid off the debts of his soldiers, and announced that he would send over-aged and disabled veterans back to Macedon, led by Craterus. His troops misunderstood his intention and mutinied at the town of Opis. They refused to be sent away and criticized his adoption of Persian customs and dress and the introduction of Persian officers and soldiers into Macedonian units.
After three days, unable to persuade his men to back down, Alexander gave Persians command posts in the army and conferred Macedonian military titles upon Persian units. The Macedonians quickly begged forgiveness, which Alexander accepted, and held a great banquet for several thousand of his men at which he and they ate together. In an attempt to craft a lasting harmony between his Macedonian and Persian subjects, Alexander held a mass marriage of his senior officers to Persian and other noblewomen at Susa, but few of those marriages seem to have lasted much beyond a year. Meanwhile, upon his return to Persia, Alexander learned that guards of the tomb of Cyrus the Great in Pasargadae had desecrated it, and swiftly executed them. Alexander admired Cyrus the Great, from an early age reading Xenophon's "Cyropaedia", which described Cyrus's heroism in battle and governance as a king and legislator. During his visit to Pasargadae Alexander ordered his architect Aristobulus to decorate the interior of the sepulchral chamber of Cyrus' tomb.
Afterwards, Alexander travelled to Ecbatana to retrieve the bulk of the Persian treasure. There, his closest friend and possible lover, Hephaestion, died of illness or poisoning. Hephaestion's death devastated Alexander, and he ordered the preparation of an expensive funeral pyre in Babylon, as well as a decree for public mourning. Back in Babylon, Alexander planned a series of new campaigns, beginning with an invasion of Arabia, but he would not have a chance to realize them, as he died shortly after Hephaestion.
On either 10 or 11 June 323 BC, Alexander died in the palace of Nebuchadnezzar II, in Babylon, at age 32. There are two different accounts of Alexander's death, and the details differ slightly in each. Plutarch's account is that roughly 14 days before his death, Alexander entertained admiral Nearchus, and spent the night and next day drinking with Medius of Larissa. He developed a fever, which worsened until he was unable to speak. The common soldiers, anxious about his health, were granted the right to file past him as he silently waved at them. In the second account, Diodorus recounts that Alexander was struck with pain after downing a large bowl of unmixed wine in honour of Heracles, followed by 11 days of weakness; he did not develop a fever and died after some agony. Arrian also mentioned this as an alternative, but Plutarch specifically denied this claim.
Given the propensity of the Macedonian aristocracy to assassination, foul play featured in multiple accounts of his death. Diodorus, Plutarch, Arrian and Justin all mentioned the theory that Alexander was poisoned. Justin stated that Alexander was the victim of a poisoning conspiracy, Plutarch dismissed it as a fabrication, while both Diodorus and Arrian noted that they mentioned it only for the sake of completeness. The accounts were nevertheless fairly consistent in designating Antipater, recently removed as Macedonian viceroy, and at odds with Olympias, as the head of the alleged plot. Perhaps taking his summons to Babylon as a death sentence, and having seen the fate of Parmenion and Philotas, Antipater purportedly arranged for Alexander to be poisoned by his son Iollas, who was Alexander's wine-pourer. There was even a suggestion that Aristotle may have participated.
The strongest argument against the poison theory is the fact that twelve days passed between the start of his illness and his death; such long-acting poisons were probably not available. However, in a 2003 BBC documentary investigating the death of Alexander, Leo Schep from the New Zealand National Poisons Centre proposed that the plant white hellebore ("Veratrum album"), which was known in antiquity, may have been used to poison Alexander. In a 2014 manuscript in the journal "Clinical Toxicology", Schep suggested Alexander's wine was spiked with "Veratrum album", and that this would produce poisoning symptoms that match the course of events described in the "Alexander Romance". "Veratrum album" poisoning can have a prolonged course and it was suggested that if Alexander was poisoned, "Veratrum album" offers the most plausible cause. Another poisoning explanation put forward in 2010 proposed that the circumstances of his death were compatible with poisoning by water of the river Styx (modern-day Mavroneri in Arcadia, Greece) that contained calicheamicin, a dangerous compound produced by bacteria.
Several natural causes (diseases) have been suggested, including malaria and typhoid fever. A 1998 article in the "New England Journal of Medicine" attributed his death to typhoid fever complicated by bowel perforation and ascending paralysis. Another recent analysis suggested pyogenic (infectious) spondylitis or meningitis. Other illnesses fit the symptoms, including acute pancreatitis and West Nile virus. Natural-cause theories also tend to emphasize that Alexander's health may have been in general decline after years of heavy drinking and severe wounds. The anguish that Alexander felt after Hephaestion's death may also have contributed to his declining health.
Alexander's body was laid in a gold anthropoid sarcophagus that was filled with honey, which was in turn placed in a gold casket. According to Aelian, a seer called Aristander foretold that the land where Alexander was laid to rest "would be happy and unvanquishable forever". Perhaps more likely, the successors may have seen possession of the body as a symbol of legitimacy, since burying the prior king was a royal prerogative.
While Alexander's funeral cortege was on its way to Macedon, Ptolemy seized it and took it temporarily to Memphis. His successor, Ptolemy II Philadelphus, transferred the sarcophagus to Alexandria, where it remained until at least late Antiquity. Ptolemy IX Lathyros, one of Ptolemy's final successors, replaced Alexander's sarcophagus with a glass one so he could convert the original to coinage. The recent discovery of an enormous tomb in northern Greece, at Amphipolis, dating from the time of Alexander the Great has given rise to speculation that its original intent was to be the burial place of Alexander. This would fit with the intended destination of Alexander's funeral cortege. However, the memorial was found to be dedicated to the dearest friend of Alexander the Great, Hephaestion.
Pompey, Julius Caesar and Augustus all visited the tomb in Alexandria, where Augustus allegedly knocked the nose off the body by accident. Caligula was said to have taken Alexander's breastplate from the tomb for his own use. Around AD 200, Emperor Septimius Severus closed Alexander's tomb to the public. His son and successor, Caracalla, a great admirer, visited the tomb during his own reign. After this, details on the fate of the tomb are hazy.
The so-called "Alexander Sarcophagus", discovered near Sidon and now in the Istanbul Archaeology Museum, is so named not because it was thought to have contained Alexander's remains, but because its bas-reliefs depict Alexander and his companions fighting the Persians and hunting. It was originally thought to have been the sarcophagus of Abdalonymus (died 311 BC), the king of Sidon appointed by Alexander immediately following the battle of Issus in 333 BC. However, more recently, it has been suggested that it may date from earlier than Abdalonymus' death.
Demades likened the Macedonian army after the death of Alexander to the blinded Cyclops, on account of the many random and disorderly movements that it made. In addition, Leosthenes also likened the anarchy among the generals after Alexander's death to the blinded Cyclops "who after he had lost his eye went feeling and groping about with his hands before him, not knowing where to lay them".
Alexander's death was so sudden that when reports of his death reached Greece, they were not immediately believed. Alexander had no obvious or legitimate heir, his son Alexander IV by Roxane being born after Alexander's death. According to Diodorus, Alexander's companions asked him on his deathbed to whom he bequeathed his kingdom; his laconic reply was "tôi kratistôi"—"to the strongest". Another theory is that his successors willfully or erroneously misheard "tôi Kraterôi"—"to Craterus", the general leading his Macedonian troops home and newly entrusted with the regency of Macedonia.
Arrian and Plutarch claimed that Alexander was speechless by this point, implying that this was an apocryphal story. Diodorus, Curtius and Justin offered the more plausible story that Alexander passed his signet ring to Perdiccas, a bodyguard and leader of the companion cavalry, in front of witnesses, thereby nominating him.
Perdiccas initially did not claim power, instead suggesting that Roxane's baby would be king, if male; with himself, Craterus, Leonnatus, and Antipater as guardians. However, the infantry, under the command of Meleager, rejected this arrangement since they had been excluded from the discussion. Instead, they supported Alexander's half-brother Philip Arrhidaeus. Eventually, the two sides reconciled, and after the birth of Alexander IV, he and Philip III were appointed joint kings, albeit in name only.
Dissension and rivalry soon afflicted the Macedonians, however. The satrapies handed out by Perdiccas at the Partition of Babylon became power bases each general used to bid for power. After the assassination of Perdiccas in 321 BC, Macedonian unity collapsed, and 40 years of war between "The Successors" ("Diadochi") ensued before the Hellenistic world settled into four stable power blocs: Ptolemaic Egypt, Seleucid Mesopotamia and Central Asia, Attalid Anatolia, and Antigonid Macedon. In the process, both Alexander IV and Philip III were murdered.
Diodorus stated that Alexander had given detailed written instructions to Craterus some time before his death. Craterus started to carry out Alexander's commands, but the successors chose not to further implement them, on the grounds they were impractical and extravagant. Nevertheless, Perdiccas read Alexander's will to his troops.
Alexander's will called for military expansion into the southern and western Mediterranean, monumental constructions, and the intermixing of Eastern and Western populations. It included:
Alexander earned the epithet "the Great" due to his unparalleled success as a military commander. He never lost a battle, despite typically being outnumbered. This was due to his use of terrain, phalanx and cavalry tactics, bold strategy, and the fierce loyalty of his troops. The Macedonian phalanx, armed with the sarissa, a long spear, had been developed and perfected by Philip II through rigorous training, and Alexander used its speed and maneuverability to great effect against larger but more disparate Persian forces. Alexander also recognized the potential for disunity among his diverse army, which employed various languages and weapons. He overcame this by being personally involved in battle, in the manner of a Macedonian king.
In his first battle in Asia, at Granicus, Alexander used only a small part of his forces, perhaps 13,000 infantry with 5,000 cavalry, against a much larger Persian force of 40,000. Alexander placed the phalanx at the center and cavalry and archers on the wings, so that his line matched the length of the Persian cavalry line. By contrast, the Persian infantry was stationed behind its cavalry. This ensured that Alexander would not be outflanked, while his phalanx, armed with long pikes, had a considerable advantage over the Persians' scimitars and javelins. Macedonian losses were negligible compared to those of the Persians.
At Issus in 333 BC, his first confrontation with Darius, he used the same deployment, and again the central phalanx pushed through. Alexander personally led the charge in the center, routing the opposing army. At the decisive encounter with Darius at Gaugamela, Darius equipped his chariots with scythes on the wheels to break up the phalanx and equipped his cavalry with pikes. Alexander arranged a double phalanx, with the center advancing at an angle, parting when the chariots bore down and then reforming. The advance was successful and broke Darius' center, causing the latter to flee once again.
When faced with opponents who used unfamiliar fighting techniques, such as in Central Asia and India, Alexander adapted his forces to his opponents' style. Thus, in Bactria and Sogdiana, Alexander successfully used his javelin throwers and archers to prevent outflanking movements, while massing his cavalry at the center. In India, confronted by Porus' elephant corps, the Macedonians opened their ranks to envelop the elephants and used their sarissas to strike upwards and dislodge the elephants' handlers.
Greek biographer Plutarch describes Alexander's appearance as:
The semi-legendary "Alexander Romance" also suggests that Alexander exhibited heterochromia iridum: that one eye was dark and the other light.
British historian Peter Green provided a description of Alexander's appearance, based on his review of statues and some ancient documents:
Historian and Egyptologist Joann Fletcher has said that the Macedonian ruler Alexander the Great had blond hair.
Ancient authors recorded that Alexander was so pleased with portraits of himself created by Lysippos that he forbade other sculptors from crafting his image. Lysippos had often used the contrapposto sculptural scheme to portray Alexander and other characters such as Apoxyomenos, Hermes and Eros. Lysippos' sculpture, famous for its naturalism, as opposed to a stiffer, more static pose, is thought to be the most faithful depiction.
Some of Alexander's strongest personality traits formed in response to his parents. His mother had huge ambitions, and encouraged him to believe it was his destiny to conquer the Persian Empire. Olympias' influence instilled a sense of destiny in him, and Plutarch tells how his ambition "kept his spirit serious and lofty in advance of his years". However, his father Philip was Alexander's most immediate and influential role model, as the young Alexander watched him campaign practically every year, winning victory after victory while ignoring severe wounds. Alexander's relationship with his father forged the competitive side of his personality; he had a need to outdo his father, illustrated by his reckless behaviour in battle. While Alexander worried that his father would leave him "no great or brilliant achievement to be displayed to the world", he also downplayed his father's achievements to his companions.
According to Plutarch, among Alexander's traits were a violent temper and rash, impulsive nature, which undoubtedly contributed to some of his decisions. Although Alexander was stubborn and did not respond well to orders from his father, he was open to reasoned debate. He had a calmer side—perceptive, logical, and calculating. He had a great desire for knowledge, a love for philosophy, and was an avid reader. This was no doubt in part due to Aristotle's tutelage; Alexander was intelligent and quick to learn. His intelligent and rational side was amply demonstrated by his ability and success as a general. He had great self-restraint in "pleasures of the body", in contrast with his lack of self-control with alcohol.
Alexander was erudite and patronized both arts and sciences. However, he had little interest in sports or the Olympic games (unlike his father), seeking only the Homeric ideals of honour ("timê") and glory ("kudos"). He had great charisma and force of personality, characteristics which made him a great leader. His unique abilities were further demonstrated by the inability of any of his generals to unite Macedonia and retain the Empire after his death—only Alexander had the ability to do so.
During his final years, and especially after the death of Hephaestion, Alexander began to exhibit signs of megalomania and paranoia. His extraordinary achievements, coupled with his own ineffable sense of destiny and the flattery of his companions, may have combined to produce this effect. His delusions of grandeur are readily visible in his will and in his desire to conquer the world, inasmuch as various sources describe him as having "boundless ambition", an epithet whose meaning has descended into a historical cliché.
He appears to have believed himself a deity, or at least sought to deify himself. Olympias always insisted to him that he was the son of Zeus, a theory apparently confirmed to him by the oracle of Amun at Siwa. He began to identify himself as the son of Zeus-Ammon. Alexander adopted elements of Persian dress and customs at court, notably "proskynesis", a practice of which Macedonians disapproved, and were loath to perform. This behaviour cost him the sympathies of many of his countrymen. However, Alexander also was a pragmatic ruler who understood the difficulties of ruling culturally disparate peoples, many of whom lived in kingdoms where the king was divine. Thus, rather than megalomania, his behaviour may simply have been a practical attempt at strengthening his rule and keeping his empire together.
Alexander married three times: Roxana, daughter of the Sogdian nobleman Oxyartes of Bactria, out of love; and the Persian princesses Stateira II and Parysatis II, the former a daughter of Darius III and the latter a daughter of Artaxerxes III, for political reasons. He apparently had two sons, Alexander IV of Macedon by Roxana and, possibly, Heracles of Macedon by his mistress Barsine. He lost another child when Roxana miscarried at Babylon.
Alexander also had a close relationship with his friend, general, and bodyguard Hephaestion, the son of a Macedonian noble. Hephaestion's death devastated Alexander. This event may have contributed to Alexander's failing health and detached mental state during his final months.
Alexander's sexuality has been the subject of speculation and controversy in modern times. The Roman era writer Athenaeus says, based on the scholar Dicaearchus, who was Alexander's contemporary, that the king "was quite excessively keen on boys", and that Alexander sexually embraced his eunuch Bagoas in public. This episode is also told by Plutarch, probably based on the same source. None of Alexander's contemporaries, however, are known to have explicitly described Alexander's relationship with Hephaestion as sexual, though the pair was often compared to Achilles and Patroclus, whom classical Greek culture painted as a couple. Aelian writes of Alexander's visit to Troy where "Alexander garlanded the tomb of Achilles, and Hephaestion that of Patroclus, the latter hinting that he was a beloved of Alexander, in just the same way as Patroclus was of Achilles." Some modern historians (e.g., Robin Lane Fox) believe not only that Alexander's youthful relationship with Hephaestion was sexual, but that their sexual contacts may have continued into adulthood, which went against the social norms of at least some Greek cities, such as Athens, though some modern researchers have tentatively proposed that Macedonia (or at least the Macedonian court) may have been more tolerant of homosexuality between adults.
Green argues that there is little evidence in ancient sources that Alexander had much carnal interest in women; he did not produce an heir until the very end of his life. However, Ogden calculates that Alexander, who impregnated his partners thrice in eight years, had a higher matrimonial record than his father at the same age. Two of these pregnancies — Stateira's and Barsine's — are of dubious legitimacy.
According to Diodorus Siculus, Alexander accumulated a harem in the style of Persian kings, but he used it rather sparingly, showing great self-control in "pleasures of the body". Nevertheless, Plutarch described how Alexander was infatuated by Roxana while complimenting him on not forcing himself on her. Green suggested that, in the context of the period, Alexander formed quite strong friendships with women, including Ada of Caria, who adopted him, and even Darius' mother Sisygambis, who supposedly died from grief upon hearing of Alexander's death.
Alexander's legacy extended beyond his military conquests. His campaigns greatly increased contacts and trade between East and West, and vast areas to the east were significantly exposed to Greek civilization and influence. Some of the cities he founded became major cultural centers, many surviving into the 21st century. His chroniclers recorded valuable information about the areas through which he marched, while the Greeks themselves got a sense of belonging to a world beyond the Mediterranean.
Alexander's most immediate legacy was the introduction of Macedonian rule to huge new swathes of Asia. At the time of his death, Alexander's empire was the largest state of its time. Many of these areas remained in Macedonian hands or under Greek influence for the next 200–300 years. The successor states that emerged were, at least initially, dominant forces, and these 300 years are often referred to as the Hellenistic period.
The eastern borders of Alexander's empire began to collapse even during his lifetime. However, the power vacuum he left in the northwest of the Indian subcontinent directly gave rise to one of the most powerful Indian dynasties in history, the Maurya Empire. Taking advantage of this power vacuum, Chandragupta Maurya (referred to in Greek sources as "Sandrokottos"), of relatively humble origin, took control of the Punjab, and with that power base proceeded to conquer the Nanda Empire.
Over the course of his conquests, Alexander founded some twenty cities that bore his name, most of them east of the Tigris. The first, and greatest, was Alexandria in Egypt, which would become one of the leading Mediterranean cities. The cities' locations reflected trade routes as well as defensive positions. At first, the cities must have been inhospitable, little more than defensive garrisons. Following Alexander's death, many Greeks who had settled there tried to return to Greece. However, a century or so after Alexander's death, many of the Alexandrias were thriving, with elaborate public buildings and substantial populations that included both Greek and local peoples.
In 334 BC, Alexander the Great donated funds for the completion of the new temple of Athena Polias in Priene, in modern-day western Turkey. An inscription from the temple, now housed in the British Museum, declares: "King Alexander dedicated [this temple] to Athena Polias." This inscription is one of the few independent archaeological discoveries confirming an episode from Alexander's life. The temple was designed by Pytheos, one of the architects of the Mausoleum at Halicarnassus.
Libanius wrote that Alexander founded the temple of Zeus Bottiaios in the place where the city of Antioch was later built.
"Hellenization" was coined by the German historian Johann Gustav Droysen to denote the spread of Greek language, culture, and population into the former Persian empire after Alexander's conquest. That this export took place is undoubted, and can be seen in the great Hellenistic cities of, for instance, Alexandria, Antioch and Seleucia (south of modern Baghdad). Alexander sought to insert Greek elements into Persian culture and attempted to hybridize Greek and Persian culture. This culminated in his aspiration to homogenize the populations of Asia and Europe. However, his successors explicitly rejected such policies. Nevertheless, Hellenization occurred throughout the region, accompanied by a distinct and opposite 'Orientalization' of the successor states.
The core of the Hellenistic culture promulgated by the conquests was essentially Athenian. The close association of men from across Greece in Alexander's army directly led to the emergence of the largely Attic-based "koine", or "common" Greek dialect. Koine spread throughout the Hellenistic world, becoming the lingua franca of Hellenistic lands and eventually the ancestor of modern Greek. Furthermore, town planning, education, local government, and art current in the Hellenistic period were all based on Classical Greek ideals, evolving into distinct new forms commonly grouped as Hellenistic. Aspects of Hellenistic culture were still evident in the traditions of the Byzantine Empire in the mid-15th century.
Some of the most pronounced effects of Hellenization can be seen in Afghanistan and India, in the region of the relatively late-rising Greco-Bactrian Kingdom (250–125 BC) (in modern Afghanistan, Pakistan, and Tajikistan) and the Indo-Greek Kingdom (180 BC – 10 AD) in modern Afghanistan and India. On the Silk Road trade routes, Hellenistic culture hybridized with Iranian and Buddhist cultures. The cosmopolitan art and mythology of Gandhara (a region spanning the upper confluence of the Indus, Swat and Kabul rivers in modern Pakistan) from the ~3rd century BC to the ~5th century AD provide the clearest evidence of direct contact between Hellenistic civilization and South Asia, as do the Edicts of Ashoka, which directly mention the Greeks within Ashoka's dominion as converting to Buddhism, and the reception of Buddhist emissaries by Ashoka's contemporaries in the Hellenistic world. The resulting syncretism, known as Greco-Buddhism, influenced the development of Buddhism and created a culture of Greco-Buddhist art. These Greco-Buddhist kingdoms sent some of the first Buddhist missionaries to China, Sri Lanka and Hellenistic Asia and Europe (Greco-Buddhist monasticism).
Some of the first and most influential figurative portrayals of the Buddha appeared at this time, perhaps modeled on Greek statues of Apollo in the Greco-Buddhist style. Several Buddhist traditions may have been influenced by the ancient Greek religion: the concept of Bodhisattvas is reminiscent of Greek divine heroes, and some Mahayana ceremonial practices (burning incense, gifts of flowers, and food placed on altars) are similar to those practiced by the ancient Greeks; however, similar practices were also observed amongst the native Indic culture. One Greek king, Menander I, probably became Buddhist, and was immortalized in Buddhist literature as 'Milinda'. The process of Hellenization also spurred trade between the east and west. For example, Greek astronomical instruments dating to the 3rd century BC were found in the Greco-Bactrian city of Ai Khanoum in modern-day Afghanistan, while the Greek concept of a spherical earth surrounded by the spheres of planets eventually supplanted the long-standing Indian cosmological belief of a disc consisting of four continents grouped around a central mountain (Mount Meru) like the petals of a flower. The Yavanajataka (lit. Greek astronomical treatise) and Paulisa Siddhanta texts depict the influence of Greek astronomical ideas on Indian astronomy.
Following the conquests of Alexander the Great in the east, Hellenistic influence on Indian art was far-ranging. In architecture, a few examples of the Ionic order can be found as far away as Pakistan, with the Jandial temple near Taxila. Several examples of capitals displaying Ionic influences can be seen as far away as Patna, especially the Pataliputra capital, dated to the 3rd century BC. The Corinthian order is also heavily represented in the art of Gandhara, especially through Indo-Corinthian capitals.
Alexander and his exploits were admired by many Romans, especially generals, who wanted to associate themselves with his achievements. Polybius began his "Histories" by reminding Romans of Alexander's achievements, and thereafter Roman leaders saw him as a role model. Pompey the Great adopted the epithet "Magnus" and even Alexander's anastole-type haircut, and searched the conquered lands of the east for Alexander's 260-year-old cloak, which he then wore as a sign of greatness. Julius Caesar dedicated a Lysippean equestrian bronze statue but replaced Alexander's head with his own, while Octavian visited Alexander's tomb in Alexandria and temporarily changed his seal from a sphinx to Alexander's profile. The emperor Trajan also admired Alexander, as did Nero and Caracalla. The Macriani, a Roman family that in the person of Macrinus briefly ascended to the imperial throne, kept images of Alexander on their persons, either on jewelry, or embroidered into their clothes.
On the other hand, some Roman writers, particularly Republican figures, used Alexander as a cautionary tale of how autocratic tendencies can be kept in check by republican values. These writers cited Alexander as an example of ruler virtues such as friendship and clemency, but also of vices such as anger and an over-desire for glory.
Emperor Julian, in his satire "The Caesars", describes a contest between the previous Roman emperors, with Alexander the Great called in as an extra contestant, in the presence of the assembled gods.
Pausanias writes that Alexander wanted to dig through the Mimas mountain (in today's Karaburun area), but did not succeed, and mentions that this was the only unsuccessful project of Alexander. Pliny the Elder also writes about this unsuccessful plan, adding that the purpose was to cut a canal through the isthmus so as to connect the Caystrian and Hermaean bays.
Arrian wrote that, according to Aristobulus, the island of Icarus (modern Failaka Island) in the Persian Gulf bore that name because Alexander ordered it to be named after the island of Icarus in the Aegean Sea.
Legendary accounts surround the life of Alexander the Great, many deriving from his own lifetime, probably encouraged by Alexander himself. His court historian Callisthenes portrayed the sea in Cilicia as drawing back from him in proskynesis. Writing shortly after Alexander's death, another participant, Onesicritus, invented a tryst between Alexander and Thalestris, queen of the mythical Amazons. When Onesicritus read this passage to his patron, Alexander's general and later King Lysimachus reportedly quipped, "I wonder where I was at the time."
In the first centuries after Alexander's death, probably in Alexandria, a quantity of the legendary material coalesced into a text known as the "Alexander Romance", later falsely ascribed to Callisthenes and therefore known as "Pseudo-Callisthenes". This text underwent numerous expansions and revisions throughout Antiquity and the Middle Ages, containing many dubious stories, and was translated into numerous languages.
Alexander the Great's accomplishments and legacy have been depicted in many cultures. Alexander has figured in both high and popular culture beginning in his own era to the present day. The "Alexander Romance", in particular, has had a significant impact on portrayals of Alexander in later cultures, from Persian to medieval European to modern Greek.
Alexander features prominently in modern Greek folklore, more so than any other ancient figure. The colloquial form of his name in modern Greek ("O Megalexandros") is a household name, and he is the only ancient hero to appear in the Karagiozis shadow play. One well-known fable among Greek seamen involves a solitary mermaid who would grasp a ship's prow during a storm and ask the captain "Is King Alexander alive?" The correct answer is "He is alive and well and rules the world!" causing the mermaid to vanish and the sea to calm. Any other answer would cause the mermaid to turn into a raging Gorgon who would drag the ship to the bottom of the sea, all hands aboard.
In pre-Islamic Middle Persian (Zoroastrian) literature, Alexander is referred to by the epithet "gujastak", meaning "accursed", and is accused of destroying temples and burning the sacred texts of Zoroastrianism. In Sunni Islamic Persia, under the influence of the "Alexander Romance" (in "Iskandarnamah"), a more positive portrayal of Alexander emerges. Firdausi's "Shahnameh" ("The Book of Kings") includes Alexander in a line of legitimate Persian shahs, a mythical figure who explored the far reaches of the world in search of the Fountain of Youth. Later Persian writers associate him with philosophy, portraying him at a symposium with figures such as Socrates, Plato and Aristotle, in search of immortality.
The figure of Dhul-Qarnayn (literally "the Two-Horned One") mentioned in the Quran is believed by scholars to be based on later legends of Alexander. In this tradition, he was a heroic figure who built a wall to defend against the nations of Gog and Magog. He then travelled the known world in search of the Water of Life and Immortality, eventually becoming a prophet.
The Syriac version of the "Alexander Romance" portrays him as an ideal Christian world conqueror who prayed to "the one true God". In Egypt, Alexander was portrayed as the son of Nectanebo II, the last pharaoh before the Persian conquest. His defeat of Darius was depicted as Egypt's salvation, "proving" Egypt was still ruled by an Egyptian.
According to Josephus, Alexander was shown the Book of Daniel when he entered Jerusalem, which described a mighty Greek king who would conquer the Persian Empire. This is cited as a reason for sparing Jerusalem.
In Hindi and Urdu, the name "Sikandar", derived from the Persian name for Alexander, denotes a rising young talent, and the Delhi Sultanate ruler Alauddin Khalji styled himself as "Sikandar-i-Sani" (the Second Alexander). In medieval India, Turkic and Afghan sovereigns from the Iranian-cultured region of Central Asia brought positive cultural connotations of Alexander to the Indian subcontinent, resulting in the efflorescence of "Sikandernameh" (Alexander Romances) written by Indo-Persian poets such as Amir Khusrow and the prominence of Alexander the Great as a popular subject in Mughal-era Persian miniatures. In medieval Europe, Alexander the Great was revered as a member of the Nine Worthies, a group of heroes whose lives were believed to encapsulate all the ideal qualities of chivalry.
In the Greek Anthology, there are poems referring to Alexander.
Irish playwright Aubrey Thomas de Vere wrote "Alexander the Great, a Dramatic Poem".
In popular culture, the British heavy metal band Iron Maiden included a song titled "Alexander the Great" on their 1986 album "Somewhere in Time". Written by bass player Steve Harris, the song retells Alexander's life.
Apart from a few inscriptions and fragments, texts written by people who actually knew Alexander or who gathered information from men who served with Alexander were all lost. Contemporaries who wrote accounts of his life included Alexander's campaign historian Callisthenes; Alexander's generals Ptolemy and Nearchus; Aristobulus, a junior officer on the campaigns; and Onesicritus, Alexander's chief helmsman. Their works are lost, but later works based on these original sources have survived. The earliest of these is Diodorus Siculus (1st century BC), followed by Quintus Curtius Rufus (mid-to-late 1st century AD), Arrian (1st to 2nd century AD), the biographer Plutarch (1st to 2nd century AD), and finally Justin, whose work dates to as late as the 4th century. Of these, Arrian is generally considered the most reliable, given that he used Ptolemy and Aristobulus as his sources, closely followed by Diodorus. | https://en.wikipedia.org/wiki?curid=783 |
Alfred Korzybski
Alfred Habdank Skarbek Korzybski (, ; July 3, 1879 – March 1, 1950) was a Polish-American independent scholar who developed a field called general semantics, which he viewed as both distinct from, and more encompassing than, the field of semantics. He argued that human knowledge of the world is limited both by the human nervous system and the languages humans have developed, and thus no one can have direct access to reality, given that the most we can know is that which is filtered through the brain's responses to reality. His best known dictum is "The map is not the territory".
Born in Warsaw, Poland, then part of the Russian Empire, Korzybski belonged to an aristocratic Polish family whose members had worked as mathematicians, scientists, and engineers for generations. He learned the Polish language at home and the Russian language in schools; and having a French and German governess, he became fluent in four languages as a child.
Korzybski studied engineering at the Warsaw University of Technology. During the First World War (1914–1918) Korzybski served as an intelligence officer in the Russian Army. After being wounded in the leg and suffering other injuries, he moved to North America in 1916 (first to Canada, then to the United States) to coordinate the shipment of artillery to Russia. He also lectured to Polish-American audiences about the conflict, promoting the sale of war bonds. After the war he decided to remain in the United States, becoming a naturalized citizen in 1940. He met Mira Edgerly, a painter of portraits on ivory, shortly after the 1918 Armistice; they married in January 1919, and the marriage lasted until his death.
E. P. Dutton published Korzybski's first book, "Manhood of Humanity", in 1921. In this work he proposed and explained in detail a new theory of humankind: mankind as a "time-binding" class of life (humans perform time binding by the transmission of knowledge and abstractions through time which become accreted in cultures).
Korzybski's work culminated in the initiation of a discipline that he named general semantics (GS). This should not be confused with semantics. The basic principles of general semantics, which include time-binding, are described in the publication "Science and Sanity", published in 1933. In 1938 Korzybski founded the Institute of General Semantics in Chicago. The post-World War II housing shortage in Chicago cost him the Institute's building lease, so in 1946 he moved the Institute to Lakeville, Connecticut, U.S., where he directed it until his death in 1950.
Korzybski maintained that humans are limited in what they know by (1) the structure of their nervous systems, and (2) the structure of their languages. Humans cannot experience the world directly, but only through their "abstractions" (nonverbal impressions or "gleanings" derived from the nervous system, and verbal indicators expressed and derived from language). These sometimes mislead us about what is the truth. Our understanding sometimes lacks "similarity of structure" with what is actually happening.
He sought to train our awareness of abstracting, using techniques he had derived from his study of mathematics and science. He called this awareness, this goal of his system, "consciousness of abstracting". His system included the promotion of attitudes such as "I don't know; let's see," in order that we may better discover or reflect on the world's realities as revealed by modern science. Another technique involved becoming inwardly and outwardly quiet, an experience he termed "silence on the objective levels".
Many devotees and critics of Korzybski reduced his rather complex system to a simple matter of what he said about the verb form "is" of the general verb "to be." His system, however, is based primarily on such terminology as the different "orders of abstraction," and formulations such as "consciousness of abstracting." The contention that Korzybski "opposed" the use of the verb "to be" would be a profound exaggeration.
He thought that "certain uses" of the verb "to be", called the "is of identity" and the "is of predication", were faulty in structure, e.g., a statement such as, "Elizabeth is a fool" (said of a person named "Elizabeth" who has done something that we regard as foolish). In Korzybski's system, one's assessment of Elizabeth belongs to a higher order of abstraction than Elizabeth herself. Korzybski's remedy was to "deny" identity; in this example, to be aware continually that "Elizabeth" is "not" what we "call" her. We find Elizabeth not in the verbal domain, the world of words, but the nonverbal domain (the two, he said, amount to different orders of abstraction). This was expressed by Korzybski's most famous premise, "the map is not the territory". Note that this premise uses the phrase "is not", a form of "to be"; this and many other examples show that he did not intend to abandon "to be" as such. In fact, he said explicitly that there were no structural problems with the verb "to be" when used as an auxiliary verb or when used to state existence or location. It was even acceptable at times to use the faulty forms of the verb "to be," as long as one was aware of their structural limitations.
One day, Korzybski was giving a lecture to a group of students, and he interrupted the lesson suddenly in order to retrieve a packet of biscuits, wrapped in white paper, from his briefcase. He muttered that he just had to eat something, and he asked the students on the seats in the front row if they would also like a biscuit. A few students took a biscuit. "Nice biscuit, don't you think," said Korzybski, while he took a second one. The students were chewing vigorously. Then he tore the white paper from the biscuits, in order to reveal the original packaging. On it was a big picture of a dog's head and the words "Dog Cookies." The students looked at the package, and were shocked. Two of them wanted to vomit, put their hands in front of their mouths, and ran out of the lecture hall to the toilet. "You see," Korzybski remarked, "I have just demonstrated that people don't just eat food, but also words, and that the taste of the former is often outdone by the taste of the latter."
William Burroughs went to a Korzybski workshop in the autumn of 1939. He was 25 years old, and paid $40. His fellow students—there were 38 in all—included young Samuel I. Hayakawa (later to become a Republican member of the U.S. Senate), Ralph Moriarty deBit (later to become the spiritual teacher Vitvan) and Wendell Johnson (founder of the Monster Study).
Korzybski was well received in numerous disciplines, as evidenced by the positive reactions from leading figures in the sciences and humanities in the 1940s and 1950s. These include author Robert A. Heinlein naming a character after him in his 1940 short story "Blowups Happen", and science fiction writer A. E. van Vogt in his novel "The World of Null-A", published in 1948.
As reported in the third edition of "Science and Sanity", in World War II the US Army used Korzybski's system to treat battle fatigue in Europe, under the supervision of Dr. Douglas M. Kelley, who went on to become the psychiatrist in charge of the Nazi war criminals at Nuremberg.
Some of the General Semantics tradition was continued by Samuel I. Hayakawa. | https://en.wikipedia.org/wiki?curid=784 |
Asteroids (video game)
Asteroids is a space-themed multidirectional shooter arcade game designed by Lyle Rains, Ed Logg, and Dominic Walsh and released in November 1979 by Atari, Inc. The player controls a single spaceship in an asteroid field which is periodically traversed by flying saucers. The object of the game is to shoot and destroy the asteroids and saucers, while not colliding with either, or being hit by the saucers' counter-fire. The game becomes harder as the number of asteroids increases.
"Asteroids" was one of the first major hits of the golden age of arcade games; the game sold over 70,000 arcade cabinets and proved both popular with players and influential with developers. In the 1980s it was ported to Atari's home systems, and the Atari VCS version sold over three million copies. The game was widely imitated, and it directly influenced "Defender", "Gravitar", and many other video games.
"Asteroids" was conceived during a meeting between Logg and Rains, who decided to use hardware developed by Howard Delman previously used for "Lunar Lander". "Asteroids" was based on an unfinished game titled "Cosmos"; its physics model, control scheme, and gameplay elements were derived from "Spacewar!", "Computer Space", and "Space Invaders" and refined through trial and error. The game is rendered on a vector display in a two-dimensional view that wraps around both screen axes.
The objective of "Asteroids" is to destroy asteroids and saucers. The player controls a triangular ship that can rotate left and right, fire shots straight forward, and thrust forward. Once the ship begins moving in a direction, it will continue in that direction for a time without player intervention unless the player applies thrust in a different direction. The ship eventually comes to a stop when not thrusting. The player can also send the ship into hyperspace, causing it to disappear and reappear in a random location on the screen, at the risk of self-destructing or appearing on top of an asteroid.
Each level starts with a few large asteroids drifting in various directions on the screen. Objects wrap around screen edges – for instance, an asteroid that drifts off the top edge of the screen reappears at the bottom and continues moving in the same direction. As the player shoots asteroids, they break into smaller asteroids that move faster and are more difficult to hit. Smaller asteroids are also worth more points. Two flying saucers appear periodically on the screen; the "big saucer" shoots randomly and poorly, while the "small saucer" fires frequently at the ship. After reaching a score of 40,000, only the small saucer appears. As the player's score increases, the angle range of the shots from the small saucer diminishes until the saucer fires extremely accurately. Once the screen has been cleared of all asteroids and flying saucers, a new set of large asteroids appears, thus starting the next level. The game gets harder as the number of asteroids increases until after the score reaches a range between 40,000 and 60,000. The player starts with 3–5 lives and gains an extra life per 10,000 points. When the player loses all their lives, the game ends. The machine "turns over" at 99,990 points, the maximum score it can display.
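The two movement rules described above, inertia with a gradual slowdown and objects wrapping from one screen edge to the opposite one, can be sketched in a few lines of Python. This is an illustrative model only, not Atari's original 6502 code; the playfield dimensions and friction constant are hypothetical:

```python
# Illustrative sketch of two "Asteroids" movement rules.
# SCREEN_W, SCREEN_H, and FRICTION are hypothetical values.

SCREEN_W, SCREEN_H = 1024, 768   # assumed playfield size
FRICTION = 0.99                  # per-frame velocity decay: the ship drifts to a stop

def step(x, y, vx, vy, thrust_x=0.0, thrust_y=0.0):
    """Advance one frame: apply thrust, decay velocity, move, and wrap."""
    vx = (vx + thrust_x) * FRICTION
    vy = (vy + thrust_y) * FRICTION
    # Toroidal wrap-around: an object leaving one edge reappears at the
    # opposite edge, keeping its direction of travel.
    x = (x + vx) % SCREEN_W
    y = (y + vy) % SCREEN_H
    return x, y, vx, vy

# An asteroid drifting off the right edge reappears on the left:
x, y, vx, vy = step(1020.0, 100.0, 10.0, 0.0)
print(round(x, 1))  # 5.9 -- past the right edge, so wrapped to the left side
```

The modulo operation is the standard way to model the wrap-around behaviour, since it maps any coordinate back into the playfield while preserving the direction of motion.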
"Asteroids" contains several bugs. The game slows down as the player gains 50–100 lives, due to a programming error: there is no limit on the number of lives the player can accumulate. The player can "lose" the game after more than 250 lives are collected.
In the original game design, saucers were supposed to begin shooting as soon as they appeared, but this was changed. Additionally, saucers can only aim at the player's ship on-screen; they are not capable of aiming across a screen boundary. These behaviors allow a "lurking" strategy, in which the player stays near the edge of the screen opposite the saucer. By keeping just one or two rocks in play, a player can shoot across the boundary and destroy saucers to accumulate points indefinitely with little risk of being destroyed. Arcade operators began to complain about losing revenue due to this exploit. In response, Atari issued a patched EPROM and, due to the impact of this exploit, Atari (and other companies) changed their development and testing policies to try to prevent future games from having such exploits.
"Asteroids" was conceived by Lyle Rains and programmed by Ed Logg with collaborations from other Atari staff. Logg was impressed with the Atari Video Computer System (later called the Atari 2600), and he joined Atari's coin-op division to work on "Dirt Bike", which was never released due to an unsuccessful field test. Paul Mancuso joined the development team as "Asteroids" technician and engineer Howard Delman contributed to the hardware. During a meeting in April 1979, Rains discussed "Planet Grab", a multiplayer arcade game later renamed to "Cosmos". Logg did not know the name of the game, citing "Computer Space" as "the inspiration for the two-dimensional approach". Rains conceived of "Asteroids" as a mixture of "Computer Space" and "Space Invaders", combining the two-dimensional approach of "Computer Space" with the addictive gameplay of "Space Invaders" ("completion" and "eliminate all threats"). The unfinished game featured a giant, indestructible asteroid, so Rains asked Logg: "Well, why don't we have a game where you shoot the rocks and blow them up?" In response, Logg described a similar concept where the player selectively shoots at rocks that break into smaller pieces. Both agreed on the concept.
"Asteroids" was implemented on hardware developed by Delman and is a vector game, in which the graphics are composed of lines drawn on a vector monitor. Rains initially wanted the game done in raster graphics, but Logg, experienced in vector graphics, suggested an XY monitor because the high image quality would permit precise aiming. The hardware is chiefly a MOS 6502 executing the game program, and QuadraScan, a high-resolution vector graphics processor developed by Atari and referred to as an "XY display system" and the "Digital Vector Generator (DVG)".
The original design concepts for QuadraScan came out of Cyan Engineering, Atari's off-campus research lab in Grass Valley, California, in 1978. Cyan gave it to Delman, who finished the design and first used it for "Lunar Lander". Logg received Delman's modified board with five buttons, 13 sound effects, and additional RAM, and he used it to develop "Asteroids". The size of the board was 4 by 4 inches, and it was "linked up" to a monitor.
Logg modeled the player's ship, the five-button control scheme, and the game physics after "Spacewar!", which he had played as a student at the University of California, Berkeley, but made several changes to improve playability. The ship was programmed into the hardware and rendered by the monitor, and it was configured to move with thrust and inertia. Logg was dissatisfied that the hyperspace button was not placed near his right thumb, as he had a problem "tak[ing] his hand off the thrust button". Drawings of asteroids in various shapes were incorporated into the game. Logg copied the idea of a high score table with initials from Exidy's "Star Fire".
The two saucers were formulated to be different from each other. A steadily decreasing timer that shortens intervals between saucer attacks was employed to prevent the player from stalling by not shooting asteroids and saucers. The minimalist soundtrack features a "heartbeat" sound effect, which quickens as the game progresses. The game did not have a sound chip, so Delman created a hardware circuit for 13 sound effects by hand, which was wired onto the board.
A prototype of "Asteroids" was well received by several Atari staff and engineers, who "wander[ed] between labs, passing comment and stopping to play as they went". Logg was often asked when he would be leaving by employees eager to play the prototype, so he created a second prototype specifically for staff to play. Atari went to Sacramento, California for testing, setting up prototypes of the game in local arcades to measure its potential success. The company also observed veteran players and younger players during focus group sessions at Atari itself. A group of old players familiar with "Spacewar!" struggled to maintain grip on the thrust button and requested a joystick, whereas younger players accustomed to "Space Invaders" noted that they got no break in the game. Logg and other Atari engineers observed proceedings and documented comments in four pages.
"Asteroids" was released for the Atari VCS (later renamed the Atari 2600) and Atari 8-bit family in 1981, then the Atari 7800 in 1986. A port for the Atari 5200, identical to the Atari 8-bit computer version, was in development in 1982, but was not published.
Programmers Brad Stewart and Bob Smith tried to fit the Atari VCS port into a 4 KB cartridge, but were unable to. It became the first game for the console to use bank switching, a technique developed by Carl Nielsen's group of engineers that increased available ROM space from 4 KB to 8 KB.
The Atari 7800 version was a launch title and features cooperative play. The asteroids have colorful textures, and the "heartbeat" sound effect remains intact.
"Asteroids" was immediately successful upon release. It displaced "Space Invaders" by popularity in the United States and became Atari's best selling arcade game of all time, with over 70,000 units sold. Atari earned an estimated $150 million in sales from the game, and arcade operators earned a further $500 million from coin drops. Atari had been in the process of manufacturing another vector game, "Lunar Lander", but demand for "Asteroids" was so high "that several hundred "Asteroids" games were shipped in "Lunar Lander" cabinets". "Asteroids" was so popular that some video arcade operators had to install large boxes to hold the number of coins spent by players.
"Asteroids" received positive reviews from video game critics and has been regarded as Logg's magnum opus. William Cassidy, writing for GameSpy's "Classic Gaming", noticed its innovations, including being one of the first video games to track initials and allow players to enter their initials for appearing in the top 10 high scores, and commented, "the vector graphics fit the futuristic outer space theme very well." In 1996, "Next Generation" listed it as number 39 on their "Top 100 Games of All Time", particularly lauding the control dynamics which require "the constant juggling of speed, positioning, and direction." In 1999, "Next Generation" listed "Asteroids" as number 29 on their "Top 50 Games of All Time", commenting that, ""Asteroid" was a classic the day it was released, and it has never lost any of its appeal." "Asteroids" was ranked fourth on "Retro Gamer"s list of "Top 25 Arcade Games"; the "Retro Gamer" staff cited its simplicity and the lack of a proper ending as allowances of revisiting the game. In 2012, "Asteroids" was listed on Time's All-TIME 100 greatest video games list. "Entertainment Weekly" named "Asteroids" one of the top ten games for the Atari 2600 in 2013. It was added to the Museum of Modern Art's collection of video games. By contrast, in March 1983 the Atari 8-bit port won sixth place in "Softline"s Dog of the Year awards "for badness in computer games", Atari division, based on reader submissions.
Richard A. Edwards reviewed the 1981 "Asteroids" home cartridge in "The Space Gamer" No. 46. Edwards commented that "This home cartridge is a virtual duplicate of the ever-popular Atari arcade game. [...] If blasting asteroids is the thing you want to do then this is the game, but at this price I can't wholeheartedly recommend it."
Usage of the names of "Saturday Night Live" characters "Mr. Bill" and "Sluggo" to refer to the saucers in an "Esquire" article about the game led to Logg receiving a cease and desist letter from a lawyer with the "Mr. Bill Trademark."
Released in 1981, "Asteroids Deluxe" was the first sequel to "Asteroids". Dave Shepperd edited the code and made enhancements to the game without Logg's involvement. The onscreen objects were tinted blue, and hyperspace was replaced by a shield that depleted if used. The asteroids rotated, and the new "killer satellite" enemy broke apart (when hit) into smaller ships that homed in on the player's position. The arcade machine's monitor displayed vector graphics overlaying a holographic backdrop. The game is much more difficult than the original and enables saucers to shoot across the screen boundary, eliminating a common strategy for high scores in the original game.
It was followed by Owen Rubin's "Space Duel" in 1982, featuring colorful geometric shapes and co-op multiplayer gameplay.
In 1987's "Blasteroids", Ed Rotberg added "power-ups, ship morphing, branching levels, bosses, and the ability to dock your ships in multiplayer for added firepower". "Blasteroids" uses raster graphics instead of vectors.
The game was included as part of the Atari Lynx title "Super Asteroids & Missile Command", and featured in the original "Microsoft Arcade" compilation in 1993, the latter with four other Atari video games: "Missile Command", "Tempest", "Centipede", and "Battlezone".
Activision made an enhanced version of "Asteroids" for PlayStation, Nintendo 64, Microsoft Windows, and the Game Boy Color in 1998. Doug Perry, writing for entertainment and video game journalism website "IGN", praised the high-end graphics – with realistic space object models, backgrounds, and special effects – for making "Asteroids" "a pleasure to look at" while being a homage to the original arcade version. The Atari Flashback series of dedicated video game consoles have included both the 2600 and the arcade versions of "Asteroids".
Published by Crave Entertainment on December 14, 1999, "Asteroids Hyper 64" is the Nintendo 64 port of "Asteroids". The game's graphics were upgraded to 3D, with both the ship and asteroids receiving polygon models set against static backgrounds, and it was supplemented with weapons and a multiplayer mode. "IGN" writer Matt Casamassina was pleased that the gameplay was faithful to the original but felt the minor additions and constant "repetition" were not enough to make the port "warrant a $50 purchase". He was disappointed about the lack of music and found the sound effects to be of poor quality.
A technical demo of "Asteroids" was developed by iThink for the Atari Jaguar but was never released. Unofficially referred to as "Asteroids 2000", it was demonstrated at E-JagFest 2000.
In 2001, Infogrames released "Atari Anniversary Edition" for the Sega Dreamcast, PlayStation, and PC compatibles. Developed by Digital Eclipse, it included emulated versions of Asteroids and other old Atari games. Jeff Gerstmann of "GameSpot" criticized the Dreamcast version for its limitations, such as the presentation of vector graphics on a low resolution television set, which obscures the copyright text in "Asteroids". The arcade and Atari 2600 versions of "Asteroids", along with "Asteroids Deluxe", were included in "Atari Anthology" for both Xbox and PlayStation 2.
Released on November 28, 2007, the Xbox Live Arcade port of "Asteroids" has revamped HD graphics along with an added intense "throttle monkey" mode. Both "Asteroids" in its arcade and 2600 versions and "Asteroids Deluxe" were ported to Microsoft's "Game Room" download service in 2010. Glu Mobile released a mobile phone port of the game with supplementary features as well as the original arcade version.
"Asteroids" was included on "Atari Greatest Hits Volume 1" for the Nintendo DS. Craig Harris, writing for IGN, noted that the Nintendo DS's small screen cannot properly display details of games with vector graphics.
Quality Software's "Asteroids in Space" (1980) was one of the best selling games for the Apple II and voted one of the most popular software titles of 1978-80 by "Softalk" magazine.
In December 1981, "BYTE" reviewed eight "Asteroids" clones for home computers. Three other Apple II "Asteroids" clones were reviewed together in the 1982 "Creative Computing Software Buyers Guide": "The Asteroid Field", "Asteron", and "Apple-Oids". In the last of these, the asteroids are in the shape of apples. Two independent clones, "Asteroid" for the Apple II and "Fasteroids" for TRS-80, were renamed to "Planetoids" and sold by Adventure International. Other clones include Acornsoft's "Meteors", "Moons of Jupiter" for the VIC-20, and "MineStorm" for the Vectrex.
The Mattel Intellivision game "Meteor!", an "Asteroids" clone, was cancelled to avoid a lawsuit, and was reworked as "Astrosmash". The game borrows elements from "Asteroids" and "Space Invaders".
On February 6, 1982, Leo Daniels of Carolina Beach, North Carolina, set a world record score of 40,101,910 points. On November 13 of the same year, 15-year-old Scott Safran of Cherry Hill, New Jersey, set a new record at 41,336,440 points. In 1998, to congratulate Safran on his accomplishment, the Twin Galaxies Intergalactic Scoreboard searched for him for four years until 2002, when it was discovered that he had died in an accident in 1989. In a ceremony in Philadelphia on April 27, 2002, Walter Day of Twin Galaxies presented an award to the surviving members of Safran's family, commemorating his achievement. On April 5, 2010, John McAllister broke Safran's record with a high score of 41,838,740 in a 58-hour Internet livestream.
Some claim that the true world record for "Asteroids" was set in a laundromat in Hyde Park, New York, from June 30 to July 3, 1982, and that details of the score of over 48 million were published in the July 4 edition of the "Poughkeepsie Journal". | https://en.wikipedia.org/wiki?curid=785 |
Asparagales
Asparagales (asparagoid lilies) is an order of plants in modern classification systems such as the Angiosperm Phylogeny Group (APG) and the Angiosperm Phylogeny Web. The order takes its name from the type family Asparagaceae and is placed in the monocots amongst the lilioid monocots. The order has only recently been recognized in classification systems. It was first put forward by Huber in 1977 and later taken up in the Dahlgren system of 1985 and then the APG in 1998, 2003 and 2009. Before this, many of its families were assigned to the old order Liliales, a very large order containing almost all monocots with colorful tepals and lacking starch in their endosperm. DNA sequence analysis indicated that many of the taxa previously included in Liliales should actually be redistributed over three orders, Liliales, Asparagales, and Dioscoreales. The boundaries of the Asparagales and of its families have undergone a series of changes in recent years; future research may lead to further changes and ultimately greater stability. In the APG circumscription, Asparagales is the largest order of monocots with 14 families, 1,122 genera, and about 36,000 species.
The order is clearly circumscribed on the basis of molecular phylogenetics, but it is difficult to define morphologically since its members are structurally diverse. Most species of Asparagales are herbaceous perennials, although some are climbers and some are tree-like. The order also contains many geophytes (bulbs, corms, and various kinds of tuber). According to telomere sequence data, at least two evolutionary switch-points occurred within the order. The basal sequence is formed by TTTAGGG, as in the majority of higher plants. The basal motif was changed to the vertebrate-like TTAGGG, and finally the most divergent motif, CTCGGTTATGGG, appears in "Allium". One of the defining characteristics (synapomorphies) of the order is the presence of phytomelanin, a black pigment present in the seed coat, creating a dark crust. Phytomelanin is found in most families of the Asparagales (although not in Orchidaceae, thought to be a sister to the rest of the group).
The leaves of almost all species form a tight rosette, either at the base of the plant or at the end of the stem, but occasionally along the stem. The flowers are not particularly distinctive, being 'lily type', with six tepals and up to six stamina.
The order is thought to have first diverged from other related monocots some 120–130 million years ago (early in the Cretaceous period), although given the difficulty in classifying the families involved, estimates are likely to be uncertain.
From an economic point of view, the order Asparagales is second in importance within the monocots to the order Poales (which includes grasses and cereals). Species are used as food and flavourings (e.g. onion, garlic, leek, asparagus, vanilla), as cut flowers (e.g. freesia, gladiolus, iris, orchids), and as garden ornamentals (e.g. day lilies, lily of the valley, "Agapanthus").
Although most species in the order are herbaceous, some no more than 15 cm high, there are a number of climbers ("e.g.", some species of "Asparagus"), as well as several genera forming trees (e.g. "Agave", "Cordyline", "Yucca", "Dracaena", "Aloe"), which can exceed 10 m in height. Succulent genera occur in several families (e.g. "Aloe").
Almost all species have a tight cluster of leaves (a rosette), either at the base of the plant or at the end of a more-or-less woody stem, as with "Yucca". In some cases, the leaves are produced along the stem. The flowers are in the main not particularly distinctive, being of a general 'lily type', with six tepals, either free or fused from the base, and up to six stamens. They are frequently clustered at the end of the plant stem.
The Asparagales are generally distinguished from the Liliales by the lack of markings on the tepals, the presence of septal nectaries in the ovaries rather than at the bases of the tepals or stamen filaments, and the presence of secondary growth. They are generally geophytes, but with linear leaves and a lack of fine reticular venation.
The seeds characteristically have the external epidermis either obliterated (in most species bearing fleshy fruit), or if present, have a layer of black carbonaceous phytomelanin in species with dry fruits (nuts). The inner part of the seed coat is generally collapsed, in contrast to Liliales whose seeds have a well developed outer epidermis, lack phytomelanin, and usually display a cellular inner layer.
The orders which have been separated from the old Liliales are difficult to characterize. No single morphological character appears to be diagnostic of the order Asparagales.
As circumscribed within the Angiosperm Phylogeny Group system, Asparagales is the largest order within the monocotyledons, with 14 families, 1,122 genera and about 25,000–42,000 species, thus accounting for about 50% of all monocots and 10–15% of the flowering plants (angiosperms). The botanical authority for the name Asparagales belongs to Johann Heinrich Friedrich Link (1767–1851), who coined the word 'Asparaginae' in 1829 for a higher-order taxon that included "Asparagus", although Adanson and Jussieu had also recognized such a grouping earlier (see History). Earlier circumscriptions of Asparagales attributed the name to Bromhead (1838), who had been the first to use the term 'Asparagales'.
The type genus, "Asparagus", from which the name of the order is derived, was described by Carl Linnaeus in 1753, with ten species. He placed "Asparagus" within the "Hexandria Monogynia" (six stamens, one carpel) in his sexual classification in the "Species Plantarum". The majority of taxa now considered to constitute Asparagales have historically been placed within the very large and diverse family, Liliaceae. The family Liliaceae was first described by Michel Adanson in 1763, and in his taxonomic scheme he created eight sections within it, including the Asparagi with "Asparagus" and three other genera. The system of organising genera into families is generally credited to Antoine Laurent de Jussieu who formally described both the Liliaceae and the type family of Asparagales, the Asparagaceae, as Lilia and Asparagi, respectively, in 1789. Jussieu established the hierarchical system of taxonomy (phylogeny), placing "Asparagus" and related genera within a division of Monocotyledons, a class (III) of "Stamina Perigynia" and 'order' Asparagi, divided into three subfamilies. The use of the term "Ordo" (order) at that time was closer to what we now understand as Family, rather than Order. In creating his scheme he used a modified form of Linnaeus' sexual classification but using the respective topography of stamens to carpels rather than just their numbers. While De Jussieu's "Stamina Perigynia" also included a number of 'orders' that would eventually form families within the Asparagales such as the Asphodeli (Asphodelaceae), Narcissi (Amaryllidaceae) and Irides (Iridaceae), the remainder are now allocated to other orders. Jussieu's Asparagi soon came to be referred to as "Asparagacées" in the French literature (Latin: Asparagaceae). 
Meanwhile, the 'Narcissi' had been renamed as the 'Amaryllidées' (Amaryllideae) in 1805, by Jean Henri Jaume Saint-Hilaire, using "Amaryllis" as the type species rather than "Narcissus", and thus has the authority attribution for Amaryllidaceae. In 1810 Brown proposed that a subgroup of Liliaceae be distinguished on the basis of the position of the ovaries and be referred to as Amaryllideae and in 1813 de Candolle described Liliacées Juss. and Amaryllidées Brown as two quite separate families.
The literature on the organisation of genera into families and higher ranks became available in the English language with Samuel Frederick Gray's "A natural arrangement of British plants" (1821). Gray used a combination of Linnaeus' sexual classification and Jussieu's natural classification to group together a number of families having in common six equal stamens, a single style and a perianth that was simple and petaloid, but did not use formal names for these higher ranks. Within the grouping he separated families by the characteristics of their fruit and seed. He treated groups of genera with these characteristics as separate families, such as Amaryllideae, Liliaceae, Asphodeleae and Asparageae.
The circumscription of Asparagales has been a source of difficulty for many botanists from the time of John Lindley (1846), the other important British taxonomist of the early nineteenth century. In his first taxonomic work, "An Introduction to the Natural System of Botany" (1830) he partly followed Jussieu by describing a subclass he called Endogenae, or Monocotyledonous Plants (preserving de Candolle's "Endogenæ phanerogamæ") divided into two tribes, the Petaloidea and Glumaceae. He divided the former, often referred to as petaloid monocots, into 32 orders, including the Liliaceae (defined narrowly), but also most of the families considered to make up the Asparagales today, including the Amaryllideae.
By 1846, in his final scheme, Lindley had greatly expanded and refined the treatment of the monocots, introducing both an intermediate ranking (Alliances) and tribes within orders ("i.e." families). Lindley placed the Liliaceae within the Liliales, but saw it as a paraphyletic ("catch-all") family, being all Liliales not included in the other orders, and hoped that the future would reveal some characteristic that would group them better. The order Liliales was very large and had come to be used to include almost all monocotyledons with colourful tepals and without starch in their endosperm (the lilioid monocots). The Liliales was difficult to divide into families because morphological characters were not present in patterns that clearly demarcated groups. This kept the Liliaceae separate from the Amaryllidaceae (Narcissales). Of these, Liliaceae was divided into eleven tribes (with 133 genera) and Amaryllidaceae into four tribes (with 68 genera), yet both contained many genera that would eventually segregate to each other's contemporary orders (Liliales and Asparagales respectively). The Liliaceae would be reduced to a small 'core' represented by the tribe Tulipae, while large groups such as Scilleae and Asparagae would become part of Asparagales, either as part of the Amaryllidaceae or as separate families. Of the Amaryllidaceae, the Agaveae would become part of Asparagaceae, but the Alstroemeriae would become a family within the Liliales.
The number of known genera (and species) continued to grow and by the time of the next major British classification, that of Bentham and Hooker in 1883 (published in Latin) several of Lindley's other families had been absorbed into the Liliaceae. They used the term 'series' to indicate suprafamilial rank, with seven series of monocotyledons (including Glumaceae), but did not use Lindley's terms for these. However they did place the Liliaceous and Amaryllidaceous genera into separate series. The Liliaceae were placed in series Coronariae, while the Amaryllideae were placed in series Epigynae. The Liliaceae now consisted of twenty tribes (including Tulipeae, Scilleae and Asparageae), and the Amaryllideae of five (including Agaveae and Alstroemerieae). An important addition to the treatment of the Liliaceae was the recognition of the Allieae as a distinct tribe that would eventually find its way to the Asparagales as the subfamily Allioideae of the Amaryllidaceae.
The appearance of Charles Darwin's "On the Origin of Species" in 1859 changed the way that taxonomists considered plant classification, incorporating evolutionary information into their schemata. The Darwinian approach led to the concept of phylogeny (tree-like structure) in assembling classification systems, starting with Eichler. Eichler, having established a hierarchical system in which the flowering plants (angiosperms) were divided into monocotyledons and dicotyledons, further divided the former into seven orders. Within the Liliiflorae were seven families, including Liliaceae and Amaryllidaceae. Liliaceae included "Allium" and "Ornithogalum" (modern Allioideae) and "Asparagus".
Engler, in his system, developed Eichler's ideas into a much more elaborate scheme, which he treated in a number of works including "Die Natürlichen Pflanzenfamilien" (Engler and Prantl 1888) and "Syllabus der Pflanzenfamilien" (1892–1924). In his treatment of Liliiflorae, the Liliineae were a suborder which included both families Liliaceae and Amaryllidaceae. The Liliaceae had eight subfamilies and the Amaryllidaceae four. In this rearrangement of Liliaceae, with fewer subdivisions, the core Liliales were represented as subfamily Lilioideae (with Tulipae and Scilleae as tribes), the Asparagae were represented as Asparagoideae, and the Allioideae was preserved, representing the alliaceous genera. Allieae, Agapantheae and Gilliesieae were the three tribes within this subfamily. In the Amaryllidaceae, there was little change from Bentham and Hooker. A similar approach was adopted by Wettstein.
In the twentieth century the Wettstein system (1901–1935) placed many of the taxa in an order called 'Liliiflorae'. Next Johannes Paulus Lotsy (1911) proposed dividing the Liliiflorae into a number of smaller families including Asparagaceae. Then Herbert Huber (1969, 1977), following Lotsy's example, proposed that the Liliiflorae be split into four groups including the 'Asparagoid' Liliiflorae.
The widely used Cronquist system (1968–1988) used the very broadly defined order Liliales.
These various proposals to separate small groups of genera into more homogeneous families made little impact until that of Dahlgren (1985), which incorporated new information, including synapomorphies. Dahlgren developed Huber's ideas further and popularised them, with a major deconstruction of existing families into smaller units, creating a new order and calling it Asparagales. This was one of five orders within the superorder Liliiflorae. Where Cronquist saw one family, Dahlgren saw forty, distributed over three orders (predominantly Liliales and Asparagales).
Over the 1980s, in the context of a more general review of the classification of angiosperms, the Liliaceae were subjected to more intense scrutiny. By the end of that decade, the Royal Botanic Gardens at Kew, the British Museum of Natural History and the Edinburgh Botanical Gardens formed a committee to examine the possibility of separating the family at least for the organization of their herbaria. That committee finally recommended that 24 new families be created in the place of the original broad Liliaceae, largely by elevating subfamilies to the rank of separate families.
The order Asparagales as currently circumscribed has only recently been recognized in classification systems, through the advent of phylogenetics. The 1990s saw considerable progress in plant phylogeny and phylogenetic theory, enabling a phylogenetic tree to be constructed for all of the flowering plants. The establishment of major new clades necessitated a departure from the older but widely used classifications, such as Cronquist and Thorne, based largely on morphology rather than genetic data. This complicated discussion about plant evolution and necessitated a major restructuring. "rbc"L gene sequencing and cladistic analysis of monocots had redefined the Liliales in 1995, drawing from four morphological orders "sensu" Dahlgren. The largest clade represented the Liliaceae, all previously included in Liliales, including both the Calochortaceae and Liliaceae "sensu" Tamura. This redefined family, which became referred to as the core Liliales, corresponded to the emerging circumscription of the Angiosperm Phylogeny Group (1998).
The 2009 revision of the Angiosperm Phylogeny Group system, APG III, places the order in the clade monocots.
From the Dahlgren system of 1985 onwards, studies based mainly on morphology had identified the Asparagales as a distinct group, but had also included groups now located in Liliales, Pandanales and Zingiberales. Research in the 21st century has supported the monophyly of Asparagales, based on morphology, 18S rDNA, and other DNA sequences, although some phylogenetic reconstructions based on molecular data have suggested that Asparagales may be paraphyletic, with Orchidaceae separated from the rest. Within the monocots, Asparagales is the sister group of the commelinid clade.
This cladogram shows the placement of Asparagales within the orders of Lilianae "sensu" Chase & Reveal (monocots) based on molecular phylogenetic evidence. The lilioid monocot orders are bracketed, namely Petrosaviales, Dioscoreales, Pandanales, Liliales and Asparagales. These constitute a paraphyletic assemblage, that is groups with a common ancestor that do not include all direct descendants (in this case commelinids as the sister group to Asparagales); to form a clade, all the groups joined by thick lines would need to be included. While Acorales and Alismatales have been collectively referred to as "alismatid monocots" (basal or early branching monocots), the remaining clades (lilioid and commelinid monocots) have been referred to as the "core monocots". The relationship between the orders (with the exception of the two sister orders) is pectinate, that is diverging in succession from the line that leads to the commelinids. Numbers indicate crown group (most recent common ancestor of the sampled species of the clade of interest) divergence times in mya (million years ago).
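The pectinate topology described above can be written out as a nested tree. The following is only a sketch based on the order sequence given in the text (names only, divergence times omitted), using Newick notation with the two sister pairs, Dioscoreales + Pandanales and Asparagales + commelinids, and the remaining orders diverging in succession along the spine:

```python
import re

# Pectinate monocot topology (Lilianae sensu Chase & Reveal), as a Newick
# string: each non-sister order branches off in succession from the line
# leading to the commelinids.
NEWICK = ("(Acorales,(Alismatales,(Petrosaviales,((Dioscoreales,Pandanales),"
          "(Liliales,(Asparagales,commelinids))))));")

def leaf_names(tree: str) -> list:
    """Extract the names from a Newick string without branch lengths."""
    return re.findall(r"[A-Za-z]+", tree)

# The five lilioid monocot orders named in the text.
LILIOID = {"Petrosaviales", "Dioscoreales", "Pandanales", "Liliales",
           "Asparagales"}

# Confirm that all five lilioid orders appear in the topology.
print(sorted(set(leaf_names(NEWICK)) & LILIOID))
```

Reading the string from the outside in reproduces the divergence sequence: Acorales first, then Alismatales (together the alismatid monocots), then each lilioid order in turn, with Asparagales emerging as sister to the commelinids.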
A phylogenetic tree for the Asparagales, generally to family level, but including groups which were recently and widely treated as families but which are now reduced to subfamily rank, is shown below.
The tree shown above can be divided into a basal paraphyletic group, the 'lower Asparagales (asparagoids)', from Orchidaceae to Asphodelaceae, and a well-supported monophyletic group of 'core Asparagales' (higher asparagoids), comprising the two largest families, Amaryllidaceae "sensu lato" and Asparagaceae "sensu lato".
Two differences between these two groups (although with exceptions) are: the mode of microsporogenesis and the position of the ovary. The 'lower Asparagales' typically have simultaneous microsporogenesis (i.e. cell walls develop only after both meiotic divisions), which appears to be an apomorphy within the monocots, whereas the 'core Asparagales' have reverted to successive microsporogenesis (i.e. cell walls develop after each division). The 'lower Asparagales' typically have an inferior ovary, whereas the 'core Asparagales' have reverted to a superior ovary. A 2002 morphological study by Rudall treated possessing an inferior ovary as a synapomorphy of the Asparagales, stating that reversions to a superior ovary in the 'core Asparagales' could be associated with the presence of nectaries below the ovaries. However, Stevens notes that superior ovaries are distributed among the 'lower Asparagales' in such a way that it is not clear where to place the evolution of different ovary morphologies. The position of the ovary seems a much more flexible character (here and in other angiosperms) than previously thought.
The APG III system when it was published in 2009, greatly expanded the families Xanthorrhoeaceae, Amaryllidaceae, and Asparagaceae. Thirteen of the families of the earlier APG II system were thereby reduced to subfamilies within these three families. The expanded Xanthorrhoeaceae is now called "Asphodelaceae". The APG II families (left) and their equivalent APG III subfamilies (right) are as follows:
Orchidaceae is the largest family of all angiosperms and hence by far the largest in the order. The Dahlgren system recognized three families of orchids, but DNA sequence analysis later showed that these families are polyphyletic and so should be combined. Several studies suggest (with high bootstrap support) that Orchidaceae is the sister of the rest of the Asparagales. Other studies have placed the orchids differently in the phylogenetic tree, generally among the Boryaceae-Hypoxidaceae clade. The position of Orchidaceae shown above seems the best current hypothesis, but cannot be taken as confirmed.
Orchids have simultaneous microsporogenesis and inferior ovaries, two characters that are typical of the 'lower Asparagales'. However, their nectaries are rarely in the septa of the ovaries, and most orchids have dust-like seeds, atypical of the rest of the order. (Some members of Vanilloideae and Cypripedioideae have crustose seeds, probably associated with dispersal by birds and mammals that are attracted by fermenting fleshy fruit releasing fragrant compounds, e.g. vanilla.)
In terms of the number of species, Orchidaceae diversification is remarkable. However, although the other Asparagales may be less rich in species, they are more variable morphologically, including tree-like forms.
The four families excluding Boryaceae form a well-supported clade in studies based on DNA sequence analysis. All four contain relatively few species, and it has been suggested that they be combined into one family under the name Hypoxidaceae "sensu lato". The relationship between Boryaceae (which includes only two genera, "Borya" and "Alania"), and other Asparagales has remained unclear for a long time. The Boryaceae are mycorrhizal, but not in the same way as orchids. Morphological studies have suggested a close relationship between Boryaceae and Blandfordiaceae. There is relatively low support for the position of Boryaceae in the tree shown above.
The relationship shown between Ixioliriaceae and Tecophilaeaceae is still unclear. Some studies have supported a clade of these two families, others have not. The position of Doryanthaceae has also varied, with support for the position shown above, but also support for other positions.
The clade from Iridaceae upwards appears to have stronger support. All have some genetic characteristics in common, having lost Arabidopsis-type telomeres. Iridaceae is distinctive among the Asparagales in the unique structure of the inflorescence (a rhipidium), the combination of an inferior ovary and three stamens, and the common occurrence of unifacial leaves whereas bifacial leaves are the norm in other Asparagales.
Members of the clade from Iridaceae upwards have infra-locular septal nectaries, which Rudall interpreted as a driver towards secondarily superior ovaries.
The next node in the tree (Xanthorrhoeaceae "sensu lato" + the 'core Asparagales') has strong support. 'Anomalous' secondary thickening occurs among this clade, e.g. in "Xanthorrhoea" (family Asphodelaceae) and "Dracaena" (family Asparagaceae "sensu lato"), with species reaching tree-like proportions.
The 'core Asparagales', comprising Amaryllidaceae "sensu lato" and Asparagaceae "sensu lato", are a strongly supported clade, as are clades for each of the families. Relationships within these broadly defined families appear less clear, particularly within the Asparagaceae "sensu lato". Stevens notes that most of its subfamilies are difficult to recognize, and that significantly different divisions have been used in the past, so that the use of a broadly defined family to refer to the entire clade is justified. Thus the relationships among subfamilies shown above, based on APWeb, are somewhat uncertain.
Several studies have attempted to date the evolution of the Asparagales, based on phylogenetic evidence. Earlier studies generally give younger dates than more recent studies, which have been preferred in the table below.
A 2009 study suggests that the Asparagales have the highest diversification rate in the monocots, about the same as the order Poales, although in both orders the rate is little over half that of the eudicot order Lamiales, the clade with the highest rate.
The taxonomic diversity of the monocotyledons is described in detail by Kubitzki. Up-to-date information on the Asparagales can be found on the Angiosperm Phylogeny Website.
The APG III system's family circumscriptions are being used as the basis of the Kew-hosted "World Checklist of Selected Plant Families". With this circumscription, the order consists of 14 families (Dahlgren had 31) with approximately 1,120 genera and 26,000 species.
Order Asparagales Link
The earlier 2003 version, APG II, allowed 'bracketed' families, i.e. families which could either be segregated from more comprehensive families or could be included in them. These are the families given under "including" in the list above. APG III does not allow bracketed families, requiring the use of the more comprehensive family; otherwise the circumscription of the Asparagales is unchanged. A separate paper accompanying the publication of the 2009 APG III system provided subfamilies to accommodate the families which were discontinued. The first APG system of 1998 contained some extra families, included in square brackets in the list above.
Two older systems which use the order Asparagales are the Dahlgren system and the Kubitzki system. The families included in the circumscriptions of the order in these two systems are shown in the first and second columns of the table below. The equivalent family in the modern APG III system (see below) is shown in the third column. Note that although these systems may use the same name for a family, the genera which it includes may be different, so the equivalence between systems is only approximate in some cases.
The Asparagales include many important crop plants and ornamental plants. Crops include Allium, Asparagus and Vanilla, while ornamentals include irises, hyacinths and orchids.
Alismatales
The Alismatales (alismatids) are an order of flowering plants including about 4500 species. Plants assigned to this order are mostly tropical or aquatic. Some grow in fresh water, some in marine habitats.
The Alismatales comprise herbaceous flowering plants of aquatic and marshy habitats, and the only monocots known to have green embryos other than the Amaryllidaceae. They also include the only marine angiosperms growing completely submerged, the seagrasses. The flowers are usually arranged in inflorescences, and the mature seeds lack endosperm.
Both marine and freshwater forms include those with staminate flowers that detach from the parent plant and float to the surface, where they become pollinated. In others, pollination occurs underwater, where pollen may form elongated strands, increasing the chance of success. Most aquatic species have a totally submerged juvenile phase, and flowers are either floating or emergent. Vegetation may be totally submersed, have floating leaves, or protrude from the water. Collectively, they are commonly known as "water plantain".
The Alismatales contain about 165 genera in 13 families, with a cosmopolitan distribution. Phylogenetically, they are basal monocots, diverging early in evolution relative to the lilioid and commelinid monocot lineages. Together with the Acorales, the Alismatales are referred to informally as the alismatid monocots.
The Cronquist system (1981) places the Alismatales in subclass Alismatidae, class Liliopsida [= monocotyledons] and includes only three families as shown:
Cronquist's subclass Alismatidae conformed fairly closely to the order Alismatales as defined by APG, minus the Araceae.
The Dahlgren system places the Alismatales in the superorder Alismatanae in the subclass Liliidae [= monocotyledons] in the class Magnoliopsida [= angiosperms] with the following families included:
In Takhtajan's classification (1997), the order Alismatales contains only the Alismataceae and Limnocharitaceae, making it equivalent to the Alismataceae as revised in APG III. Other families included in the Alismatales as currently defined are here distributed among 10 additional orders, all of which are assigned, with the following exceptions, to the subclass Alismatidae. Araceae in Takhtajan 1997 is assigned to the Arales and placed in the subclass Aridae; Tofieldiaceae to the Melanthiales, placed in the Liliidae.
The Angiosperm Phylogeny Group system (APG) of 1998 and APG II (2003) assigned the Alismatales to the monocots, which may be thought of as an unranked clade containing the families listed below. The biggest departure from earlier systems (see below) is the inclusion of family Araceae. By its inclusion, the order has grown enormously in number of species. The family Araceae alone accounts for about a hundred genera, totaling over two thousand species. The rest of the families together contain only about five hundred species, many of which are in very small families.
The APG III system (2009) differs only in that the Limnocharitaceae are combined with the Alismataceae; it was also suggested that the genus "Maundia" (of the Juncaginaceae) could be separated into a monogeneric family, the Maundiaceae, but the authors noted that more study was necessary before the Maundiaceae could be recognized.
In APG IV (2016), it was decided that evidence was sufficient to elevate "Maundia" to family level as the monogeneric Maundiaceae. The authors considered including a number of the smaller orders within the Juncaginaceae, but an online survey of botanists and other users found little support for this "lumping" approach. Consequently, the family structure for APG IV is:
Cladogram showing the orders of monocots (Lilianae "sensu" Chase & Reveal) based on molecular phylogenetic evidence:
Asterales
Asterales is an order of dicotyledonous flowering plants that includes the large family Asteraceae (or Compositae) known for composite flowers made of florets, and ten families related to the Asteraceae.
The order is cosmopolitan (plants found throughout most of the world, including desert and frigid zones), and includes mostly herbaceous species, although a small number of trees (such as the giant "Lobelia" and the giant "Senecio") and shrubs are also present.
Asterales appear to have evolved from one common ancestor. Asterales share characteristics on morphological and biochemical levels. Synapomorphies (characters shared by two or more groups through evolutionary development) include the presence in the plants of the oligosaccharide inulin, a nutrient storage molecule used instead of starch, and unique stamen morphology. The stamens are usually found around the style, either aggregated densely or fused into a tube, probably an adaptation in association with the plunger (brush, or secondary) pollination that is common among the families of the order, wherein pollen is collected and stored on the length of the pistil.
The name and order Asterales is botanically venerable, dating back to at least 1926 in the Hutchinson system of plant taxonomy when it contained only five families, of which only two are retained in the APG III classification. Under the Cronquist system of taxonomic classification of flowering plants, Asteraceae was the only family in the group, but newer systems (such as APG II and APG III) have expanded it to 11. In the classification system of Dahlgren the Asterales were in the superorder Asteriflorae (also called Asteranae).
The order Asterales currently includes 11 families, the largest of which are the Asteraceae, with about 25,000 species, and the Campanulaceae ("bellflowers"), with about 2,000 species. The remaining families together account for fewer than 1,500 species. The two large families are cosmopolitan, with many of their species found in the Northern Hemisphere, and the smaller families are usually confined to Australia and the adjacent areas, or sometimes South America.
Only the Asteraceae have composite flower heads; the other families do not, but share other characteristics such as storage of inulin that define the 11 families as more closely related to each other than to other plant families or orders such as the rosids.
The phylogenetic tree according to APG III for the Campanulid clade is as below.
The core Asterales are Stylidiaceae (six genera), APA clade (Alseuosmiaceae, Phellinaceae and Argophyllaceae, together 7 genera), MGCA clade (Menyanthaceae, Goodeniaceae, Calyceraceae, in total twenty genera), and Asteraceae (about sixteen hundred genera). Other Asterales are Rousseaceae (four genera), Campanulaceae (eighty four genera) and Pentaphragmataceae (one genus).
All Asterales families are represented in the Southern Hemisphere; however, Asteraceae and Campanulaceae are cosmopolitan and Menyanthaceae nearly so.
Although most extant species of Asteraceae are herbaceous, the examination of the basal members in the family suggests that the common ancestor of the family was an arborescent plant, a tree or shrub, perhaps adapted to dry conditions, radiating from South America. Less can be said about the Asterales themselves with certainty, although since several families in Asterales contain trees, the ancestral member is most likely to have been a tree or shrub.
Because all clades are represented in the southern hemisphere but many not in the northern hemisphere, it is natural to conjecture that there is a common southern origin to them. Asterales are angiosperms, flowering plants that appeared about 140 million years ago. The Asterales order probably originated in the Cretaceous (145 – 66 Mya) on the supercontinent Gondwana which broke up from 184 – 80 Mya, forming the area that is now Australia, South America, Africa, India and Antarctica.
Asterales contain about 14% of eudicot diversity. From an analysis of relationships and diversities within the Asterales and with their superorders, estimates of the age of the beginning of the Asterales have been made, which range from 116 Mya to 82 Mya. However, few fossils have been found; those of the Menyanthaceae-Asteraceae clade date to the Oligocene, about 29 Mya.
Fossil evidence of the Asterales is rare and belongs to rather recent epochs, so the precise estimation of the order's age is quite difficult. An Oligocene (34 – 23 Mya) pollen is known for Asteraceae and Goodeniaceae, and seeds from Oligocene and Miocene (23 – 5.3 Mya) are known for Menyanthaceae and Campanulaceae respectively.
The Asterales, by dint of being a super-set of the family Asteraceae, include some species grown for food, including the sunflower ("Helianthus annuus"), lettuce ("Lactuca sativa") and chicory ("Cichorium"). Many are also used as spices and traditional medicines.
Asterales are common plants and have many known uses. For example, pyrethrum (derived from Old World members of the genus "Chrysanthemum") is a natural insecticide with minimal environmental impact. Wormwood, derived from a genus that includes the sagebrush, is used as a source of flavoring for absinthe, a bitter classical liquor of European origin.
Asteroid
Asteroids are minor planets, especially of the inner Solar System. Larger asteroids have also been called planetoids. These terms have historically been applied to any astronomical object orbiting the Sun that did not resolve into a disc in a telescope and was not observed to have characteristics of an active comet such as a tail. As minor planets in the outer Solar System were discovered that were found to have volatile-rich surfaces similar to comets, these came to be distinguished from the objects found in the main asteroid belt. In this article, the term "asteroid" refers to the minor planets of the inner Solar System, including those co-orbital with Jupiter.
Millions of asteroids exist, many the shattered remnants of planetesimals, bodies within the young Sun's solar nebula that never grew large enough to become planets. The vast majority of known asteroids orbit within the main asteroid belt located between the orbits of Mars and Jupiter, or are co-orbital with Jupiter (the Jupiter trojans). However, other orbital families exist with significant populations, including the near-Earth objects. Individual asteroids are classified by their characteristic spectra, with the majority falling into three main groups: C-type, M-type, and S-type. These were named after and are generally identified with carbon-rich, metallic, and silicate (stony) compositions, respectively. The sizes of asteroids vary greatly; the largest, Ceres, is about 940 km across and massive enough to qualify as a dwarf planet.
Asteroids are somewhat arbitrarily differentiated from comets and meteoroids. In the case of comets, the difference is one of composition: while asteroids are mainly composed of mineral and rock, comets are primarily composed of dust and ice. Furthermore, asteroids formed closer to the sun, preventing the development of cometary ice. The difference between asteroids and meteoroids is mainly one of size: meteoroids have a diameter of one meter or less, whereas asteroids have a diameter of greater than one meter. Finally, meteoroids can be composed of either cometary or asteroidal materials.
Only one asteroid, 4 Vesta, which has a relatively reflective surface, is normally visible to the naked eye, and this only in very dark skies when it is favorably positioned. Rarely, small asteroids passing close to Earth may be visible to the naked eye for a short time. The Minor Planet Center holds data on 930,000 minor planets in the inner and outer Solar System, of which about 545,000 have enough information to be given numbered designations.
The United Nations declared 30 June as International Asteroid Day to educate the public about asteroids. The date of International Asteroid Day commemorates the anniversary of the Tunguska asteroid impact over Siberia, Russian Federation, on 30 June 1908.
In April 2018, the B612 Foundation reported "It is 100 percent certain we'll be hit [by a devastating asteroid], but we're not 100 percent sure when." Also in 2018, physicist Stephen Hawking,
in his final book "Brief Answers to the Big Questions", considered an asteroid collision to be the biggest threat to the planet. In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and has developed and released the "National Near-Earth Object Preparedness Strategy Action Plan" to better prepare. According to expert testimony in the United States Congress in 2013, NASA would require at least five years of preparation before a mission to intercept an asteroid could be launched.
The first asteroid to be discovered, Ceres, was originally considered to be a new planet. This was followed by the discovery of other similar bodies, which, with the equipment of the time, appeared to be points of light, like stars, showing little or no planetary disc, though readily distinguishable from stars due to their apparent motions. This prompted the astronomer Sir William Herschel to propose the term "asteroid", coined in Greek as ἀστεροειδής, or "asteroeidēs", meaning 'star-like, star-shaped', and derived from the Ancient Greek "astēr" 'star, planet'. In the early second half of the nineteenth century, the terms "asteroid" and "planet" (not always qualified as "minor") were still used interchangeably.
Overview of discovery timeline:
Asteroid discovery methods have dramatically improved over the past two centuries.
In the last years of the 18th century, Baron Franz Xaver von Zach organized a group of 24 astronomers to search the sky for the missing planet predicted at about 2.8 AU from the Sun by the Titius-Bode law, partly because of the discovery, by Sir William Herschel in 1781, of the planet Uranus at the distance predicted by the law. This task required that hand-drawn sky charts be prepared for all stars in the zodiacal band down to an agreed-upon limit of faintness. On subsequent nights, the sky would be charted again and any moving object would, hopefully, be spotted. The expected motion of the missing planet was about 30 seconds of arc per hour, readily discernible by observers.
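The 2.8 AU prediction that motivated von Zach's search falls out of the Titius-Bode law directly. A minimal sketch, using the standard form of the law (distances in AU); the function name is my own:

```python
def titius_bode(n: int) -> float:
    """Titius-Bode predicted semi-major axis in AU.

    n = 0 gives Venus, 1 Earth, 2 Mars, 3 the 'missing planet', 4 Jupiter.
    """
    return 0.4 + 0.3 * 2 ** n

# n = 3 is the gap between Mars and Jupiter where the group searched
print(titius_bode(3))  # about 2.8 AU
print(titius_bode(4))  # about 5.2 AU, close to Jupiter's actual distance
```

Ceres, found at roughly 2.77 AU, landed almost exactly in this predicted slot, which is part of why it was initially hailed as the missing planet.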
The first object, Ceres, was not discovered by a member of the group, but rather by accident in 1801 by Giuseppe Piazzi, director of the observatory of Palermo in Sicily. He discovered a new star-like object in Taurus and followed the displacement of this object during several nights. Later that year, Carl Friedrich Gauss used these observations to calculate the orbit of this unknown object, which was found to be between the planets Mars and Jupiter. Piazzi named it after Ceres, the Roman goddess of agriculture.
Three other asteroids (2 Pallas, 3 Juno, and 4 Vesta) were discovered over the next few years, with Vesta found in 1807. After eight more years of fruitless searches, most astronomers assumed that there were no more and abandoned any further searches.
However, Karl Ludwig Hencke persisted, and began searching for more asteroids in 1830. Fifteen years later, he found 5 Astraea, the first new asteroid in 38 years. He also found 6 Hebe less than two years later. After this, other astronomers joined in the search and at least one new asteroid was discovered every year after that (except the wartime year 1945). Notable asteroid hunters of this early era were J. R. Hind, Annibale de Gasparis, Robert Luther, H. M. S. Goldschmidt, Jean Chacornac, James Ferguson, Norman Robert Pogson, E. W. Tempel, J. C. Watson, C. H. F. Peters, A. Borrelly, J. Palisa, the Henry brothers and Auguste Charlois.
In 1891, Max Wolf pioneered the use of astrophotography to detect asteroids, which appeared as short streaks on long-exposure photographic plates. This dramatically increased the rate of detection compared with earlier visual methods: Wolf alone discovered 248 asteroids, beginning with 323 Brucia, whereas only slightly more than 300 had been discovered up to that point. It was known that there were many more, but most astronomers did not bother with them, calling them "vermin of the skies", a phrase variously attributed to Eduard Suess and Edmund Weiss. Even a century later, only a few thousand asteroids were identified, numbered and named.
Until 1998, asteroids were discovered by a four-step process. First, a region of the sky was photographed by a wide-field telescope, or astrograph. Pairs of photographs were taken, typically one hour apart. Multiple pairs could be taken over a series of days. Second, the two films or plates of the same region were viewed under a stereoscope. Any body in orbit around the Sun would move slightly between the pair of films. Under the stereoscope, the image of the body would seem to float slightly above the background of stars. Third, once a moving body was identified, its location would be measured precisely using a digitizing microscope. The location would be measured relative to known star locations.
These first three steps do not constitute asteroid discovery: the observer has only found an apparition, which gets a provisional designation, made up of the year of discovery, a letter representing the half-month of discovery, and finally a letter and a number indicating the discovery's sequential number.
The last step of discovery is to send the locations and time of observations to the Minor Planet Center, where computer programs determine whether an apparition ties together earlier apparitions into a single orbit. If so, the object receives a catalogue number and the observer of the first apparition with a calculated orbit is declared the discoverer, and granted the honor of naming the object subject to the approval of the International Astronomical Union.
There is increasing interest in identifying asteroids whose orbits cross Earth's, and that could, given enough time, collide with Earth "(see Earth-crosser asteroids)". The three most important groups of near-Earth asteroids are the Apollos, Amors, and Atens. Various asteroid deflection strategies have been proposed, as early as the 1960s.
The near-Earth asteroid 433 Eros had been discovered as long ago as 1898, and the 1930s brought a flurry of similar objects. In order of discovery, these were: 1221 Amor, 1862 Apollo, 2101 Adonis, and finally 69230 Hermes, which approached within 0.005 AU of Earth in 1937. Astronomers began to realize the possibilities of Earth impact.
Two events in later decades increased the alarm: the increasing acceptance of the Alvarez hypothesis that an impact event resulted in the Cretaceous–Paleogene extinction, and the 1994 observation of Comet Shoemaker-Levy 9 crashing into Jupiter. The U.S. military also declassified the information that its military satellites, built to detect nuclear explosions, had detected hundreds of upper-atmosphere impacts by objects ranging from one to ten meters across.
All these considerations helped spur the launch of highly efficient surveys that consist of charge-coupled device (CCD) cameras and computers directly connected to telescopes. It is estimated that 89% to 96% of near-Earth asteroids one kilometer or larger in diameter have been discovered. A list of teams using such systems includes:
The LINEAR system alone has discovered 147,132 asteroids. Among all the surveys, 19,266 near-Earth asteroids have been discovered, including almost 900 larger than one kilometer in diameter.
Traditionally, small bodies orbiting the Sun were classified as comets, asteroids, or meteoroids, with anything smaller than one meter across being called a meteoroid. Beech and Steel's 1995 paper proposed a meteoroid definition including size limits. The term "asteroid", from the Greek word for "star-like", never had a formal definition, with the broader term minor planet being preferred by the International Astronomical Union.
However, following the discovery of asteroids below ten meters in size, Rubin and Grossman's 2010 paper revised the previous definition of meteoroid to objects between 10 µm and 1 meter in size in order to maintain the distinction between asteroids and meteoroids. The smallest asteroids discovered (based on absolute magnitude "H"), with "H" = 33.2 and "H" = 32.1, both have an estimated size of about 1 meter.
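The "about 1 meter" figure follows from the standard conversion between absolute magnitude and diameter, D(km) = 1329 / sqrt(p) × 10^(−H/5), where p is the geometric albedo. A sketch; the albedo values below are assumptions, since the true albedos of such tiny bodies are unmeasured:

```python
import math

def diameter_km(h_mag: float, albedo: float) -> float:
    """Diameter in km from absolute magnitude H and geometric albedo p,
    using the standard relation D = 1329 / sqrt(p) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(albedo) * 10 ** (-h_mag / 5.0)

# Assumed albedos spanning dark, moderate, and bright surfaces
for albedo in (0.05, 0.15, 0.25):
    d_m = diameter_km(33.2, albedo) * 1000  # convert km to meters
    print(f"H = 33.2, albedo {albedo}: roughly {d_m:.1f} m across")
```

For any plausible albedo the H = 33.2 object comes out near one meter, which is why size estimates from H alone are always quoted with this caveat.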
In 2006, the term "small Solar System body" was also introduced to cover both most minor planets and comets. Other languages prefer "planetoid" (Greek for "planet-like"), and this term is occasionally used in English especially for larger minor planets such as the dwarf planets as well as an alternative for asteroids since they are not star-like. The word "planetesimal" has a similar meaning, but refers specifically to the small building blocks of the planets that existed when the Solar System was forming. The term "planetule" was coined by the geologist William Daniel Conybeare to describe minor planets, but is not in common use. The three largest objects in the asteroid belt, Ceres, Pallas, and Vesta, grew to the stage of protoplanets. Ceres is a dwarf planet, the only one in the inner Solar System.
When found, asteroids were seen as a class of objects distinct from comets, and there was no unified term for the two until "small Solar System body" was coined in 2006. The main difference between an asteroid and a comet is that a comet shows a coma due to sublimation of near surface ices by solar radiation. A few objects have ended up being dual-listed because they were first classified as minor planets but later showed evidence of cometary activity. Conversely, some (perhaps all) comets are eventually depleted of their surface volatile ices and become asteroid-like. A further distinction is that comets typically have more eccentric orbits than most asteroids; most "asteroids" with notably eccentric orbits are probably dormant or extinct comets.
For almost two centuries, from the discovery of Ceres in 1801 until the discovery of the first centaur, Chiron in 1977, all known asteroids spent most of their time at or within the orbit of Jupiter, though a few such as Hidalgo ventured far beyond Jupiter for part of their orbit. Those located between the orbits of Mars and Jupiter were known for many years simply as The Asteroids. When astronomers started finding more small bodies that permanently resided further out than Jupiter, now called centaurs, they numbered them among the traditional asteroids, though there was debate over whether they should be considered asteroids or as a new type of object. Then, when the first trans-Neptunian object (other than Pluto), Albion, was discovered in 1992, and especially when large numbers of similar objects started turning up, new terms were invented to sidestep the issue: Kuiper-belt object, trans-Neptunian object, scattered-disc object, and so on. These inhabit the cold outer reaches of the Solar System where ices remain solid and comet-like bodies are not expected to exhibit much cometary activity; if centaurs or trans-Neptunian objects were to venture close to the Sun, their volatile ices would sublimate, and traditional approaches would classify them as comets and not asteroids.
The innermost of these are the Kuiper-belt objects, called "objects" partly to avoid the need to classify them as asteroids or comets. They are thought to be predominantly comet-like in composition, though some may be more akin to asteroids. Furthermore, most do not have the highly eccentric orbits associated with comets, and the ones so far discovered are larger than traditional comet nuclei. (The much more distant Oort cloud is hypothesized to be the main reservoir of dormant comets.) Other recent observations, such as the analysis of the cometary dust collected by the "Stardust" probe, are increasingly blurring the distinction between comets and asteroids, suggesting "a continuum between asteroids and comets" rather than a sharp dividing line.
The minor planets beyond Jupiter's orbit are sometimes also called "asteroids", especially in popular presentations. However, it is becoming increasingly common for the term "asteroid" to be restricted to minor planets of the inner Solar System. Therefore, this article will restrict itself for the most part to the classical asteroids: objects of the asteroid belt, Jupiter trojans, and near-Earth objects.
When the IAU introduced the class small Solar System bodies in 2006 to include most objects previously classified as minor planets and comets, they created the class of dwarf planets for the largest minor planets – those that have enough mass to have become ellipsoidal under their own gravity. According to the IAU, "the term 'minor planet' may still be used, but generally the term 'Small Solar System Body' will be preferred." Currently only the largest object in the asteroid belt, Ceres, at about 940 km across, has been placed in the dwarf planet category.
It is thought that planetesimals in the asteroid belt evolved much like the rest of the solar nebula until Jupiter neared its current mass, at which point excitation from orbital resonances with Jupiter ejected over 99% of planetesimals in the belt. Simulations and a discontinuity in spin rate and spectral properties suggest that the larger asteroids accreted during that early era, whereas smaller bodies are fragments from collisions between asteroids during or after the Jovian disruption. Ceres and Vesta grew large enough to melt and differentiate, with heavy metallic elements sinking to the core, leaving rocky minerals in the crust.
In the Nice model, many Kuiper-belt objects are captured in the outer asteroid belt, at distances greater than 2.6 AU. Most were later ejected by Jupiter, but those that remained may be the D-type asteroids, and possibly include Ceres.
Various dynamical groups of asteroids have been discovered orbiting in the inner Solar System. Their orbits are perturbed by the gravity of other bodies in the Solar System and by the Yarkovsky effect. Significant populations include:
The majority of known asteroids orbit within the asteroid belt between the orbits of Mars and Jupiter, generally in relatively low-eccentricity (i.e. not very elongated) orbits. This belt is now estimated to contain between 1.1 and 1.9 million asteroids larger than 1 km in diameter, and millions of smaller ones. These asteroids may be remnants of the protoplanetary disk, and in this region the accretion of planetesimals into planets during the formative period of the Solar System was prevented by large gravitational perturbations by Jupiter.
Trojans are populations that share an orbit with a larger planet or moon, but do not collide with it because they orbit in one of the two Lagrangian points of stability, L4 and L5, which lie 60° ahead of and behind the larger body.
The most significant population of trojans are the Jupiter trojans. Although far fewer Jupiter trojans have been discovered so far, it is thought that they are as numerous as the asteroids in the asteroid belt. Trojans have been found in the orbits of other planets, including Venus, Earth, Mars, Uranus, and Neptune.
Near-Earth asteroids, or NEAs, are asteroids that have orbits that pass close to that of Earth. Asteroids that actually cross Earth's orbital path are known as "Earth-crossers". Some 14,464 near-Earth asteroids are known, and the number over one kilometer in diameter is estimated to be 900–1,000.
Asteroids vary greatly in size, from nearly 1,000 km for the largest down to rocks just 1 meter across. The three largest are very much like miniature planets: they are roughly spherical, have at least partly differentiated interiors, and are thought to be surviving protoplanets. The vast majority, however, are much smaller and are irregularly shaped; they are thought to be either battered planetesimals or fragments of larger bodies.
The dwarf planet Ceres is by far the largest asteroid, with a diameter of about 940 km. The next largest are 4 Vesta and 2 Pallas, both with diameters of just over 500 km. Vesta is the only main-belt asteroid that can, on occasion, be visible to the naked eye. On some rare occasions, a near-Earth asteroid may briefly become visible without technical aid; see 99942 Apophis.
The mass of all the objects of the asteroid belt, lying between the orbits of Mars and Jupiter, is estimated at about 4% of the mass of the Moon. Of this, Ceres comprises about a third of the total. Adding in the next three most massive objects, Vesta (9%), Pallas (7%), and Hygiea (3%), brings this figure up to half, whereas the three most-massive asteroids after that, 511 Davida (1.2%), 704 Interamnia (1.0%), and 52 Europa (0.9%), constitute only another 3%. The number of asteroids increases rapidly as their individual masses decrease.
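The mass budget above can be checked with a line of arithmetic, taking Ceres's "about a third" as roughly 33% (the other percentages are from the text):

```python
# Percentage shares of total asteroid-belt mass, as given in the text;
# Ceres's "about a third" is approximated here as 33%.
shares = {
    "Ceres": 33.0, "Vesta": 9.0, "Pallas": 7.0, "Hygiea": 3.0,
    "Davida": 1.2, "Interamnia": 1.0, "Europa": 0.9,
}

big_four = sum(shares[k] for k in ("Ceres", "Vesta", "Pallas", "Hygiea"))
next_three = shares["Davida"] + shares["Interamnia"] + shares["Europa"]

print(f"big four: {big_four:.0f}%  next three: {next_three:.1f}%")
# big four: 52%  next three: 3.1%
```

So the four largest bodies do account for about half the belt's mass, and the next three add only another ~3%, consistent with the steep drop-off described above.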
The number of asteroids decreases markedly with size. Although the size distribution generally follows a power law, there are 'bumps' at certain diameters where more asteroids than expected from a logarithmic distribution are found.
Although their location in the asteroid belt excludes them from planet status, the three largest objects, Ceres, Vesta, and Pallas, are intact protoplanets that share many characteristics common to planets, and are atypical compared to the majority of irregularly shaped asteroids. The fourth largest asteroid, Hygiea, appears nearly spherical although it may have an undifferentiated interior, like the majority of asteroids. Between them, the four largest asteroids constitute half the mass of the asteroid belt.
Ceres is the only asteroid with a fully ellipsoidal shape and hence the only one that is a dwarf planet. It is far brighter in absolute terms than most other asteroids, with an absolute magnitude of around 3.32, and may possess a surface layer of ice. Like the planets, Ceres is differentiated: it has a crust, a mantle and a core. No meteorites from Ceres have been found on Earth.
Vesta, too, has a differentiated interior, though it formed inside the Solar System's frost line, and so is devoid of water; its composition is mainly of basaltic rock with minerals such as olivine. Aside from the large crater at its southern pole, Rheasilvia, Vesta also has an ellipsoidal shape. Vesta is the parent body of the Vestian family and other V-type asteroids, and is the source of the HED meteorites, which constitute 5% of all meteorites on Earth.
Pallas is unusual in that, like Uranus, it rotates on its side, with its axis of rotation tilted at high angles to its orbital plane. Its composition is similar to that of Ceres: high in carbon and silicon, and perhaps partially differentiated. Pallas is the parent body of the Palladian family of asteroids.
Hygiea is the largest carbonaceous asteroid and, unlike the other largest asteroids, lies relatively close to the plane of the ecliptic. It is the largest member and presumed parent body of the Hygiean family of asteroids. Because there is no sufficiently large crater on the surface to be the source of that family, as there is on Vesta, it is thought that Hygiea may have been completely disrupted in the collision that formed the Hygiean family, and recoalesced after losing a bit less than 2% of its mass. Observations taken with the Very Large Telescope's SPHERE imager in 2017 and 2018, and announced in late 2019, revealed that Hygiea has a nearly spherical shape, consistent with it being in hydrostatic equilibrium (and thus a dwarf planet), with it formerly having been in hydrostatic equilibrium, or with it having been disrupted and recoalesced.
Measurements of the rotation rates of large asteroids in the asteroid belt show that there is an upper limit. Very few asteroids with a diameter larger than 100 meters have a rotation period smaller than 2.2 hours. For asteroids rotating faster than approximately this rate, the inertial force at the surface is greater than the gravitational force, so any loose surface material would be flung out. However, a solid object should be able to rotate much more rapidly. This suggests that most asteroids with a diameter over 100 meters are rubble piles formed through accumulation of debris after collisions between asteroids.
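The 2.2-hour spin barrier follows from balancing gravity against centrifugal force at the equator of a self-gravitating sphere, which gives a critical period P = sqrt(3π / (Gρ)). A minimal sketch; the density of 2 g/cm³ is an assumed typical value for a rubble pile:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def critical_period_hours(density_kg_m3: float) -> float:
    """Spin period (hours) below which loose material at the equator of a
    uniform self-gravitating sphere would be flung off: P = sqrt(3*pi/(G*rho))."""
    return math.sqrt(3 * math.pi / (G * density_kg_m3)) / 3600.0

# Assumed bulk density of 2000 kg/m^3 (2 g/cm^3)
print(f"critical period: {critical_period_hours(2000):.2f} h")
```

For ρ = 2 g/cm³ this lands near 2.3 hours, matching the observed 2.2-hour cutoff and supporting the rubble-pile interpretation: a monolithic rock could spin faster without shedding material, but a gravity-bound aggregate cannot.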
The physical composition of asteroids is varied and in most cases poorly understood. Ceres appears to be composed of a rocky core covered by an icy mantle, whereas Vesta is thought to have a nickel-iron core, olivine mantle, and basaltic crust. 10 Hygiea, however, which appears to have a uniformly primitive composition of carbonaceous chondrite, is thought to be the largest undifferentiated asteroid. Most of the smaller asteroids are thought to be piles of rubble held together loosely by gravity, though the largest are probably solid. Some asteroids have moons or are co-orbiting binaries. Rubble piles, moons, binaries, and scattered asteroid families are thought to be the results of collisions that disrupted a parent asteroid, or, possibly, a planet.
Asteroids contain traces of amino acids and other organic compounds, and some speculate that asteroid impacts may have seeded the early Earth with the chemicals necessary to initiate life, or may have even brought life itself to Earth "(also see panspermia)". In August 2011, a report, based on NASA studies with meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine and related organic molecules) may have been formed on asteroids and comets in outer space.
Composition is calculated from three primary sources: albedo, surface spectrum, and density. The last can only be determined accurately by observing the orbits of moons the asteroid might have. So far, every asteroid with moons has turned out to be a rubble pile, a loose conglomeration of rock and metal that may be half empty space by volume. The investigated asteroids are as large as 280 km in diameter, and include 121 Hermione (268×186×183 km), and 87 Sylvia (384×262×232 km). Only half a dozen asteroids are larger than 87 Sylvia, though none of them have moons; however, some smaller asteroids are thought to be more massive, suggesting they may not have been disrupted, and indeed 511 Davida, the same size as Sylvia to within measurement error, is estimated to be two and a half times as massive, though this is highly uncertain. The fact that such large asteroids as Sylvia can be rubble piles, presumably due to disruptive impacts, has important consequences for the formation of the Solar System: Computer simulations of collisions involving solid bodies show them destroying each other as often as merging, but colliding rubble piles are more likely to merge. This means that the cores of the planets could have formed relatively quickly.
On 7 October 2009, the presence of water ice was confirmed on the surface of 24 Themis using NASA's Infrared Telescope Facility. The surface of the asteroid appears completely covered in ice. As this ice layer is sublimating, it may be getting replenished by a reservoir of ice under the surface. Organic compounds were also detected on the surface. Scientists hypothesize that some of the first water brought to Earth was delivered by asteroid impacts after the collision that produced the Moon. The presence of ice on 24 Themis supports this theory.
In October 2013, water was detected on an extrasolar body for the first time, on an asteroid orbiting the white dwarf GD 61. On 22 January 2014, European Space Agency (ESA) scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding is unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." In May 2016, significant asteroid data arising from the Wide-field Infrared Survey Explorer and NEOWISE missions have been questioned. Although the early original criticism had not undergone peer review, a more recent peer-reviewed study was subsequently published.
In November 2019, scientists reported detecting, for the first time, sugar molecules, including ribose, in meteorites, suggesting that chemical processes on asteroids can produce some fundamentally essential bio-ingredients important to life, and supporting the notion of an RNA world prior to a DNA-based origin of life on Earth, and possibly, as well, the notion of panspermia.
Most asteroids outside the "big four" (Ceres, Pallas, Vesta, and Hygiea) are likely to be broadly similar in appearance, if irregular in shape. 50-km (31-mi) 253 Mathilde is a rubble pile saturated with craters with diameters the size of the asteroid's radius, and Earth-based observations of 300-km (186-mi) 511 Davida, one of the largest asteroids after the big four, reveal a similarly angular profile, suggesting it is also saturated with radius-size craters. Medium-sized asteroids such as Mathilde and 243 Ida that have been observed up close also reveal a deep regolith covering the surface. Of the big four, Pallas and Hygiea are practically unknown. Vesta has compression fractures encircling a radius-size crater at its south pole but is otherwise a spheroid. Ceres seems quite different in the glimpses Hubble has provided, with surface features that are unlikely to be due to simple craters and impact basins, but details will be expanded with the "Dawn" spacecraft, which entered Ceres orbit on 6 March 2015.
Asteroids become darker and redder with age due to space weathering. However, evidence suggests that most of the color change occurs rapidly, in the first hundred thousand years, limiting the usefulness of spectral measurement for determining the age of asteroids.
Asteroids are commonly categorized according to two criteria: the characteristics of their orbits, and features of their reflectance spectrum.
Many asteroids have been placed in groups and families based on their orbital characteristics. Apart from the broadest divisions, it is customary to name a group of asteroids after the first member of that group to be discovered. Groups are relatively loose dynamical associations, whereas families are tighter and result from the catastrophic break-up of a large parent asteroid sometime in the past. Families are more common and easier to identify within the main asteroid belt, but several small families have been reported among the Jupiter trojans. Main belt families were first recognized by Kiyotsugu Hirayama in 1918 and are often called Hirayama families in his honor.
About 30–35% of the bodies in the asteroid belt belong to dynamical families, each thought to have a common origin in a past collision between asteroids. A family has also been associated with the plutoid dwarf planet Haumea.
Some asteroids have unusual horseshoe orbits that are co-orbital with Earth or some other planet. An example is 3753 Cruithne. The first instance of this type of orbital arrangement was discovered between Saturn's moons Epimetheus and Janus.
Sometimes these horseshoe objects temporarily become quasi-satellites for a few decades or a few hundred years, before returning to their earlier status. Both Earth and Venus are known to have quasi-satellites.
Such objects, if associated with Earth or Venus or even hypothetically Mercury, are a special class of Aten asteroids. However, such objects could be associated with outer planets as well.
In 1975, an asteroid taxonomic system based on color, albedo, and spectral shape was developed by Clark R. Chapman, David Morrison, and Ben Zellner. These properties are thought to correspond to the composition of the asteroid's surface material. The original classification system had three categories: C-types for dark carbonaceous objects (75% of known asteroids), S-types for stony (silicaceous) objects (17% of known asteroids) and U for those that did not fit into either C or S. This classification has since been expanded to include many other asteroid types. The number of types continues to grow as more asteroids are studied.
The two most widely used taxonomies today are the Tholen classification and the SMASS classification. The former was proposed in 1984 by David J. Tholen, and was based on data collected from an eight-color asteroid survey performed in the 1980s. This resulted in 14 asteroid categories. In 2002, the Small Main-Belt Asteroid Spectroscopic Survey resulted in a modified version of the Tholen taxonomy with 24 different types. Both systems have three broad categories of C, S, and X asteroids, where X consists of mostly metallic asteroids, such as the M-type. There are also several smaller classes.
The proportion of known asteroids falling into the various spectral types does not necessarily reflect the proportion of all asteroids that are of that type; some types are easier to detect than others, biasing the totals.
Originally, spectral designations were based on inferences of an asteroid's composition. However, the correspondence between spectral class and composition is not always very good, and a variety of classifications are in use. This has led to significant confusion. Although asteroids of different spectral classifications are likely to be composed of different materials, there are no assurances that asteroids within the same taxonomic class are composed of similar materials.
A newly discovered asteroid is given a provisional designation consisting of the year of discovery and an alphanumeric code indicating the half-month of discovery and the sequence within that half-month. Once an asteroid's orbit has been confirmed, it is given a number, and later may also be given a name (e.g. 433 Eros). The formal naming convention uses parentheses around the number (e.g. (433) Eros), but dropping the parentheses is quite common. Informally, it is common to drop the number altogether, or to drop it after the first mention when a name is repeated in running text. In addition, names can be proposed by the asteroid's discoverer, within guidelines established by the International Astronomical Union.
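The half-month and sequence codes follow the Minor Planet Center's standard letter scheme: the letter I is skipped in both positions, and a cycle count is appended once the 25 sequence letters are exhausted. Those letter-skipping details are not spelled out in the text above, so treat this as a sketch of the standard convention:

```python
# Sketch of the provisional-designation scheme: year, half-month letter,
# sequence letter, and an optional cycle count for busy half-months.
HALF_MONTH = "ABCDEFGHJKLMNOPQRSTUVWXY"   # I omitted; A = first half of January
SEQUENCE   = "ABCDEFGHJKLMNOPQRSTUVWXYZ"  # I omitted; cycles with a numeric suffix

def provisional_designation(year: int, half_month_index: int,
                            discovery_number: int) -> str:
    """half_month_index: 0 for 1-15 January, 1 for 16-31 January, and so on.
    discovery_number: 1-based order of the discovery within that half-month."""
    cycle, pos = divmod(discovery_number - 1, len(SEQUENCE))
    code = HALF_MONTH[half_month_index] + SEQUENCE[pos]
    if cycle:
        code += str(cycle)
    return f"{year} {code}"

print(provisional_designation(2010, 0, 1))    # 2010 AA: first find of early January
print(provisional_designation(1998, 5, 144))  # 1998 FT5: 144th find of late March
```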
The first asteroids to be discovered were assigned iconic symbols like the ones traditionally used to designate the planets. By 1855 there were two dozen asteroid symbols, which often occurred in multiple variants.
In 1851, after the fifteenth asteroid (Eunomia) had been discovered, Johann Franz Encke made a major change in the upcoming 1854 edition of the "Berliner Astronomisches Jahrbuch" (BAJ, "Berlin Astronomical Yearbook"). He introduced a disk (circle), a traditional symbol for a star, as the generic symbol for an asteroid. The circle was then numbered in order of discovery to indicate a specific asteroid (although he assigned ① to the fifth, Astraea, while continuing to designate the first four only with their existing iconic symbols). The numbered-circle convention was quickly adopted by astronomers, and the next asteroid to be discovered (16 Psyche, in 1852) was the first to be designated in that way at the time of its discovery. However, Psyche was given an iconic symbol as well, as were a few other asteroids discovered over the next few years (see chart above). 20 Massalia was the first asteroid that was not assigned an iconic symbol, and no iconic symbols were created after the 1855 discovery of 37 Fides. That year Astraea's number was increased to ⑤, but the first four asteroids, Ceres to Vesta, were not listed by their numbers until the 1867 edition. The circle was soon abbreviated to a pair of parentheses, which were easier to typeset and sometimes omitted altogether over the next few decades, leading to the modern convention.
Until the age of space travel, objects in the asteroid belt were merely pinpricks of light in even the largest telescopes, and their shapes and terrain remained a mystery. The best modern ground-based telescopes and the Earth-orbiting Hubble Space Telescope can resolve a small amount of detail on the surfaces of the largest asteroids, but even these mostly remain little more than fuzzy blobs. Limited information about the shapes and compositions of asteroids can be inferred from their light curves (their variation in brightness as they rotate) and their spectral properties, and asteroid sizes can be estimated by timing the lengths of star occultations (when an asteroid passes directly in front of a star). Radar imaging can yield good information about asteroid shapes and orbital and rotational parameters, especially for near-Earth asteroids. In terms of delta-v and propellant requirements, NEOs are more easily accessible than the Moon.
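The occultation-timing idea reduces to simple arithmetic: the length of the chord the star traces across the asteroid is the occultation duration multiplied by the speed at which the asteroid's shadow sweeps past the observer. A minimal sketch, with hypothetical numbers:

```python
def occultation_chord_km(duration_s: float, shadow_speed_km_s: float) -> float:
    """Length of the chord the occulted star traces across the asteroid.
    A single chord is only a lower bound on the diameter; several observers
    timing the same event constrain the full profile."""
    return duration_s * shadow_speed_km_s

# Hypothetical numbers: a 7-second disappearance, with the shadow
# sweeping past the observer at 15 km/s, implies a ~105 km chord.
print(occultation_chord_km(7.0, 15.0))  # 105.0
```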
The first close-up photographs of asteroid-like objects were taken in 1971, when the "Mariner 9" probe imaged Phobos and Deimos, the two small moons of Mars, which are probably captured asteroids. These images revealed the irregular, potato-like shapes of most asteroids, as did later images from the Voyager probes of the small moons of the gas giants.
The first true asteroid to be photographed in close-up was 951 Gaspra in 1991, followed in 1993 by 243 Ida and its moon Dactyl, all of which were imaged by the "Galileo" probe en route to Jupiter.
The first dedicated asteroid probe was "NEAR Shoemaker", which photographed 253 Mathilde in 1997, before entering into orbit around 433 Eros, finally landing on its surface in 2001.
Other asteroids briefly visited by spacecraft en route to other destinations include 9969 Braille (by "Deep Space 1" in 1999), and 5535 Annefrank (by "Stardust" in 2002).
From September to November 2005, the Japanese "Hayabusa" probe studied 25143 Itokawa in detail; although plagued with difficulties, it returned samples of the asteroid's surface to Earth on 13 June 2010.
The European "Rosetta" probe (launched in 2004) flew by 2867 Šteins in 2008 and 21 Lutetia, the third-largest asteroid visited to date, in 2010.
In September 2007, NASA launched the "Dawn" spacecraft, which orbited 4 Vesta from July 2011 to September 2012, and has been orbiting the dwarf planet 1 Ceres since 2015. 4 Vesta is the second-largest asteroid visited to date.
On 13 December 2012, China's lunar orbiter "Chang'e 2" flew by the asteroid 4179 Toutatis on an extended mission.
The Japan Aerospace Exploration Agency (JAXA) launched the "Hayabusa2" probe in December 2014, and plans to return samples from 162173 Ryugu in December 2020.
In June 2018, the US National Science and Technology Council warned that America is unprepared for an asteroid impact event, and released the "National Near-Earth Object Preparedness Strategy Action Plan" to improve readiness.
In September 2016, NASA launched the OSIRIS-REx sample return mission to asteroid 101955 Bennu, which it reached in December 2018; the probe then entered orbit around the asteroid.
In early 2013, NASA announced the planning stages of a mission to capture a near-Earth asteroid and move it into lunar orbit where it could possibly be visited by astronauts and later impacted into the Moon. On 19 June 2014, NASA reported that asteroid 2011 MD was a prime candidate for capture by a robotic mission, perhaps in the early 2020s.
It has been suggested that asteroids might be used as a source of materials that may be rare or exhausted on Earth (asteroid mining), or materials for constructing space habitats (see Colonization of the asteroids). Materials that are heavy and expensive to launch from Earth may someday be mined from asteroids and used for space manufacturing and construction.
In the U.S. Discovery program, the "Psyche" spacecraft proposal, targeting 16 Psyche, and the "Lucy" spacecraft proposal, targeting the Jupiter trojans, both made it to the semi-finalist stage of mission selection.
In January 2017, the "Lucy" and "Psyche" missions were both selected as NASA's Discovery Program missions 13 and 14, respectively.
Location of Ceres (within asteroid belt) compared to other bodies of the Solar System
Asteroids and the asteroid belt are a staple of science fiction stories. Asteroids play several potential roles in science fiction: as places human beings might colonize, resources for extracting minerals, hazards encountered by spacecraft traveling between two other points, and as a threat to life on Earth or other inhabited planets, dwarf planets, and natural satellites by potential impact.
| https://en.wikipedia.org/wiki?curid=791 |
Affidavit
An affidavit ( ; Medieval Latin for "he has declared under oath") is a written statement of fact voluntarily made by an "affiant" or "deponent" under an oath or affirmation administered by a person authorized to do so by law. Such a statement is witnessed as to the authenticity of the affiant's signature by a taker of oaths, such as a notary public or commissioner of oaths. An affidavit is a type of verified statement or showing; in other words, it contains a verification, meaning that it is made under oath or penalty of perjury. This serves as evidence of its veracity, as required in court proceedings.
Affidavits may be written in the first or third person, depending on who drafted the document. The document's component parts are typically as follows:
If an affidavit is notarized or authenticated, it will also include a caption with a venue and title in reference to judicial proceedings. In some cases, an introductory clause, called a "preamble", is added attesting that the affiant personally appeared before the authenticating authority.
On 2 March 2016, the High Court of Australia held that the ACT's Uniform Evidence Legislation is neutral between sworn and unsworn evidence, treating both as being of equal weight.
In Indian law, although an affidavit may be taken as proof of the facts stated therein, the courts have no jurisdiction to admit evidence by way of affidavit. An affidavit is not treated as "evidence" within the meaning of Section 3 of the Evidence Act. However, the Supreme Court has held that an affidavit can be used as evidence only if the court so orders for sufficient reason, subject to the right of the opposite party to have the deponent produced for cross-examination. Therefore, an affidavit cannot ordinarily be used as evidence absent a specific order of the court.
In Sri Lanka, under the Oaths Ordinance, with the exception of a court-martial, a person may submit an affidavit signed in the presence of a commissioner for oaths or a justice of the peace.
Affidavits are made in a similar way to those in England and Wales, although "make oath" is sometimes omitted. A declaration may be substituted for an affidavit in most cases for those opposed to swearing oaths. The person making the affidavit is known as the deponent but does not sign the affidavit. The affidavit concludes in the standard format "sworn (declared) before me, [name of commissioner for oaths/solicitor], a commissioner for oaths (solicitor), on the [date] at [location] in the county/city of [county/city], and I know the deponent (declarant)", and it is signed and stamped by the commissioner for oaths.
In American jurisprudence, under the rules for hearsay, admission of an unsupported affidavit as evidence is unusual (especially if the affiant is not available for cross-examination) with regard to material facts which may be dispositive of the matter at bar. Affidavits from persons who are dead or otherwise incapacitated, or who cannot be located or made to appear, may be accepted by the court, but usually only in the presence of corroborating evidence. An affidavit which reflected a better grasp of the facts close in time to the actual events may be used to refresh a witness's recollection. Materials used to refresh recollection are admissible as evidence. If the affiant is a party in the case, the affiant's opponent may be successful in having the affidavit admitted as evidence, as statements by a party-opponent are admissible through an exception to the hearsay rule.
Affidavits are typically included in the response to interrogatories. Requests for admissions under Federal Rule of Civil Procedure 36, however, are not required to be sworn.
When a person signs an affidavit, that person remains eligible to take the stand at a trial or evidentiary hearing. One party may wish to summon the affiant to verify the contents of the affidavit, while the other party may want to cross-examine the affiant about it.
Some types of motions will not be accepted by the court unless accompanied by an independent sworn statement or other evidence in support of the need for the motion. In such a case, a court will accept an affidavit from the filing attorney in support of the motion, as certain assumptions are made, to wit: the affidavit in place of sworn testimony promotes judicial economy; the lawyer is an officer of the court and knows that a false swearing by him, if found out, could be grounds for severe penalty up to and including disbarment; and the lawyer, if called upon, would be able to present independent and more detailed evidence to prove the facts set forth in his affidavit.
The acceptance of an affidavit by one society does not confirm its acceptance as a legal document in other jurisdictions. Equally, the acceptance that a lawyer is an officer of the court (for swearing the affidavit) is not a given. This matter is addressed by the use of the apostille, a means of certifying the legalization of a document for international use under the terms of the 1961 Hague Convention Abolishing the Requirement of Legalization for Foreign Public Documents. Documents which have been notarized by a notary public, and certain other documents, and then certified with a conformant apostille, are accepted for legal use in all the nations that have signed the Hague Convention. Thus most affidavits now need to be apostilled if used for cross-border issues.
There are various occasions or circumstances when a person needs an affidavit for a specific purpose, and for that reason multiple types exist. | https://en.wikipedia.org/wiki?curid=795 |
Aries (constellation)
Aries is one of the constellations of the zodiac. It is located in the Northern celestial hemisphere between Pisces to the west and Taurus to the east. The name Aries is Latin for ram, and its symbol is (Unicode ♈), representing a ram's horns. It is one of the 48 constellations described by the 2nd century astronomer Ptolemy, and remains one of the 88 modern constellations. It is a mid-sized constellation, ranking 39th in overall size, with an area of 441 square degrees (1.1% of the celestial sphere).
Although Aries came to represent specifically the ram whose fleece became the Golden Fleece of Ancient Greek mythology, it has represented a ram since late Babylonian times. Before that, the stars of Aries formed a farmhand. Different cultures have incorporated the stars of Aries into different constellations including twin inspectors in China and a porpoise in the Marshall Islands. Aries is a relatively dim constellation, possessing only four bright stars: Hamal (Alpha Arietis, second magnitude), Sheratan (Beta Arietis, third magnitude), Mesarthim (Gamma Arietis, fourth magnitude), and 41 Arietis (also fourth magnitude). The few deep-sky objects within the constellation are quite faint and include several pairs of interacting galaxies. Several meteor showers appear to radiate from Aries, including the Daytime Arietids and the Epsilon Arietids.
Aries is recognized as an official constellation now, albeit as a specific region of the sky, by the International Astronomical Union. It was originally defined in ancient texts as a specific pattern of stars, and has remained a constellation since ancient times; it now includes the ancient pattern as well as the surrounding stars. In the description of the Babylonian zodiac given in the clay tablets known as the MUL.APIN, the constellation, now known as Aries, was the final station along the ecliptic. The MUL.APIN was a comprehensive table of the risings and settings of stars, which likely served as an agricultural calendar. Modern-day Aries was known as "The Agrarian Worker" or "The Hired Man". Although likely compiled in the 12th or 11th century BC, the MUL.APIN reflects a tradition which marks the Pleiades as the vernal equinox, which was the case with some precision at the beginning of the Middle Bronze Age. The earliest identifiable reference to Aries as a distinct constellation comes from the boundary stones that date from 1350 to 1000 BC. On several boundary stones, a zodiacal ram figure is distinct from the other characters present. The shift in identification from the constellation as the Agrarian Worker to the Ram likely occurred in later Babylonian tradition because of its growing association with Dumuzi the Shepherd. By the time the MUL.APIN was created—by 1000 BC—modern Aries was identified with both Dumuzi's ram and a hired laborer. The exact timing of this shift is difficult to determine due to the lack of images of Aries or other ram figures.
In ancient Egyptian astronomy, Aries was associated with the god Amon-Ra, who was depicted as a man with a ram's head and represented fertility and creativity. Because it was the location of the vernal equinox, it was called the "Indicator of the Reborn Sun". During the times of the year when Aries was prominent, priests would process statues of Amon-Ra to temples, a practice that was modified by Persian astronomers centuries later. Aries acquired the title of "Lord of the Head" in Egypt, referring to its symbolic and mythological importance.
Aries was not fully accepted as a constellation until classical times. In Hellenistic astrology, the constellation of Aries is associated with the golden ram of Greek mythology that rescued Phrixus and Helle on orders from Hermes, taking Phrixus to the land of Colchis. Phrixus and Helle were the son and daughter of King Athamas and his first wife Nephele. The king's second wife, Ino, was jealous and wished to kill his children. To accomplish this, she induced a famine in Boeotia, then falsified a message from the Oracle of Delphi that said Phrixus must be sacrificed to end the famine. Athamas was about to sacrifice his son atop Mount Laphystium when Aries, sent by Nephele, arrived. Helle fell off of Aries's back in flight and drowned in the Dardanelles, also called the Hellespont in her honor. After arriving, Phrixus sacrificed the ram to Zeus and gave the Fleece to Aeëtes of Colchis, who rewarded him with an engagement to his daughter Chalciope. Aeëtes hung its skin in a sacred place where it became known as the Golden Fleece and was guarded by a dragon. In a later myth, this Golden Fleece was stolen by Jason and the Argonauts.
Historically, Aries has been depicted as a crouched, wingless ram with its head turned towards Taurus. Ptolemy asserted in his "Almagest" that Hipparchus depicted Alpha Arietis as the ram's muzzle, though Ptolemy did not include it in his constellation figure. Instead, it was listed as an "unformed star", and denoted as "the star over the head". John Flamsteed, in his "Atlas Coelestis", followed Ptolemy's description by mapping it above the figure's head. Flamsteed followed the general convention of maps by depicting Aries lying down. Astrologically, Aries has been associated with the head and its humors. It was strongly associated with Mars, both the planet and the god. It was considered to govern Western Europe and Syria, and to indicate a strong temper in a person.
The First Point of Aries, the location of the vernal equinox, is named for the constellation. This is because the Sun crossed the celestial equator from south to north in Aries more than two millennia ago. Hipparchus defined it in 130 BC as a point south of Gamma Arietis. Because of the precession of the equinoxes, the First Point of Aries has since moved into Pisces and will move into Aquarius by around 2600 AD. The Sun now appears in Aries from late April through mid May, though the constellation is still associated with the beginning of spring.
Medieval Muslim astronomers depicted Aries in various ways. Astronomers like al-Sufi saw the constellation as a ram, modeled on the precedent of Ptolemy. However, some Islamic celestial globes depicted Aries as a nondescript four-legged animal with what may be antlers instead of horns. Some early Bedouin observers saw a ram elsewhere in the sky; this constellation featured the Pleiades as the ram's tail. The generally accepted Arabic formation of Aries consisted of thirteen stars in a figure along with five "unformed" stars, four of which were over the animal's hindquarters and one of which was the disputed star over Aries's head. Al-Sufi's depiction differed from both other Arab astronomers' and Flamsteed's, in that his Aries was running and looking behind itself.
The obsolete constellations introduced in Aries (Musca Borealis, Lilium, Vespa, and Apes) have all been composed of the northern stars. Musca Borealis was created from the stars 33 Arietis, 35 Arietis, 39 Arietis, and 41 Arietis. In 1612, Petrus Plancius introduced Apes, a constellation representing a bee. In 1624, the same stars were used by Jakob Bartsch to create a constellation called Vespa, representing a wasp. In 1679 Augustin Royer used these stars for his constellation Lilium, representing the fleur-de-lis. None of these constellations became widely accepted. Johann Hevelius renamed the constellation "Musca" in 1690 in his "Firmamentum Sobiescianum". To differentiate it from Musca, the southern fly, it was later renamed Musca Borealis, but it did not gain acceptance and its stars were ultimately officially reabsorbed into Aries.
In 1922, the International Astronomical Union defined its recommended three-letter abbreviation, "Ari". The official boundaries of Aries were defined in 1930 by Eugène Delporte as a polygon of 12 segments. Its right ascension is between 1h 46.4m and 3h 29.4m and its declination is between 10.36° and 31.22° in the equatorial coordinate system.
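The quoted coordinate ranges allow a crude membership check. Note the assumption: the true IAU boundary is a 12-segment polygon, so this rectangular sketch can misclassify points near an edge.

```python
# Crude membership test using the rectangular bounds quoted above.
# The real IAU boundary of Aries is a 12-segment polygon, so this
# bounding-box check is only an approximation near the edges.
RA_MIN_H, RA_MAX_H = 1 + 46.4 / 60, 3 + 29.4 / 60   # right ascension, hours
DEC_MIN, DEC_MAX = 10.36, 31.22                     # declination, degrees

def in_aries_bounds(ra_hours: float, dec_deg: float) -> bool:
    return RA_MIN_H <= ra_hours <= RA_MAX_H and DEC_MIN <= dec_deg <= DEC_MAX

# Hamal (Alpha Arietis) sits near RA 2h07m, Dec +23.5 deg.
print(in_aries_bounds(2 + 7 / 60, 23.5))  # True
```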
In traditional Chinese astronomy, stars from Aries were used in several constellations. The brightest stars—Alpha, Beta, and Gamma Arietis—formed a constellation called "Lou" (婁), variously translated as "bond", "lasso", and "sickle", which was associated with the ritual sacrifice of cattle. This name was shared by the 16th lunar mansion, the location of the full moon closest to the autumnal equinox. The lunar mansion represented the area where animals were gathered before sacrifice around that time. This constellation has also been associated with harvest-time as it could represent a woman carrying a basket of food on her head. 35, 39, and 41 Arietis were part of a constellation called "Wei" (胃), which represented a fat abdomen and was the namesake of the 17th lunar mansion, which represented granaries. Delta and Zeta Arietis were a part of the constellation "Tianyin" (天陰), thought to represent the Emperor's hunting partner. "Zuogeng" (左更), a constellation depicting a marsh and pond inspector, was composed of Mu, Nu, Omicron, Pi, and Sigma Arietis. He was accompanied by "Yeou-kang", a constellation depicting an official in charge of pasture distribution.
In a similar system to the Chinese, the first lunar mansion in Hindu astronomy was called "Aswini", after the traditional names for Beta and Gamma Arietis, the Aswins. Because the Hindu new year began with the vernal equinox, the Rig Veda contains over 50 new-year's related hymns to the twins, making them some of the most prominent characters in the work. Aries itself was known as "Aja" and "Mesha". In Hebrew astronomy Aries was named "Taleh"; it signified either Simeon or Gad, and generally symbolizes the "Lamb of the World". The neighboring Syrians named the constellation "Amru", and the bordering Turks named it "Kuzi". Half a world away, in the Marshall Islands, several stars from Aries were incorporated into a constellation depicting a porpoise, along with stars from Cassiopeia, Andromeda, and Triangulum. Alpha, Beta, and Gamma Arietis formed the head of the porpoise, while stars from Andromeda formed the body and the bright stars of Cassiopeia formed the tail. Other Polynesian peoples recognized Aries as a constellation. The Marquesas islanders called it "Na-pai-ka"; the Māori constellation "Pipiri" may correspond to modern Aries as well. In indigenous Peruvian astronomy, a constellation with most of the same stars as Aries existed. It was called the "Market Moon" and the "Kneeling Terrace", as a reminder for when to hold the annual harvest festival, Ayri Huay.
Aries has three prominent stars forming an asterism, designated Alpha, Beta, and Gamma Arietis by Johann Bayer. Alpha (Hamal) and Beta (Sheratan) are commonly used for navigation. There is also one other star above the fourth magnitude, 41 Arietis (Bharani). α Arietis, called Hamal, is the brightest star in Aries. Its traditional name is derived from the Arabic word for "lamb" or "head of the ram" ("ras al-hamal"), which references Aries's mythological background. With a spectral class of K2 and a luminosity class of III, it is an orange giant with an apparent visual magnitude of 2.00, lying 66 light-years from Earth. Hamal's absolute magnitude is −0.1.
β Arietis, also known as Sheratan, is a blue-white star with an apparent visual magnitude of 2.64. Its traditional name is derived from "sharatayn", the Arabic word for "the two signs", referring to both Beta and Gamma Arietis in their position as heralds of the vernal equinox. The two stars were known to the Bedouin as "qarna al-hamal", "horns of the ram". It is 59 light-years from Earth. Its absolute magnitude is 2.1. It is a spectroscopic binary star, one in which the companion star is only known through analysis of the spectra. The spectral class of the primary is A5. Hermann Carl Vogel determined that Sheratan was a spectroscopic binary in 1903; its orbit was determined by Hans Ludendorff in 1907. It has since been studied for its eccentric orbit.
γ Arietis, with a common name of Mesarthim, is a binary star with two white-hued components, located in a rich field of magnitude 8–12 stars. Its traditional name has conflicting derivations. It may be derived from a corruption of "al-sharatan", the Arabic word meaning "pair" or a word for "fat ram". However, it may also come from the Sanskrit for "first star of Aries" or the Hebrew for "ministerial servants", both of which are unusual languages of origin for star names. Along with Beta Arietis, it was known to the Bedouin as "qarna al-hamal". The primary is of magnitude 4.59 and the secondary is of magnitude 4.68. The system is 164 light-years from Earth. The two components are separated by 7.8 arcseconds, and the system as a whole has an apparent magnitude of 3.9. The primary is an A-type star with an absolute magnitude of 0.2, and the secondary is a B9-type star with an absolute magnitude of 0.4. The angle between the two components is 1°. Mesarthim was discovered to be a double star by Robert Hooke in 1664, one of the earliest such telescopic discoveries. The primary, γ1 Arietis, is an Alpha² Canum Venaticorum variable star that has a range of 0.02 magnitudes and a period of 2.607 days. It is unusual because of its strong silicon emission lines.
The constellation is home to several double stars, including Epsilon, Lambda, and Pi Arietis. ε Arietis is a binary star with two white components. The primary is of magnitude 5.2 and the secondary is of magnitude 5.5. The system is 290 light-years from Earth. Its overall magnitude is 4.63, and the primary has an absolute magnitude of 1.4. Its spectral class is A2. The two components are separated by 1.5 arcseconds. λ Arietis is a wide double star with a white-hued primary and a yellow-hued secondary. The primary is of magnitude 4.8 and the secondary is of magnitude 7.3. The primary is 129 light-years from Earth. It has an absolute magnitude of 1.7 and a spectral class of F0. The two components are separated by 36 arcseconds at an angle of 50°; the two stars are located 0.5° east of 7 Arietis. π Arietis is a close binary star with a blue-white primary and a white secondary. The primary is of magnitude 5.3 and the secondary is of magnitude 8.5. The primary is 776 light-years from Earth. The primary itself is a wide double star with a separation of 25.2 arcseconds; the tertiary has a magnitude of 10.8. The primary and secondary are separated by 3.2 arcseconds.
Most of the other stars in Aries visible to the naked eye have magnitudes between 3 and 5. δ Ari, called Boteïn, is a star of magnitude 4.35, 170 light-years away. It has an absolute magnitude of −0.1 and a spectral class of K2. ζ Arietis is a star of magnitude 4.89, 263 light-years away. Its spectral class is A0 and its absolute magnitude is 0.0. 14 Arietis is a star of magnitude 4.98, 288 light-years away. Its spectral class is F2 and its absolute magnitude is 0.6. 39 Arietis (Lilii Borea) is a similar star of magnitude 4.51, 172 light-years away. Its spectral class is K1 and its absolute magnitude is 0.0. 35 Arietis is a dim star of magnitude 4.55, 343 light-years away. Its spectral class is B3 and its absolute magnitude is −1.7. 41 Arietis, known both as c Arietis and Nair al Butain, is a brighter star of magnitude 3.63, 165 light-years away. Its spectral class is B8 and its absolute magnitude is −0.2. 53 Arietis is a runaway star of magnitude 6.09, 815 light-years away. Its spectral class is B2. It was likely ejected from the Orion Nebula approximately five million years ago, possibly due to supernovae. Finally, Teegarden's Star is the closest star to Earth in Aries. It is a brown dwarf of magnitude 15.14 and spectral class M6.5V. With a proper motion of 5.1 arcseconds per year, it is the 24th closest star to Earth overall.
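The apparent magnitudes, distances, and absolute magnitudes quoted throughout this section are tied together by the distance modulus, M = m − 5·log₁₀(d / 10 pc). A minimal sketch follows; note that values computed this way will not exactly reproduce every figure above, which come from catalogue data (often older parallax measurements) and neglect of interstellar extinction differs between sources.

```python
import math

LY_PER_PC = 3.2616  # light-years per parsec

def absolute_magnitude(apparent_mag: float, distance_ly: float) -> float:
    """Distance modulus: M = m - 5*log10(d / 10 pc), ignoring extinction."""
    d_pc = distance_ly / LY_PER_PC
    return apparent_mag - 5 * math.log10(d_pc / 10)

# Hamal: apparent magnitude 2.00 at 66 light-years.
print(round(absolute_magnitude(2.00, 66), 2))
```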
Aries has its share of variable stars, including R and U Arietis, Mira-type variable stars, and T Arietis, a semi-regular variable star. R Arietis is a Mira variable star that ranges in magnitude from a minimum of 13.7 to a maximum of 7.4 with a period of 186.8 days. It is 4,080 light-years away. U Arietis is another Mira variable star that ranges in magnitude from a minimum of 15.2 to a maximum of 7.2 with a period of 371.1 days. T Arietis is a semiregular variable star that ranges in magnitude from a minimum of 11.3 to a maximum of 7.5 with a period of 317 days. It is 1,630 light-years away. One particularly interesting variable in Aries is SX Arietis, a rotating variable star considered to be the prototype of its class, helium variable stars. SX Arietis stars have very prominent emission lines of Helium I and Silicon III. They are normally main-sequence B0p—B9p stars, and their variations are not usually visible to the naked eye. Therefore, they are observed photometrically, usually having periods that fit in the course of one night. Similar to Alpha² Canum Venaticorum variables, SX Arietis stars have periodic changes in their light and magnetic field, which correspond to the periodic rotation; they differ from the Alpha² Canum Venaticorum variables in their higher temperature. There are between 39 and 49 SX Arietis variable stars currently known; ten are noted as being "uncertain" in the General Catalog of Variable Stars.
NGC 772 is a spiral galaxy with an integrated magnitude of 10.3, located southeast of β Arietis and 15 arcminutes west of 15 Arietis. It is a relatively bright galaxy and shows obvious nebulosity and ellipticity in an amateur telescope. It is 7.2 by 4.2 arcminutes, meaning that its surface brightness, magnitude 13.6, is significantly lower than its integrated magnitude. NGC 772 is a class SA(s)b galaxy, which means that it is an unbarred spiral galaxy without a ring that possesses a somewhat prominent bulge and spiral arms that are wound somewhat tightly. The main arm, on the northwest side of the galaxy, is home to many star forming regions; this is due to previous gravitational interactions with other galaxies. NGC 772 has a small companion galaxy, NGC 770, that is about 113,000 light-years away from the larger galaxy. The two galaxies together are also classified as Arp 78 in the Arp peculiar galaxy catalog. NGC 772 has a diameter of 240,000 light-years and the system is 114 million light-years from Earth. Another spiral galaxy in Aries is NGC 673, a face-on class SAB(s)c galaxy. It is a weakly barred spiral galaxy with loosely wound arms. It has no ring and a faint bulge and is 2.5 by 1.9 arcminutes. It has two primary arms with fragments located farther from the core. 171,000 light-years in diameter, NGC 673 is 235 million light-years from Earth.
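The relation between NGC 772's integrated magnitude (10.3) and its much fainter surface brightness (13.6) follows from spreading the light over the galaxy's angular area. A sketch, under the assumption that the outline is approximated as an ellipse with the quoted axes; it lands within about a tenth of a magnitude of the value above.

```python
import math

def mean_surface_brightness(integrated_mag: float,
                            major_arcmin: float,
                            minor_arcmin: float) -> float:
    """Magnitude per square arcminute, approximating the galaxy's
    outline as an ellipse with the quoted major and minor axes."""
    area = math.pi * (major_arcmin / 2) * (minor_arcmin / 2)
    return integrated_mag + 2.5 * math.log10(area)

# NGC 772: integrated magnitude 10.3, 7.2 by 4.2 arcminutes.
print(round(mean_surface_brightness(10.3, 7.2, 4.2), 1))  # 13.7
```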
NGC 678 and NGC 680 are a pair of galaxies in Aries that are only about 200,000 light-years apart. Part of the NGC 691 group of galaxies, both are at a distance of approximately 130 million light-years. NGC 678 is an edge-on spiral galaxy that is 4.5 by 0.8 arcminutes. NGC 680, an elliptical galaxy with an asymmetrical boundary, is the brighter of the two at magnitude 12.9; NGC 678 has a magnitude of 13.35. Both galaxies have bright cores, but NGC 678 is the larger galaxy at a diameter of 171,000 light-years; NGC 680 has a diameter of 72,000 light-years. NGC 678 is further distinguished by its prominent dust lane. NGC 691 itself is a spiral galaxy slightly inclined to our line of sight. It has multiple spiral arms and a bright core. Because it is so diffuse, it has a low surface brightness. It has a diameter of 126,000 light-years and is 124 million light-years away. NGC 877 is the brightest member of an 8-galaxy group that also includes NGC 870, NGC 871, and NGC 876, with a magnitude of 12.53. It is 2.4 by 1.8 arcminutes and is 178 million light-years away with a diameter of 124,000 light-years. Its companion is NGC 876, which is about 103,000 light-years from the core of NGC 877. They are interacting gravitationally, as they are connected by a faint stream of gas and dust. Arp 276 is a different pair of interacting galaxies in Aries, consisting of NGC 935 and IC 1801.
NGC 821 is an E6 elliptical galaxy. It is unusual because it has hints of an early spiral structure, which is normally only found in lenticular and spiral galaxies. NGC 821 is 2.6 by 2.0 arcminutes and has a visual magnitude of 11.3. Its diameter is 61,000 light-years and it is 80 million light-years away. Another unusual galaxy in Aries is Segue 2, a dwarf and satellite galaxy of the Milky Way, recently discovered to be a potential relic of the epoch of reionization.
Aries is home to several meteor showers. The Daytime Arietid meteor shower is one of the strongest meteor showers that occurs during the day, lasting from 22 May to 2 July. It is an annual shower associated with the Marsden group of comets that peaks on 7 June with a maximum zenithal hourly rate of 54 meteors. Its parent body may be the asteroid Icarus. The meteors are sometimes visible before dawn, because the radiant is 32 degrees away from the Sun. They usually appear at a rate of 1–2 per hour as "earthgrazers", meteors that last several seconds and often begin at the horizon. Because most of the Daytime Arietids are not visible to the naked eye, they are observed in the radio spectrum. This is possible because of the ionized gas they leave in their wake. Other meteor showers radiate from Aries during the day; these include the Daytime Epsilon Arietids and the Northern and Southern Daytime May Arietids. The Jodrell Bank Observatory discovered the Daytime Arietids in 1947 when James Hey and G. S. Stewart adapted the World War II-era radar systems for meteor observations.
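A shower's quoted zenithal hourly rate is an idealized ceiling; the rate an observer actually sees falls off with the radiant's elevation and the sky's limiting magnitude, which is why pre-dawn observers of the Daytime Arietids catch only a couple of earthgrazers per hour. A sketch of the standard correction (the elevation and limiting-magnitude values below are illustrative assumptions):

```python
import math

def visible_hourly_rate(zhr, radiant_elev_deg, limiting_mag=6.5, pop_index=2.5):
    """Approximate hourly rate seen by an observer: the zenithal hourly
    rate scaled by radiant elevation and by sky limiting magnitude."""
    elevation_factor = math.sin(math.radians(radiant_elev_deg))
    sky_factor = pop_index ** (limiting_mag - 6.5)  # 1.0 under ideal dark skies
    return zhr * elevation_factor * sky_factor

# Daytime Arietids (ZHR 54) with the radiant 5 degrees up in twilight skies:
print(round(visible_hourly_rate(54, 5, limiting_mag=5.0), 1))  # about 1.2
```

With the radiant barely above the horizon and the sky brightening toward dawn, the idealized 54 meteors per hour collapses to the 1–2 per hour quoted above.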
The Delta Arietids are another meteor shower radiating from Aries. Peaking on 9 December with a low peak rate, the shower lasts from 8 December to 14 January, with the highest rates visible from 8 to 14 December. The average Delta Arietid meteor is very slow. However, this shower sometimes produces bright fireballs. This meteor shower has northern and southern components, both of which are likely associated with 1990 HA, a near-Earth asteroid.
The Autumn Arietids also radiate from Aries. The shower lasts from 7 September to 27 October and peaks on 9 October. Its peak rate is low. The Epsilon Arietids appear from 12 to 23 October. Other meteor showers radiating from Aries include the October Delta Arietids, Daytime Epsilon Arietids, Daytime May Arietids, Sigma Arietids, Nu Arietids, and Beta Arietids. The Sigma Arietids, a class IV meteor shower, are visible from 12 to 19 October, with a maximum zenithal hourly rate of less than two meteors per hour on 19 October.
Aries contains several stars with extrasolar planets. HIP 14810, a G5 type star, is orbited by three giant planets (those more than ten times the mass of Earth). HD 12661, like HIP 14810, is a G-type main sequence star, slightly larger than the Sun, with two orbiting planets. One planet is 2.3 times the mass of Jupiter, and the other is 1.57 times the mass of Jupiter. HD 20367 is a G0 type star, approximately the size of the Sun, with one orbiting planet. The planet, discovered in 2002, has a mass 1.07 times that of Jupiter and orbits every 500 days. In 2019, scientists conducting the CARMENES survey at the Calar Alto Observatory announced evidence of two Earth-mass exoplanets orbiting Teegarden's Star, a nearby red dwarf in Aries, within its habitable zone.
Aquarius (constellation)
Aquarius is a constellation of the zodiac, situated between Capricornus and Pisces. Its name is Latin for "water-carrier" or "cup-carrier", and its symbol is (Unicode ♒), a representation of water. Aquarius is one of the oldest of the recognized constellations along the zodiac (the Sun's apparent path). It was one of the 48 constellations listed by the 2nd century astronomer Ptolemy, and it remains one of the 88 modern constellations. It is found in a region often called the Sea due to its profusion of constellations with watery associations such as Cetus the whale, Pisces the fish, and Eridanus the river.
At apparent magnitude 2.9, Beta Aquarii is the brightest star in the constellation.
Aquarius is identified as "The Great One" in the Babylonian star catalogues and represents the god Ea himself, who is commonly depicted holding an overflowing vase. The Babylonian star-figure appears on entitlement stones and cylinder seals from the second millennium BC. It contained the winter solstice in the Early Bronze Age. In Old Babylonian astronomy, Ea was the ruler of the southernmost quarter of the Sun's path, the "Way of Ea", corresponding to the period of 45 days on either side of winter solstice. Aquarius was also associated with the destructive floods that the Babylonians regularly experienced, and thus carried negative connotations. In ancient Egyptian astronomy, Aquarius was associated with the annual flood of the Nile; the banks were said to flood when Aquarius put his jar into the river, beginning spring.
In the Greek tradition, the constellation came to be represented simply as a single vase from which a stream poured down to Piscis Austrinus. The name in the Hindu zodiac is likewise "kumbha" "water-pitcher".
In Greek mythology, Aquarius is sometimes associated with Deucalion, the son of Prometheus who built a ship with his wife Pyrrha to survive an imminent flood. They sailed for nine days before washing ashore on Mount Parnassus. Aquarius is also sometimes identified with beautiful Ganymede, a youth in Greek mythology and the son of Trojan king Tros, who was taken to Mount Olympus by Zeus to act as cup-carrier to the gods. Neighboring Aquila represents the eagle, under Zeus' command, that snatched the young boy; some versions of the myth indicate that the eagle was in fact Zeus transformed. An alternative version of the tale recounts Ganymede's kidnapping by the goddess of the dawn, Eos, motivated by her affection for young men; Zeus then stole him from Eos and employed him as cup-bearer. Yet another figure associated with the water bearer is Cecrops I, a king of Athens who sacrificed water instead of wine to the gods.
In the second century, Ptolemy's "Almagest" established the common Western depiction of Aquarius. His water jar, an asterism itself, consists of Gamma, Pi, Eta, and Zeta Aquarii; it pours water in a stream of more than 20 stars terminating with Fomalhaut, now assigned solely to Piscis Austrinus. The water bearer's head is represented by 5th magnitude 25 Aquarii while his left shoulder is Beta Aquarii; his right shoulder and forearm are represented by Alpha and Gamma Aquarii respectively.
In Chinese astronomy, the stream of water flowing from the Water Jar was depicted as the "Army of Yu-Lin" ("Yu-lin-kiun" or "Yulinjun"). The name "Yu-lin" means "feathers and forests", referring to the numerous light-footed soldiers from the northern reaches of the empire represented by these faint stars. The constellation's stars were the most numerous of any Chinese constellation, numbering 45, the majority of which were located in modern Aquarius. The celestial army was protected by the wall "Leibizhen", which counted Iota, Lambda, Phi, and Sigma Aquarii among its 12 stars. 88, 89, and 98 Aquarii represent "Fou-youe", the axes used as weapons and for hostage executions. Also in Aquarius is "Loui-pi-tchin", the ramparts that stretch from 29 and 27 Piscium and 33 and 30 Aquarii through Phi, Lambda, Sigma, and Iota Aquarii to Delta, Gamma, Kappa, and Epsilon Capricorni.
Near the border with Cetus, the axe "Fuyue" was represented by three stars; its position is disputed and may have instead been located in Sculptor. "Tienliecheng" also has a disputed position; the 13-star castle replete with ramparts may have possessed Nu and Xi Aquarii but may instead have been located south in Piscis Austrinus. The Water Jar asterism was seen to the ancient Chinese as the tomb, "Fenmu". Nearby, the emperors' mausoleum "Xiuliang" stood, demarcated by Kappa Aquarii and three other collinear stars. "Ku" ("crying") and "Qi" ("weeping"), each composed of two stars, were located in the same region.
Three of the Chinese lunar mansions shared their name with constellations. "Nu", also the name for the 10th lunar mansion, was a handmaiden represented by Epsilon, Mu, 3, and 4 Aquarii. The 11th lunar mansion shared its name with the constellation "Xu" ("emptiness"), formed by Beta Aquarii and Alpha Equulei; it represented a bleak place associated with death and funerals. "Wei", the rooftop and 12th lunar mansion, was a V-shaped constellation formed by Alpha Aquarii, Theta Pegasi, and Epsilon Pegasi; it shared its name with two other Chinese constellations, in modern-day Scorpius and Aries.
Despite both its prominent position on the zodiac and its large size, Aquarius has no particularly bright stars, its four brightest stars being fainter than magnitude 2. However, recent research has shown that several stars lying within its borders possess planetary systems.
The two brightest stars, Alpha and Beta Aquarii, are luminous yellow supergiants, of spectral types G0Ib and G2Ib respectively, that were once hot blue-white B-class main sequence stars 5 to 9 times as massive as the Sun. The two are also moving through space perpendicular to the plane of the Milky Way. Just shading Alpha, Beta Aquarii is the brightest star in Aquarius with an apparent magnitude of 2.91. It also has the proper name of Sadalsuud. Having cooled and swollen to around 50 times the Sun's diameter, it is around 2200 times as luminous as the Sun. It is around 6.4 times as massive as the Sun and around 56 million years old. Sadalsuud is 540 ± 20 light-years from Earth. Alpha Aquarii, also known as Sadalmelik, has an apparent magnitude of 2.94. It is 520 ± 20 light-years distant from Earth, and is around 6.5 times as massive as the Sun and 3000 times as luminous. It is 53 million years old.
γ Aquarii, also called Sadachbia, is a white main sequence star of spectral type A0V that is between 158 and 315 million years old and is around two and a half times the Sun's mass and double its radius. Of magnitude 3.85, it is 164 ± 9 light-years away. The name Sadachbia comes from the Arabic for "lucky stars of the tents", "sa'd al-akhbiya".
δ Aquarii, also known as Skat or Scheat, is a blue-white star of spectral type A2 and magnitude 3.27.
ε Aquarii, also known as Albali, is a blue-white star of spectral type A1 with an apparent magnitude of 3.77 and an absolute magnitude of 1.2.
ζ Aquarii is a double star of spectral type F2; both components are white. Overall, it appears to be of magnitude 3.6. The primary has a magnitude of 4.53 and the secondary a magnitude of 4.31, but both have an absolute magnitude of 0.6. Its orbital period is 760 years; the two components are currently moving farther apart.
θ Aquarii, sometimes called Ancha, is a G8 spectral type star with an apparent magnitude of 4.16 and an absolute magnitude of 1.4.
κ Aquarii is also called Situla.
λ Aquarii, also called Hudoor or Ekchusis, is a star of spectral type M2 and magnitude 3.74.
ξ Aquarii, also called Bunda, is an A7 spectral type star with an apparent magnitude of 4.69 and an absolute magnitude of 2.4.
π Aquarii, also called Seat, is a B0 spectral type star with an apparent magnitude of 4.66 and an absolute magnitude of −4.1.
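The apparent and absolute magnitudes listed for these stars are linked by the distance modulus, m - M = 5 log10(d / 10 pc). A quick sketch of the standard formula (neglecting interstellar extinction):

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Distance implied by the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# theta Aquarii (Ancha): m = 4.16, M = 1.4
print(round(distance_parsecs(4.16, 1.4)))  # about 36 parsecs
```

The same relation explains why π Aquarii, with an absolute magnitude of −4.1 but an apparent magnitude of only 4.66, must be far more distant than Ancha.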
Twelve exoplanet systems have been found in Aquarius as of 2013. Gliese 876, one of the nearest stars to Earth at a distance of 15 light-years, was the first red dwarf star to be found to possess a planetary system. It is orbited by four planets, including one terrestrial planet 6.6 times the mass of Earth. The planets vary in orbital period from 2 days to 124 days. 91 Aquarii is an orange giant star orbited by one planet, 91 Aquarii b. The planet's mass is 2.9 times the mass of Jupiter, and its orbital period is 182 days. Gliese 849 is a red dwarf star orbited by the first known long-period Jupiter-like planet, Gliese 849 b. The planet's mass is 0.99 times that of Jupiter and its orbital period is 1,852 days.
There are also less-prominent systems in Aquarius. WASP-6, a type G8 star of magnitude 12.4, is host to one exoplanet, WASP-6 b. The star is 307 parsecs from Earth and has a mass of 0.888 solar masses and a radius of 0.87 solar radii. WASP-6 b was discovered in 2008 by the transit method. It orbits its parent star every 3.36 days at a distance of 0.042 astronomical units (AU). It is 0.503 Jupiter masses but has a proportionally larger radius of 1.224 Jupiter radii. HD 206610, a K0 star located 194 parsecs from Earth, is host to one planet, HD 206610 b. The host star is larger than the Sun: more massive at 1.56 solar masses and larger at 6.1 solar radii. The planet was discovered by the radial velocity method in 2010 and has a mass of 2.2 Jupiter masses. It orbits every 610 days at a distance of 1.68 AU. Much closer to its sun is WASP-47 b, which orbits every 4.15 days only 0.052 AU from its sun, the yellow dwarf (G9V) WASP-47. WASP-47 is close in size to the Sun, having a radius of 1.15 solar radii and a mass even closer at 1.08 solar masses. WASP-47 b was discovered in 2011 by the transit method, like WASP-6 b. It is slightly larger than Jupiter, with a mass of 1.14 Jupiter masses and a radius of 1.15 Jupiter radii.
There are several more single-planet systems in Aquarius. HD 210277, a magnitude 6.63 yellow star located 21.29 parsecs from Earth, is host to one known planet: HD 210277 b. The 1.23 Jupiter mass planet orbits at nearly the same distance as Earth orbits the Sun (1.1 AU), though its orbital period is significantly longer, at around 442 days. HD 210277 b was discovered earlier than most of the other planets in Aquarius, detected by the radial velocity method in 1998. The star it orbits resembles the Sun beyond their similar spectral class; it has a radius of 1.1 solar radii and a mass of 1.09 solar masses. HD 212771 b, a larger planet at 2.3 Jupiter masses, orbits host star HD 212771 at a distance of 1.22 AU. The star itself, barely below the threshold of naked-eye visibility at magnitude 7.6, is a G8IV (yellow subgiant) star located 131 parsecs from Earth. Though it has a similar mass to the Sun (1.15 solar masses), it is significantly less dense, with a radius of 5 solar radii. Its lone planet was discovered in 2010 by the radial velocity method, like several other exoplanets in the constellation.
As of 2013, there were only two known multiple-planet systems within the bounds of Aquarius: the Gliese 876 and HD 215152 systems. The former is quite prominent; the latter has only two planets and a host star farther away at 21.5 parsecs. The HD 215152 system consists of the planets HD 215152 b and HD 215152 c orbiting their K0-type, magnitude 8.13 sun. Both discovered in 2011 by the radial velocity method, the two tiny planets orbit very close to their host star. HD 215152 c is the larger at 0.0097 Jupiter masses (still significantly larger than the Earth, which weighs in at 0.00315 Jupiter masses); its sibling is barely smaller at 0.0087 Jupiter masses. The uncertainties in the mass measurements are large enough to make this discrepancy statistically insignificant. HD 215152 c also orbits further from the star than HD 215152 b, at 0.0852 AU compared to 0.0652.
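The Earth-mass comparison above is straight unit arithmetic using the figure the text itself quotes, 0.00315 Jupiter masses per Earth mass. As a sketch:

```python
EARTH_MASS_IN_JUPITER = 0.00315  # Earth's mass in Jupiter masses, as quoted above

def to_earth_masses(mass_jupiter):
    """Convert a planetary mass from Jupiter masses to Earth masses."""
    return mass_jupiter / EARTH_MASS_IN_JUPITER

# HD 215152 c (0.0097 Jupiter masses) and HD 215152 b (0.0087 Jupiter masses):
print(round(to_earth_masses(0.0097), 1))  # about 3.1 Earth masses
print(round(to_earth_masses(0.0087), 1))  # about 2.8 Earth masses
```

Both planets thus fall in the roughly three-Earth-mass range, consistent with the text's description of them as tiny by exoplanet standards but significantly larger than Earth.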
On 23 February 2017, NASA announced that ultracool dwarf star TRAPPIST-1 in Aquarius has seven Earth-like rocky planets. Of these, three are in the system's habitable zone, and may contain water. The discovery of the TRAPPIST-1 system is seen by astronomers as a significant step toward finding life beyond Earth.
Because of its position away from the galactic plane, the majority of deep-sky objects in Aquarius are galaxies, globular clusters, and planetary nebulae. Aquarius contains three deep sky objects that are in the Messier catalog: the globular clusters Messier 2 and Messier 72, and the open cluster Messier 73. Two well-known planetary nebulae are also located in Aquarius: the Saturn Nebula (NGC 7009), to the southeast of μ Aquarii; and the famous Helix Nebula (NGC 7293), southwest of δ Aquarii.
M2, also catalogued as NGC 7089, is a rich globular cluster located approximately 37,000 light-years from Earth. At magnitude 6.5, it is viewable in small-aperture instruments, but a 100 mm aperture telescope is needed to resolve any stars. M72, also catalogued as NGC 6981, is a small 9th magnitude globular cluster located approximately 56,000 light-years from Earth. M73, also catalogued as NGC 6994, is an open cluster with highly disputed status.
Aquarius is also home to several planetary nebulae. NGC 7009, also known as the Saturn Nebula, is an 8th magnitude planetary nebula located 3,000 light-years from Earth. It was given its moniker by the 19th century astronomer Lord Rosse for its resemblance to the planet Saturn in a telescope; it has faint protrusions on either side that resemble Saturn's rings. It appears blue-green in a telescope and has a central star of magnitude 11.3. Compared to the Helix Nebula, another planetary nebula in Aquarius, it is quite small. NGC 7293, also known as the Helix Nebula, is the closest planetary nebula to Earth at a distance of 650 light-years. It covers 0.25 square degrees, making it also the largest planetary nebula as seen from Earth. However, because it is so large, it is only viewable as a very faint object, though it has a fairly high integrated magnitude of 6.0.
One of the visible galaxies in Aquarius is NGC 7727, of particular interest to amateur astronomers who wish to discover or observe supernovae. A spiral galaxy (type S), it has an integrated magnitude of 10.7 and is 3 by 3 arcminutes. NGC 7252 is a tangle of stars resulting from the collision of two large galaxies and is known as the Atoms-for-Peace galaxy because of its resemblance to a cartoon atom.
There are three major meteor showers with radiants in Aquarius: the Eta Aquariids, the Delta Aquariids, and the Iota Aquariids.
The Eta Aquariids are the strongest meteor shower radiating from Aquarius. It peaks between 5 and 6 May with a rate of approximately 35 meteors per hour. Originally discovered by Chinese astronomers in 401, Eta Aquariids can be seen coming from the Water Jar beginning on 21 April and as late as 12 May. The parent body of the shower is Halley's Comet, a periodic comet. Fireballs are common shortly after the peak, approximately between 9 May and 11 May. The normal meteors appear to have yellow trails.
The Delta Aquariids is a double radiant meteor shower that peaks first on 29 July and second on 6 August. The first radiant is located in the south of the constellation, while the second radiant is located in the northern circlet of Pisces asterism. The southern radiant's peak rate is about 20 meteors per hour, while the northern radiant's peak rate is about 10 meteors per hour.
The Iota Aquariids is a fairly weak meteor shower that peaks on 6 August, with a rate of approximately 8 meteors per hour.
The Sun appears in the constellation Aquarius from 16 February to 11 March. In tropical astrology, the Sun is considered to be in the sign Aquarius from 20 January to 19 February, and in sidereal astrology, from 15 February to 14 March.
Aquarius is also associated with the Age of Aquarius, a concept popular in 1960s counterculture. Despite this prominence, the Age of Aquarius will not dawn until the year 2597, as an astrological age does not begin until the Sun is in a particular constellation on the vernal equinox.
Anime
Anime (Japanese: アニメ; plural: "anime"), sometimes called Japanimation, is hand-drawn and computer animation originating from Japan. The term "anime" is derived from the English word "animation", and in Japan is used to refer to all forms of animated media. Outside Japan, the term refers specifically to animation from Japan or to a Japanese-disseminated animation style often characterized by colorful graphics, vibrant characters and fantastical themes. This culturally abstract approach to the word's meaning may open the possibility of anime produced in countries other than Japan.
The earliest commercial Japanese animation dates to 1917. A characteristic art style emerged in the 1960s with the works of Osamu Tezuka and spread in the second half of the 20th century, developing a large domestic and international audience. Anime can be distributed theatrically, by way of television broadcasts, directly to home media, and over the Internet. In addition to completely original works, anime are often adaptations of Japanese comics (known as manga), light novels, or video games.
Production methods and techniques related to anime have adapted over time in response to emergent technologies. As a multimedia art form, it combines graphic art, characterization, cinematography, and other creative techniques. Anime production typically focuses less on the animation of movement and more on the realism of settings as well as the use of camera effects, including panning, zooming, and angle shots. Diverse art styles are used, and character proportions and features can be quite varied, including characteristically large or realistically sized emotive eyes. Anime is classified into numerous genres targeting both broad and niche audiences.
The anime industry consists of over 430 production studios, with major names including Studio Ghibli, Gainax, and Toei Animation. Despite comprising only a fraction of Japan's domestic film market, anime makes up a majority of Japanese DVD and Blu-ray sales. It has also seen international success with the rise of English-dubbed and subbed programming. Japanese anime accounts for 60% of the world's animated television shows.
Anime is an art form, specifically animation, that includes all genres found in cinema, but it can be mistakenly classified as a genre. In Japanese, the term "anime" is used as a blanket term to refer to all forms of animation from around the world. In English, "anime" () is more restrictively used to denote a "Japanese-style animated film or television entertainment" or as "a style of animation created in Japan".
The etymology of the word "anime" is disputed. The English term "animation" is written in Japanese "katakana" as ("animēshon") and is ("anime") in its shortened form. The pronunciation of "anime" in Japanese differs from pronunciations in other languages such as Standard English, which has different vowels and stress relative to Japanese, where each mora carries equal stress. As with a few other Japanese words such as "saké", "Pokémon", and "Kobo Abé", English-language texts sometimes spell "anime" as "animé" (as in French), with an acute accent over the final "e", to cue the reader to pronounce the letter, not to leave it silent as Standard English orthography may suggest.
Some sources claim that "anime" derives from the French term for animation "dessin animé", but others believe this to be a myth derived from the French popularity of the medium in the late 1970s and 1980s. In English, "anime"—when used as a common noun—normally functions as a mass noun. (For example: "Do you watch anime?" or "How much anime have you collected?") Prior to the widespread use of "anime", the term "Japanimation" was prevalent throughout the 1970s and 1980s. In the mid-1980s, the term "anime" began to supplant "Japanimation". In general, the latter term now only appears in period works where it is used to distinguish and identify Japanese animation.
The word "anime" has also been criticized, e.g. in 1987, when Hayao Miyazaki stated that he despised the truncated word "anime" because to him it represented the desolation of the Japanese animation industry. He equated the desolation with animators lacking motivation and with mass-produced, overly expressionistic products relying upon a fixed iconography of facial expressions and protracted and exaggerated action scenes but lacking depth and sophistication in that they do not attempt to convey emotion or thought.
The first format of anime was theatrical viewing, which began with commercial productions in 1917. The earliest animated films were crude and were accompanied by live musical performance before sound and vocal components were added to productions. On July 14, 1958, Nippon Television aired "Mogura no Abanchūru" ("Mole's Adventure"), both the first televised and first color anime to debut. It was not until the 1960s that the first televised series were broadcast, and anime has remained a popular television medium ever since. Works released in a direct-to-video format are called "original video animation" (OVA) or "original animation video" (OAV); they are typically not released theatrically or televised prior to home media release. The emergence of the Internet has led some animators to distribute works online in a format called "original net anime" (ONA).
The home distribution of anime releases was popularized in the 1980s with the VHS and LaserDisc formats. The VHS NTSC video format used in both Japan and the United States is credited with aiding the rising popularity of anime in the 1990s. The LaserDisc and VHS formats were superseded by the DVD format, which offered unique advantages, including multiple subtitle and dub tracks on the same disc. The DVD format also had drawbacks: its region coding, adopted by the industry to address licensing, piracy, and export problems, restricted playback to the region indicated on the DVD player. The Video CD (VCD) format was popular in Hong Kong and Taiwan, but became only a minor format in the United States that was closely associated with bootleg copies.
Japanese animation began in the early 20th century, when Japanese filmmakers experimented with the animation techniques also pioneered in France, Germany, the United States and Russia. A claim for the earliest Japanese animation is "Katsudō Shashin", an undated and private work by an unknown creator. In 1917, the first professional and publicly displayed works began to appear. Animators such as Ōten Shimokawa, Jun'ichi Kōuchi, and Seitarou Kitayama produced numerous works, with the oldest surviving film being Kōuchi's "Namakura Gatana", a two-minute clip of a samurai trying to test a new sword on his target only to suffer defeat. The 1923 Great Kantō earthquake resulted in widespread destruction to Japan's infrastructure and the destruction of Shimokawa's warehouse, destroying most of these early works.
By the 1930s animation was well established in Japan as an alternative format to the live-action industry. It suffered competition from foreign producers and many animators—like Noburō Ōfuji and Yasuji Murata—still worked in cheaper cutout animation rather than cel animation. Other creators, Kenzō Masaoka and Mitsuyo Seo, nonetheless made great strides in animation technique; they benefited from the patronage of the government, which employed animators to produce educational shorts and propaganda. The first talkie anime was "Chikara to Onna no Yo no Naka", produced by Masaoka in 1933. By 1940, numerous anime artists' organizations had risen, including the Shin Mangaha Shudan and Shin Nippon Mangaka. The first feature-length animated film was "Momotaro's Divine Sea Warriors" directed by Seo in 1944 with sponsorship by the Imperial Japanese Navy.
The success of The Walt Disney Company's 1937 feature film "Snow White and the Seven Dwarfs" profoundly influenced many Japanese animators. The 1950s saw a proliferation of short, animated advertisements made in Japan for television broadcasting. In the 1960s, manga artist and animator Osamu Tezuka adapted and simplified many Disney animation techniques to reduce costs and to limit the number of frames in productions. He intended this as a temporary measure to allow him to produce material on a tight schedule with inexperienced animation staff. "Three Tales", aired in 1960, was the first anime shown on television. The first anime television series was "Otogi Manga Calendar", aired from 1961 to 1964.
The 1970s saw a surge of growth in the popularity of "manga", Japanese comic books and graphic novels, many of which were later animated. The work of Osamu Tezuka drew particular attention: he has been called a "legend" and the "god of manga". His work—and that of other pioneers in the field—inspired characteristics and genres that remain fundamental elements of anime today. The giant robot genre (known as "mecha" outside Japan), for instance, took shape under Tezuka, developed into the Super Robot genre under Go Nagai and others, and was revolutionized at the end of the decade by Yoshiyuki Tomino, who developed the Real Robot genre. Robot anime like the "Gundam" and "The Super Dimension Fortress Macross" series became instant classics in the 1980s, and the robot genre of anime is still one of the most common in Japan and worldwide today. In the 1980s, anime became more accepted in the mainstream in Japan (although less than manga), and experienced a boom in production. Following a few successful adaptations of anime in overseas markets in the 1980s, anime gained increased acceptance in those markets in the 1990s and even more at the turn of the 21st century. In 2002, "Spirited Away", a Studio Ghibli production directed by Hayao Miyazaki, won the Golden Bear at the Berlin International Film Festival, and in 2003, at the 75th Academy Awards, it won the Academy Award for Best Animated Feature.
Anime differs greatly from other forms of animation by its diverse art styles, methods of animation, its production, and its process. Visually, anime is a diverse art form that contains a wide variety of art styles, differing from one creator, artist, and studio. While no one art style predominates anime as a whole, they do share some similar attributes in terms of animation technique and character design.
Anime follows the typical production of animation, including storyboarding, voice acting, character design, and cel production ("Shirobako", itself a series, highlights many of the aspects involved in anime production). Since the 1990s, animators have increasingly used computer animation to improve the efficiency of the production process. Artists like Noburō Ōfuji pioneered the earliest anime works, which were experimental and consisted of images drawn on blackboards, stop motion animation of paper cutouts, and silhouette animation. Cel animation grew in popularity until it came to dominate the medium. In the 21st century, the use of other animation techniques is mostly limited to independent short films, including the stop motion puppet animation work produced by Tadahito Mochinaga, Kihachirō Kawamoto and Tomoyasu Murata. Computers were integrated into the animation process in the 1990s, with works such as "Ghost in the Shell" and "Princess Mononoke" mixing cel animation with computer-generated images. Fuji Film, a major cel production company, announced it would stop cel production, prompting an industry panic to procure cel imports and hastening the switch to digital processes.
Prior to the digital era, anime was produced with traditional animation methods using a pose-to-pose approach. The majority of mainstream anime uses fewer expressive key frames and more in-between animation.
Japanese animation studios were pioneers of many limited animation techniques, and have given anime a distinct set of conventions. Unlike Disney animation, where the emphasis is on movement, anime emphasizes art quality and lets limited animation techniques make up for the lack of time spent on movement. Such techniques are often used not only to meet deadlines but also as artistic devices. Anime scenes place emphasis on achieving three-dimensional views, and backgrounds are instrumental in creating the atmosphere of the work. The backgrounds are not always invented and are occasionally based on real locations, as exemplified in "Howl's Moving Castle" and "The Melancholy of Haruhi Suzumiya". Oppliger stated that anime is one of the rare mediums where putting together an all-star cast usually comes out looking "tremendously impressive".
The cinematic effects of anime differentiate it from the stage plays found in American animation. Anime is cinematically shot as if by camera, with techniques ranging from panning, zooming, distance and angle shots to more complex dynamic shots that would be difficult to produce in reality. In anime, the animation is produced before the voice acting, contrary to American animation, which does the voice acting first; this can cause lip sync errors in the Japanese version.
Body proportions of human anime characters tend to accurately reflect the proportions of the human body in reality. The height of the head is considered by the artist as the base unit of proportion. Head heights can vary, but most anime characters are about seven to eight heads tall. Anime artists occasionally make deliberate modifications to body proportions to produce super deformed characters that feature a disproportionately small body compared to the head; many super deformed characters are two to four heads tall. Some anime works like "Crayon Shin-chan" completely disregard these proportions, in such a way that they resemble caricatured Western cartoons.
A common anime character design convention is exaggerated eye size. The animation of characters with large eyes in anime can be traced back to Osamu Tezuka, who was deeply influenced by such early animation characters as Betty Boop, who was drawn with disproportionately large eyes. Tezuka is a central figure in anime and manga history, whose iconic art style and character designs allowed for the entire range of human emotions to be depicted solely through the eyes. The artist adds variable color shading to the eyes and particularly to the cornea to give them greater depth. Generally, a mixture of a light shade, the tone color, and a dark shade is used. Cultural anthropologist Matt Thorn argues that Japanese animators and audiences do not perceive such stylized eyes as inherently more or less foreign. However, not all anime characters have large eyes. For example, the works of Hayao Miyazaki are known for having realistically proportioned eyes, as well as realistic hair colors on their characters.
Hair in anime is often unnaturally lively and colorful or uniquely styled. The movement of hair in anime is exaggerated and "hair action" is used to emphasize the action and emotions of characters for added visual effect. Poitras traces hairstyle color to cover illustrations on manga, where eye-catching artwork and colorful tones are attractive for children's manga. Despite being produced for a domestic market, anime features characters whose race or nationality is not always defined, and this is often a deliberate decision, such as in the "Pokémon" animated series.
Anime and manga artists often draw from a common canon of iconic facial expression illustrations to denote particular moods and thoughts. These techniques are often different in form than their counterparts in Western animation, and they include a fixed iconography that is used as shorthand for certain emotions and moods. For example, a male character may develop a nosebleed when aroused. A variety of visual symbols are employed, including sweat drops to depict nervousness, visible blushing for embarrassment, or glowing eyes for an intense glare.
The opening and credits sequences of most anime television episodes are accompanied by Japanese pop or rock songs, often by reputed bands. They may be written with the series in mind, but are also aimed at the general music market, and therefore often allude only vaguely or not at all to the themes or plot of the series. Pop and rock songs are also sometimes used as incidental music ("insert songs") in an episode, often to highlight particularly important scenes.
Anime are often classified by target demographic, including shoujo, shounen, and a diverse range of genres targeting an adult audience. Shoujo and shounen anime sometimes contain elements popular with children of both sexes in an attempt to gain crossover appeal. Adult anime may feature a slower pace or greater plot complexity that younger audiences may typically find unappealing, as well as adult themes and situations. A subset of adult anime works featuring pornographic elements are labeled "R18" in Japan, and are internationally known as "hentai". By contrast, some anime subgenres incorporate "ecchi", sexual themes or undertones without depictions of sexual intercourse, as typified in the comedic or harem genres; due to its popularity among adolescent and adult anime enthusiasts, the inclusion of such elements is considered a form of fan service. Some genres explore homosexual romances, such as "yaoi" (male homosexuality) and "yuri" (female homosexuality). While often used in a pornographic context, the terms "yaoi" and "yuri" can also be used more broadly to describe or focus on the themes or the development of the relationships themselves.
Anime's genre classification differs from that of other types of animation and does not lend itself to simple classification. Gilles Poitras compared labeling "Gundam 0080", with its complex depiction of war, a "giant robot" anime to simply labeling "War and Peace" a "war novel". Science fiction is a major anime genre and includes important historical works like Tezuka's "Astro Boy" and Yokoyama's "Tetsujin 28-go". A major subgenre of science fiction is mecha, with the "Gundam" metaseries being iconic. The diverse fantasy genre includes works based on Asian and Western traditions and folklore; examples include the Japanese feudal fairytale "InuYasha", and the depiction of Scandinavian goddesses who move to Japan to maintain a computer called Yggdrasil in "Ah! My Goddess". Genre crossing in anime is also prevalent, such as the blend of fantasy and comedy in "Dragon Half", and the incorporation of slapstick humor in the crime anime film "Castle of Cagliostro". Other subgenres found in anime include magical girl, harem, sports, martial arts, literary adaptations, medievalism, and war.
The animation industry consists of more than 430 production companies, with some of the major studios including Toei Animation, Gainax, Madhouse, Gonzo, Sunrise, Bones, TMS Entertainment, Nippon Animation, P.A.Works, Studio Pierrot and Studio Ghibli. Many of the studios are organized into a trade association, The Association of Japanese Animations. There is also a labor union for workers in the industry, the Japanese Animation Creators Association. Studios will often work together to produce more complex and costly projects, as done with Studio Ghibli's "Spirited Away". An anime episode can cost between US$100,000 and US$300,000 to produce. In 2001, animation accounted for 7% of the Japanese film market, above the 4.6% market share for live-action works. The popularity and success of anime are seen in the profitability of the DVD market, which contributes nearly 70% of total sales. According to a 2016 article in Nikkei Asian Review, Japanese television stations have bought over worth of anime from production companies "over the past few years", compared with under from overseas. There has been a rise in sales of shows to television stations in Japan, caused by late night anime with adults as the target demographic. This type of anime is less popular outside Japan, being considered "more of a niche product". "Spirited Away" (2001) is the all-time highest-grossing film in Japan.
Ankara
Ankara, historically known as Ancyra (Greek: Ἄγκυρα, "anchor") and Angora, is the capital of Turkey. With a population of 4,587,558 in the urban centre (2014) and 5,150,072 in its province (2015), it is Turkey's second largest city after Istanbul (the former imperial capital), having outranked İzmir in the 20th century. Ankara covers an area of 24,521 km2 (9,468 sq mi).
On 23 April 1920 the Grand National Assembly of Turkey was established in Ankara, which became the headquarters of Atatürk and the Turkish National Movement during the Turkish War of Independence. Ankara became the new Turkish capital upon the establishment of the Republic on 29 October 1923, succeeding in this role the former Turkish capital Istanbul (Constantinople) following the fall of the Ottoman Empire. The government is a prominent employer, but Ankara is also an important commercial and industrial city, located at the centre of Turkey's road and railway networks. The city gave its name to the Angora wool shorn from Angora rabbits, the long-haired Angora goat (the source of mohair), and the Angora cat. The area is also known for its pears, honey and muscat grapes. Although situated in one of the driest places of Turkey and surrounded mostly by steppe vegetation except for the forested areas on the southern periphery, Ankara can be considered a green city in terms of green areas per inhabitant, at per head.
Ankara is a very old city with various Hittite, Phrygian, Hellenistic, Roman, Byzantine, and Ottoman archaeological sites. The historical centre of town is a rocky hill rising over the left bank of the Ankara Çayı, a tributary of the Sakarya River, the ancient Sangarius. The hill remains crowned by the ruins of the old citadel. Although few of its outworks have survived, there are well-preserved examples of Roman and Ottoman architecture throughout the city, the most remarkable being the Temple of Augustus and Rome, which bears the Monumentum Ancyranum, the inscription recording the "Res Gestae Divi Augusti".
The orthography of the name Ankara has varied over the ages. It has been identified with the Hittite cult center "Ankuwaš", although this remains a matter of debate. In classical antiquity and during the medieval period, the city was known as "Ánkyra" (, "anchor") in Greek and "Ancyra" in Latin; the Galatian Celtic name was probably a similar variant. Following its annexation by the Seljuk Turks in 1073, the city became known in many European languages as "Angora"; it was also known in Ottoman Turkish as "Engürü". The form "Angora" is preserved in the names of breeds of many different kinds of animals, and in the names of several locations in the US (see Angora).
The region's history can be traced back to the Bronze Age Hattic civilization, which was succeeded in the 2nd millennium BC by the Hittites, in the 10th century BC by the Phrygians, and later by the Lydians, Persians, Greeks, Galatians, Romans, Byzantines, and Turks (the Seljuk Sultanate of Rûm, the Ottoman Empire and finally republican Turkey).
The oldest settlements in and around the city center of Ankara belonged to the Hattic civilization which existed during the Bronze Age and was gradually absorbed c. 2000–1700 BC by the Indo-European Hittites. The city grew significantly in size and importance under the Phrygians starting around 1000 BC, and experienced a large expansion following a mass migration from Gordion (the capital of Phrygia) after an earthquake which severely damaged that city around that time. In Phrygian tradition, King Midas was venerated as the founder of Ancyra, but Pausanias mentions that the city was actually far older, which accords with present archaeological knowledge.
Phrygian rule was succeeded first by Lydian and later by Persian rule, though the strongly Phrygian character of the peasantry remained, as evidenced by the gravestones of the much later Roman period. Persian sovereignty lasted until the Persians' defeat at the hands of Alexander the Great who conquered the city in 333 BC. Alexander came from Gordion to Ankara and stayed in the city for a short period. After his death at Babylon in 323 BC and the subsequent division of his empire among his generals, Ankara and its environs fell into the share of Antigonus.
Another important expansion took place under the Greeks of Pontos who came there around 300 BC and developed the city as a trading center for the commerce of goods between the Black Sea ports and Crimea to the north; Assyria, Cyprus, and Lebanon to the south; and Georgia, Armenia and Persia to the east. By that time the city also took its name Ἄγκυρα ("Ánkyra", meaning "anchor" in Greek) which, in slightly modified form, provides the modern name of "Ankara".
In 278 BC, the city, along with the rest of central Anatolia, was occupied by a Celtic group, the Galatians, who were the first to make Ankara one of their main tribal centers, the headquarters of the Tectosages tribe. Other centers were Pessinus, today's Ballıhisar, for the Trocmi tribe, and Tavium, to the east of Ankara, for the Tolistobogii tribe. The city was then known as "Ancyra". The Celtic element was probably relatively small in numbers; a warrior aristocracy which ruled over Phrygian-speaking peasants. However, the Celtic language continued to be spoken in Galatia for many centuries. At the end of the 4th century, St. Jerome, a native of Dalmatia, observed that the language spoken around Ankara was very similar to that being spoken in the northwest of the Roman world near Trier.
The city subsequently passed under the control of the Roman Empire. In 25 BC, Emperor Augustus raised it to the status of a "polis" and made it the capital city of the Roman province of Galatia. Ankara is famous for the "Monumentum Ancyranum" ("Temple of Augustus and Rome"), which contains the official record of the "Acts of Augustus", known as the "Res Gestae Divi Augusti", an inscription cut in marble on the walls of this temple. The ruins of Ancyra still furnish valuable bas-reliefs, inscriptions and other architectural fragments today. Two other Galatian tribal centers, Tavium near Yozgat, and Pessinus (Balhisar) to the west, near Sivrihisar, continued to be reasonably important settlements in the Roman period, but it was Ancyra that grew into a grand metropolis.
An estimated 200,000 people lived in Ancyra in good times during the Roman Empire, a far greater number than was to be the case from after the fall of the Roman Empire until the early 20th century. A small river, the Ankara Çayı, ran through the center of the Roman town. It has now been covered and diverted, but it formed the northern boundary of the old town during the Roman, Byzantine and Ottoman periods. Çankaya, the rim of the majestic hill to the south of the present city center, stood well outside the Roman city, but may have been a summer resort. In the 19th century, the remains of at least one Roman villa or large house were still standing not far from where the Çankaya Presidential Residence stands today. To the west, the Roman city extended until the area of the Gençlik Park and Railway Station, while on the southern side of the hill, it may have extended downwards as far as the site presently occupied by Hacettepe University. It was thus a sizeable city by any standards and much larger than the Roman towns of Gaul or Britannia.
Ancyra's importance rested on the fact that it was the junction point where the roads in northern Anatolia running north–south and east–west intersected, giving it major strategic importance for Rome's eastern frontier. The great imperial road running east passed through Ankara and a succession of emperors and their armies came this way. They were not the only ones to use the Roman highway network, which was equally convenient for invaders. In the second half of the 3rd century, Ancyra was invaded in rapid succession by the Goths coming from the west (who rode far into the heart of Cappadocia, taking slaves and pillaging) and later by the Arabs. For about a decade, the town was one of the western outposts of the Palmyrene empress Zenobia, who took advantage of a period of weakness and disorder in the Roman Empire to set up a short-lived state of her own based in the Syrian Desert.
The town was reincorporated into the Roman Empire under Emperor Aurelian in 272. The tetrarchy, a system of multiple (up to four) emperors introduced by Diocletian (284–305), seems to have engaged in a substantial programme of rebuilding and of road construction from Ankara westwards to Germe and Dorylaeum (now Eskişehir).
In its heyday, Roman Ankara was a large market and trading center but it also functioned as a major administrative capital, where a high official ruled from the city's Praetorium, a large administrative palace or office. During the 3rd century, life in Ancyra, as in other Anatolian towns, seems to have become somewhat militarized in response to the invasions and instability of the town.
The city was well known during the 4th century as a centre of Christian activity (see also below), due to frequent imperial visits and through the letters of the pagan scholar Libanius. Bishop Marcellus of Ancyra and Basil of Ancyra were active in the theological controversies of their day, and the city was the site of no fewer than three church synods, in 314, 358 and 375, the latter two in favour of Arianism.
The city was visited by Emperor Constans I (r. 337–350) in 347 and 350, Julian (r. 361–363) during his Persian campaign in 362, and Julian's successor Jovian (r. 363–364) in winter 363/364 (he entered his consulship while in the city). After Jovian's death soon after, Valentinian I (r. 364–375) was acclaimed emperor at Ancyra, and in the next year his brother Valens (r. 364–378) used Ancyra as his base against the usurper Procopius. When the province of Galatia was divided sometime in 396/99, Ancyra remained the civil capital of Galatia I, as well as its ecclesiastical centre (metropolitan see). Emperor Arcadius (r. 395–408) frequently used the city as his summer residence, and some information about the ecclesiastical affairs of the city during the early 5th century is found in the works of Palladius of Galatia and Nilus of Galatia.
In 479, the rebel Marcian attacked the city, without being able to capture it. In 610/11, Comentiolus, brother of Emperor Phocas (r. 602–610), launched his own unsuccessful rebellion in the city against Heraclius (r. 610–641). Ten years later, in 620 or more likely 622, it was captured by the Sassanid Persians during the Byzantine–Sassanid War of 602–628. Although the city returned to Byzantine hands after the end of the war, the Persian presence left traces in the city's archaeology, and likely began the process of its transformation from a late antique city to a medieval fortified settlement.
In 654, the city was captured for the first time by the Arabs of the Rashidun Caliphate, under Muawiyah, the future founder of the Umayyad Caliphate. At about the same time, the themes were established in Anatolia, and Ancyra became capital of the Opsician Theme, which was the largest and most important theme until it was split up under Emperor Constantine V (r. 741–775); Ancyra then became the capital of the new Bucellarian Theme. The city was captured at least temporarily by the Umayyad prince Maslama ibn Hisham in 739/40, the last of the Umayyads' territorial gains from the Byzantine Empire. Ancyra was attacked without success by Abbasid forces in 776 and in 798/99. In 805, Emperor Nikephoros I (r. 802–811) strengthened its fortifications, a fact which probably saved it from sack during the large-scale invasion of Anatolia by Caliph Harun al-Rashid in the next year. Arab sources report that Harun and his successor al-Ma'mun (r. 813–833) took the city, but this information is a later invention. In 838, however, during the Amorium campaign, the armies of Caliph al-Mu'tasim (r. 833–842) converged and met at the city; abandoned by its inhabitants, Ancyra was razed to the ground, before the Arab armies went on to besiege and destroy Amorium. In 859, Emperor Michael III (r. 842–867) came to the city during a campaign against the Arabs, and ordered its fortifications restored. In 872, the city was menaced, but not taken, by the Paulicians under Chrysocheir. The last Arab raid to reach the city was undertaken in 931, by the Abbasid governor of Tarsus, Thamal al-Dulafi, but the city again was not captured.
Early Christian martyrs of Ancyra, about whom little is known, included Proklos and Hilarios, who were natives of the otherwise unknown nearby village of Kallippi and suffered repression under the emperor Trajan (98–117). In the 280s we hear of Philumenos, a Christian corn merchant from southern Anatolia who was captured and martyred in Ankara, and of Eustathius.
As in other Roman towns, the reign of Diocletian marked the culmination of the persecution of the Christians. In 303, Ancyra was one of the towns where the co-Emperors Diocletian and his deputy Galerius launched their anti-Christian persecution. In Ancyra, their first target was the 38-year-old Bishop of the town, whose name was Clement. Clement's life describes how he was taken to Rome, then sent back, and forced to undergo many interrogations and hardship before he, and his brother, and various companions were put to death. The remains of the church of St. Clement can be found today in a building just off Işıklar Caddesi in the Ulus district. Quite possibly this marks the site where Clement was originally buried. Four years later, a doctor of the town named Plato and his brother Antiochus also became celebrated martyrs under Galerius. Theodotus of Ancyra is also venerated as a saint.
However, the persecution proved unsuccessful and in 314 Ancyra was the center of an important council of the early church; its 25 disciplinary canons constitute one of the most important documents in the early history of the administration of the Sacrament of Penance. The synod also considered ecclesiastical policy for the reconstruction of the Christian Church after the persecutions, and in particular the treatment of "lapsi"—Christians who had given in to forced paganism (sacrifices) to avoid martyrdom during these persecutions.
Though paganism was probably tottering in Ancyra in Clement's day, it may still have been the majority religion. Twenty years later, Christianity and monotheism had taken its place. Ancyra quickly turned into a Christian city, with a life dominated by monks and priests and theological disputes. The town council or senate gave way to the bishop as the main local figurehead. During the middle of the 4th century, Ancyra was involved in the complex theological disputes over the nature of Christ, and a form of Arianism seems to have originated there.
In 362–363, the Emperor Julian passed through Ancyra on his way to an ill-fated campaign against the Persians, and according to Christian sources, engaged in a persecution of various holy men. The stone base for a statue, with an inscription describing Julian as "Lord of the whole world from the British Ocean to the barbarian nations", can still be seen, built into the eastern side of the inner circuit of the walls of Ankara Castle. The Column of Julian which was erected in honor of the emperor's visit to the city in 362 still stands today. In 375, Arian bishops met at Ancyra and deposed several bishops, among them St. Gregory of Nyssa.
In the late 4th century, Ancyra became something of an imperial holiday resort. After Constantinople became the East Roman capital, emperors in the 4th and 5th centuries would retire from the humid summer weather on the Bosporus to the drier mountain atmosphere of Ancyra. Theodosius II (408–450) kept his court in Ancyra in the summers. Laws issued in Ancyra testify to the time they spent there.
The Metropolis of Ancyra continued to be a residential see of the Eastern Orthodox Church until the 20th century, with about 40,000 faithful, mostly Turkish-speaking, but that situation ended as a result of the 1923 Convention Concerning the Exchange of Greek and Turkish Populations. The earlier Armenian Genocide put an end to the residential eparchy of Ancyra of the Armenian Catholic Church, which had been established in 1850. It is also a titular metropolis of the Ecumenical Patriarchate of Constantinople.
Both the ancient Byzantine metropolitan archbishopric and the 'modern' Armenian eparchy are now listed by the Catholic Church as titular sees, with separate apostolic successions.
After the Battle of Manzikert in 1071, the Seljuk Turks overran much of Anatolia. By 1073, the Turkish settlers had reached the vicinity of Ancyra, and the city was captured shortly after, at the latest by the time of the rebellion of Nikephoros Melissenos in 1081. In 1101, when the Crusade under Raymond IV of Toulouse arrived, the city had been under Danishmend control for some time. The Crusaders captured the city, and handed it over to the Byzantine emperor Alexios I Komnenos (r. 1081–1118). Byzantine rule did not last long, and the city was captured by the Seljuk Sultanate of Rum at some unknown point; in 1127, it returned to Danishmend control until 1143, when the Seljuks of Rum retook it.
After the Battle of Köse Dağ in 1243, in which the Mongols defeated the Seljuks, most of Anatolia became part of the dominion of the Mongols. Taking advantage of Seljuk decline, a semi-religious caste of craftsmen and tradespeople named "Ahiler" chose Angora as their independent city-state in 1290. Orhan I, the second Bey of the Ottoman Empire, captured the city in 1356. Timur defeated Bayezid I at the Battle of Ankara in 1402 and took the city, but in 1403 Angora was again under Ottoman control.
The Levant Company maintained a factory in the town from 1639 to 1768. In the 19th century, its population was estimated at 20,000 to 60,000. It was sacked by Egyptians under Ibrahim Pasha in 1832. Prior to World War I, the town had a British consulate and a population of around 28,000, roughly of whom were Christian.
Following the Ottoman defeat in World War I, the Ottoman capital Constantinople (modern Istanbul) and much of Anatolia were occupied by the Allies, who planned to share these lands between Armenia, France, Greece, Italy and the United Kingdom, leaving for the Turks the core piece of land in central Anatolia. In response, the leader of the Turkish nationalist movement, Mustafa Kemal Atatürk, established the headquarters of his resistance movement in Angora in 1920. After the Turkish War of Independence was won and the Treaty of Sèvres was superseded by the Treaty of Lausanne (1923), the Turkish nationalists replaced the Ottoman Empire with the Republic of Turkey on 29 October 1923. A few days earlier, on 13 October 1923, Angora had officially replaced Constantinople as the new Turkish capital city, and Republican officials declared that the city's name was Ankara.
After Ankara became the capital of the newly founded Republic of Turkey, new development divided the city into an old section, called "Ulus", and a new section, called "Yenişehir". Ancient buildings reflecting Roman, Byzantine, and Ottoman history and narrow winding streets mark the old section. The new section, now centered on Kızılay Square, has the trappings of a more modern city: wide streets, hotels, theaters, shopping malls, and high-rises. Government offices and foreign embassies are also located in the new section. Ankara has experienced phenomenal growth since it was made Turkey's capital in 1923, when it was "a small town of no importance". In 1924, the year after the government had moved there, Ankara had about 35,000 residents. By 1927 there were 44,553 residents and by 1950 the population had grown to 286,781. Ankara continued to grow rapidly during the latter half of the 20th century and eventually outranked İzmir as Turkey's second largest city, after Istanbul. Ankara's urban population reached 4,587,558 in 2014, while the population of Ankara Province reached 5,150,072 in 2015.
After 1930, it became known officially in Western languages as Ankara. After the late 1930s the public stopped using the name "Angora".
The city has exported mohair (from the Angora goat) and Angora wool (from the Angora rabbit) internationally for centuries. In the 19th century, the city also exported substantial amounts of goat and cat skins, gum, wax, honey, berries, and madder root. It was connected to Istanbul by railway before the First World War, continuing to export mohair, wool, berries, and grain.
The Central Anatolia Region is one of the primary locations of grape and wine production in Turkey, and Ankara is particularly famous for its Kalecik Karası and Muscat grapes; and its Kavaklıdere wine, which is produced in the Kavaklıdere neighbourhood within the Çankaya district of the city. Ankara is also famous for its pears. Another renowned natural product of Ankara is its indigenous type of honey ("Ankara Balı") which is known for its light color and is mostly produced by the Atatürk Forest Farm and Zoo in the Gazi district, and by other facilities in the Elmadağ, Çubuk and Beypazarı districts. Çubuk-1 and Çubuk-2 dams on the Çubuk Brook in Ankara were among the first dams constructed in the Turkish Republic.
Ankara is the center of the state-owned and private Turkish defence and aerospace companies, where the industrial plants and headquarters of the Turkish Aerospace Industries, MKE, ASELSAN, Havelsan, Roketsan, FNSS, Nurol Makina, and numerous other firms are located. Exports to foreign countries from these defence and aerospace firms have steadily increased in the past decades. The IDEF in Ankara is one of the largest international expositions of the global arms industry. A number of the global automotive companies also have production facilities in Ankara, such as the German bus and truck manufacturer MAN SE. Ankara hosts the OSTIM Industrial Zone, Turkey's largest industrial park.
A large percentage of employment in Ankara is provided by state institutions, such as the ministries, subministries, and other administrative bodies of the Turkish government. There are also many foreign citizens working as diplomats or clerks in the embassies of their respective countries.
Ankara and its province are located in the Central Anatolia Region of Turkey. The Çubuk Brook flows through the city center of Ankara. It is connected in the western suburbs of the city to the Ankara River, which is a tributary of the Sakarya River.
Ankara has a cold semi-arid climate (Köppen climate classification: "BSk"). Under the Trewartha climate classification, Ankara has a middle latitude steppe climate ("BSk"). Due to its elevation and inland location, Ankara has cold and snowy winters, and hot and dry summers. Rainfall occurs mostly during the spring and autumn. The city lies in USDA Hardiness zone 7b, and its annual average precipitation is fairly low at , though precipitation can be observed throughout the year. Monthly mean temperatures range from in January to in July, with an annual mean of .
Ankara had a population of 75,000 in 1927. As of 2016, Ankara Province has a population of 5,346,518.
When Ankara became the capital of the Republic of Turkey in 1923, it was designated as a planned city for 500,000 future inhabitants. During the 1920s, 1930s and 1940s, the city grew at a planned and orderly pace. However, from the 1950s onward, the city grew much faster than envisioned, because unemployment and poverty forced people to migrate from the countryside into the city in order to seek a better standard of living. As a result, many illegal houses called gecekondu were built around the city, causing the unplanned and uncontrolled urban landscape of Ankara, as planned housing could not be built fast enough. Although precariously built, the vast majority of them have electricity, running water and modern household amenities.
Nevertheless, many of these gecekondus have been replaced by huge public housing projects in the form of tower blocks such as Elvankent, Eryaman and Güzelkent; and also as mass housing compounds for military and civil service accommodation. Although many gecekondus still remain, they too are gradually being replaced by mass housing compounds, as empty land plots in the city of Ankara for new construction projects are becoming impossible to find.
Çorum and Yozgat, which are located in Central Anatolia and whose population is decreasing, are the provinces with the highest net migration to Ankara. About half of the Central Anatolia population of 15,608,868 people resides in Ankara.
The population of Ankara has a higher education level than the country average. According to 2008 data, the literacy rate among those aged 15 and over was 88% in the province (91% for men and 86% for women), compared with 83% for Turkey as a whole (88% for men, 79% for women). This difference is particularly evident in the university-educated segment of the population. The ratio of university and high school graduates to the total population is 10.6% in Ankara, compared with 5.4% in Turkey.
The "Electricity, Gas, Bus General Directorate" (EGO) operates the Ankara Metro and other forms of public transportation. Ankara is currently served by a suburban rail named Ankaray (A1) and three subway lines (M1, M2, M3) of the Ankara Metro with about 300,000 total daily commuters, while an additional subway line (M4) is currently under construction. A long gondola lift with four stations connects the district of Şentepe to the Yenimahalle metro station.
The Ankara Central Station is a major rail hub in Turkey. The Turkish State Railways operates passenger train service from Ankara to other major cities, such as Istanbul, Eskişehir, Balıkesir, Kütahya, İzmir, Kayseri, Adana, Kars, Elâzığ, Malatya, Diyarbakır, Karabük, Zonguldak and Sivas. Commuter rail also runs between the stations of Sincan and Kayaş. On 13 March 2009, the new Yüksek Hızlı Tren (YHT) high-speed rail service began operation between Ankara and Eskişehir. On 23 August 2011, another YHT high-speed line commercially started its service between Ankara and Konya. On 25 July 2014, the Ankara–Istanbul high-speed line of YHT entered service.
Esenboğa International Airport, located in the north-east of the city, is Ankara's main airport.
The average amount of time people spend commuting on public transit in Ankara on a weekday is 71 minutes, and 17% of public transit passengers ride for more than two hours every day. The average amount of time people wait at a stop or station for public transit is sixteen minutes, while 28% of users wait for over twenty minutes on average every day. The average distance people usually ride in a single trip with public transit is , while 27% travel for over in a single direction.
Ankara is politically a triple battleground between the ruling conservative Justice and Development Party (AKP), the opposition Kemalist centre-left Republican People's Party (CHP) and the nationalist far-right Nationalist Movement Party (MHP). The province of Ankara is divided into 25 districts. The CHP's key and almost only political stronghold in Ankara lies within the central area of Çankaya, which is the city's most populous district. While the CHP has always gained between 60 and 70% of the vote in Çankaya since 2002, political support elsewhere throughout Ankara is minimal. The high population within Çankaya, as well as Yenimahalle to an extent, has allowed the CHP to take overall second place behind the AKP in both local and general elections, with the MHP a close third, despite the fact that the MHP is politically stronger than the CHP in almost every other district. Overall, the AKP enjoys the most support throughout the city. The electorate of Ankara thus tends to vote in favour of the political right, far more so than in the other main cities of Istanbul and İzmir. The 2013–14 protests against the AKP government were particularly strong in Ankara, turning deadly on multiple occasions.
The city suffered from a series of terrorist attacks in 2015 and 2016, most notably on 10 October 2015; 17 February 2016; 13 March 2016; and 15 July 2016.
Melih Gökçek was the Metropolitan Mayor of Ankara between 1994 and 2017. Initially elected in the 1994 local elections, he was re-elected in 1999, 2004 and 2009. In the 2014 local elections, Gökçek stood for a fifth term. The MHP's metropolitan mayoral candidate for the 2009 local elections, Mansur Yavaş, stood as the CHP's candidate against Gökçek in 2014. In a heavily controversial election, Gökçek was declared the winner by a margin of just 1% over Yavaş amid allegations of systematic electoral fraud. After the Supreme Electoral Council and the courts rejected his appeals, Yavaş declared his intention to take the irregularities to the European Court of Human Rights. Although Gökçek was inaugurated for a fifth term, most election observers believe that Yavaş was the winner of the election. Gökçek resigned on 28 October 2017 and was replaced by the former mayor of Sincan district, Mustafa Tuna.
Since 8 April 2019, the Mayor of Ankara has been Mansur Yavaş of the Republican People's Party (CHP), who won the mayoral election that year.
The foundations of the Ankara castle and citadel were laid by the Galatians on a prominent lava outcrop (), and the rest was completed by the Romans. The Byzantines and Seljuks made further restorations and additions. The area around and inside the citadel, being the oldest part of Ankara, contains many fine examples of traditional architecture, as well as recreational areas in which to relax. Many restored traditional Turkish houses inside the citadel area have found new life as restaurants, serving local cuisine.
The citadel was depicted in various Turkish banknotes during 1927–1952 and 1983–1989.
The remains, the stage, and the backstage of the Roman theatre can be seen outside the castle. Roman statues that were found here are exhibited in the Museum of Anatolian Civilizations. The seating area is still under excavation.
The Augusteum, now known as the Temple of Augustus and Rome, was built between 25 and 20 BC, following the conquest of Central Anatolia by the Roman Empire. Ancyra then formed the capital of the new province of Galatia. After the death of Augustus in AD 14, a copy of the text of the "Res Gestae Divi Augusti" (the "Monumentum Ancyranum") was inscribed in Latin on the interior of the temple's pronaos, with a Greek translation on an exterior wall of the cella. The temple on the ancient acropolis of Ancyra was enlarged in the 2nd century and converted into a church in the 5th century. It is located in the Ulus quarter of the city. It was subsequently publicized by the Austrian ambassador Ogier Ghiselin de Busbecq in the 16th century.
The Roman Baths of Ankara have all the typical features of a classical Roman bath complex: a "frigidarium" (cold room), a "tepidarium" (warm room) and a "caldarium" (hot room). The baths were built during the reign of the Roman emperor Caracalla in the early 3rd century to honor Asclepios, the god of medicine. Today, only the basement and first floors remain. The complex is situated in the Ulus quarter.
The Roman Road of Ankara or "Cardo Maximus" was found in 1995 by Turkish archaeologist Cevdet Bayburtluoğlu. It is long and wide. Many ancient artifacts were discovered during the excavations along the road and most of them are currently displayed at the Museum of Anatolian Civilizations.
The Column of Julian or Julianus, now in the Ulus district, was erected in honor of the Roman emperor Julian the Apostate's visit to Ancyra in 362.
Kocatepe Mosque is the largest mosque in the city. Located in the Kocatepe quarter, it was constructed between 1967 and 1987 in classical Ottoman style with four minarets. Its size and prominent location have made it a landmark for the city.
Ahmet Hamdi Akseki Mosque is located near the Presidency of Religious Affairs on the Eskişehir Road. Built in the Turkish neoclassical style, it is one of the largest new mosques in the city, completed and opened in 2013. It can accommodate 6,000 people during general prayers, and up to 30,000 people during funeral prayers. The mosque was decorated with Anatolian Seljuk style patterns.
It is the largest Ottoman mosque in Ankara and was built by the famous architect Sinan in the 16th century. The mimber (pulpit) and mihrap (prayer niche) are of white marble, and the mosque itself is of Ankara stone, an example of very fine workmanship.
This mosque, in the Ulus quarter next to the Temple of Augustus, was built in the early 15th century in Seljuk style by an unknown architect. It was subsequently restored by architect Mimar Sinan in the 16th century, with Kütahya tiles being added in the 18th century. The mosque was built in honor of Hacı Bayram-ı Veli, whose tomb is next to the mosque, two years before his death (1427–28). The usable space inside this mosque is on the first floor and on the second floor.
It was founded in the Ulus quarter near the Ankara Citadel and was constructed by the Ahi fraternity during the late 14th and early 15th centuries. The finely carved walnut mimber (pulpit) is of particular interest.
The Alâeddin Mosque is the oldest mosque in Ankara. It has a carved walnut mimber, the inscription on which records that the mosque was completed in early AH 574 (which corresponds to the summer of 1178 AD) and was built by the Seljuk prince Muhiddin Mesud Şah (d. 1204), the Bey of Ankara, who was the son of the Anatolian Seljuk sultan Kılıç Arslan II (reigned 1156–1192.)
The " Victory Monument" (Turkish: "") was crafted by Austrian sculptor Heinrich Krippel in 1925 and was erected in 1927 at Ulus Square. The monument is made of marble and bronze and features an equestrian statue of Mustafa Kemal Atatürk, who wears a Republic era modern military uniform, with the rank Field Marshal.
Located at Zafer Square (Turkish: "Zafer Meydanı"), the marble and bronze statue was crafted by the renowned Italian sculptor Pietro Canonica in 1927 and depicts a standing Atatürk wearing a Republic era modern military uniform, with the rank of Field Marshal.
This monument, located in Güven Park near Kızılay Square, was erected in 1935 and bears Atatürk's advice to his people: "Turk! Be proud, work hard, and believe in yourself."
The monument was depicted on the reverse of the Turkish 5 lira banknote of 1937–1952 and of the 1000 lira banknotes of 1939–1946.
Erected in 1978 at Sıhhiye Square, this impressive monument symbolizes the Hatti Sun Disc (which was later adopted by the Hittites) and commemorates Anatolia's earliest known civilization. The Hatti Sun Disc has been used in the previous logo of Ankara Metropolitan Municipality. It was also used in the previous logo of the Ministry of Culture & Tourism.
Suluhan is a historical inn in Ankara, also called the "Hasanpaşa Han". It is about southeast of Ulus Square and situated in the Hacıdoğan neighbourhood. According to the "vakfiye" (inscription) of the building, the Ottoman era "han" was commissioned by Hasan Pasha, a regional beylerbey, and was constructed between 1508 and 1511, during the final years of the reign of Sultan Bayezid II.
There are 102 rooms (now shops) which face the two yards. In each room there is a window, a niche and a chimney.
Çengelhan Rahmi Koç Museum is a museum of industrial technology situated in , an Ottoman era inn which was completed in 1523, during the early years of the reign of Sultan Suleiman the Magnificent. The exhibits include industrial and technological artifacts from the 1850s onwards. There are also sections about Mustafa Kemal Atatürk, the founder of modern Turkey; Vehbi Koç, Rahmi Koç's father and one of the first industrialists of Turkey; and the city of Ankara.
Foreign visitors to Ankara usually like to visit the old shops in "Çıkrıkçılar Yokuşu" (Weavers' Road) near Ulus, where myriad goods, ranging from traditional fabrics and hand-woven carpets to leather products, can be found at bargain prices. "Bakırcılar Çarşısı" (Bazaar of Coppersmiths) is particularly popular, and many interesting items, not just of copper, can be found here, such as jewelry, carpets, costumes, antiques and embroidery. Up the hill to the castle gate, there are many shops selling a large and fresh selection of spices, dried fruits, nuts, and other produce.
Modern shopping areas are mostly found in Kızılay, or on Tunalı Hilmi Avenue, including the modern mall of Karum (named after the ancient Assyrian merchant colonies called "Kârum" that were established in central Anatolia at the beginning of the 2nd millennium BC) which is located towards the end of the Avenue; and in Çankaya, the quarter with the highest elevation in the city. Atakule Tower next to Atrium Mall in Çankaya has views over Ankara and also has a revolving restaurant at the top. The symbol of the Armada Shopping Mall is an anchor, and there is a large anchor monument at its entrance, as a reference to the ancient Greek name of the city, Ἄγκυρα (Ánkyra), which means anchor. Likewise, the anchor monument is also related to the Spanish name of the mall, Armada, which means naval fleet.
As Ankara started expanding westward in the 1970s, several modern, suburbia-style developments and mini-cities began to rise along the western highway, also known as the Eskişehir Road. The "Armada", "CEPA" and "Kentpark" malls on the highway, the "Galleria", "Arcadium" and "Gordion" in Ümitköy, and a huge mall, "Real" in Bilkent Center, offer North American and European style shopping opportunities (these places can be reached via the Eskişehir Highway). There is also the newly expanded "ANKAmall" at the outskirts, on the Istanbul Highway, which houses most of the well-known international brands. This mall is the largest in the Ankara region. In 2014, a few more shopping malls were opened in Ankara: "Next Level" and "Taurus" on the Boulevard of Mevlana (also known as Konya Road).
Turkish State Opera and Ballet, the national directorate of opera and ballet companies of Turkey, has its headquarters in Ankara, and serves the city with three venues:
Ankara is host to five classical music orchestras:
There are four concert halls in the city:
The city has been host to several well-established, annual theatre, music, film festivals:
Ankara also has a number of concert venues such as "Eskiyeni", "IF Performance Hall", "Jolly Joker", "Kite", "Nefes Bar", "Noxus Pub", "Passage Pub" and "Route", which host the live performances and events of popular musicians.
The Turkish State Theatres also has its head office in Ankara and runs the following stages in the city:
In addition, the city is served by several private theatre companies, among which , who have their own stage in the city center, is a notable example.
There are about 50 museums in the city.
The Museum of Anatolian Civilizations ("Anadolu Medeniyetleri Müzesi") is situated at the entrance of the Ankara Castle. It is an old 15th century bedesten (covered bazaar) that has been restored and now houses a collection of Paleolithic, Neolithic, Hatti, Hittite, Phrygian, Urartian and Roman works as well as a major section dedicated to Lydian treasures.
Anıtkabir is located on an imposing hill, which forms the "Anıttepe" quarter of the city, where the mausoleum of Mustafa Kemal Atatürk, founder of the Republic of Turkey, stands. Completed in 1953, it is an impressive fusion of ancient and modern architectural styles. An adjacent museum houses a wax statue of Atatürk, his writings, letters and personal items, as well as an exhibition of photographs recording important moments in his life and during the establishment of the Republic. Anıtkabir is open every day, while the adjacent museum is open every day except Mondays.
Ankara Ethnography Museum ("Etnoğrafya Müzesi") is located opposite the Ankara Opera House on Talat Paşa Boulevard, in the Ulus district. There is a fine collection of folkloric items, as well as artifacts from the Seljuk and Ottoman periods. In front of the museum building, there is a marble and bronze equestrian statue of Mustafa Kemal Atatürk (wearing a Republic era modern military uniform, with the rank of Field Marshal), which was crafted in 1927 by the renowned Italian sculptor Pietro Canonica.
The State Art and Sculpture Museum ("Resim-Heykel Müzesi") which opened to the public in 1980 is close to the Ethnography Museum and houses a rich collection of Turkish art from the late 19th century to the present day. There are also galleries which host guest exhibitions.
Cer Modern is the modern-arts museum of Ankara, inaugurated on 1 April 2010. It is situated in the renovated building of the historic TCDD Cer Atölyeleri, formerly a workshop of the Turkish State Railways. The museum incorporates the largest exhibition hall in Turkey. The museum holds periodic exhibitions of modern and contemporary art as well as hosting other contemporary arts events.
The War of Independence Museum ("Kurtuluş Savaşı Müzesi") is located on Ulus Square. It was originally the first Parliament building (TBMM) of the Republic of Turkey. The War of Independence was planned and directed here as recorded in various photographs and items presently on exhibition. In another display, wax figures of former presidents of the Republic of Turkey are on exhibit.
The Mehmet Akif Literature Museum Library is an important literary museum and archive opened in 2011 and dedicated to Mehmet Akif Ersoy (1873–1936), the poet of the Turkish National Anthem.
The TCDD Open Air Steam Locomotive Museum is an open-air museum which traces the history of steam locomotives.
Ankara Aviation Museum ("Hava Kuvvetleri Müzesi Komutanlığı") is located near the Istanbul Road in Etimesgut. The museum opened to the public in September 1998. It is home to various missiles, avionics, aviation materials and aircraft that have served in the Turkish Air Force (e.g. combat aircraft such as the F-86 Sabre, F-100 Super Sabre, F-102 Delta Dagger, F-104 Starfighter, F-5 Freedom Fighter and F-4 Phantom; and cargo planes such as the Transall C-160). A Hungarian MiG-21, a Pakistani MiG-19, and a Bulgarian MiG-17 are also on display at the museum.
The METU Science and Technology Museum ("ODTÜ Bilim ve Teknoloji Müzesi") is located inside the Middle East Technical University campus.
As with all other cities of Turkey, football is the most popular sport in Ankara. The city has two football clubs currently competing in the Turkish Süper Lig: Ankaragücü, founded in 1910, is the oldest club in Ankara and is associated with Ankara's military arsenal manufacturing company MKE. They were the Turkish Cup winners in 1972 and 1981. Gençlerbirliği, founded in 1923, are known as the "Ankara Gale" or the "Poppies" because of their colors: red and black. They were the Turkish Cup winners in 1987 and 2001. Gençlerbirliği's B team, Hacettepe S.K. (formerly known as Gençlerbirliği OFTAŞ) played in the Turkish Super League but currently plays in the TFF Second League. A fourth team, Büyükşehir Belediye Ankaraspor, played in the Turkish Super League until 2010, when they were expelled. The club was reconstituted in 2014 as Osmanlıspor and currently play in the TFF First League at the Osmanlı Stadium in the Sincan district of Yenikent, outside the city center. Keçiörengücü were promoted to the TFF First League for the 2019–20 season.
Ankara has a large number of minor teams, playing at regional levels. In the TFF Second League: BAKspor in Sincan, Ankara Demirspor in Çankaya, Etimesgut Belediyespor in Etimesgut; in the TFF Third League: Çankaya FK in Keçiören; Altındağ Belediyespor in Altındağ; in the Amateur League: Turanspor in Etimesgut, Türk Telekomspor owned by the phone company in Yenimahalle, Çubukspor in Çubuk, and Bağlumspor in Keçiören.
In the Turkish Basketball League, Ankara is represented by Türk Telekom, whose home is the Ankara Arena, and CASA TED Kolejliler, whose home is the TOBB Sports Hall.
Halkbank Ankara is currently the leading domestic powerhouse in Men's Volleyball, having won many championships and cups in the Turkish Men's Volleyball League and even the CEV Cup in 2013.
Ankara Buz Pateni Sarayı is where the ice skating and ice hockey competitions take place in the city.
Skateboarding, which has been active in the city since the 1980s, has many popular spots in Ankara. Skaters usually meet in the park near the Grand National Assembly of Turkey.
The 2012-built THF Sport Hall hosts the Handball Super League and Women's Handball Super League matches scheduled in Ankara.
Ankara has many parks and open spaces mainly established in the early years of the Republic and well maintained and expanded thereafter. The most important of these parks are: Gençlik Parkı (houses an amusement park with a large pond for rowing), the Botanical garden, Seğmenler Park, Anayasa Park, Kuğulu Park (famous for the swans received as a gift from the Chinese government), Abdi İpekçi Park, Esertepe Parkı, Güven Park (see above for the monument), Kurtuluş Park (has an ice-skating rink), Altınpark (also a prominent exposition/fair area), Harikalar Diyarı (claimed to be the biggest intra-city park in Europe) and Göksu Park.
Gençlik Park was depicted on the reverse of the Turkish 100 lira banknotes of 1952–1976.
Atatürk Forest Farm and Zoo ("Atatürk Orman Çiftliği") is an expansive recreational farming area which houses a zoo, several small agricultural farms, greenhouses, restaurants, a dairy farm and a brewery. It is a pleasant place to spend a day with family, be it for having picnics, hiking, biking or simply enjoying good food and nature. There is also an exact replica of the house where Atatürk was born in 1881, in Thessaloniki, Greece. Visitors to the "Çiftlik" (farm), as it is affectionately called by Ankarans, can sample famous products of the farm such as old-fashioned beer and ice cream, fresh dairy products and meat rolls/kebaps made on charcoal, at a traditional restaurant ("Merkez Lokantası", Central Restaurant), cafés and other establishments scattered around the farm.
Ankara is noted, within Turkey, for the multitude of universities it is home to. These include the following, several of them being among the most reputable in the country:
Ankara is home to a world-famous domestic cat breed – the Turkish Angora, called "Ankara kedisi" (Ankara cat) in Turkish. Turkish Angoras are one of the ancient, naturally occurring cat breeds, having originated in Ankara and its surrounding region in central Anatolia.
They mostly have a white, silky, medium-to-long coat, no undercoat and a fine bone structure. There seems to be a connection between Angora cats and Persians, and the Turkish Angora is also a distant cousin of the Turkish Van. Although they are known for their shimmery white coat, currently there are more than twenty varieties including black, blue and reddish fur. They come in tabby and tabby-white, along with smoke varieties, and are in every color other than pointed, lavender, and cinnamon (all of which would indicate breeding to an outcross).
Eyes may be blue, green, or amber, or even one blue and one amber or green. The W gene, which is responsible for the white coat and blue eyes, is closely related to hearing ability, and the presence of a blue eye can indicate that the cat is deaf on the side where the blue eye is located. However, a great many blue-eyed and odd-eyed white cats have normal hearing, and even deaf cats lead a very normal life if kept indoors.
Ears are pointed and large, eyes are almond shaped and the head is massive with a two-plane profile. Another characteristic is the tail, which is often kept parallel to the back.
The Angora goat () is a breed of domestic goat that originated in Ankara and its surrounding region in central Anatolia.
This breed was first mentioned in the time of Moses, roughly in 1500 BC. The first Angora goats were brought to Europe by Charles V, Holy Roman Emperor, about 1554, but, like later imports, were not very successful. Angora goats were first introduced in the United States in 1849 by Dr. James P. Davis. Seven adult goats were a gift from Sultan Abdülmecid I in appreciation for his services and advice on the raising of cotton.
The fleece taken from an Angora goat is called mohair. A single goat produces between of hair per year. Angoras are shorn twice a year, unlike sheep, which are shorn only once. Angoras have high nutritional requirements due to their rapid hair growth. A poor quality diet will curtail mohair development. The United States, Turkey, and South Africa are the top producers of mohair.
For a long period of time, Angora goats were bred for their white coat. In 1998, the Colored Angora Goat Breeders Association was set up to promote breeding of colored Angoras. Today, Angora goats produce white, black (deep black to greys and silver), red (the color fades significantly as the goat gets older), and brownish fiber.
Angora goats were depicted on the reverse of the Turkish 50 lira banknotes of 1938–1952.
The Angora rabbit () is a variety of domestic rabbit bred for its long, soft hair. The Angora is one of the oldest types of domestic rabbit, originating in Ankara and its surrounding region in central Anatolia, along with the Angora cat and Angora goat. The rabbits were popular pets with French royalty in the mid-18th century, and spread to other parts of Europe by the end of the century. They first appeared in the United States in the early 20th century. They are bred largely for their long Angora wool, which may be removed by shearing, combing, or plucking (gently pulling loose wool.)
Angoras are bred mainly for their wool because it is silky and soft. They have a humorous appearance, as they oddly resemble a fur ball. Most are calm and docile but should be handled carefully. Grooming is necessary to prevent the fiber from matting and felting on the rabbit. A condition called "wool block" is common in Angora rabbits and should be treated quickly. Sometimes they are shorn in the summer as the long fur can cause the rabbits to overheat.
Ankara is twinned with:
Arabic
Arabic is a Semitic language that first emerged in the 1st to 4th centuries CE. It is now the lingua franca of the Arab world. It is named after the Arabs, a term initially used to describe peoples living in the area bounded by Mesopotamia in the east and the Anti-Lebanon mountains in the west, in Northwestern Arabia and in the Sinai Peninsula. The ISO assigns language codes to thirty varieties of Arabic, including its standard form, Modern Standard Arabic, also referred to as Literary Arabic, which is modernized Classical Arabic. This distinction exists primarily among Western linguists; Arabic speakers themselves generally do not distinguish between Modern Standard Arabic and Classical Arabic, but rather refer to both as "al-fuṣḥā" ("the purest Arabic").
Arabic is widely taught in schools and universities and is used to varying degrees in workplaces, government and the media. Arabic, in its standard form, is the official language of 26 states, as well as the liturgical language of the religion of Islam, since the Quran and Hadith were written in Arabic.
During the Middle Ages, Arabic was a major vehicle of culture in Europe, especially in science, mathematics and philosophy. As a result, many European languages have also borrowed many words from it. Arabic influence, mainly in vocabulary, is seen in European languages—mainly Spanish and to a lesser extent Portuguese and Catalan—owing to both the proximity of Christian European and Muslim Arab civilizations and the long-lasting Arabic culture and language presence mainly in Southern Iberia during the Al-Andalus era. Sicilian has about 500 Arabic words, many of which relate to agriculture and related activities, as a legacy of the Emirate of Sicily from the mid-9th to mid-10th centuries, while Maltese language is a Semitic language developed from a dialect of Arabic and written in the Latin alphabet. The Balkan languages, including Greek and Bulgarian, have also acquired a significant number of Arabic words through contact with Ottoman Turkish.
Arabic has influenced many other languages around the globe throughout its history. Some of the most influenced languages are Persian, Turkish, Hindustani (Hindi and Urdu), Kashmiri, Kurdish, Bosnian, Kazakh, Bengali, Malay (Indonesian and Malaysian), Maldivian, Pashto, Punjabi, Albanian, Armenian, Azerbaijani, Sicilian, Spanish, Greek, Bulgarian, Tagalog, Assamese, Sindhi, Odia and Hausa and some languages in parts of Africa. Conversely, Arabic has borrowed words from other languages, including Hebrew, Greek, Aramaic, and Persian in medieval times and languages such as English and French in modern times.
Arabic is the liturgical language of 1.8 billion Muslims, and Arabic is one of six official languages of the United Nations. All varieties of Arabic combined are spoken by perhaps as many as 422 million speakers (native and non-native) in the Arab world, making it the fifth most spoken language in the world. Arabic is written with the Arabic alphabet, which is an abjad script and is written from right to left, although the spoken varieties are sometimes written in ASCII Latin from left to right with no standardized orthography.
Arabic is usually, but not universally, classified as a Central Semitic language. It is related to languages in other subgroups of the Semitic language group (Northwest Semitic, South Semitic, East Semitic, West Semitic), such as Aramaic, Syriac, Hebrew, Ugaritic, Phoenician, Canaanite, Amorite, Ammonite, Eblaite, epigraphic Ancient North Arabian, epigraphic Ancient South Arabian, Ethiopic, Modern South Arabian, and numerous other dead and modern languages. Linguists still differ as to the best classification of Semitic language sub-groups.
The Semitic languages changed a great deal between Proto-Semitic and the emergence of the Central Semitic languages, particularly in grammar. Innovations of the Central Semitic languages—all maintained in Arabic—include:
There are several features which Classical Arabic, the modern Arabic varieties, as well as the Safaitic and Hismaic inscriptions share which are unattested in any other Central Semitic language variety, including the Dadanitic and Taymanitic languages of the northern Hejaz. These features are evidence of common descent from a hypothetical ancestor, Proto-Arabic. The following features can be reconstructed with confidence for Proto-Arabic:
Arabia boasted a wide variety of Semitic languages in antiquity. In the southwest, various Central Semitic languages both belonging to and outside of the Ancient South Arabian family (e.g. Southern Thamudic) were spoken. It is also believed that the ancestors of the Modern South Arabian languages (non-Central Semitic languages) were also spoken in southern Arabia at this time. To the north, in the oases of northern Hejaz, Dadanitic and Taymanitic held some prestige as inscriptional languages. In Najd and parts of western Arabia, a language known to scholars as Thamudic C is attested. In eastern Arabia, inscriptions in a script derived from ASA attest to a language known as Hasaitic. Finally, on the northwestern frontier of Arabia, various languages known to scholars as Thamudic B, Thamudic D, Safaitic, and Hismaic are attested. The last two share important isoglosses with later forms of Arabic, leading scholars to theorize that Safaitic and Hismaic are in fact early forms of Arabic and that they should be considered Old Arabic.
Linguists generally believe that "Old Arabic" (a collection of related dialects that constitute the precursor of Arabic) first emerged around the 1st century CE. Previously, the earliest attestation of Old Arabic was thought to be a single 1st century CE inscription in Sabaic script at Qaryat Al-Faw, in southern present-day Saudi Arabia. However, this inscription does not participate in several of the key innovations of the Arabic language group, such as the conversion of Semitic mimation to nunation in the singular. It is best reassessed as a separate language on the Central Semitic dialect continuum.
It was also thought that Old Arabic coexisted alongside, and then gradually displaced, epigraphic Ancient North Arabian (ANA), which was theorized to have been the regional tongue for many centuries. ANA, despite its name, was considered a very distinct language from "Arabic", and mutually unintelligible with it. Scholars named its variant dialects after the towns where the inscriptions were discovered (Dadanitic, Taymanitic, Hismaic, Safaitic). However, most arguments for a single ANA language or language family were based on the shape of the definite article, a prefixed h-. It has been argued that the h- is an archaism and not a shared innovation, and thus unsuitable for language classification, rendering the hypothesis of an ANA language family untenable. Safaitic and Hismaic, previously considered ANA, should be considered Old Arabic due to the fact that they participate in the innovations common to all forms of Arabic.
The earliest attestation of continuous Arabic text in an ancestor of the modern Arabic script is three lines of poetry by a man named Garm(')allāhe found in En Avdat, Israel, and dated to around 125 CE. This is followed by the epitaph of the Lakhmid king Mar 'al-Qays bar 'Amro, dating to 328 CE, found at Namaraa, Syria. From the 4th to the 6th centuries, the Nabataean script evolved into the Arabic script recognizable from the early Islamic era. There are inscriptions in an undotted, 17-letter Arabic script dating to the 6th century CE, found at four locations in Syria (Zabad, Jabal ‘Usays, Harran, Umm al-Jimaal). The oldest surviving papyrus in Arabic dates to 643 CE, and it uses dots to produce the modern 28-letter Arabic alphabet. The language of that papyrus and of the Qur'an is referred to by linguists as "Quranic Arabic", as distinct from its codification soon thereafter into "Classical Arabic".
In late pre-Islamic times, a transdialectal and transcommunal variety of Arabic emerged in the Hejaz, which continued its parallel life after literary Arabic had been institutionally standardized in the 2nd and 3rd centuries of the Hijra, most strongly in Judeo-Christian texts, keeping alive ancient features eliminated from the "learned" tradition (Classical Arabic). This variety, in both its classicizing and "lay" iterations, has been termed Middle Arabic in the past, but both are thought to continue an Old Higazi register. It is clear that the orthography of the Qur'an was not developed for the standardized form of Classical Arabic; rather, it shows the attempt on the part of writers to record an archaic form of Old Higazi.
In the late 6th century AD, a relatively uniform intertribal "poetic koine" distinct from the spoken vernaculars developed based on the Bedouin dialects of Najd, probably in connection with the court of al-Ḥīra. During the first Islamic century, the majority of Arabic poets and Arabic-writing persons spoke Arabic as their mother tongue. Their texts, although mainly preserved in far later manuscripts, contain traces of non-standardized Classical Arabic elements in morphology and syntax. The standardization of Classical Arabic reached completion around the end of the 8th century. The first comprehensive description of the "ʿarabiyya" ("Arabic"), Sībawayhi's "al-Kitāb", is based primarily on a corpus of poetic texts, in addition to Qur'an usage and Bedouin informants whom he considered to be reliable speakers of the "ʿarabiyya". By the 8th century, knowledge of Classical Arabic had become an essential prerequisite for rising into the higher classes throughout the Islamic world.
Charles Ferguson's koine theory (Ferguson 1959) claims that the modern Arabic dialects collectively descend from a single military koine that sprang up during the Islamic conquests; this view has been challenged in recent times. Ahmad al-Jallad proposes that there were at least two considerably distinct types of Arabic on the eve of the conquests: Northern and Central (Al-Jallad 2009). The modern dialects emerged from a new contact situation produced following the conquests. Instead of the emergence of a single or multiple koines, the dialects contain several sedimentary layers of borrowed and areal features, which they absorbed at different points in their linguistic histories.
According to Versteegh and Bickerton, colloquial Arabic dialects arose from pidginized Arabic formed from contact between Arabs and conquered peoples. Pidginization and subsequent creolization among Arabs and arabized peoples could explain the relative morphological and phonological simplicity of vernacular Arabic compared to Classical Arabic and MSA.
In around the 11th and 12th centuries in al-Andalus, the "zajal" and "muwashah" poetry forms developed in the dialectal Arabic of Cordoba and the Maghreb.
In the wake of the industrial revolution and European hegemony and colonialism, pioneering Arabic presses, such as the Amiri Press established by Muhammad Ali (1819), dramatically changed the diffusion and consumption of Arabic literature and publications.
The "Nahda" cultural renaissance saw the creation of a number of Arabic academies modeled after the "Académie française", starting with the Arab Academy of Damascus (1918), which aimed to develop the Arabic lexicon to suit these transformations. This gave rise to what Western scholars call Modern Standard Arabic.
"Arabic" usually refers to Standard Arabic, which Western linguists divide into Classical Arabic and Modern Standard Arabic. It could also refer to any of a variety of regional vernacular Arabic dialects, which are not necessarily mutually intelligible.
Classical Arabic is the language found in the Quran, used from the period of Pre-Islamic Arabia to that of the Abbasid Caliphate. Classical Arabic is prescriptive, according to the syntactic and grammatical norms laid down by classical grammarians (such as Sibawayh) and the vocabulary defined in classical dictionaries (such as the "Lisān al-ʻArab").
Modern Standard Arabic largely follows the grammatical standards of Classical Arabic and uses much of the same vocabulary. However, it has discarded some grammatical constructions and vocabulary that no longer have any counterpart in the spoken varieties and has adopted certain new constructions and vocabulary from the spoken varieties. Much of the new vocabulary is used to denote concepts that have arisen in the industrial and post-industrial era, especially in modern times. Due to its grounding in Classical Arabic, Modern Standard Arabic is removed by over a millennium from everyday speech, which is construed as a multitude of dialects of this language. These dialects and Modern Standard Arabic are described by some scholars as not mutually comprehensible. The former are usually acquired in families, while the latter is taught in formal education settings. However, there have been studies reporting some degree of comprehension of stories told in the standard variety among preschool-aged children. The relation between Modern Standard Arabic and these dialects is sometimes compared to that of Classical Latin and Vulgar Latin vernaculars (which became Romance languages) in medieval and early modern Europe. This view though does not take into account the widespread use of Modern Standard Arabic as a medium of audiovisual communication in today's mass media, a function Latin has never performed.
MSA is the variety used in most current, printed Arabic publications, spoken by some of the Arabic media across North Africa and the Middle East, and understood by most educated Arabic speakers. "Literary Arabic" and "Standard Arabic" are less strictly defined terms that may refer to Modern Standard Arabic or Classical Arabic.
Some of the differences between Classical Arabic (CA) and Modern Standard Arabic (MSA) are as follows:
MSA uses much Classical vocabulary (e.g., ' 'to go') that is not present in the spoken varieties, but deletes Classical words that sound obsolete in MSA. In addition, MSA has borrowed or coined many terms for concepts that did not exist in Quranic times, and MSA continues to evolve. Some words have been borrowed from other languages—notice that transliteration mainly indicates spelling and not real pronunciation (e.g., ' 'film' or " 'democracy').
However, the current preference is to avoid direct borrowings, preferring to either use loan translations (e.g., ' 'branch', also used for the branch of a company or organization; ' 'wing', is also used for the wing of an airplane, building, air force, etc.), or to coin new words using forms within existing roots ( ' 'apoptosis', using the root "m/w/t" 'death' put into the Xth form, or ' 'university', based on ' 'to gather, unite'; ' 'republic', based on ' 'multitude'). An earlier tendency was to redefine an older word although this has fallen into disuse (e.g., ' 'telephone' < 'invisible caller (in Sufism)'; "" 'newspaper' < 'palm-leaf stalk').
"Colloquial" or "dialectal" Arabic refers to the many national or regional varieties which constitute the everyday spoken language and evolved from Classical Arabic. Colloquial Arabic has many regional variants; geographically distant varieties usually differ enough to be mutually unintelligible, and some linguists consider them distinct languages. The varieties are typically unwritten. They are often used in informal spoken media, such as soap operas and talk shows, as well as occasionally in certain forms of written media such as poetry and printed advertising.
The only variety of modern Arabic to have acquired official language status is Maltese, which is spoken in (predominantly Catholic) Malta and written with the Latin script. It is descended from Classical Arabic through Siculo-Arabic, but is not mutually intelligible with any other variety of Arabic. Most linguists list it as a separate language rather than as a dialect of Arabic.
Even during Muhammad's lifetime, there were dialects of spoken Arabic. Muhammad spoke in the dialect of Mecca, in the western Arabian peninsula, and it was in this dialect that the Quran was written down. However, the dialects of the eastern Arabian peninsula were considered the most prestigious at the time, so the language of the Quran was ultimately converted to follow the eastern phonology. It is this phonology that underlies the modern pronunciation of Classical Arabic. The phonological differences between these two dialects account for some of the complexities of Arabic writing, most notably the writing of the glottal stop or "hamzah" (which was preserved in the eastern dialects but lost in western speech) and the use of ' (representing a sound preserved in the western dialects but merged with ' in eastern speech).
The sociolinguistic situation of Arabic in modern times provides a prime example of the linguistic phenomenon of diglossia, which is the normal use of two separate varieties of the same language, usually in different social situations. In the case of Arabic, educated Arabs of any nationality can be assumed to speak both their school-taught Standard Arabic and their native, mutually unintelligible "dialects"; these dialects linguistically constitute separate languages which may have dialects of their own. When educated Arabs of different dialects engage in conversation (for example, a Moroccan speaking with a Lebanese), many speakers code-switch back and forth between the dialectal and standard varieties of the language, sometimes even within the same sentence. Arabic speakers often improve their familiarity with other dialects via music or film. "Tawleed" is the process of giving a new shade of meaning to an old classical word. For example, "al-hatif" lexicographically means the one whose sound is heard but whose person remains unseen; the term is now used for a telephone. The process of "tawleed" can thus express the needs of modern civilization in a manner that appears to be originally Arabic.
The issue of whether Arabic is one language or many languages is politically charged, in the same way it is for the varieties of Chinese, Hindi and Urdu, Serbian and Croatian, Scots and English, etc. In contrast to speakers of Hindi and Urdu who claim they cannot understand each other even when they can, speakers of the varieties of Arabic will claim they can all understand each other even when they cannot. The issue of diglossia between spoken and written language is a significant complicating factor: A single written form, significantly different from any of the spoken varieties learned natively, unites a number of sometimes divergent spoken forms. For political reasons, Arabs mostly assert that they all speak a single language, despite significant issues of mutual incomprehensibility among differing spoken versions.
From a linguistic standpoint, it is often said that the various spoken varieties of Arabic differ among each other collectively about as much as the Romance languages. This is an apt comparison in a number of ways. The period of divergence from a single spoken form is similar—perhaps 1500 years for Arabic, 2000 years for the Romance languages. Also, while it is comprehensible to people from the Maghreb, a linguistically innovative variety such as Moroccan Arabic is essentially incomprehensible to Arabs from the Mashriq, much as French is incomprehensible to Spanish or Italian speakers but relatively easily learned by them. This suggests that the spoken varieties may linguistically be considered separate languages.
The influence of Arabic has been most important in Islamic countries, because it is the language of the Islamic sacred book, the Quran. Arabic is also an important source of vocabulary for languages such as Amharic, Azerbaijani, Baluchi, Bengali, Berber, Bosnian, Chaldean, Chechen, Chittagonian, Croatian, Dagestani, English, German, Gujarati, Hausa, Hindi, Kazakh, Kurdish, Kutchi, Kyrgyz, Malay (Malaysian and Indonesian), Pashto, Persian, Punjabi, Rohingya, Romance languages (French, Catalan, Italian, Portuguese, Sicilian, Spanish, etc.), Saraiki, Sindhi, Somali, Sylheti, Swahili, Tagalog, Tigrinya, Turkish, Turkmen, Urdu, Uyghur, Uzbek, Visayan and Wolof, as well as other languages in countries where these languages are spoken. The French Minister of Education has recently emphasized the learning and use of Arabic in French schools.
In addition, English has many Arabic loanwords, some directly, but most via other Mediterranean languages. Examples of such words include admiral, adobe, alchemy, alcohol, algebra, algorithm, alkaline, almanac, amber, arsenal, assassin, candy, carat, cipher, coffee, cotton, ghoul, hazard, jar, kismet, lemon, loofah, magazine, mattress, sherbet, sofa, sumac, tariff, and zenith. Other languages such as Maltese and Kinubi derive ultimately from Arabic, rather than merely borrowing vocabulary or grammatical rules.
Terms borrowed range from religious terminology (like Berber "taẓallit", "prayer", from "salat" ( "")), academic terms (like Uyghur "mentiq", "logic"), and economic items (like English "coffee") to placeholders (like Spanish "fulano", "so-and-so"), everyday terms (like Hindustani "lekin", "but", or Spanish "taza" and French "tasse", meaning "cup"), and expressions (like Catalan "a betzef", "galore, in quantity"). Most Berber varieties (such as Kabyle), along with Swahili, borrow some numbers from Arabic. Most Islamic religious terms are direct borrowings from Arabic, such as ("salat"), "prayer", and ("imam"), "prayer leader."
In languages not directly in contact with the Arab world, Arabic loanwords are often transferred indirectly via other languages rather than being transferred directly from Arabic. For example, most Arabic loanwords in Hindustani and Turkish entered through Persian, an Indo-Iranian language. Older Arabic loanwords in Hausa were borrowed from Kanuri.
Arabic words also made their way into several West African languages as Islam spread across the Sahara. Variants of Arabic words such as "kitāb" ("book") have spread to the languages of African groups who had no direct contact with Arab traders.
Since throughout the Islamic world, Arabic occupied a position similar to that of Latin in Europe, many of the Arabic concepts in the fields of science, philosophy, commerce, etc. were coined from Arabic roots by non-native Arabic speakers, notably by Aramaic and Persian translators, and then found their way into other languages. This process of using Arabic roots, especially in Kurdish and Persian, to translate foreign concepts continued through to the 18th and 19th centuries, when swaths of Arab-inhabited lands were under Ottoman rule.
The most important sources of borrowings into (pre-Islamic) Arabic are the related (Semitic) languages Aramaic, which used to be the principal international language of communication throughout the ancient Near and Middle East, and Ethiopic, and to a lesser degree Hebrew (mainly religious concepts). In addition, many cultural, religious and political terms have entered Arabic from Iranian languages, notably Middle Persian, Parthian, and (Classical) Persian, and from Hellenistic Greek: "kīmiyāʼ" has as its origin the Greek "khymia", meaning in that language the melting of metals (see Roger Dachez, "Histoire de la Médecine de l'Antiquité au XXe siècle", Tallandier, 2008, p. 251); "alembic" (distiller) comes from "ambix" (cup), and "almanac" (climate) from "almenichiakon" (calendar). (For the origin of the last three borrowed words, see Alfred-Louis de Prémare, "Foundations of Islam", Seuil, L'Univers Historique, 2002.) Some Arabic borrowings from Semitic or Persian languages are, as presented in De Prémare's above-cited book:
There have been many instances of national movements to convert Arabic script into Latin script or to Romanize the language. Currently, the only language derived from Classical Arabic to use Latin script is Maltese.
The Beirut newspaper "La Syrie" pushed for the change from Arabic script to Latin letters in 1922. The major head of this movement was Louis Massignon, a French Orientalist, who brought his concern before the Arabic Language Academy in Damascus in 1928. Massignon's attempt at Romanization failed as the Academy and population viewed the proposal as an attempt from the Western world to take over their country. Sa'id Afghani, a member of the Academy, mentioned that the movement to Romanize the script was a Zionist plan to dominate Lebanon.
After the period of colonialism in Egypt, Egyptians were looking for a way to reclaim and re-emphasize Egyptian culture. As a result, some Egyptians pushed for an Egyptianization of the Arabic language in which the formal Arabic and the colloquial Arabic would be combined into one language and the Latin alphabet would be used. There was also the idea of finding a way to use Hieroglyphics instead of the Latin alphabet, but this was seen as too complicated to use. The scholar Salama Musa agreed with the idea of applying a Latin alphabet to Arabic, as he believed that would allow Egypt to have a closer relationship with the West. He also believed that Latin script was key to the success of Egypt as it would allow for more advances in science and technology. This change in alphabet, he believed, would solve the problems inherent with Arabic, such as a lack of written vowels and difficulties writing foreign words that made it difficult for non-native speakers to learn. Ahmad Lutfi As Sayid and Muhammad Azmi, two Egyptian intellectuals, agreed with Musa and supported the push for Romanization. The idea that Romanization was necessary for modernization and growth in Egypt continued with Abd Al-Aziz Fahmi in 1944. He was the chairman of the Writing and Grammar Committee for the Arabic Language Academy of Cairo. However, this effort failed as the Egyptian people felt a strong cultural tie to the Arabic alphabet. In particular, the older Egyptian generations believed that the Arabic alphabet had strong connections to Arab values and history, due to the long history of the Arabic alphabet (Shrivtiel, 189) in Muslim societies.
The Quran introduced a new way of writing to the world. People began studying and applying the unique styles they learned from the Quran to not only their own writing, but also their culture. Writers studied the unique structure and format of the Quran in order to identify and apply the figurative devices and their impact on the reader.
The Quran inspired musicality in poetry through the internal rhythm of the verses. The arrangement of words, how certain sounds create harmony, and the agreement of rhymes create the sense of rhythm within each verse. At times, the chapters of the Quran only have the rhythm in common.
The repetition in the Quran introduced the true power and impact repetition can have in poetry. The repetition of certain words and phrases made them appear more firm and explicit in the Quran. The Quran uses constant metaphors of blindness and deafness to imply unbelief. Metaphors were not a new concept to poetry; however, the strength of extended metaphors was. The explicit imagery in the Quran inspired many poets to include and focus on the feature in their own work. The poet Ibn al-Mu'tazz wrote a book on the figures of speech inspired by his study of the Quran. Poets such as Badr Shakir al-Sayyab express their political opinions in their work through imagery inspired by the harsher forms of imagery used in the Quran.
The Quran uses figurative devices in order to express the meaning in the most beautiful form possible. The study of the pauses in the Quran, as well as of its other rhetorical devices, allows it to be approached in multiple ways.
Although the Quran is known for its fluency and harmony, its structure is not always inherently chronological; it can also flow thematically (the chapters of the Quran contain segments that flow in chronological order, but a segment may transition into another segment related to it in topic rather than in chronology). The suras, also known as chapters of the Quran, are not placed in chronological order. The only constant in their structure is that the longest are placed first and shorter ones follow. The topics discussed in the chapters may also have no direct relation to each other (as seen in many suras) and may share only their sense of rhyme. The Quran introduces to poetry the idea of abandoning order and scattering narratives throughout the text. Harmony is also present in the sound of the Quran. The elongations and accents in the Quran create a harmonious flow within the writing. The unique sound of the Quran when recited, owing to these accents, creates a deeper level of understanding through a deeper emotional connection.
The Quran is written in a language that is simple and understandable. The simplicity of the writing inspired later poets to write in a clearer, more direct style. The words of the Quran, although unchanged, are to this day understandable and frequently used in both formal and informal Arabic. The simplicity of the language makes memorizing and reciting the Quran a slightly easier task.
The writer al-Khattabi explains how culture is a required element to create a sense of art in work as well as to understand it. He believes that the fluency and harmony which the Quran possesses are not the only elements that make it beautiful and create a bond between the reader and the text.
While some poetry was deemed comparable to the Quran, or even equal to or better than its composition, a debate arose over whether such claims are possible, because humans are held incapable of composing work comparable to the Quran.
Because the structure of the Quran made it difficult for a clear timeline to be seen, Hadith were the main source of chronological order. The Hadith were passed down from generation to generation and this tradition became a large resource for understanding the context. Poetry after the Quran took on this element of tradition, incorporating ambiguity and requiring background information to understand its meaning.
After the Quran came down to the people, the tradition of memorizing the verses became established. It is believed that the greater the amount of the Quran memorized, the greater the faith. As technology improved over time, recitations of the Quran became more widely available, as did tools to help memorize the verses.
The tradition of love poetry served as a symbolic representation of a Muslim's desire for closer contact with their Lord.
While the influence of the Quran on Arabic poetry is explained and defended by numerous writers, some writers such as Al-Baqillani believe that poetry and the Quran are in no conceivable way related due to the uniqueness of the Quran. Poetry's imperfections prove his point that it cannot be compared with the fluency the Quran holds.
Classical Arabic is the language of poetry and literature (including news); it is also mainly the language of the Quran. Classical Arabic is closely associated with the religion of Islam because the Quran was written in it. Most of the world's Muslims do not speak Classical Arabic as their native language, but many can read the Quranic script and recite the Quran. Among non-Arab Muslims, translations of the Quran are most often accompanied by the original text. At present, Modern Standard Arabic (MSA) is also used in modernized versions of literary forms of the Quran.
Some Muslims present a monogenesis of languages and claim that the Arabic language was the language revealed by God for the benefit of mankind and the original language as a prototype system of symbolic communication, based upon its system of triconsonantal roots, spoken by man from which all other languages were derived, having first been corrupted. Judaism has a similar account with the Tower of Babel.
"Colloquial Arabic" is a collective term for the spoken dialects of Arabic used throughout the Arab world, which differ radically from the literary language. The main dialectal division is between the varieties within and outside of the Arabian peninsula, followed by that between sedentary varieties and the much more conservative Bedouin varieties. All the varieties outside of the Arabian peninsula (which include the large majority of speakers) have many features in common with each other that are not found in Classical Arabic. This has led researchers to postulate the existence of a prestige koine dialect in the one or two centuries immediately following the Arab conquest, whose features eventually spread to all newly conquered areas. (These features are present to varying degrees inside the Arabian peninsula. Generally, the Arabian peninsula varieties have much more diversity than the non-peninsula varieties, but these have been understudied.)
Within the non-peninsula varieties, the largest difference is between the non-Egyptian North African dialects (especially Moroccan Arabic) and the others. Moroccan Arabic in particular is hardly comprehensible to Arabic speakers east of Libya (although the converse is not true, in part due to the popularity of Egyptian films and other media).
One factor in the differentiation of the dialects is influence from the languages previously spoken in the areas, which have typically provided a significant number of new words and have sometimes also influenced pronunciation or word order; however, a much more significant factor for most dialects is, as among Romance languages, retention (or change of meaning) of different classical forms. Thus Iraqi "aku", Levantine "fīh" and North African "kayən" all mean 'there is', and all come from Classical Arabic forms ("yakūn", "fīhi", "kā'in" respectively), but now sound very different.
Transcription is a broad IPA transcription, so minor differences were ignored for easier comparison. Also, the pronunciation of Modern Standard Arabic differs significantly from region to region.
According to Charles A. Ferguson, the following are some of the characteristic features of the koiné that underlies all the modern dialects outside the Arabian peninsula. Although many other features are common to most or all of these varieties, Ferguson believes that these features in particular are unlikely to have evolved independently more than once or twice and together suggest the existence of the koine:
Of the 29 Proto-Semitic consonants, only one has been lost: , which merged with , while became (see Semitic languages). Various other consonants have changed their sound too, but have remained distinct. An original lenited to , and – consistently attested in pre-Islamic Greek transcription of Arabic languages – became palatalized to or by the time of the Quran and , , or after the early Muslim conquests and in MSA (see Arabic phonology#Local variations for more detail). An original voiceless alveolar lateral fricative became . Its emphatic counterpart was considered by Arabs to be the most unusual sound in Arabic (hence Classical Arabic's appellation ' or "language of the '"); for most modern dialects, it has become an emphatic stop with loss of the laterality or with complete loss of any pharyngealization or velarization, . (The classical "" pronunciation of pharyngealization still occurs in the Mehri language, and the similar sound without velarization, , exists in other Modern South Arabian languages.)
Other changes may also have happened. Classical Arabic pronunciation is not thoroughly recorded and different reconstructions of the sound system of Proto-Semitic propose different phonetic values. One example is the emphatic consonants, which are pharyngealized in modern pronunciations but may have been velarized in the eighth century and glottalized in Proto-Semitic.
Reduction of and between vowels occurs in a number of circumstances and is responsible for much of the complexity of third-weak ("defective") verbs. Early Akkadian transcriptions of Arabic names show that this reduction had not yet occurred as of the early part of the 1st millennium BC.
The Classical Arabic language as recorded was a poetic koine that reflected a consciously archaizing dialect, chosen based on the tribes of the western part of the Arabian Peninsula, who spoke the most conservative variants of Arabic. Even at the time of Muhammad and before, other dialects existed with many more changes, including the loss of most glottal stops, the loss of case endings, the reduction of the diphthongs and into monophthongs , etc. Most of these changes are present in most or all modern varieties of Arabic.
An interesting feature of the writing system of the Quran (and hence of Classical Arabic) is that it contains certain features of Muhammad's native dialect of Mecca, corrected through diacritics into the forms of standard Classical Arabic. Among these features visible under the corrections are the loss of the glottal stop and a differing development of the reduction of certain final sequences containing : Evidently, final became as in the Classical language, but final became a different sound, possibly (rather than again in the Classical language). This is the apparent source of the "alif maqṣūrah" 'restricted alif' where a final is reconstructed: a letter that would normally indicate or some similar high-vowel sound, but is taken in this context to be a logical variant of "alif" and represent the sound .
Although Classical Arabic was a unitary language and is now used in the Quran, its pronunciation varies somewhat from country to country and from region to region within a country. It is influenced by colloquial dialects.
The "colloquial" spoken dialects of Arabic are learned at home and constitute the native languages of Arabic speakers. "Formal" Literary Arabic (usually specifically Modern Standard Arabic) is learned at school; although many speakers have a native-like command of the language, it is technically not the native language of any speakers. Both varieties can be both written and spoken, although the colloquial varieties are rarely written down and the formal variety is spoken mostly in formal circumstances, e.g., in radio and TV broadcasts, formal lectures, parliamentary discussions and to some extent between speakers of different colloquial dialects. Even when the literary language is spoken, however, it is normally only spoken in its pure form when reading a prepared text out loud and in communication between speakers of different colloquial dialects. When speaking extemporaneously (i.e. making up the language on the spot, as in a normal discussion among people), speakers tend to deviate somewhat from the strict literary language in the direction of the colloquial varieties. In fact, there is a continuous range of "in-between" spoken varieties: from nearly pure Modern Standard Arabic (MSA), to a form that still uses MSA grammar and vocabulary but with significant colloquial influence, to a form of the colloquial language that imports a number of words and grammatical constructions from MSA, to a form that is close to pure colloquial but with the "rough edges" (the most noticeably "vulgar" or non-Classical aspects) smoothed out, to pure colloquial. The particular variant (or "register") used depends on the social class and education level of the speakers involved and the level of formality of the speech situation. Often it will vary within a single encounter, e.g., moving from nearly pure MSA to a more mixed language in the process of a radio interview, as the interviewee becomes more comfortable with the interviewer.
This type of variation is characteristic of the diglossia that exists throughout the Arabic-speaking world.
Although Modern Standard Arabic (MSA) is a unitary language, its pronunciation varies somewhat from country to country and from region to region within a country. The variation in individual "accents" of MSA speakers tends to mirror corresponding variations in the colloquial speech of the speakers in question, but with the distinguishing characteristics moderated somewhat. It is important in descriptions of "Arabic" phonology to distinguish between pronunciation of a given colloquial (spoken) dialect and the pronunciation of MSA by these same speakers. Although they are related, they are not the same. For example, the phoneme that derives from Classical Arabic has many different pronunciations in the modern spoken varieties, e.g., including the proposed original . Speakers whose native variety has either or will use the same pronunciation when speaking MSA. Even speakers from Cairo, whose native Egyptian Arabic has , normally use when speaking MSA. The of Persian Gulf speakers is the only variant pronunciation which isn't found in MSA; is used instead, but they may use [j] in MSA for comfortable pronunciation. Another reason for differing pronunciations is the influence of colloquial dialects. The differing pronunciations of the colloquial dialects reflect the influence of other languages previously spoken, and some still presently spoken, in the regions, such as Coptic in Egypt, Berber, Punic, or Phoenician in North Africa, Himyaritic, Modern South Arabian, and Old South Arabian in Yemen and Oman, and Aramaic and Canaanite languages (including Phoenician) in the Levant and Mesopotamia.
Another example: Many colloquial varieties are known for a type of vowel harmony in which the presence of an "emphatic consonant" triggers backed allophones of nearby vowels (especially of the low vowels , which are backed to in these circumstances and very often fronted to in all other circumstances). In many spoken varieties, the backed or "emphatic" vowel allophones spread a fair distance in both directions from the triggering consonant; in some varieties (most notably Egyptian Arabic), the "emphatic" allophones spread throughout the entire word, usually including prefixes and suffixes, even at a distance of several syllables from the triggering consonant. Speakers of colloquial varieties with this vowel harmony tend to introduce it into their MSA pronunciation as well, but usually with a lesser degree of spreading than in the colloquial varieties. (For example, speakers of colloquial varieties with extremely long-distance harmony may allow a moderate, but not extreme, amount of spreading of the harmonic allophones in their MSA speech, while speakers of colloquial varieties with moderate-distance harmony may only harmonize immediately adjacent vowels in MSA.)
Modern Standard Arabic has six pure vowels (while most modern dialects have eight pure vowels, which include the long vowels ), with short and corresponding long vowels . There are also two diphthongs: and .
The pronunciation of the vowels differs from speaker to speaker, in a way that tends to reflect the pronunciation of the corresponding colloquial variety. Nonetheless, there are some common trends. Most noticeable is the differing pronunciation of and , which tend towards fronted , or in most situations, but a back in the neighborhood of emphatic consonants. Some accents and dialects, such as those of the Hejaz region, have an open or a central in all situations. The vowel varies towards too. Because Arabic has only three short vowel phonemes, those phonemes can have a very wide range of allophones. The vowels and are often affected somewhat in emphatic neighborhoods as well, with generally more back or centralized allophones, but the differences are less great than for the low vowels. The pronunciation of short and tends towards and , respectively, in many dialects.
The definitions of both "emphatic" and "neighborhood" vary in ways that reflect (to some extent) corresponding variations in the spoken dialects. Generally, the consonants triggering "emphatic" allophones are the pharyngealized consonants ; ; and , if not followed immediately by . Frequently, the fricatives also trigger emphatic allophones; occasionally also the pharyngeal consonants (the former more than the latter). Many dialects have multiple emphatic allophones of each vowel, depending on the particular nearby consonants. In most MSA accents, emphatic coloring of vowels is limited to vowels immediately adjacent to a triggering consonant, although in some it spreads a bit farther: e.g., ' 'time'; ' 'homeland'; " 'downtown' (sometimes or similar).
In a non-emphatic environment, the vowel in the diphthong tends to be fronted even more than elsewhere, often pronounced or : hence ' 'sword' but ' 'summer'. However, in accents with no emphatic allophones of (e.g., in the Hejaz), the pronunciation or occurs in all situations.
The phoneme is represented by the Arabic letter ' () and has many standard pronunciations. is characteristic of north Algeria, Iraq, and most of the Arabian peninsula but with an allophonic in some positions; occurs in most of the Levant and most of North Africa; and is used in most of Egypt and some regions in Yemen and Oman. Generally this corresponds with the pronunciation in the colloquial dialects. In some regions in Sudan and Yemen, as well as in some Sudanese and Yemeni dialects, it may be either or , representing the original pronunciation of Classical Arabic. Foreign words containing may be transcribed with , , , , , or , mainly depending on the regional spoken variety of Arabic or the commonly diacriticized Arabic letter. In northern Egypt, where the Arabic letter ' () is normally pronounced , a separate phoneme , which may be transcribed with , occurs in a small number of mostly non-Arabic loanwords, e.g., 'jacket'.
In many varieties, () are epiglottal in Western Asia.
The emphatic consonant was actually pronounced , or possibly —either way, a highly unusual sound. The medieval Arabs termed their language " 'the language of the Ḍād' (the name of the letter used for this sound), since they thought the sound was unique to their language. (In fact, it also exists in a few other minority Semitic languages, e.g., Mehri.)
Arabic has consonants traditionally termed "emphatic" (), which exhibit simultaneous pharyngealization as well as varying degrees of velarization (depending on the region), so they may be written with the "Velarized or pharyngealized" diacritic () as: . This simultaneous articulation is described as "Retracted Tongue Root" by phonologists. In some transcription systems, emphasis is shown by capitalizing the letter, for example, is written ; in others the letter is underlined or has a dot below it, for example, .
Vowels and consonants can be phonologically short or long. Long (geminate) consonants are normally written doubled in Latin transcription (i.e. bb, dd, etc.), reflecting the presence of the Arabic diacritic mark ', which indicates doubled consonants. In actual pronunciation, doubled consonants are held twice as long as short consonants. This consonant lengthening is phonemically contrastive: ' 'he accepted' vs. " 'he kissed'.
Arabic has two kinds of syllables: open syllables (CV) and (CVV)—and closed syllables (CVC), (CVVC) and (CVCC). The syllable types with two morae (units of time), i.e. CVC and CVV, are termed "heavy syllables", while those with three morae, i.e. CVVC and CVCC, are "superheavy syllables". Superheavy syllables in Classical Arabic occur in only two places: at the end of the sentence (due to pausal pronunciation) and in words such as ' 'hot', ' 'stuff, substance', ' 'they disputed with each other', where a long ' occurs before two identical consonants (a former short vowel between the consonants has been lost). (In less formal pronunciations of Modern Standard Arabic, superheavy syllables are common at the end of words or before clitic suffixes such as "" 'us, our', due to the deletion of final short vowels.)
In surface pronunciation, every vowel must be preceded by a consonant (which may include the glottal stop ). There are no cases of hiatus within a word (where two vowels occur next to each other, without an intervening consonant). Some words do have an underlying vowel at the beginning, such as the definite article "al-" or words such as ' 'he bought', ' 'meeting'. When actually pronounced, one of three things happens:
Word stress is not phonemically contrastive in Standard Arabic. It bears a strong relationship to vowel length. The basic rules for Modern Standard Arabic are:
Examples: ' 'book', ' 'writer', ' 'desk', ' 'desks', ' 'library' (but ' 'library' in short pronunciation), ' (Modern Standard Arabic) 'they wrote' = ' (dialect), ' (Modern Standard Arabic) 'they wrote it' = ' (dialect), ' (Modern Standard Arabic) 'they (dual, fem) wrote', ' (Modern Standard Arabic) 'I wrote' = ' (short form or dialect). Doubled consonants count as two consonants: ' 'magazine', "" "place".
These rules may result in differently stressed syllables when final case endings are pronounced, vs. the normal situation where they are not pronounced, as in the above example of ' 'library' in full pronunciation, but ' 'library' in short pronunciation.
The restriction on final long vowels does not apply to the spoken dialects, where original final long vowels have been shortened and secondary final long vowels have arisen from loss of original final "-hu/hi".
Some dialects have different stress rules. In the Cairo (Egyptian Arabic) dialect a heavy syllable may not carry stress more than two syllables from the end of a word, hence ' 'school', ' 'Cairo'. This also affects the way that Modern Standard Arabic is pronounced in Egypt. In the Arabic of Sanaa, stress is often retracted: ' 'two houses', ' 'their table', ' 'desks', ' 'sometimes', "" 'their school'. (In this dialect, only syllables with long vowels or diphthongs are considered heavy; in a two-syllable word, the final syllable can be stressed only if the preceding syllable is light; and in longer words, the final syllable cannot be stressed.)
The final short vowels (e.g., the case endings "-a -i -u" and mood endings "-u -a") are often not pronounced in this language, despite forming part of the formal paradigm of nouns and verbs. The following levels of pronunciation exist:
This is the most formal level actually used in speech. All endings are pronounced as written, except at the end of an utterance, where the following changes occur:
This is a formal level of pronunciation sometimes seen. It is somewhat like pronouncing all words as if they were in pausal position (with influence from the colloquial varieties). The following changes occur:
This is the pronunciation used by speakers of Modern Standard Arabic in extemporaneous speech, i.e. when producing new sentences rather than simply reading a prepared text. It is similar to formal short pronunciation except that the rules for dropping final vowels apply "even" when a clitic suffix is added. Basically, short-vowel case and mood endings are never pronounced and certain other changes occur that echo the corresponding colloquial pronunciations. Specifically:
As mentioned above, many spoken dialects have a process of "emphasis spreading", where the "emphasis" (pharyngealization) of emphatic consonants spreads forward and back through adjacent syllables, pharyngealizing all nearby consonants and triggering the back allophone in all nearby low vowels. The extent of emphasis spreading varies. For example, in Moroccan Arabic, it spreads as far as the first full vowel (i.e. sound derived from a long vowel or diphthong) on either side; in many Levantine dialects, it spreads indefinitely, but is blocked by any or ; while in Egyptian Arabic, it usually spreads throughout the entire word, including prefixes and suffixes. In Moroccan Arabic, also have emphatic allophones and , respectively.
Unstressed short vowels, especially , are deleted in many contexts. Many sporadic examples of short vowel change have occurred (especially → and interchange ↔). Most Levantine dialects merge short /i u/ into in most contexts (all except directly before a single final consonant). In Moroccan Arabic, on the other hand, short triggers labialization of nearby consonants (especially velar consonants and uvular consonants), and then short /a i u/ all merge into , which is deleted in many contexts. (The labialization plus is sometimes interpreted as an underlying phoneme .) This essentially causes the wholesale loss of the short-long vowel distinction, with the original long vowels remaining as half-long , phonemically , which are used to represent "both" short and long vowels in borrowings from Literary Arabic.
Most spoken dialects have monophthongized original to in most circumstances, including adjacent to emphatic consonants, while keeping them as the original diphthongs in others e.g. . In most of the Moroccan, Algerian and Tunisian (except Sahel and Southeastern) Arabic dialects, they have subsequently merged into original .
In most dialects, there may be more or fewer phonemes than those listed in the chart above. For example, is considered a native phoneme in most Arabic dialects except in Levantine dialects like Syrian or Lebanese where is pronounced and is pronounced . or () is considered a native phoneme in most dialects except in Egyptian and a number of Yemeni and Omani dialects where is pronounced . or and are distinguished in the dialects of Egypt, Sudan, the Levant and the Hejaz, but they have merged as in most dialects of the Arabian Peninsula, Iraq and Tunisia and have merged as in Morocco and Algeria. The usage of non-native and depends on the usage of each speaker but they might be more prevalent in some dialects than others. Iraqi and Gulf Arabic also have the sound and write it and with the Persian letters and , as in "plum"; "truffle".
Early in the expansion of Arabic, the separate emphatic phonemes and coalesced into a single phoneme . Many dialects (such as Egyptian, Levantine, and much of the Maghreb) subsequently lost fricatives, converting into . Most dialects borrow "learned" words from the Standard language using the same pronunciation as for inherited words, but some dialects without interdental fricatives (particularly in Egypt and the Levant) render original in borrowed words as .
Another key distinguishing mark of Arabic dialects is how they render the original velar and uvular plosives , (Proto-Semitic ), and :
Pharyngealization of the emphatic consonants tends to weaken in many of the spoken varieties, and to spread from emphatic consonants to nearby sounds. In addition, the "emphatic" allophone automatically triggers pharyngealization of adjacent sounds in many dialects. As a result, it may be difficult or impossible to determine whether a given coronal consonant is phonemically emphatic or not, especially in dialects with long-distance emphasis spreading. (A notable exception is the sounds vs. in Moroccan Arabic, because the former is pronounced as an affricate but the latter is not.)
As in other Semitic languages, Arabic has a complex and unusual morphology (i.e. method of constructing words from a basic root). Arabic has a nonconcatenative "root-and-pattern" morphology: A root consists of a set of bare consonants (usually three), which are fitted into a discontinuous pattern to form words. For example, the word for 'I wrote' is constructed by combining the root ' 'write' with the pattern ' 'I Xed' to form ' 'I wrote'. Other verbs meaning 'I Xed' will typically have the same pattern but with different consonants, e.g. ' 'I read', ' 'I ate', ' 'I went', although other patterns are possible (e.g. ' 'I drank', ' 'I said', ' 'I spoke', where the subpattern used to signal the past tense may change but the suffix ' is always used).
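The interleaving of root consonants into a pattern can be sketched programmatically. The sketch below uses Latin transliteration, and the pattern notation (digits 1–3 standing for the root consonants) is an illustrative convention for this example, not a standard linguistic formalism.

```python
def apply_pattern(root, pattern):
    """Interleave a triliteral root into a discontinuous pattern template.

    Digits 1-3 in the pattern are placeholders for the root consonants;
    all other characters (vowels, affixes) are copied through unchanged.
    """
    out = []
    for ch in pattern:
        if ch in "123":
            out.append(root[int(ch) - 1])  # substitute the nth radical
        else:
            out.append(ch)                 # copy pattern material
    return "".join(out)

# Root k-t-b 'write' with the past-tense 'I Xed' pattern 1a2a3tu:
print(apply_pattern("ktb", "1a2a3tu"))   # katabtu 'I wrote'
print(apply_pattern("qr'", "1a2a3tu"))   # qara'tu 'I read'
```

The same root fed through different patterns yields the related vocabulary described below; only the template changes, not the radicals.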
From a single root , numerous words can be formed by applying different patterns:
Nouns in Literary Arabic have three grammatical cases (nominative, accusative, and genitive [also used when the noun is governed by a preposition]); three numbers (singular, dual and plural); two genders (masculine and feminine); and three "states" (indefinite, definite, and construct). The cases of singular nouns (other than those that end in long ā) are indicated by suffixed short vowels (/-u/ for nominative, /-a/ for accusative, /-i/ for genitive).
The feminine singular is often marked by ـَة /-at/, which is pronounced as /-ah/ before a pause. Plural is indicated either through endings (the sound plural) or internal modification (the broken plural). Definite nouns include all proper nouns, all nouns in "construct state" and all nouns which are prefixed by the definite article اَلْـ /al-/. Indefinite singular nouns (other than those that end in long ā) add a final /-n/ to the case-marking vowels, giving /-un/, /-an/ or /-in/ (which is also referred to as nunation or tanwīn).
Adjectives in Literary Arabic are marked for case, number, gender and state, as for nouns. However, the plural of all non-human nouns is always combined with a singular feminine adjective, which takes the ـَة /-at/ suffix.
Pronouns in Literary Arabic are marked for person, number and gender. There are two varieties, independent pronouns and enclitics. Enclitic pronouns are attached to the end of a verb, noun or preposition and indicate verbal and prepositional objects or possession of nouns. The first-person singular pronoun has a different enclitic form used for verbs (ـنِي /-nī/) and for nouns or prepositions (ـِي /-ī/ after consonants, ـيَ /-ya/ after vowels).
Nouns, verbs, pronouns and adjectives agree with each other in all respects. However, non-human plural nouns are grammatically considered to be feminine singular. Furthermore, a verb in a verb-initial sentence is marked as singular regardless of its semantic number when the subject of the verb is explicitly mentioned as a noun. Numerals between three and ten show "chiasmic" agreement, in that grammatically masculine numerals have feminine marking and vice versa.
Verbs in Literary Arabic are marked for person (first, second, or third), gender, and number. They are conjugated in two major paradigms (past and non-past); two voices (active and passive); and six moods (indicative, imperative, subjunctive, jussive, shorter energetic and longer energetic); the fifth and sixth moods, the energetics, exist only in Classical Arabic, not in MSA. There are also two participles (active and passive) and a verbal noun, but no infinitive.
The past and non-past paradigms are sometimes also termed perfective and imperfective, indicating the fact that they actually represent a combination of tense and aspect. The moods other than the indicative occur only in the non-past, and the future tense is signaled by prefixing سَـ ' or سَوْفَ ' onto the non-past. The past and non-past differ in the form of the stem (e.g., past كَتَبـ' vs. non-past ـكْتُبـ '), and also use completely different sets of affixes for indicating person, number and gender: In the past, the person, number and gender are fused into a single suffixal morpheme, while in the non-past, a combination of prefixes (primarily encoding person) and suffixes (primarily encoding gender and number) are used. The passive voice uses the same person/number/gender affixes but changes the vowels of the stem.
The following shows a paradigm of a regular Arabic verb, كَتَبَ "" 'to write'. In Modern Standard, the energetic mood (in either long or short form, which have the same meaning) is almost never used.
Like other Semitic languages, and unlike most other languages, Arabic makes much more use of nonconcatenative morphology (applying many templates to roots) to derive words than adding prefixes or suffixes to words.
For verbs, a given root can occur in many different derived verb stems (of which there are about fifteen), each with one or more characteristic meanings and each with its own templates for the past and non-past stems, active and passive participles, and verbal noun. These are referred to by Western scholars as "Form I", "Form II", and so on through "Form XV" (although Forms XI to XV are rare). These stems encode grammatical functions such as the causative, intensive and reflexive. Stems sharing the same root consonants represent separate verbs, albeit often semantically related, and each is the basis for its own conjugational paradigm. As a result, these derived stems are part of the system of derivational morphology, not part of the inflectional system.
Examples of the different verbs formed from the root كتب ' 'write' (using حمر ' 'red' for Form IX, which is limited to colors and physical defects):
Form II is sometimes used to create transitive denominative verbs (verbs built from nouns); Form V is the equivalent used for intransitive denominatives.
The associated participles and verbal nouns of a verb are the primary means of forming new lexical nouns in Arabic. This is similar to the process by which, for example, the English gerund "meeting" (similar to a verbal noun) has turned into a noun referring to a particular type of social, often work-related event where people gather together to have a "discussion" (another lexicalized verbal noun). Another fairly common means of forming nouns is through one of a limited number of patterns that can be applied directly to roots, such as the "nouns of location" in "ma-" (e.g. ' 'desk, office' < ' 'write', ' 'kitchen' < ' 'cook').
The only three genuine suffixes are as follows:
The spoken dialects have lost the case distinctions and make only limited use of the dual (it occurs only on nouns and its use is no longer required in all circumstances). They have lost the mood distinctions other than imperative, but many have since gained new moods through the use of prefixes (most often /bi-/ for indicative vs. unmarked subjunctive). They have also mostly lost the indefinite "nunation" and the internal passive.
The following is an example of a regular verb paradigm in Egyptian Arabic.
The Arabic alphabet derives from the Aramaic through Nabatean, to which it bears a loose resemblance like that of Coptic or Cyrillic scripts to Greek script. Traditionally, there were several differences between the Western (North African) and Middle Eastern versions of the alphabet—in particular, the "faʼ" had a dot underneath and "qaf" a single dot above in the Maghreb, and the order of the letters was slightly different (at least when they were used as numerals).
However, the old Maghrebi variant has been abandoned except for calligraphic purposes in the Maghreb itself, and remains in use mainly in the Quranic schools (zaouias) of West Africa. Arabic, like all other Semitic languages (except for the Latin-written Maltese, and the languages with the Ge'ez script), is written from right to left. There are several styles of scripts such as thuluth, muhaqqaq, tawqi, rayhan and notably naskh, which is used in print and by computers, and ruqʻah, which is commonly used for correspondence.
Originally Arabic was made up of only "rasm" without diacritical marks. Later, diacritical points (which in Arabic are referred to as "nuqaṯ") were added (which allowed readers to distinguish between letters such as b, t, th, n and y). Finally signs known as "Tashkil" were used for short vowels known as "harakat" and other uses such as final postnasalized or long vowels.
After Khalil ibn Ahmad al Farahidi finally fixed the Arabic script around 786, many styles were developed, both for the writing down of the Quran and other books, and for inscriptions on monuments as decoration.
Arabic calligraphy has not fallen out of use as calligraphy has in the Western world, and is still considered by Arabs as a major art form; calligraphers are held in great esteem. Being cursive by nature, unlike the Latin script, Arabic script is used to write down a verse of the Quran, a hadith, or simply a proverb. The composition is often abstract, but sometimes the writing is shaped into an actual form such as that of an animal. One of the current masters of the genre is Hassan Massoudy.
In modern times the intrinsically calligraphic nature of the written Arabic form is haunted by the thought that a typographic approach to the language, necessary for digitized unification, will not always accurately maintain meanings conveyed through calligraphy.
There are a number of different standards for the romanization of Arabic, i.e. methods of accurately and efficiently representing Arabic with the Latin script. There are various conflicting motivations involved, which leads to multiple systems. Some are interested in transliteration, i.e. representing the "spelling" of Arabic, while others focus on transcription, i.e. representing the "pronunciation" of Arabic. (They differ in that, for example, the same letter is used to represent both a consonant, as in "you" or "yet", and a vowel, as in "me" or "eat".) Some systems, e.g. for scholarly use, are intended to accurately and unambiguously represent the phonemes of Arabic, generally making the phonetics more explicit than the original word in the Arabic script. These systems are heavily reliant on diacritical marks such as "š" for the sound equivalently written "sh" in English. Other systems (e.g. the Bahá'í orthography) are intended to help readers who are neither Arabic speakers nor linguists with intuitive pronunciation of Arabic names and phrases. These less "scientific" systems tend to avoid diacritics and use digraphs (like "sh" and "kh"). These are usually simpler to read, but sacrifice the definiteness of the scientific systems, and may lead to ambiguities, e.g. whether to interpret "sh" as a single sound, as in "gash", or a combination of two sounds, as in "gashouse". The ALA-LC romanization solves this problem by separating the two sounds with a prime symbol ( ′ ); e.g., "as′hal" 'easier'.
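The "gash"/"gashouse" ambiguity and the ALA-LC prime can be illustrated with a small sketch. The phoneme symbols and the tiny word used here ("as′hal" from the text) follow the example above; the mapping table is deliberately partial.

```python
# Map single scholarly symbols to the digraphs used in "non-scientific"
# romanizations. Partial table, for illustration only.
DIGRAPHS = {"š": "sh", "ḫ": "kh", "ṯ": "th"}

def to_digraph(phonemes):
    """Join per-phoneme Latin symbols into a digraph romanization,
    inserting ALA-LC's prime (′) where a real s + h sequence would
    otherwise be misread as the single-sound digraph "sh"."""
    out = []
    for p in phonemes:
        latin = DIGRAPHS.get(p, p)
        if out and out[-1].endswith("s") and latin == "h":
            out.append("′")  # e.g. as′hal 'easier': s followed by h
        out.append(latin)
    return "".join(out)

print(to_digraph(["a", "s", "h", "a", "l"]))  # as′hal
```

Without the prime, a reader could not tell whether "ashal" contained one sound or two; the scientific systems avoid the problem entirely by writing "š" for the single sound.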
During the last few decades and especially since the 1990s, Western-invented text communication technologies have become prevalent in the Arab world, such as personal computers, the World Wide Web, email, bulletin board systems, IRC, instant messaging and mobile phone text messaging. Most of these technologies originally had the ability to communicate using the Latin script only, and some of them still do not have the Arabic script as an optional feature. As a result, Arabic speaking users communicated in these technologies by transliterating the Arabic text using the Latin script, sometimes known as IM Arabic.
To handle those Arabic letters that cannot be accurately represented using the Latin script, numerals and other characters were appropriated. For example, the numeral "3" may be used to represent the Arabic letter . There is no universal name for this type of transliteration, but some have named it Arabic Chat Alphabet. Other systems of transliteration exist, such as using dots or capitalization to represent the "emphatic" counterparts of certain consonants. For instance, using capitalization, the letter , may be represented by d. Its emphatic counterpart, , may be written as D.
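A transliteration table of this kind is easy to sketch in code. The mapping below covers only a few widely seen conventions (including the "3" and capitalized "D"/"S" examples from the text); actual usage varies considerably between writers, so this is an illustration, not a standard.

```python
# A partial, illustrative mapping from Arabic chat-alphabet ("Arabizi")
# characters to scholarly Latin transliteration symbols.
CHAT_TO_TRANSLIT = {
    "2": "ʾ",   # hamza (glottal stop)
    "3": "ʿ",   # ʿayn
    "5": "kh",  # khāʾ
    "7": "ḥ",   # ḥāʾ
    "D": "ḍ",   # emphatic d, marked via capitalization
    "S": "ṣ",   # emphatic s, marked via capitalization
}

def normalize_chat(text):
    """Replace chat-alphabet digits and capitals with transliteration symbols."""
    return "".join(CHAT_TO_TRANSLIT.get(ch, ch) for ch in text)

print(normalize_chat("3arabi"))  # ʿarabi
```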
In most of present-day North Africa, the Western Arabic numerals (0, 1, 2, 3, 4, 5, 6, 7, 8, 9) are used. However, in Egypt and Arabic-speaking countries to the east of it, the Eastern Arabic numerals ( – – – – – – – – – ) are in use. When representing a number in Arabic, the lowest-valued position is placed on the right, so the order of positions is the same as in left-to-right scripts. Sequences of digits such as telephone numbers are read from left to right, but numbers are spoken in the traditional Arabic fashion, with units and tens reversed from the modern English usage. For example, 24 is said "four and twenty" just like in the German language ("vierundzwanzig") and Classical Hebrew, and 1975 is said "a thousand and nine-hundred and five and seventy" or, more eloquently, "a thousand and nine-hundred five seventy".
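Because both digit sets run 0–9 in the same order, and digit order within a number is identical in both scripts, converting between them is a simple character-for-character translation, as this sketch shows:

```python
# Convert between Western digits and Eastern Arabic-Indic digits.
# Both sequences are listed 0-9, so positional value is preserved.
WESTERN = "0123456789"
EASTERN = "٠١٢٣٤٥٦٧٨٩"

to_eastern = str.maketrans(WESTERN, EASTERN)
to_western = str.maketrans(EASTERN, WESTERN)

print("1975".translate(to_eastern))  # ١٩٧٥
print("٢٤".translate(to_western))    # 24
```

Note that only the glyphs change; the reversal of units and tens described above is a feature of the spoken language, not of the written digits.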
Academy of the Arabic Language is the name of a number of language-regulation bodies formed in the Arab League. The most active are in Damascus and Cairo. They review language development, monitor new words and approve inclusion of new words into their published standard dictionaries. They also publish old and historical Arabic manuscripts.
Arabic has been taught worldwide in many elementary and secondary schools, especially Muslim schools. Universities around the world have classes that teach Arabic as part of their foreign languages, Middle Eastern studies, and religious studies courses. Arabic language schools exist to assist students to learn Arabic outside the academic world. There are many Arabic language schools in the Arab world and other Muslim countries. Because the Quran is written in Arabic and all Islamic terms are in Arabic, millions of Muslims (both Arab and non-Arab) study the language. Software and books with tapes are also an important part of Arabic learning, as many Arabic learners may live in places where no academic or Arabic language school classes are available. Radio series of Arabic language classes are also provided by some radio stations. A number of websites on the Internet provide online classes for all levels as a means of distance education; most teach Modern Standard Arabic, but some teach regional varieties from numerous countries.
With the sole example of Medieval linguist Abu Hayyan al-Gharnati – who, while a scholar of the Arabic language, was not ethnically Arab – Medieval scholars of the Arabic language made no efforts at studying comparative linguistics, considering all other languages inferior.
In modern times, the educated upper classes in the Arab world have taken a nearly opposite view. Yasir Suleiman wrote in 2011 that "studying and knowing English or French in most of the Middle East and North Africa have become a badge of sophistication and modernity and ... feigning, or asserting, weakness or lack of facility in Arabic is sometimes paraded as a sign of status, class, and perversely, even education through a mélange of code-switching practises."
Alfred Hitchcock
Sir Alfred Joseph Hitchcock (13 August 1899 – 29 April 1980) was an English film director and producer. He is one of the most influential and extensively studied filmmakers in the history of cinema. Known as the "Master of Suspense", he directed over 50 feature films in a career spanning six decades, becoming as well known as any of his actors thanks to his many interviews, his cameo roles in most of his films, and his hosting and producing of the television anthology "Alfred Hitchcock Presents" (1955–1965). His films garnered a total of 46 Oscar nominations and six wins.
Born in Leytonstone, Essex, Hitchcock entered the film industry in 1919 as a title card designer after training as a technical clerk and copy writer for a telegraph-cable company. He made his directorial debut with the British-German silent film "The Pleasure Garden" (1925). His first successful film, "" (1927), helped to shape the thriller genre, while his 1929 film, "Blackmail", was the first British "". Two of his 1930s thrillers, "The 39 Steps" (1935) and "The Lady Vanishes" (1938), are ranked among the greatest British films of the 20th century.
By 1939, Hitchcock was a filmmaker of international importance, and film producer David O. Selznick persuaded him to move to Hollywood. A string of successful films followed, including "Rebecca" (1940), "Foreign Correspondent" (1940), "Suspicion" (1941), "Shadow of a Doubt" (1943), and "Notorious" (1946). "Rebecca" won the Academy Award for Best Picture, although Hitchcock himself was only nominated as Best Director; he was also nominated for "Lifeboat" (1944) and "Spellbound" (1945), although he never won the Best Director Academy Award.
The "Hitchcockian" style includes the use of camera movement to mimic a person's gaze, thereby turning viewers into voyeurs, and framing shots to maximise anxiety and fear. The film critic Robin Wood wrote that the meaning of a Hitchcock film "is there in the method, in the progression from shot to shot. A Hitchcock film is an organism, with the whole implied in every detail and every detail related to the whole."
After a brief commercial lull in the late 1940s, Hitchcock returned to form with "Strangers on a Train" (1951) and "Dial M for Murder" (1954). By 1960 Hitchcock had directed four films often ranked among the greatest of all time: "Rear Window" (1954), "Vertigo" (1958), "North by Northwest" (1959), and "Psycho" (1960), the first and last of these garnering him Best Director nominations. In 2012, "Vertigo" replaced Orson Welles's "Citizen Kane" (1941) at the top of the British Film Institute's worldwide poll of hundreds of film critics as the greatest film ever made. By 2018 eight of his films had been selected for preservation in the United States National Film Registry, including his personal favourite, "Shadow of a Doubt" (1943). He received the BAFTA Fellowship in 1971, the AFI Life Achievement Award in 1979 and was knighted in December that year, four months before he died.
Hitchcock was born on 13 August 1899 in the flat above his parents' leased grocer's shop at 517 High Road, Leytonstone, on the outskirts of east London (then part of Essex), the youngest of three children: William Daniel (1890–1943), Ellen Kathleen ("Nellie") (1892–1979), and Alfred Joseph (1899–1980). His parents, Emma Jane Hitchcock, née Whelan (1863–1942), and William Edgar Hitchcock (1862–1914), were both Roman Catholics, with partial roots in Ireland; William was a greengrocer as his father had been.
There was a large extended family, including Uncle John Hitchcock with his five-bedroom Victorian house on Campion Road, Putney, complete with maid, cook, chauffeur and gardener. Every summer John rented a seaside house for the family in Cliftonville, Kent. Hitchcock said that he first became class-conscious there, noticing the differences between tourists and locals.
Describing himself as a well-behaved boy—his father called him his "little lamb without a spot"—Hitchcock said he could not remember ever having had a playmate. One of his favourite stories for interviewers was about his father sending him to the local police station with a note when he was five; the policeman looked at the note and locked him in a cell for a few minutes, saying, "This is what we do to naughty boys." The experience left him, he said, with a lifelong fear of policemen; in 1973 he told Tom Snyder that he was "scared stiff of anything ... to do with the law" and wouldn't even drive a car in case he got a parking ticket.
When he was six, the family moved to Limehouse and leased two stores at 130 and 175 Salmon Lane, which they ran as a fish-and-chips shop and fishmongers' respectively; they lived above the former. It seems that Hitchcock was seven when he attended his first school, the Howrah House Convent in Poplar, which he entered in 1907. According to Patrick McGilligan, he stayed at Howrah House for at most two years. He also attended a convent school, the Wode Street School "for the daughters of gentlemen and little boys", run by the Faithful Companions of Jesus; briefly attended a primary school near his home; and was for a very short time, when he was nine, a boarder at Salesian College in Battersea.
The family moved again when he was 11, this time to Stepney, and on 5 October 1910 Hitchcock was sent to St Ignatius College in Stamford Hill, Tottenham (now in the London Borough of Haringey), a Jesuit grammar school with a reputation for discipline. The priests used a hard rubber cane on the boys, always at the end of the day, so the boys had to sit through classes anticipating the punishment once they knew they'd been written up for it. He said it was here that he developed his sense of fear. The school register lists his year of birth as 1900 rather than 1899; Spoto writes that it seems he was deliberately enrolled as a 10-year-old, perhaps because he was a year behind with his schooling.
While biographer Gene Adair reports that Hitchcock was "an average, or slightly above-average, pupil", Hitchcock said he was "usually among the four or five at the top of the class"; at the end of his first year, his work in Latin, English, French and religious education was noted. His favourite subject was geography, and he became interested in maps, and railway and bus timetables; according to Taylor, he could recite all the stops on the Orient Express. He told Peter Bogdanovich: "The Jesuits taught me organization, control and, to some degree, analysis."
Hitchcock told his parents that he wanted to be an engineer, and on 25 July 1913, he left St Ignatius and enrolled in night classes at the London County Council School of Engineering and Navigation in Poplar. In a book-length interview in 1962, he told François Truffaut that he had studied "mechanics, electricity, acoustics, and navigation". Then on 12 December 1914 his father, who had been suffering from emphysema and kidney disease, died at the age of 52. To support himself and his mother—his older siblings had left home by then—Hitchcock took a job for 15 shillings a week as a technical clerk at the Henley Telegraph and Cable Company in Blomfield Street near London Wall. He kept up his night classes, this time in art history, painting, economics, and political science. His older brother ran the family shops, while he and his mother continued to live in Salmon Lane.
Hitchcock was too young to enlist when the First World War broke out in July 1914, and when he reached the required age of 18 in 1917, he received a C3 classification ("free from serious organic disease, able to stand service conditions in garrisons at home ... only suitable for sedentary work"). He joined a cadet regiment of the Royal Engineers and took part in theoretical briefings, weekend drills, and exercises. John Russell Taylor wrote that, in one session of practical exercises in Hyde Park, Hitchcock was required to wear puttees. He could never master wrapping them around his legs, and they repeatedly fell down around his ankles.
After the war, Hitchcock began dabbling in creative writing. In June 1919 he became a founding editor and business manager of Henley's in-house publication, "The Henley Telegraph" (sixpence a copy), to which he submitted several short stories. Henley's promoted him to the advertising department, where he wrote copy and drew graphics for advertisements for electric cable. He apparently loved the job and would stay late at the office to examine the proofs; he told Truffaut that this was his "first step toward cinema". He enjoyed watching films, especially American cinema, and from the age of 16 read the trade papers; he watched Charlie Chaplin, D. W. Griffith and Buster Keaton, and particularly liked Fritz Lang's "Der müde Tod" (1921).
While still at Henley's, he read in a trade paper that Famous Players-Lasky, the production arm of Paramount Pictures, was opening a studio in London. They were planning to film "The Sorrows of Satan" by Marie Corelli, so he produced some drawings for the title cards and sent his work to the studio. They hired him, and in 1919 he began working for Islington Studios in Poole Street, Hoxton, as a title-card designer.
Donald Spoto writes that most of the staff were Americans with strict job specifications, but the English workers were encouraged to try their hand at anything, which meant that Hitchcock gained experience as a co-writer, art director and production manager on at least 18 silent films. "The Times" wrote in February 1922 about the studio's "special art title department under the supervision of Mr. A. J. Hitchcock". His work there included "Number 13" (1922), also known as "Mrs. Peabody", cancelled because of financial problems—the few finished scenes are lost—and "Always Tell Your Wife" (1923), which he and Seymour Hicks finished together when Hicks was about to give up on it. Hicks wrote later about being helped by "a fat youth who was in charge of the property room ... [n]one other than Alfred Hitchcock".
When Paramount pulled out of London in 1922, Hitchcock was hired as an assistant director by a new firm run in the same location by Michael Balcon, later known as Gainsborough Pictures. Hitchcock worked on "Woman to Woman" (1923) with the director Graham Cutts, designing the set, writing the script and producing. He said: "It was the first film that I had really got my hands onto." The editor and "script girl" on "Woman to Woman" was Alma Reville, his future wife. He also worked as an assistant to Cutts on "The White Shadow" (1924), "The Passionate Adventure" (1924), "The Blackguard" (1925), and "The Prude's Fall" (1925). "The Blackguard" was produced at the Babelsberg Studios in Potsdam, where Hitchcock watched part of the making of F. W. Murnau's film "The Last Laugh" (1924). He was impressed with Murnau's work and later used many of his techniques for the set design in his own productions.
In the summer of 1925, Balcon asked Hitchcock to direct "The Pleasure Garden" (1925), starring Virginia Valli, a co-production of Gainsborough and the German firm Emelka at the Geiselgasteig studio near Munich. Reville, by then Hitchcock's fiancée, was assistant director-editor. Although the film was a commercial flop, Balcon liked Hitchcock's work; a "Daily Express" headline called him the "Young man with a master mind". Balcon asked him to direct a second film in Munich, "The Mountain Eagle" (1926), based on an original story titled "Fear o' God". The film is lost; Hitchcock called it "a very bad movie".
Hitchcock's luck changed with his first thriller, "The Lodger: A Story of the London Fog" (1927), about the hunt for a serial killer who, wearing a black cloak and carrying a black bag, is murdering young blonde women in London, and only on Tuesdays. A landlady suspects that her lodger is the killer, but he turns out to be innocent. To convey the impression that footsteps were being heard from an upper floor, Hitchcock had a glass floor made so that the audience could see the lodger pacing up and down in his room above the landlady. Hitchcock had wanted the leading man to be guilty, or for the film at least to end ambiguously, but the star was Ivor Novello, a matinée idol, and the "star system" meant that Novello could not be the villain. Hitchcock told Truffaut: "You have to clearly spell it out in big letters: 'He is innocent.'" (He had the same problem years later with Cary Grant in "Suspicion" (1941).)
Released in January 1927, "The Lodger" was a commercial and critical success in the UK. Hitchcock told Truffaut that the film was the first of his to be influenced by the Expressionist techniques he had witnessed in Germany: "In truth, you might almost say that "The Lodger" was my first picture." He made his first cameo appearance in the film because an extra was needed, and was depicted sitting in a newsroom. A second appearance, standing in a crowd as the leading man is arrested, is in doubt.
On 2 December 1926, Hitchcock married the English-American screenwriter Alma Reville (1899–1982) at the Brompton Oratory in South Kensington. The couple honeymooned in Paris, Lake Como and St. Moritz, before returning to London to live in a leased flat on the top two floors of 153 Cromwell Road, Kensington. Reville, who was born just hours after Hitchcock, converted from Protestantism to Catholicism, apparently at the insistence of Hitchcock's mother; she was baptised on 31 May 1927 and confirmed at Westminster Cathedral by Cardinal Francis Bourne on 5 June.
In 1928, when they learned that she was pregnant, the Hitchcocks purchased "Winter's Grace", a Tudor farmhouse set in 11 acres on Stroud Lane, Shamley Green, Surrey, for £2,500. Their daughter and only child, Patricia Alma Hitchcock, was born on 7 July that year.
Reville became her husband's closest collaborator; Charles Champlin wrote in 1982: "The Hitchcock touch had four hands, and two were Alma's." When Hitchcock accepted the AFI Life Achievement Award in 1979, he said he wanted to mention "four people who have given me the most affection, appreciation and encouragement, and constant collaboration. The first of the four is a film editor, the second is a scriptwriter, the third is the mother of my daughter, Pat, and the fourth is as fine a cook as ever performed miracles in a domestic kitchen. And their names are Alma Reville." Reville wrote or co-wrote on many of Hitchcock's films, including "Shadow of a Doubt", "Suspicion" and "The 39 Steps".
Hitchcock began work on his tenth film, "Blackmail" (1929), when its production company, British International Pictures (BIP), converted its Elstree studios to sound. The film was the first British "talkie"; it followed the first American sound feature film, "The Jazz Singer" (1927). "Blackmail" began the Hitchcock tradition of using famous landmarks as a backdrop for suspense sequences, with the climax taking place on the dome of the British Museum. It also features one of his longest cameo appearances, which shows him being bothered by a small boy as he reads a book on the London Underground. In the PBS series "The Men Who Made The Movies", Hitchcock explained how he used early sound recording as a special element of the film, stressing the word "knife" in a conversation with the woman suspected of murder. During this period, Hitchcock directed segments for a BIP revue, "Elstree Calling" (1930), and directed a short film, "An Elastic Affair" (1930), featuring two "Film Weekly" scholarship winners. "An Elastic Affair" is one of the lost films.
In 1933 Hitchcock was once again working for Michael Balcon at Gaumont-British. His first film for the company, "The Man Who Knew Too Much" (1934), was a success; his second, "The 39 Steps" (1935), was acclaimed in the UK and made Hitchcock a star in the US. It also established the quintessential English "Hitchcock blonde" (Madeleine Carroll) as the template for his succession of ice-cold, elegant leading ladies. Screenwriter Robert Towne remarked, "It's not much of an exaggeration to say that all contemporary escapist entertainment begins with "The 39 Steps"". This film was one of the first to introduce the "MacGuffin" plot device, a term coined by the English screenwriter Angus MacPhail. The MacGuffin is an item or goal the protagonist is pursuing, one that otherwise has no narrative value; in "The 39 Steps", the MacGuffin is a stolen set of design plans.
Hitchcock released two spy thrillers in 1936. "Sabotage" was loosely based on Joseph Conrad's novel, "The Secret Agent" (1907), about a woman who discovers that her husband is a terrorist, and "Secret Agent", based on two stories in "" (1928) by W. Somerset Maugham.
At this time, Hitchcock also became notorious for pranks against the cast and crew. These jokes ranged from simple and innocent to crazy and maniacal. For instance, he hosted a dinner party where he dyed all the food blue because, as he claimed, there weren't enough blue foods. He also had a horse delivered to the dressing room of his friend, actor Sir Gerald du Maurier.
Hitchcock's next major success was "The Lady Vanishes" (1938), "one of the greatest train movies from the genre's golden era", according to Philip French, in which Miss Froy (May Whitty), a British spy posing as a governess, disappears on a train journey through the fictional European country of Bandrika. The film saw Hitchcock receive the 1938 New York Film Critics Circle Award for Best Director. Benjamin Crisler, the "New York Times" film critic, wrote in June 1938: "Three unique and valuable institutions the British have that we in America have not: Magna Carta, the Tower Bridge and Alfred Hitchcock, the greatest director of screen melodramas in the world."
David O. Selznick signed Hitchcock to a seven-year contract beginning in March 1939, and the Hitchcocks moved to Hollywood. In June that year "Life" magazine called him the "greatest master of melodrama in screen history". The working arrangements with Selznick were less than ideal. Selznick suffered from constant financial problems, and Hitchcock was often unhappy about Selznick's creative control over his films. In a later interview, Hitchcock said: "[Selznick] was the Big Producer. ... Producer was king. The most flattering thing Mr. Selznick ever said about me—and it shows you the amount of control—he said I was the 'only director' he'd 'trust with a film'." At the same time, Selznick complained about Hitchcock's "goddamn jigsaw cutting", which meant that the producer had to follow Hitchcock's vision of the finished product.
Selznick lent Hitchcock to the larger studios more often than producing Hitchcock's films himself. Selznick made only a few films each year, as did fellow independent producer Samuel Goldwyn, so he did not always have projects for Hitchcock to direct. Goldwyn had also negotiated with Hitchcock on a possible contract, only to be outbid by Selznick. Hitchcock was quickly impressed by the superior resources of the American studios compared to the financial limits he had often faced in Britain.
The Selznick picture "Rebecca" (1940) was Hitchcock's first American film, set in a Hollywood version of England's Cornwall and based on a novel by English novelist Daphne du Maurier. The film, starring Laurence Olivier and Joan Fontaine, concerns a naïve (and unnamed) young woman who marries a widowed aristocrat. She goes to live in his huge English country house, and struggles with the lingering reputation of his elegant and worldly first wife Rebecca, who died under mysterious circumstances. The film won Best Picture at the 13th Academy Awards; the statuette was given to Selznick, as the film's producer. Hitchcock was nominated for Best Director, his first of five such nominations.
Hitchcock's second American film was the thriller "Foreign Correspondent" (1940), set in Europe, based on Vincent Sheean's book "Personal History" (1935) and produced by Walter Wanger. It was nominated for Best Picture that year. Hitchcock felt uneasy living and working in Hollywood while his country was at war; his concern resulted in a film that overtly supported the British war effort. Filmed in the first year of the Second World War, it was inspired by the rapidly changing events in Europe, as covered by an American newspaper reporter played by Joel McCrea. Mixing footage of European scenes with scenes filmed on a Hollywood backlot, the film avoided direct references to Nazism, Nazi Germany, and Germans, to comply with Hollywood's Motion Picture Production Code censorship at the time.
In September 1940 the Hitchcocks bought the Cornwall Ranch near Scotts Valley, California, in the Santa Cruz Mountains. Their primary residence was an English-style home in Bel Air, purchased in 1942. Hitchcock's films were diverse during this period, ranging from the romantic comedy "Mr. & Mrs. Smith" (1941) to the bleak film noir "Shadow of a Doubt" (1943).
"Suspicion" (1941) marked Hitchcock's first film as a producer and director. It is set in England; Hitchcock used the north coast of Santa Cruz for the English coastline sequence. The film is the first of four projects on which Cary Grant worked with Hitchcock, and it is one of the rare occasions that Grant was cast in a sinister role. Grant plays Johnnie Aysgarth, an English con man whose actions raise suspicion and anxiety in his shy young English wife, Lina McLaidlaw (Joan Fontaine). In one scene Hitchcock placed a light inside a glass of milk, perhaps poisoned, that Grant is bringing to his wife; the light ensures that the audience's attention is on the glass. Grant's character is a killer in the book on which the film was based, "Before the Fact" by Francis Iles, but the studio felt that Grant's image would be tarnished by that. Hitchcock therefore settled for an ambiguous finale, although, as he told François Truffaut, he would have preferred to end with the wife's murder. Fontaine won Best Actress for her performance.
"Saboteur" (1942) is the first of two films that Hitchcock made for Universal during the decade. Hitchcock was forced by Universal Studios to use Universal contract player Robert Cummings and Priscilla Lane, a freelancer who signed a one-picture deal with Universal, both known for their work in comedies and light dramas. Breaking with Hollywood conventions of the time, Hitchcock did extensive location filming, especially in New York City, and depicted a confrontation between a suspected saboteur (Cummings) and a real saboteur (Norman Lloyd) atop the Statue of Liberty. He also directed "Have You Heard?" (1942), a photographic dramatisation for "Life" magazine of the dangers of rumours during wartime. In 1943 he wrote a mystery story for "Look" magazine, "The Murder of Monty Woolley", a sequence of captioned photographs inviting the reader to find clues to the murderer's identity; Hitchcock cast the performers as themselves, such as Woolley, Doris Merrick, and make-up man Guy Pearce.
"Shadow of a Doubt" (1943) was Hitchcock's personal favourite and the second of the early Universal films. Charlotte "Charlie" Newton (Teresa Wright) suspects her beloved uncle Charlie Oakley (Joseph Cotten) of being a serial killer. Hitchcock again filmed extensively on location, this time in the Northern California city of Santa Rosa.
Working at 20th Century Fox, Hitchcock approached John Steinbeck with an idea for a film recording the experiences of the survivors of a German U-boat attack. Steinbeck then began work on the script that would become the film "Lifeboat" (1944). However, Steinbeck was unhappy with the film and asked that his name be removed from the credits, to no avail. The idea was rewritten as a short story by Harry Sylvester and published in "Collier's" in 1943. The action sequences were shot in a small boat in the studio water tank. The locale posed problems for Hitchcock's traditional cameo appearance. That was solved by having Hitchcock's image appear in a newspaper that William Bendix is reading in the boat, showing the director in a before-and-after advertisement for "Reduco-Obesity Slayer", a gag he recounted to Truffaut in 1962.
Hitchcock's typical dinner before the weight loss had been a roast chicken, boiled ham, potatoes, bread, vegetables, relishes, salad, dessert, a bottle of wine and some brandy. To lose weight, he stopped drinking, drank black coffee for breakfast and lunch, and ate steak and salad for dinner, but it was hard to maintain; Spoto writes that his weight fluctuated considerably over the next 40 years. At the end of 1943, despite the weight loss, the Occidental Insurance Company of Los Angeles refused him life insurance.
Hitchcock returned to the UK for an extended visit in late 1943 and early 1944. While there he made two short propaganda films, "Bon Voyage" (1944) and "Aventure Malgache" (1944), for the Ministry of Information. In June and July 1945 Hitchcock served as "treatment advisor" on a Holocaust documentary that used Allied Forces footage of the liberation of Nazi concentration camps. The film was assembled in London and produced by Sidney Bernstein of the Ministry of Information, who brought Hitchcock (a friend of his) on board. It was originally intended to be broadcast to the Germans, but the British government deemed it too traumatic to be shown to a shocked post-war population. Instead, it was transferred in 1952 from the British War Office film vaults to London's Imperial War Museum and remained unreleased until 1985, when an edited version was broadcast as an episode of PBS "Frontline", under the title the Imperial War Museum had given it: "Memory of the Camps". The full-length version of the film, "German Concentration Camps Factual Survey", was restored in 2014 by scholars at the Imperial War Museum.
Hitchcock worked for David Selznick again when he directed "Spellbound" (1945), which explores psychoanalysis and features a dream sequence designed by Salvador Dalí. The dream sequence as it appears in the film is ten minutes shorter than was originally envisioned; Selznick edited it to make it "play" more effectively. Gregory Peck plays amnesiac Dr. Anthony Edwardes under the treatment of analyst Dr. Peterson (Ingrid Bergman), who falls in love with him while trying to unlock his repressed past. Two point-of-view shots were achieved by building a large wooden hand (which would appear to belong to the character whose point of view the camera took) and out-sized props for it to hold: a bucket-sized glass of milk and a large wooden gun. For added novelty and impact, the climactic gunshot was hand-coloured red on some copies of the black-and-white film. The original musical score by Miklós Rózsa makes use of the theremin, and some of it was later adapted by Rózsa into his Piano Concerto, Op. 31 (1967), for piano and orchestra.
"Notorious" (1946) followed "Spellbound".
Hitchcock told François Truffaut that Selznick had sold him, Ingrid Bergman, Cary Grant, and the screenplay by Ben Hecht, to RKO Radio Pictures as a "package" for $500,000 because of cost overruns on Selznick's "Duel in the Sun" (1946). "Notorious" stars Bergman and Grant, both Hitchcock regulars, and features a plot about Nazis, uranium and South America. His prescient use of uranium as a plot device led to him being briefly placed under surveillance by the Federal Bureau of Investigation. According to McGilligan, in or around March 1945 Hitchcock and Ben Hecht consulted Robert Millikan of the California Institute of Technology about the development of a uranium bomb. Selznick complained that the notion was "science fiction", only to be confronted by the news of the detonation of two atomic bombs on Hiroshima and Nagasaki in Japan in August 1945.
Hitchcock formed an independent production company, Transatlantic Pictures, with his friend Sidney Bernstein. He made two films with Transatlantic, one of which was his first colour film. With "Rope" (1948), Hitchcock experimented with marshalling suspense in a confined environment, as he had done earlier with "Lifeboat" (1944). The film appears to have been shot in a single take, but it was actually shot in 10 takes ranging from four to ten minutes each; a 10-minute length of film was the most that a camera's film magazine could hold at the time. Some transitions between reels were hidden by having a dark object fill the entire screen for a moment. Hitchcock used those points to hide the cut, and began the next take with the camera in the same place. The film features James Stewart in the leading role, and was the first of four films that Stewart made with Hitchcock. It was inspired by the Leopold and Loeb case of the 1920s. The film was not well received.
"Under Capricorn" (1949), set in 19th-century Australia, also uses the short-lived technique of long takes, but to a more limited extent. He again used Technicolor in this production, then returned to black-and-white films for several years. Transatlantic Pictures became inactive after these two unsuccessful films. Hitchcock filmed "Stage Fright" (1950) at studios in Elstree, England, where he had worked during his British International Pictures contract many years before. He matched one of Warner Bros.' most popular stars, Jane Wyman, with the expatriate German actor Marlene Dietrich and used several prominent British actors, including Michael Wilding, Richard Todd and Alastair Sim. This was Hitchcock's first proper production for Warner Bros., which had distributed "Rope" and "Under Capricorn", because Transatlantic Pictures was experiencing financial difficulties.
His film "Strangers on a Train" (1951) was based on the novel of the same name by Patricia Highsmith. Hitchcock combined many elements from his preceding films. He approached Dashiell Hammett to write the dialogue, but Raymond Chandler took over, then left over disagreements with the director. In the film, two men casually meet, one of whom speculates on a foolproof method to murder; he suggests that two people, each wishing to do away with someone, should each perform the other's murder. Farley Granger's role was as the innocent victim of the scheme, while Robert Walker, previously known for "boy-next-door" roles, played the villain. "I Confess" (1953) was set in Quebec with Montgomery Clift as a Catholic priest.
"I Confess" was followed by three colour films starring Grace Kelly: "Dial M for Murder" (1954), "Rear Window" (1954), and "To Catch a Thief" (1955). In "Dial M for Murder", Ray Milland plays the villain who tries to murder his unfaithful wife (Kelly) for her money. She kills the hired assassin in self-defence, so Milland manipulates the evidence to make it look like murder. Her lover, Mark Halliday (Robert Cummings), and Police Inspector Hubbard (John Williams) save her from execution. Hitchcock experimented with 3D cinematography for "Dial M".
Hitchcock moved to Paramount Pictures and filmed "Rear Window" (1954), starring James Stewart and Kelly again, as well as Thelma Ritter and Raymond Burr. Stewart's character is a photographer (based on Robert Capa) who must temporarily use a wheelchair. Out of boredom, he begins observing his neighbours across the courtyard, then becomes convinced that one of them (Raymond Burr) has murdered his wife. Stewart eventually manages to convince his policeman buddy (Wendell Corey) and his girlfriend (Kelly). As with "Lifeboat" and "Rope", the principal characters are depicted in confined or cramped quarters, in this case Stewart's studio apartment. Hitchcock uses close-ups of Stewart's face to show his character's reactions, "from the comic voyeurism directed at his neighbours to his helpless terror watching Kelly and Burr in the villain's apartment".
From 1955 to 1965, Hitchcock was the host of the television series "Alfred Hitchcock Presents". With his droll delivery, gallows humour and iconic image, the series made Hitchcock a celebrity. The title-sequence of the show pictured a minimalist caricature of his profile (he drew it himself; it is composed of only nine strokes), which his real silhouette then filled. The series theme tune was "Funeral March of a Marionette" by the French composer Charles Gounod (1818–1893).
His introductions always included some sort of wry humour, such as the description of a recent multi-person execution hampered by having only one electric chair, while two chairs are shown with a sign reading "Two chairs—no waiting!" He directed 18 episodes of the series, which aired from 1955 to 1965. It became "The Alfred Hitchcock Hour" in 1962, and NBC broadcast the final episode on 10 May 1965. In the 1980s, a new version of "Alfred Hitchcock Presents" was produced for television, making use of Hitchcock's original introductions in a colourised form.
In 1955 Hitchcock became a United States citizen. The same year, his third Grace Kelly film, "To Catch a Thief", was released; it is set in the French Riviera, and pairs Kelly with Cary Grant. Grant plays retired thief John Robie, who becomes the prime suspect for a spate of robberies in the Riviera. A thrill-seeking American heiress played by Kelly surmises his true identity and tries to seduce him. "Despite the obvious age disparity between Grant and Kelly and a lightweight plot, the witty script (loaded with double entendres) and the good-natured acting proved a commercial success." It was Hitchcock's last film with Kelly. She married Prince Rainier of Monaco in 1956, and ended her film career. Hitchcock then remade his own 1934 film "The Man Who Knew Too Much" in 1956. This time, the film starred James Stewart and Doris Day, who sang the theme song "Que Sera, Sera", which won the Oscar for Best Original Song and became a big hit for her. They play a couple whose son is kidnapped to prevent them from interfering with an assassination. As in the 1934 film, the climax takes place at the Royal Albert Hall, London.
"The Wrong Man" (1957), Hitchcock's final film for Warner Bros., is a low-key black-and-white production based on a real-life case of mistaken identity reported in "Life" magazine in 1953. This was the only film of Hitchcock to star Henry Fonda, playing a Stork Club musician mistaken for a liquor store thief, who is arrested and tried for robbery while his wife (Vera Miles) emotionally collapses under the strain. Hitchcock told Truffaut that his lifelong fear of the police attracted him to the subject and was embedded in many scenes.
Hitchcock's next film, "Vertigo" (1958) again starred James Stewart, this time with Kim Novak and Barbara Bel Geddes. He had wanted Vera Miles to play the lead, but she was pregnant. He told Oriana Fallaci: "I was offering her a big part, the chance to become a beautiful sophisticated blonde, a real actress. We'd have spent a heap of dollars on it, and she has the bad taste to get pregnant. I hate pregnant women, because then they have children."
In the film, James Stewart plays Scottie, a former police investigator suffering from acrophobia, who develops an obsession with a woman he has been hired to shadow (Kim Novak). Scottie's obsession leads to tragedy, and this time Hitchcock does not opt for a happy ending. Some critics, including Donald Spoto and Roger Ebert, agree that "Vertigo" is the director's most personal and revealing film, dealing with the "Pygmalion"-like obsessions of a man who crafts a woman into the woman he desires. "Vertigo" explores more frankly and at greater length his interest in the relation between sex and death than any other work in his filmography.
"Vertigo" contains a camera technique developed by Irmin Roberts, commonly referred to as a dolly zoom, that has been copied many times by filmmakers. The film premiered at the San Sebastián International Film Festival, where Hitchcock won a Silver Seashell. "Vertigo" is considered a classic, but it attracted some negative reviews and poor box-office receipts at the time, and it was the last collaboration between Stewart and Hitchcock. In the 2002 "Sight & Sound" polls, it ranked just behind "Citizen Kane" (1941); ten years later, in the same magazine, critics chose it as the best film ever made.
Hitchcock followed "Vertigo" with three more successful films, which are also recognised as among his best: "North by Northwest" (1959), "Psycho" (1960) and "The Birds" (1963). In "North by Northwest", Cary Grant portrays Roger Thornhill, a Madison Avenue advertising executive who is mistaken for a government secret agent. He is hotly pursued across the United States by enemy agents, including (it appears) Eve Kendall (Eva Marie Saint). Thornhill at first believes Kendall is helping him, then that she is an enemy agent; he eventually learns that she is working undercover for the CIA. During its opening two-week run at Radio City Music Hall, the film grossed $404,056, setting a record in that theatre's non-holiday gross. "Time" magazine called the film "smoothly troweled and thoroughly entertaining".
"Psycho" (1960) is arguably Hitchcock's best-known film. Based on Robert Bloch's novel "Psycho" (1959), which was inspired by the case of Ed Gein, the film was produced on a constrained budget of $800,000 () and shot in black-and-white on a spare set using crew members from "Alfred Hitchcock Presents". The unprecedented violence of the shower scene, the early death of the heroine, and the innocent lives extinguished by a disturbed murderer became the hallmarks of a new horror-film genre. The public loved the film, with lines stretching outside cinemas as people had to wait for the next showing. It broke box-office records in the United Kingdom, France, South America, the United States and Canada and was a moderate success in Australia for a brief period.
The film was the most profitable of Hitchcock's career; he personally earned well in excess of $15 million. He subsequently swapped his rights to "Psycho" and his TV anthology for 150,000 shares of MCA, making him the third largest shareholder and his own boss at Universal, in theory at least, although that did not stop them from interfering with him. Following the first film, "Psycho" became an American horror franchise: "Psycho II", "Psycho III", "Bates Motel", "Psycho IV: The Beginning", and a colour 1998 remake of the original.
On 13 August 1962, Hitchcock's 63rd birthday, the French director François Truffaut began a 50-hour interview of Hitchcock, filmed over eight days at Universal Studios, during which Hitchcock agreed to answer 500 questions. It took four years to transcribe the tapes and organise the images; it was published as a book in 1967, which Truffaut nicknamed the "Hitchbook". The audio tapes were used as the basis of a documentary in 2015. Truffaut sought the interview because it was clear to him that Hitchcock was not simply the mass-market entertainer the American media made him out to be. It was obvious from his films, Truffaut wrote, that Hitchcock had "given more thought to the potential of his art than any of his colleagues". He compared the interview to "Oedipus' consultation of the oracle".
The film scholar Peter William Evans writes that "The Birds" (1963) and "Marnie" (1964) are regarded as "undisputed masterpieces". Hitchcock had intended to film "Marnie" first, and in March 1962 it was announced that Grace Kelly, Princess Grace of Monaco since 1956, would come out of retirement to star in it. When Kelly asked Hitchcock to postpone "Marnie" until 1963 or 1964, he recruited Evan Hunter, author of "The Blackboard Jungle" (1954), to develop a screenplay based on a Daphne du Maurier short story, "The Birds" (1952), which Hitchcock had republished in his "My Favorites in Suspense" (1959). He hired Tippi Hedren to play the lead role. It was her first role; she had been a model in New York when Hitchcock saw her, in October 1961, in an NBC television ad for Sego, a diet drink: "I signed her because she is a classic beauty. Movies don't have them any more. Grace Kelly was the last." He insisted, without explanation, that her first name be written in single quotation marks: 'Tippi'.
In "The Birds", Melanie Daniels, a young socialite, meets lawyer Mitch Brenner (Rod Taylor) in a bird shop; Jessica Tandy plays his possessive mother. Hedren visits him in Bodega Bay (where "The Birds" was filmed) carrying a pair of lovebirds as a gift. Suddenly waves of birds start gathering, watching, and attacking. The question: "What do the birds want?" is left unanswered. Hitchcock made the film with equipment from the Revue Studio, which made "Alfred Hitchcock Presents". He said it was his most technically challenging film yet, using a combination of trained and mechanical birds against a backdrop of wild ones. Every shot was sketched in advance.
An HBO/BBC television film, "The Girl" (2012), depicted Hedren's experiences on set; she said that Hitchcock became obsessed with her and sexually harassed her. He reportedly isolated her from the rest of the crew, had her followed, whispered obscenities to her, had her handwriting analysed, and had a ramp built from his private office directly into her trailer. Diane Baker, her co-star in "Marnie", said: "[N]othing could have been more horrible for me than to arrive on that movie set and to see her being treated the way she was." While filming the attack scene in the attic—which took a week to film—she was placed in a caged room while two men wearing elbow-length protective gloves threw live birds at her. Toward the end of the week, to stop the birds flying away from her too soon, one leg of each bird was attached by nylon thread to elastic bands sewn inside her clothes. She broke down after a bird cut her lower eyelid, and filming was halted on doctor's orders.
In June 1962, Grace Kelly announced that she had decided against appearing in "Marnie" (1964). Hedren had signed an exclusive seven-year, $500-a-week contract with Hitchcock in October 1961, and he decided to cast her in the lead role opposite Sean Connery. In 2016, describing Hedren's performance as "one of the greatest in the history of cinema", Richard Brody called the film a "story of sexual violence" inflicted on the character played by Hedren: "The film is, to put it simply, sick, and it's so because Hitchcock was sick. He suffered all his life from furious sexual desire, suffered from the lack of its gratification, suffered from the inability to transform fantasy into reality, and then went ahead and did so virtually, by way of his art." A 1964 "New York Times" film review called it Hitchcock's "most disappointing film in years", citing Hedren's and Connery's lack of experience, an amateurish script and "glaringly fake cardboard backdrops".
In the film, Marnie Edgar (Hedren) steals $10,000 from her employer and goes on the run. She applies for a job at Mark Rutland's (Connery) company in Philadelphia and steals from there too. Earlier she is shown having a panic attack during a thunderstorm and fearing the colour red. Mark tracks her down and blackmails her into marrying him. She explains that she does not want to be touched, but during the "honeymoon", Mark rapes her. Marnie and Mark discover that Marnie's mother had been a prostitute when Marnie was a child, and that, while the mother was fighting with a client during a thunderstorm—the mother believed the client had tried to molest Marnie—Marnie had killed the client to save her mother. Cured of her fears when she remembers what happened, she decides to stay with Mark.
No longer speaking to her because she had rebuffed him, Hitchcock apparently referred to Hedren throughout as "the girl" rather than by name. He told Robert Burks, the cinematographer, that the camera had to be placed as close as possible to Hedren when he filmed her face. Evan Hunter, the screenwriter of "The Birds" who was writing "Marnie" too, explained to Hitchcock that, if Mark loved Marnie, he would comfort her, not rape her. Hitchcock reportedly replied: "Evan, when he sticks it in her, I want that camera right on her face!" When Hunter submitted two versions of the script, one without the rape scene, Hitchcock replaced him with Jay Presson Allen.
Failing health reduced Hitchcock's output during the last two decades of his life. Biographer Stephen Rebello claimed Universal "forced" two movies on him, "Torn Curtain" (1966) and "Topaz" (1969). Both were spy thrillers with Cold War-related themes. "Torn Curtain", with Paul Newman and Julie Andrews, precipitated the bitter end of the 12-year collaboration between Hitchcock and composer Bernard Herrmann. Hitchcock was unhappy with Herrmann's score and replaced him with John Addison, Jay Livingston and Ray Evans. "Topaz" (1969), based on a 1967 Leon Uris novel, is partly set in Cuba. Both films received mixed reviews.
Hitchcock returned to Britain to make his penultimate film, "Frenzy" (1972), based on the novel "Goodbye Piccadilly, Farewell Leicester Square" (1966). After two espionage films, the plot marked a return to the murder-thriller genre. Richard Blaney (Jon Finch), a volatile barman with a history of explosive anger, becomes the prime suspect in the investigation into the "Necktie Murders", which are actually committed by his friend Bob Rusk (Barry Foster). This time, Hitchcock makes the victim and villain kindred spirits, rather than opposites as in "Strangers on a Train".
In "Frenzy", Hitchcock allowed nudity for the first time. Two scenes show naked women, one of whom is being raped and strangled; Spoto called the latter "one of the most repellent examples of a detailed murder in the history of film". Both actors, Barbara Leigh-Hunt and Anna Massey, refused to do the scenes, so models were used instead. Biographers have noted that Hitchcock had always pushed the limits of film censorship, often managing to fool Joseph Breen, the longtime head of Hollywood's Motion Picture Production Code. Many times Hitchcock slipped in subtle hints of improprieties forbidden by censorship until the mid-1960s. Yet McGilligan wrote that Breen and others often realised that Hitchcock was inserting such things and were actually amused, as well as alarmed by Hitchcock's "inescapable inferences".
"Family Plot" (1976) was Hitchcock's last film. It relates the escapades of "Madam" Blanche Tyler, played by Barbara Harris, a fraudulent spiritualist, and her taxi-driver lover Bruce Dern, making a living from her phony powers. While "Family Plot" was based on the Victor Canning novel "The Rainbird Pattern" (1972), the novel's tone is more sinister. Screenwriter Ernest Lehman originally wrote the film with a dark tone but was pushed to a lighter, more comical tone by Hitchcock.
Toward the end of his life, Hitchcock was working on the script for a spy thriller, "The Short Night", collaborating with James Costigan, Ernest Lehman and David Freeman. Despite preliminary work, it was never filmed. Hitchcock's health was declining and he was worried about his wife, who had suffered a stroke. The screenplay was eventually published in Freeman's book "The Last Days of Alfred Hitchcock" (1999).
Having refused a CBE in 1962, Hitchcock was appointed a Knight Commander of the Most Excellent Order of the British Empire (KBE) in the 1980 New Year Honours. He was too ill to travel to London—he had a pacemaker and was being given cortisone injections for his arthritis—so on 3 January 1980 the British consul general presented him with the papers at Universal Studios. Asked by a reporter after the ceremony why it had taken the Queen so long, Hitchcock quipped, "I suppose it was a matter of carelessness." Cary Grant, Janet Leigh, and others attended a luncheon afterwards.
His last public appearance was on 16 March 1980, when he introduced the next year's winner of the American Film Institute award. He died of kidney failure the following month, on 29 April, in his Bel Air home. Donald Spoto, one of Hitchcock's biographers, wrote that Hitchcock had declined to see a priest, but according to Jesuit priest Mark Henninger, he and another priest, Tom Sullivan, celebrated Mass at the filmmaker's home, and Sullivan heard his confession. Hitchcock was survived by his wife and daughter. His funeral was held at Good Shepherd Catholic Church in Beverly Hills on 30 April, after which his body was cremated. His remains were scattered over the Pacific Ocean on 10 May 1980.
Hitchcock returned several times to cinematic devices such as the audience as voyeur, suspense, the wrong man or woman, and the "MacGuffin," a plot device essential to the characters but irrelevant to the audience.
Hitchcock appears briefly in most of his own films. For example, he is seen struggling to get a double bass onto a train ("Strangers on a Train"), walking dogs out of a pet shop ("The Birds"), fixing a neighbour's clock ("Rear Window"), as a shadow ("Family Plot"), sitting at a table in a photograph ("Dial M for Murder"), and riding a bus ("North by Northwest").
Hitchcock's portrayal of women has been the subject of much scholarly debate. Bidisha wrote in "The Guardian" in 2010: "There's the vamp, the tramp, the snitch, the witch, the slink, the double-crosser and, best of all, the demon mommy. Don't worry, they all get punished in the end." In a widely cited essay in 1975, Laura Mulvey introduced the idea of the male gaze; the view of the spectator in Hitchcock's films, she argued, is that of the heterosexual male protagonist. "The female characters in his films reflected the same qualities over and over again", Roger Ebert wrote in 1996. "They were blonde. They were icy and remote. They were imprisoned in costumes that subtly combined fashion with fetishism. They mesmerised the men, who often had physical or psychological handicaps. Sooner or later, every Hitchcock woman was humiliated."
The victims in "The Lodger" are all blondes. In "The 39 Steps" (1935), Madeleine Carroll is put in handcuffs. Ingrid Bergman, whom Hitchcock directed three times ("Spellbound" (1945), "Notorious" (1946), and "Under Capricorn" (1949)), is dark blonde. In "Rear Window" (1954), Lisa (Grace Kelly) risks her life by breaking into Lars Thorwald's apartment. In "To Catch a Thief" (1955), Francie (Grace Kelly again) offers to help a man she believes is a burglar. In "Vertigo" (1958) and "North by Northwest" (1959) respectively, Kim Novak and Eva Marie Saint play the blonde heroines. In "Psycho" (1960), Janet Leigh's character steals $40,000 () and is murdered by Norman Bates, a reclusive psychopath. Tippi Hedren, a blonde, appears to be the focus of the attacks in "The Birds" (1963). In "Marnie" (1964), the title character, again played by Hedren, is a thief. In "Topaz", French actresses Dany Robin as Stafford's wife and Claude Jade as Stafford's daughter are blonde heroines, the mistress was played by brunette Karin Dor. Hitchcock's last blonde heroine was Barbara Harris as a phony psychic turned amateur sleuth in "Family Plot" (1976), his final film. In the same film, the diamond smuggler played by Karen Black wears a long blonde wig in several scenes.
His films often feature characters struggling in their relationships with their mothers, such as Norman Bates in "Psycho". In "North by Northwest" (1959), Roger Thornhill (Cary Grant) is an innocent man ridiculed by his mother for insisting that shadowy, murderous men are after him. In "The Birds" (1963), the Rod Taylor character, an innocent man, finds his world under attack by vicious birds, and struggles to free himself from a clinging mother (Jessica Tandy). The killer in "Frenzy" (1972) has a loathing of women but idolises his mother. The villain Bruno in "Strangers on a Train" hates his father, but has an incredibly close relationship with his mother (played by Marion Lorne). Sebastian (Claude Rains) in "Notorious" has a clearly conflicting relationship with his mother, who is (rightly) suspicious of his new bride, Alicia Huberman (Ingrid Bergman).
Hitchcock became known for having remarked that "actors are cattle". During the filming of "Mr. & Mrs. Smith" (1941), Carole Lombard brought three cows onto the set wearing the name tags of Lombard, Robert Montgomery, and Gene Raymond, the stars of the film, to surprise him.
Hitchcock believed that actors should concentrate on their performances and leave work on script and character to the directors and screenwriters. He told Bryan Forbes in 1967: "I remember discussing with a method actor how he was taught and so forth. He said, 'We're taught using improvisation. We are given an idea and then we are turned loose to develop in any way we want to.' I said 'That's not acting. That's writing.'" Walter Slezak said that Hitchcock knew the mechanics of acting better than anyone he knew.
Critics observed that, despite his reputation as a man who disliked actors, actors who worked with him often gave brilliant performances. He used the same actors in many of his films; Cary Grant and James Stewart both worked with Hitchcock four times, and Ingrid Bergman three. James Mason said that Hitchcock regarded actors as "animated props". For Hitchcock, the actors were part of the film's setting. He told François Truffaut: "The chief requisite for an actor is the ability to do nothing well, which is by no means as easy as it sounds. He should be willing to be used and wholly integrated into the picture by the director and the camera. He must allow the camera to determine the proper emphasis and the most effective dramatic highlights."
Hitchcock planned his scripts in detail with his writers. In "Writing with Hitchcock" (2001), Steven DeRosa noted that Hitchcock supervised them through every draft, asking that they tell the story visually, a principle Hitchcock also described to Roger Ebert in a 1969 interview.
Hitchcock's films were extensively storyboarded down to the finest detail. He reportedly never even bothered to look through the viewfinder, since he did not need to, although in publicity photos he was shown doing so. He also used this as an excuse never to change his films from his initial vision: if a studio asked him to alter a film, he would claim that it had already been shot in a single way and that there were no alternative takes to consider.
This view of Hitchcock as a director who relied more on pre-production than on the actual production itself has been challenged by Bill Krohn, the American correspondent of French film magazine "Cahiers du cinéma", in his book "Hitchcock at Work". After investigating script revisions, notes to other production personnel written by or to Hitchcock, and other production material, Krohn observed that Hitchcock's work often deviated from how the screenplay was written or how the film was originally envisioned. He noted that the myth of storyboards in relation to Hitchcock, often regurgitated by generations of commentators on his films, was to a great degree perpetuated by Hitchcock himself or the publicity arm of the studios. For example, the celebrated crop-spraying sequence of "North by Northwest" was not storyboarded at all. After the scene was filmed, the publicity department asked Hitchcock to make storyboards to promote the film, and Hitchcock in turn hired an artist to match the scenes in detail.
Even when storyboards were made, scenes that were shot differed from them significantly. Krohn's analysis of the production of Hitchcock classics like "Notorious" reveals that Hitchcock was flexible enough to change a film's conception during its production. Another example Krohn notes is the American remake of "The Man Who Knew Too Much", whose shooting commenced without a finished script and then ran over schedule, something that, as Krohn notes, was not uncommon on Hitchcock's films, including "Strangers on a Train" and "Topaz". While Hitchcock did a great deal of preparation for all his films, he was fully aware that the film-making process often deviated from the best-laid plans, and he adapted flexibly to the changes and needs of production; his films were not free of the ordinary hassles and routines of other film productions.
Krohn's work also sheds light on Hitchcock's practice of generally shooting in chronological order, which he notes sent many films over budget and over schedule and, more importantly, differed from the standard operating procedure of Hollywood in the Studio System Era. Equally important is Hitchcock's tendency to shoot alternative takes of scenes. This differed from coverage in that the films were not necessarily shot from varying angles so as to give the editor options to shape the film as they chose (often under the producer's aegis). Rather, the takes represented Hitchcock's way of giving himself options in the editing room, where he would advise his editors after viewing a rough cut of the work.
According to Krohn, this and a great deal of other information revealed through his research of Hitchcock's personal papers and script revisions refute the notion of Hitchcock as a director who was always in control of his films and whose vision never changed during production, a notion that Krohn considers the central long-standing myth of Alfred Hitchcock. Both his fastidiousness and his attention to detail also found their way into the posters for his films. Hitchcock preferred to work with the best talent of his day, film poster designers such as Bill Gold and Saul Bass, who would produce posters that accurately represented his films.
Hitchcock was inducted into the Hollywood Walk of Fame on 8 February 1960 with two stars: one for television and a second for his motion pictures. In 1978 John Russell Taylor described him as "the most universally recognizable person in the world" and "a straightforward middle-class Englishman who just happened to be an artistic genius". In 2002 "MovieMaker" named him the most influential director of all time, and a 2007 "The Daily Telegraph" critics' poll ranked him Britain's greatest director. David Gritten, the newspaper's film critic, wrote: "Unquestionably the greatest filmmaker to emerge from these islands, Hitchcock did more than any director to shape modern cinema, which would be utterly different without him. His flair was for narrative, cruelly withholding crucial information (from his characters and from us) and engaging the emotions of the audience like no one else."
He won two Golden Globes, eight Laurel Awards, and five lifetime achievement awards, including the first BAFTA Academy Fellowship Award and, in 1979, an AFI Life Achievement Award. He was nominated five times for an Academy Award for Best Director. "Rebecca", nominated for 11 Oscars, won the Academy Award for Best Picture of 1940; another Hitchcock film, "Foreign Correspondent", was also nominated that year. By 2018, eight of his films had been selected for preservation by the US National Film Registry: "Rebecca" (1940; inducted 2018), "Shadow of a Doubt" (1943; inducted 1991), "Notorious" (1946; inducted 2006), "Rear Window" (1954; inducted 1997), "Vertigo" (1958; inducted 1989), "North by Northwest" (1959; inducted 1995), "Psycho" (1960; inducted 1992), and "The Birds" (1963; inducted 2016).
In 2012 Hitchcock was selected by artist Sir Peter Blake, author of the Beatles' "Sgt. Pepper's Lonely Hearts Club Band" album cover, to appear in a new version of the cover, along with other British cultural figures, and he was featured that year in a BBC Radio 4 series, "The New Elizabethans", as someone "whose actions during the reign of Elizabeth II have had a significant impact on lives in these islands and given the age its character". In June 2013 nine restored versions of Hitchcock's early silent films, including "The Pleasure Garden" (1925), were shown at the Brooklyn Academy of Music's Harvey Theatre; known as "The Hitchcock 9", the travelling tribute was organised by the British Film Institute.
The Alfred Hitchcock Collection is housed at the Academy Film Archive in Hollywood, California. It includes home movies, 16mm film shot on the set of "Blackmail" (1929) and "Frenzy" (1972), and the earliest known colour footage of Hitchcock. The Academy Film Archive has preserved many of his home movies. The Alfred Hitchcock Papers are housed at the Academy's Margaret Herrick Library. The David O. Selznick and the Ernest Lehman collections housed at the Harry Ransom Humanities Research Center in Austin, Texas, contain material related to Hitchcock's work on the production of "The Paradine Case", "Rebecca", "Spellbound", "North by Northwest" and "Family Plot."
| https://en.wikipedia.org/wiki?curid=808 |
Altaic languages
Altaic is a "Sprachbund" and proposed language family that would include the Turkic, Mongolian and Tungusic language families and possibly also the Japonic and Koreanic languages. Speakers of these languages are currently scattered over most of Asia north of 35 °N and in some eastern parts of Europe, extending in longitude from Turkey to Japan. The group is named after the Altai mountain range in the center of Asia. Most comparative linguists today reject the hypothesis, which retains only a few supporters.
The Altaic family was first proposed in the 18th century. It was widely accepted until the 1960s and is still listed in many encyclopedias and handbooks. Since the 1950s, many comparative linguists have rejected the proposal, after supposed cognates were found not to be valid, hypothesized sound shifts were not found and Turkic and Mongolic languages were found to be converging rather than diverging over the centuries. Opponents of the theory proposed that the similarities are due to mutual linguistic influences between the groups concerned.
The original hypothesis unified only the Turkic, Mongolian and Tungusic groups. Later proposals to include the Korean and Japanese languages into a "Macro-Altaic" family have always been controversial. (The original proposal was sometimes called "Micro-Altaic" by retronymy.) Most proponents of Altaic continue to support the inclusion of Korean. A common ancestral Proto-Altaic language for the "Macro" family has been tentatively reconstructed by Sergei Starostin and others.
Micro-Altaic includes about 66 living languages, to which Macro-Altaic would add Korean, Jeju, Japanese and the Ryukyuan languages, for a total of 74 (depending on what is considered a language and what is considered a dialect). These numbers do not include earlier states of languages, such as Middle Mongol, Old Korean or Old Japanese.
The earliest known texts in a Turkic language are the Orkhon inscriptions, 720–735 AD. They were deciphered in 1893 by the Danish linguist Vilhelm Thomsen in a scholarly race with his rival, the German–Russian linguist Wilhelm Radloff. However, Radloff was the first to publish the inscriptions.
The first Tungusic language to be attested is Jurchen, the language of the ancestors of the Manchus. A writing system for it was devised in 1119 AD and an inscription using this system is known from 1185 (see List of Jurchen inscriptions).
The earliest Mongolic language of which we have written evidence is known as Middle Mongol. It is first attested by an inscription dated to 1224 or 1225 AD, the Stele of Yisüngge, and by the "Secret History of the Mongols", written in 1228 (see Mongolic languages). The earliest Para-Mongolic text is the Memorial for Yelü Yanning, written in the Khitan large script and dated to 986 AD. However, the Inscription of Hüis Tolgoi, discovered in 1975 and analysed as being in an early form of Mongolic, has been dated to 604–620 AD. The Bugut inscription dates back to 584 AD.
Japanese is first attested in the form of names contained in a few short inscriptions in Classical Chinese from the 5th century AD, such as found on the Inariyama Sword. The first substantial text in Japanese, however, is the Kojiki, which dates from 712 AD. It is followed by the Nihon shoki, completed in 720, and then by the Man'yōshū, which dates from c. 771–785, but includes material that is from about 400 years earlier.
The most important text for the study of early Korean is the Hyangga, a collection of 25 poems, of which some go back to the Three Kingdoms period (57 BC–668 AD), but are preserved in an orthography that only goes back to the 9th century AD. Korean is copiously attested from the mid-15th century on in the phonetically precise Hangul system of writing.
A proposed grouping of the Turkic, Mongolic, and Tungusic languages was published in 1730 by Philip Johan von Strahlenberg, a Swedish officer who traveled in the eastern Russian Empire while a prisoner of war after the Great Northern War. However, he may not have intended to imply a closer relationship among those languages.
In 1844, the Finnish philologist Matthias Castrén proposed a broader grouping that later came to be called the Ural–Altaic family, which included Turkic, Mongolian, and Manchu-Tungus (=Tungusic) as an "Altaic" branch, and also the Finno-Ugric and Samoyedic languages as the "Uralic" branch (though Castrén himself used the terms "Tataric" and "Chudic"). The name "Altaic" referred to the Altai Mountains in East-Central Asia, which are approximately the center of the geographic range of the three main families. The name "Uralic" referred to the Ural Mountains.
While the Ural–Altaic family hypothesis can still be found in some encyclopedias, atlases, and similar general references, it has been heavily criticized since the 1960s. Even linguists who accept the basic Altaic family, like Sergei Starostin, completely discard the inclusion of the "Uralic" branch.
In 1857, the Austrian scholar Anton Boller suggested adding Japanese to the Ural–Altaic family.
In the 1920s, G.J. Ramstedt and E.D. Polivanov advocated the inclusion of Korean. Decades later, in his 1952 book, Ramstedt rejected the Ural–Altaic hypothesis but again included Korean in Altaic, an inclusion followed by most leading Altaicists (supporters of the theory) to date. His book contained the first comprehensive attempt to identify regular correspondences among the sound systems within the Altaic language families.
In 1960, Nicholas Poppe published what was in effect a heavily revised version of Ramstedt's volume on phonology that has since set the standard in Altaic studies. Poppe considered the issue of the relationship of Korean to Turkic-Mongolic-Tungusic not settled. In his view, there were three possibilities: (1) Korean did not belong with the other three genealogically, but had been influenced by an Altaic substratum; (2) Korean was related to the other three at the same level they were related to each other; (3) Korean had split off from the other three before they underwent a series of characteristic changes.
Roy Andrew Miller's 1971 book "Japanese and the Other Altaic Languages" convinced most Altaicists that Japanese also belonged to Altaic. Since then, "Macro-Altaic" has generally been taken to include Turkic, Mongolic, Tungusic, Korean, and Japanese.
In 1990, Unger advocated a family consisting of Tungusic, Korean, and Japonic languages, but not Turkic or Mongolic.
However, many linguists dispute the alleged affinities of Korean and Japanese to the other three groups. Some authors instead tried to connect Japanese to the Austronesian languages.
In 2017 Martine Robbeets proposed that Japanese (and possibly Korean) originated as a hybrid language. She proposed that the ancestral home of the Turkic, Mongolic, and Tungusic languages was somewhere in northwestern Manchuria. A group of those proto-Altaic ("Transeurasian") speakers would have migrated south into the modern Liaoning province, where they would have been mostly assimilated by an agricultural community with an Austronesian-like language. The fusion of the two languages would have resulted in proto-Japanese and proto-Korean.
In 1962 John C. Street proposed an alternative classification, with Turkic-Mongolic-Tungusic in one grouping and Korean-Japanese-Ainu in another, joined in what he designated as the "North Asiatic" family. The inclusion of Ainu was adopted also by James Patrie in 1982.
The Turkic-Mongolic-Tungusic and Korean-Japanese-Ainu groupings were also posited in 2000–2002 by Joseph Greenberg. However, he treated them as independent members of a larger family, which he termed Eurasiatic.
The inclusion of Ainu is not widely accepted by Altaicists. In fact, no convincing genealogical relationship between Ainu and any other language family has been demonstrated, and it is generally regarded as a language isolate.
Starting in the late 1950s, some linguists became increasingly critical of even the minimal Altaic family hypothesis, disputing the alleged evidence of genetic connection between Turkic, Mongolic and Tungusic languages.
Among the earlier critics were Gerard Clauson (1956), Gerhard Doerfer (1963), and Alexander Shcherbak. They claimed that the words and features shared by Turkic, Mongolic, and Tungusic languages were for the most part borrowings and that the rest could be attributed to chance resemblances. In 1988, Doerfer again rejected all the genetic claims over these major groups.
A major continuing supporter of the Altaic hypothesis has been Sergei Starostin, who published a comparative lexical analysis of the Altaic languages in 1991. He concluded that the analysis supported the Altaic grouping, although it was "older than most other language families in Eurasia, such as Indo-European or Finno-Ugric, and this is the reason why the modern Altaic languages preserve few common elements".
In 1991 and again in 1996, Roy Miller defended the Altaic hypothesis and claimed that the criticisms of Clauson and Doerfer apply exclusively to the lexical correspondences, whereas the most pressing evidence for the theory is the similarities in verbal morphology.
In 2003, Claus Schönig published a critical overview of the history of the Altaic hypothesis up to that time, siding with the earlier criticisms of Clauson, Doerfer, and Shcherbak.
In 2003, Starostin, Anna Dybo and Oleg Mudrak published the "Etymological Dictionary of the Altaic Languages", which expanded the 1991 lexical lists and added other phonological and grammatical arguments.
Starostin's book was criticized by Stefan Georg in 2004 and 2005, and by Alexander Vovin in 2005.
Other defenses of the theory, in response to the criticisms of Georg and Vovin, were published by Starostin in 2005, Blažek in 2006, Robbeets in 2007, and Dybo and G. Starostin in 2008.
In 2010, Lars Johanson echoed Miller's 1996 rebuttal to the critics, and called for a muting of the polemic.
The list below comprises linguists who have worked specifically on the Altaic problem since the publication of the first volume of Ramstedt's "Einführung" in 1952. The dates given are those of works concerning Altaic. For supporters of the theory, the version of Altaic they favor is given at the end of the entry, if other than the prevailing one of Turkic–Mongolic–Tungusic–Korean–Japanese.
The original arguments for grouping the "micro-Altaic" languages within a Uralo-Altaic family were based on such shared features as vowel harmony and agglutination.
According to Roy Miller, the most pressing evidence for the theory is the similarities in verbal morphology.
The "Etymological Dictionary" by Starostin and others (2003) proposes a set of sound change laws that would explain the evolution from Proto-Altaic to the descendant languages. For example, although most of today's Altaic languages have vowel harmony, Proto-Altaic as reconstructed by them lacked it; instead, various vowel assimilations between the first and second syllables of words occurred in Turkic, Mongolic, Tungusic, Korean, and Japonic. They also included a number of grammatical correspondences between the languages.
Starostin claimed in 1991 that the members of the proposed Altaic group shared about 15–20% of apparent cognates within a 110-word Swadesh-Yakhontov list; in particular, Turkic–Mongolic 20%, Turkic–Tungusic 18%, Turkic–Korean 17%, Mongolic–Tungusic 22%, Mongolic–Korean 16%, and Tungusic–Korean 21%. The 2003 "Etymological Dictionary" includes a list of 2,800 proposed cognate sets, as well as a few important changes to the reconstruction of Proto-Altaic. The authors tried hard to distinguish loans between Turkic and Mongolic and between Mongolic and Tungusic from cognates, and suggested words that occur in Turkic and Tungusic but not in Mongolic. All other combinations between the five branches also occur in the book. It lists 144 items of shared basic vocabulary, including words for such items as 'eye', 'ear', 'neck', 'bone', 'blood', 'water', 'stone', 'sun', and 'two'.
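Starostin's pairwise percentages are a standard lexicostatistical computation: for each meaning on the word list, check whether two branches' forms fall in the same cognate class, and report the proportion of matches. A minimal sketch of that bookkeeping is below; the cognacy labels are entirely invented for illustration and are not Starostin's actual judgments.

```python
from itertools import combinations

# Hypothetical cognacy classes for a few Swadesh-list meanings.
# Each meaning maps each branch to a cognate-class label; identical
# labels mean the forms are judged cognate. All labels are invented.
cognacy = {
    "eye":   {"Turkic": "A", "Mongolic": "A", "Tungusic": "B"},
    "water": {"Turkic": "C", "Mongolic": "D", "Tungusic": "C"},
    "stone": {"Turkic": "E", "Mongolic": "E", "Tungusic": "E"},
    "two":   {"Turkic": "F", "Mongolic": "G", "Tungusic": "F"},
}

def shared_percentage(branch1, branch2, data):
    """Percent of meanings whose forms fall in the same cognate class."""
    total = matches = 0
    for classes in data.values():
        if branch1 in classes and branch2 in classes:
            total += 1
            matches += classes[branch1] == classes[branch2]
    return 100 * matches / total

for b1, b2 in combinations(["Turkic", "Mongolic", "Tungusic"], 2):
    print(f"{b1}-{b2}: {shared_percentage(b1, b2, cognacy):.0f}%")
```

The hard part of real lexicostatistics is, of course, assigning the cognate classes in the first place while excluding loans, which is exactly where the critics of the Altaic comparisons disagree.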
Robbeets and Bouckaert (2018) use Bayesian phylolinguistic methods to argue for the coherence of the Altaic languages, which they refer to as the "Transeurasian" languages. Their results include the following phylogenetic tree:
According to G. Clauson (1956), G. Doerfer (1963), and A. Shcherbak (1963), many of the typological features of the supposed Altaic languages, particularly agglutinative, strongly suffixing morphology and subject–object–verb (SOV) word order, often occur together in languages regardless of genealogical relationship.
Those critics also argued that the words and features shared by Turkic, Mongolic, and Tungusic languages were for the most part borrowings and that the rest could be attributed to chance resemblances. They noted that there was little vocabulary shared by Turkic and Tungusic languages, though more shared with Mongolic languages. They reasoned that, if all three families had a common ancestor, we should expect losses to happen at random, and not only at the geographical margins of the family; and that the observed pattern is consistent with borrowing.
According to C. Schönig (2003), after accounting for areal effects, the shared lexicon that could have a common genetic origin was reduced to a small number of monosyllabic lexical roots, including the personal pronouns and a few other deictic and auxiliary items, whose sharing could be explained in other ways; not the kind of sharing expected in cases of genetic relationship.
Instead of a common genetic origin, Clauson, Doerfer, and Shcherbak proposed (in 1956–1966) that Turkic, Mongolic, and Tungusic languages form a "Sprachbund": a set of languages with similarities due to convergence through intensive borrowing and long contact, rather than common origin.
Asya Pereltsvaig further observed in 2011 that, in general, genetically related languages and families tend to diverge over time: the earlier forms are more similar than modern forms. However, she claims that an analysis of the earliest written records of Mongolic and Turkic languages shows the opposite, suggesting that they do not share a common traceable ancestor, but rather have become more similar through language contact and areal effects.
The prehistory of the peoples speaking the "Altaic" languages is largely unknown. Whereas for certain other language families, such as the speakers of Indo-European, Uralic, and Austronesian, it is possible to frame substantial hypotheses, in the case of the proposed Altaic family much remains to be done.
Some scholars have conjectured a possible Uralic and Altaic homeland in the Central Asian steppes.
According to Juha Janhunen, the ancestral languages of Turkic, Mongolic, Tungusic, Korean, and Japanese were spoken in a relatively small area comprising present-day North Korea, Southern Manchuria, and Southeastern Mongolia. However Janhunen is sceptical about an affiliation of Japanese to Altaic, while András Róna-Tas remarked that a relationship between Altaic and Japanese, if it ever existed, must be more remote than the relationship of any two of the Indo-European languages. Ramsey stated that "the genetic relationship between Korean and Japanese, if it in fact exists, is probably more complex and distant than we can imagine on the basis of our present state of knowledge".
Supporters of the Altaic hypothesis formerly set the date of the Proto-Altaic language at around 4,000 BC, but today at around 5,000 or 6,000 BC. This would make Altaic a language family about as old as Indo-European (4,000 to 7,000 BC according to several hypotheses) but considerably younger than Afroasiatic (c. 10,000 BC, or 11,000 to 16,000 BC according to different sources). | https://en.wikipedia.org/wiki?curid=824 |
Austrian German
Austrian German (), Austrian Standard German (ASG), Standard Austrian German (), or Austrian High German (), is the variety of Standard German written and spoken in Austria. It has the highest sociolinguistic prestige locally, as it is the variety used in the media and for other formal situations. In less formal situations, Austrians tend to use forms closer to or identical with the Bavarian and Alemannic dialects, traditionally spoken – but rarely written – in Austria.
Austrian German has its beginning in the mid-18th century, when Empress Maria Theresa and her son Joseph II introduced compulsory schooling (in 1774) and several reforms of administration in their multilingual Habsburg empire. At the time, the written standard was "Oberdeutsche Schreibsprache", which was highly influenced by the Bavarian and Alemannic dialects of Austria. Another option was to create a new standard based on the Southern German dialects, as proposed by the linguist Johann Siegmund Popowitsch. Instead, they decided for pragmatic reasons to adopt the already standardized Chancellery language of Saxony ("Sächsische Kanzleisprache" or "Meißner Kanzleideutsch"), which was based on the administrative language of the non-Austrian area of Meißen and Dresden.
Thus Standard Austrian German has the same geographic origin as the German Standard German ("Bundesdeutsches Hochdeutsch") and Swiss High German ("Schweizer Hochdeutsch", not to be confused with the Alemannic Swiss German dialects).
The process of introducing the new written standard was led by Joseph von Sonnenfels.
Since 1951 the standardized form of Austrian German for official texts and schools has been defined by the "Austrian Dictionary" (""), published under the authority of the Austrian Federal Ministry of Education, Arts and Culture.
As German is a pluricentric language, Austrian German is one among several varieties of Standard German. Much like the relationship between British English and American English, the German varieties differ in minor respects (e.g., spelling, word usage and grammar) but are recognizably equivalent and largely mutually intelligible.
The official Austrian dictionary, "das Österreichische Wörterbuch", prescribes grammatical and spelling rules defining the official language.
Austrian delegates participated in the international working group that drafted the German spelling reform of 1996—several conferences leading up to the reform were hosted in Vienna at the invitation of the Austrian federal government—and adopted it as a signatory, along with Germany, Switzerland, and Liechtenstein, of an international memorandum of understanding (Wiener Absichtserklärung) signed in Vienna in 1996.
The "sharp s" (ß) is used in Austria, as in Germany.
Because of the German language's pluricentric nature, German dialects in Austria should not be confused with the variety of Standard German spoken by most Austrians, which is distinct from that of Germany or Switzerland.
Distinctions in vocabulary persist, for example, in culinary terms, where communication with Germans is frequently difficult, and administrative and legal language, which is due to Austria's exclusion from the development of a German nation-state in the late 19th century and its manifold particular traditions. A comprehensive collection of Austrian-German legal, administrative and economic terms is offered in "Markhardt, Heidemarie: Wörterbuch der österreichischen Rechts-, Wirtschafts- und Verwaltungsterminologie" (Peter Lang, 2006).
The former standard, used for some 300 years or more in refined speech, was a sociolect spoken by the imperial Habsburg family and the nobility of Austria-Hungary. It differed from other dialects in vocabulary and pronunciation; it appears to have been spoken with a slight degree of nasality. This was not a standard in a modern technical sense, as it was just the social standard of upper-class speech.
For many years, Austria had a special form of the language for official government documents, known as the "Austrian chancellery language". It is a very traditional form of the language, probably derived from medieval deeds and documents, and has a very complex structure and vocabulary generally reserved for such documents. For most speakers (even native speakers), this form of the language is generally difficult to understand, as it contains many highly specialised terms for diplomatic, internal, official, and military matters. There are no regional variations, because this special written form has mainly been used by a government that has now for centuries been based in Vienna.
The Austrian chancellery language is now used less and less, thanks to various administrative reforms that reduced the number of traditional civil servants. As a result, Standard German is replacing it in government and administrative texts.
When Austria became a member of the European Union, 23 food-related terms were listed in its accession agreement as having the same legal status as the equivalent terms used in Germany. Austrian German is the only variety of a pluricentric language recognized under international law or EU primary law.
In Austria, as in the German-speaking parts of Switzerland and in southern Germany, verbs that express a state tend to use "" as the auxiliary verb in the perfect, as well as verbs of movement. Verbs which fall into this category include "sitzen" (to sit), "liegen" (to lie) and, in parts of Carinthia, "schlafen" (to sleep). Therefore, the perfect of these verbs would be "ich bin gesessen", "ich bin gelegen" and "ich bin geschlafen" respectively (note: "ich bin geschlafen" is a rarely used form, more commonly "ich habe geschlafen" is used).
In Germany, the words "stehen" (to stand) and "gestehen" (to confess) are identical in the present perfect: "habe gestanden". The Austrian variant avoids this potential ambiguity ("bin gestanden" from "stehen", "to stand"; and "habe gestanden" from "gestehen", "to confess", e.g. ""der Verbrecher ist vor dem Richter gestanden und hat gestanden"").
In addition, the preterite (simple past) is very rarely used in Austria, especially in the spoken language, with the exception of some modal verbs (i.e. "ich sollte", "ich wollte").
There are many official terms that differ in Austrian German from their usage in most parts of Germany. Words used in Austria are "Jänner" (January) rather than "Januar", "Feber" (seldom, February) along with "Februar", "heuer" (this year) along with "dieses Jahr", "Stiege" (stairs) along with "Treppen", "Rauchfang" (chimney) instead of "Schornstein", many administrative, legal and political terms, and many food terms, including the following:
There are, however, some false friends between the two regional varieties:
In addition to the standard variety, in everyday life most Austrians speak one of a number of Upper German dialects.
While strong forms of the various dialects are not fully intelligible to northern Germans, communication is much easier in Bavaria, especially in rural areas, where the Bavarian dialect still predominates as the mother tongue. The Central Austro-Bavarian dialects are more intelligible to speakers of Standard German than the Southern Austro-Bavarian dialects of Tyrol.
Viennese, the Austro-Bavarian dialect of Vienna, is seen by many in Germany as quintessentially Austrian. The people of Graz, the capital of Styria, speak yet another dialect, which is not very Styrian and is more easily understood by people from other parts of Austria than are other Styrian dialects, such as those of western Styria.
Simple words in the various dialects are very similar, but pronunciation is distinct for each and, after listening to a few spoken words, it may be possible for an Austrian to realise which dialect is being spoken. However, in regard to the dialects of the deeper valleys of the Tyrol, other Tyroleans are often unable to understand them. Speakers from the different states of Austria can easily be distinguished from each other by their particular accents (probably more so than Bavarians), those of Carinthia, Styria, Vienna, Upper Austria, and the Tyrol being very characteristic. Speakers from those regions, even those speaking Standard German, can usually be easily identified by their accent, even by an untrained listener.
Several of the dialects have been influenced by contact with non-Germanic linguistic groups, such as the dialect of Carinthia, where in the past many speakers were bilingual with Slovene, and the dialect of Vienna, which has been influenced by immigration during the Austro-Hungarian period, particularly from what is today Czechia. The German dialects of South Tyrol have been influenced by local Romance languages, particularly noticeable with the many loanwords from Italian and Ladin.
The geographic borderlines between the different accents (isoglosses) coincide strongly with the borders of the states and also with the border with Bavaria, with Bavarians having a markedly different rhythm of speech in spite of the linguistic similarities. | https://en.wikipedia.org/wiki?curid=825 |
Axiom of choice
In mathematics, the axiom of choice, or AC, is an axiom of set theory equivalent to the statement that "a Cartesian product of a collection of non-empty sets is non-empty". Informally put, the axiom of choice says that given any collection of bins, each containing at least one object, it is possible to make a selection of exactly one object from each bin, even if the collection is infinite. Formally, it states that for every indexed family (S_i)_{i ∈ I} of nonempty sets there exists an indexed family (x_i)_{i ∈ I} of elements such that x_i ∈ S_i for every i ∈ I. The axiom of choice was formulated in 1904 by Ernst Zermelo in order to formalize his proof of the well-ordering theorem.
In many cases, such a selection can be made without invoking the axiom of choice; this is in particular the case if the number of sets is finite, or if a selection rule is available – some distinguishing property that happens to hold for exactly one element in each set. An illustrative example is sets picked from the natural numbers. From such sets, one may always select the smallest number: e.g., for sets whose smallest elements are 4, 10, and 1, the selection is {4, 10, 1}. In this case, "select the smallest number" is a choice function. Even if infinitely many sets were collected from the natural numbers, it will always be possible to choose the smallest element from each set to produce a set. That is, the choice function provides the set of chosen elements. However, no choice function is known for the collection of all non-empty subsets of the real numbers (if there are non-constructible reals). In that case, the axiom of choice must be invoked.
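The "select the smallest number" rule can be written down as an explicit choice function, which is exactly why no appeal to the axiom is needed for sets of naturals. A minimal sketch (the example sets are invented for illustration; only their smallest elements matter):

```python
def choice(collection):
    """An explicit choice function for non-empty sets of natural numbers:
    pick the smallest element of each set. The rule itself singles out
    one element per set, so no axiom of choice is invoked."""
    return {frozenset(s): min(s) for s in collection}

# Hypothetical sets whose smallest elements are 4, 10, and 1.
sets = [{4, 9, 23}, {10, 37}, {1, 2, 11}]
chosen = choice(sets)
print(sorted(chosen.values()))  # the chosen elements: [1, 4, 10]
```

For arbitrary non-empty subsets of the reals there is no analogous definable rule (a least element need not exist), which is the situation where the axiom must be invoked.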
Bertrand Russell coined an analogy: for any (even infinite) collection of pairs of shoes, one can pick out the left shoe from each pair to obtain an appropriate selection; this makes it possible to directly define a choice function. For an "infinite" collection of pairs of socks (assumed to have no distinguishing features), there is no obvious way to make a function that selects one sock from each pair, without invoking the axiom of choice.
Although originally controversial, the axiom of choice is now used without reservation by most mathematicians, | https://en.wikipedia.org/wiki?curid=840 |
Attila
Attila (; ), frequently called Attila the Hun, was the ruler of the Huns from 434 until his death in March 453. He was also the leader of a tribal empire consisting of Huns, Ostrogoths, and Alans among others, in Central and Eastern Europe.
During his reign, he was one of the most feared enemies of the Western and Eastern Roman Empires. He crossed the Danube twice and plundered the Balkans, but was unable to take Constantinople. His unsuccessful campaign in Persia was followed in 441 by an invasion of the Eastern Roman (Byzantine) Empire, the success of which emboldened Attila to invade the West. He also attempted to conquer Roman Gaul (modern France), crossing the Rhine in 451 and marching as far as Aurelianum (Orléans) before being stopped in the Battle of the Catalaunian Plains.
He subsequently invaded Italy, devastating the northern provinces, but was unable to take Rome. He planned for further campaigns against the Romans, but died in 453. After Attila's death, his close adviser, Ardaric of the Gepids, led a Germanic revolt against Hunnic rule, after which the Hunnic Empire quickly collapsed.
There is no surviving first-hand account of Attila's appearance, but there is a possible second-hand source provided by Jordanes, who cites a description given by Priscus.
Some scholars have suggested that this description is typically East Asian, since it combines features that fit the physical type of people from Eastern Asia, and that Attila's ancestors may have come from there. Other historians note that similar descriptions were also applied to some Scythian peoples.
Many scholars have argued that the name Attila is of East Germanic origin: "Attila" is formed from the Gothic or Gepidic noun "atta", "father", by means of the diminutive suffix "-ila", meaning "little father". The Gothic etymology was first proposed by Jacob and Wilhelm Grimm in the early 19th century. Maenchen-Helfen notes that this derivation of the name "offers neither phonetic nor semantic difficulties", and Gerhard Doerfer notes that the name is simply correct Gothic. The name has sometimes been interpreted as a Germanization of a name of Hunnic origin.
Other scholars have argued for a Turkic origin of the name. Omeljan Pritsak considered "Ἀττίλα" (Attíla) a composite title-name which derived from Turkic *"es" (great, old), and *"til" (sea, ocean), and the suffix /a/. The stressed back syllabic "til" assimilated the front member "es", so it became *"as". It is a nominative, in form of "attíl-" (< *"etsíl" < *"es tíl") with the meaning "the oceanic, universal ruler". J. J. Mikkola connected it with Turkic "āt" (name, fame). As another Turkic possibility, H. Althof (1902) considered it was related to Turkish "atli" (horseman, cavalier), or Turkish "at" (horse) and "dil" (tongue). Maenchen-Helfen argues that Pritsak's derivation is "ingenious but for many reasons unacceptable", while dismissing Mikkola's as "too farfetched to be taken seriously". M. Snædal similarly notes that none of these proposals has achieved wide acceptance. Criticizing the proposals of finding Turkic or other etymologies for Attila, Doerfer notes that King George VI of England had a name of Greek origin, and Süleyman the Magnificent had a name of Arabic origin, yet that does not make them Greeks or Arabs: it is therefore plausible that Attila would have a name not of Hunnic origin. Historian Hyun Jin Kim, however, has argued that the Turkic etymology is "more probable".
M. Snædal, in a paper that rejects the Germanic derivation but notes the problems with the existing proposed Turkic etymologies, argues that Attila's name could have originated from Turkic-Mongolian "at, adyy/agta" (gelding, warhorse) and Turkish "atli" (horseman, cavalier), meaning "possessor of geldings, provider of warhorses".
The historiography of Attila is faced with a major challenge, in that the only complete sources are written in Greek and Latin by the enemies of the Huns. Attila's contemporaries left many testimonials of his life, but only fragments of these remain. Priscus was a Byzantine diplomat and historian who wrote in Greek, and he was both a witness to and an actor in the story of Attila, as a member of the embassy of Theodosius II at the Hunnic court in 449. He was obviously biased by his political position, but his writing is a major source for information on the life of Attila, and he is the only person known to have recorded a physical description of him. He wrote a history of the late Roman Empire in eight books covering the period from 430 to 476.
Only fragments of Priscus' work remain. It was cited extensively by 6th-century historians Procopius and Jordanes, especially in Jordanes' "The Origin and Deeds of the Goths", which contains numerous references to Priscus's history, and it is also an important source of information about the Hunnic empire and its neighbors. He describes the legacy of Attila and the Hunnic people for a century after Attila's death. Marcellinus Comes, a chancellor of Justinian during the same era, also describes the relations between the Huns and the Eastern Roman Empire.
Numerous ecclesiastical writings contain useful but scattered information, sometimes difficult to authenticate or distorted by years of hand-copying between the 6th and 17th centuries. The Hungarian writers of the 12th century wished to portray the Huns in a positive light as their glorious ancestors, and so repressed certain historical elements and added their own legends.
The literature and knowledge of the Huns themselves was transmitted orally, by means of epics and chanted poems that were handed down from generation to generation. Indirectly, fragments of this oral history have reached us via the literature of the Scandinavians and Germans, neighbors of the Huns who wrote between the 9th and 13th centuries. Attila is a major character in many Medieval epics, such as the Nibelungenlied, as well as various Eddas and sagas.
Archaeological investigation has uncovered some details about the lifestyle, art, and warfare of the Huns. There are a few traces of battles and sieges, but the tomb of Attila and the location of his capital have not yet been found.
The Huns were a group of Eurasian nomads, appearing from east of the Volga, who migrated further into Western Europe c. 370 and built up an enormous empire there. Their main military techniques were mounted archery and javelin throwing. They were in the process of developing settlements before their arrival in Western Europe, yet the Huns were a society of pastoral warriors whose primary form of nourishment was meat and milk, products of their herds.
The origin and language of the Huns has been the subject of debate for centuries. According to some theories, their leaders at least may have spoken a Turkic language, perhaps closest to the modern Chuvash language. One scholar suggests a relationship to Yeniseian. According to the "Encyclopedia of European Peoples", "the Huns, especially those who migrated to the west, may have been a combination of central Asian Turkic, Mongolic, and Ugric stocks".
Attila's father Mundzuk was the brother of kings Octar and Ruga, who reigned jointly over the Hunnic empire in the early fifth century. This form of diarchy was recurrent with the Huns, but historians are unsure whether it was institutionalized, merely customary, or an occasional occurrence. His family was from a noble lineage, but it is uncertain whether they constituted a royal dynasty. Attila's birthdate is debated; journalist Éric Deschodt and writer Herman Schreiber have proposed a date of 395. However, historian Iaroslav Lebedynsky and archaeologist Katalin Escher prefer an estimate between the 390s and the first decade of the fifth century. Several historians have proposed 406 as the date.
Attila grew up in a rapidly changing world. His people were nomads who had only recently arrived in Europe. They crossed the Volga river during the 370s and annexed the territory of the Alans, then attacked the Gothic kingdom between the Carpathian mountains and the Danube. They were a very mobile people, whose mounted archers had acquired a reputation for invincibility, and the Germanic tribes seemed unable to withstand them. Vast populations fleeing the Huns moved from Germania into the Roman Empire in the west and south, and along the banks of the Rhine and Danube. In 376, the Goths crossed the Danube, initially submitting to the Romans but soon rebelling against Emperor Valens, whom they killed in the Battle of Adrianople in 378. Large numbers of Vandals, Alans, Suebi, and Burgundians crossed the Rhine and invaded Roman Gaul on December 31, 406 to escape the Huns. The Roman Empire had been split in half since 395 and was ruled by two distinct governments, one based in Ravenna in the West, and the other in Constantinople in the East. The Roman Emperors, both East and West, were generally from the Theodosian family in Attila's lifetime (despite several power struggles).
The Huns dominated a vast territory with nebulous borders determined by the will of a constellation of ethnically varied peoples. Some were assimilated to Hunnic nationality, whereas many retained their own identities and rulers but acknowledged the suzerainty of the king of the Huns. The Huns were also the indirect source of many of the Romans' problems, driving various Germanic tribes into Roman territory, yet relations between the two empires were cordial: the Romans used the Huns as mercenaries against the Germans and even in their civil wars. Thus, the usurper Joannes was able to recruit thousands of Huns for his army against Valentinian III in 424. It was Aëtius, later Patrician of the West, who managed this operation. They exchanged ambassadors and hostages, the alliance lasting from 401 to 450 and permitting the Romans numerous military victories. The Huns considered the Romans to be paying them tribute, whereas the Romans preferred to view this as payment for services rendered. The Huns had become a great power by the time that Attila came of age during the reign of his uncle Ruga, to the point that Nestorius, the Patriarch of Constantinople, deplored the situation with these words: "They have become both masters and slaves of the Romans".
The death of Rugila (also known as Rua or Ruga) in 434 left the sons of his brother Mundzuk, Attila and Bleda, in control of the united Hun tribes. At the time of the two brothers' accession, the Hun tribes were bargaining with Eastern Roman Emperor Theodosius II's envoys for the return of several renegades who had taken refuge within the Eastern Roman Empire, possibly Hunnic nobles who disagreed with the brothers' assumption of leadership.
The following year, Attila and Bleda met with the imperial legation at Margus (Požarevac), all seated on horseback in the Hunnic manner, and negotiated an advantageous treaty. The Romans agreed to return the fugitives, to double their previous tribute of 350 Roman pounds (c. 115 kg) of gold, to open their markets to Hunnish traders, and to pay a ransom of eight "solidi" for each Roman taken prisoner by the Huns. The Huns, satisfied with the treaty, decamped from the Roman Empire and returned to their home in the Great Hungarian Plain, perhaps to consolidate and strengthen their empire. Theodosius used this opportunity to strengthen the walls of Constantinople, building the city's first sea wall, and to build up his border defenses along the Danube.
The Huns remained out of Roman sight for the next few years while they invaded the Sassanid Empire. They were defeated in Armenia by the Sassanids, abandoned their invasion, and turned their attentions back to Europe. In 440, they reappeared in force on the borders of the Roman Empire, attacking the merchants at the market on the north bank of the Danube that had been established by the treaty of 435.
Crossing the Danube, they laid waste to the cities of Illyricum and forts on the river, including (according to Priscus) Viminacium, a city of Moesia. Their advance began at Margus, where they demanded that the Romans turn over a bishop who had retained property that Attila regarded as his. While the Romans discussed the bishop's fate, he slipped away secretly to the Huns and betrayed the city to them.
While the Huns attacked city-states along the Danube, the Vandals (led by Geiseric) captured the Western Roman province of Africa and its capital of Carthage. Carthage was the richest province of the Western Empire and a main source of food for Rome. The Sassanid Shah Yazdegerd II invaded Armenia in 441.
The Romans stripped the Balkan area of forces, sending them to Sicily in order to mount an expedition against the Vandals in Africa. This left Attila and Bleda a clear path through Illyricum into the Balkans, which they invaded in 441. The Hunnish army sacked Margus and Viminacium, and then took Singidunum (Belgrade) and Sirmium. During 442, Theodosius recalled his troops from Sicily and ordered a large issue of new coins to finance operations against the Huns. He believed that he could defeat the Huns and refused the Hunnish kings' demands.
Attila responded with a campaign in 443. For the first time (as far as the Romans knew) his forces were equipped with battering rams and rolling siege towers, with which they successfully assaulted the military centers of Ratiaria and Naissus (Niš) and massacred the inhabitants. Priscus said "When we arrived at Naissus we found the city deserted, as though it had been sacked; only a few sick persons lay in the churches. We halted at a short distance from the river, in an open space, for all the ground adjacent to the bank was full of the bones of men slain in war."
Advancing along the Nišava River, the Huns next took Serdica (Sofia), Philippopolis (Plovdiv), and Arcadiopolis (Lüleburgaz). They encountered and destroyed a Roman army outside Constantinople but were stopped by the double walls of the Eastern capital. They defeated a second army near Callipolis (Gelibolu).
Theodosius, unable to make effective armed resistance, admitted defeat, sending the "Magister militum per Orientem" Anatolius to negotiate peace terms. The terms were harsher than the previous treaty: the Emperor agreed to hand over 6,000 Roman pounds (c. 2000 kg) of gold as punishment for having disobeyed the terms of the treaty during the invasion; the yearly tribute was tripled, rising to 2,100 Roman pounds (c. 700 kg) in gold; and the ransom for each Roman prisoner rose to 12 "solidi".
Their demands were met for a time, and the Hun kings withdrew into the interior of their empire. Bleda died following the Huns' withdrawal from Byzantium (probably around 445). Attila then took the throne for himself, becoming the sole ruler of the Huns.
In 447, Attila again rode south into the Eastern Roman Empire through Moesia. The Roman army, under Gothic "magister militum" Arnegisclus, met him in the Battle of the Utus and was defeated, though not without inflicting heavy losses. The Huns were left unopposed and rampaged through the Balkans as far as Thermopylae.
Constantinople itself was saved by the Isaurian troops of "magister militum per Orientem" Zeno and protected by the intervention of prefect Constantinus, who organized the reconstruction of the walls that had previously been damaged by earthquakes and, in some places, the construction of a new line of fortification in front of the old. Callinicus, in his "Life of Saint Hypatius", wrote:
In 450, Attila proclaimed his intent to attack the Visigoth kingdom of Toulouse by making an alliance with Emperor Valentinian III. He had previously been on good terms with the Western Roman Empire and its influential general Flavius Aëtius. Aëtius had spent a brief exile among the Huns in 433, and the troops that Attila provided against the Goths and Bagaudae had helped earn him the largely honorary title of "magister militum" in the west. The gifts and diplomatic efforts of Geiseric, who opposed and feared the Visigoths, may also have influenced Attila's plans.
Valentinian's sister Honoria, however, had sent the Hunnish king a plea for help—and her engagement ring—in the spring of 450, in order to escape her forced betrothal to a Roman senator. Honoria may not have intended a proposal of marriage, but Attila chose to interpret her message as such. He accepted, asking for half of the western Empire as dowry.
When Valentinian discovered the plan, only the influence of his mother Galla Placidia convinced him to exile Honoria, rather than killing her. He also wrote to Attila, strenuously denying the legitimacy of the supposed marriage proposal. Attila sent an emissary to Ravenna to proclaim that Honoria was innocent, that the proposal had been legitimate, and that he would come to claim what was rightfully his.
Attila intervened in a succession struggle after the death of a Frankish ruler, supporting the elder son while Aëtius supported the younger. (The location and identity of these kings are not known and subject to conjecture.) Attila gathered his vassals—Gepids, Ostrogoths, Rugians, Scirians, Heruls, Thuringians, Alans, Burgundians, among others—and began his march west. In 451, he arrived in Belgica with an army exaggerated by Jordanes to half a million strong.
On April 7, he captured Metz. The other cities attacked can be identified from the hagiographic "vitae" written to commemorate their bishops: Nicasius was slaughtered before the altar of his church in Rheims; Servatus is alleged to have saved Tongeren with his prayers, as Saint Genevieve is said to have saved Paris. Lupus, bishop of Troyes, is also credited with saving his city by meeting Attila in person.
Aëtius moved to oppose Attila, gathering troops from among the Franks, the Burgundians, and the Celts. A mission by Avitus and Attila's continued westward advance convinced the Visigoth king Theodoric I (Theodorid) to ally with the Romans. The combined armies reached Orléans ahead of Attila, thus checking and turning back the Hunnish advance. Aëtius gave chase and caught the Huns at a place usually assumed to be near Catalaunum (modern Châlons-en-Champagne). Attila decided to fight the Romans on plains where he could use his cavalry.
The two armies clashed in the Battle of the Catalaunian Plains, the outcome of which is commonly considered to be a strategic victory for the Visigothic-Roman alliance. Theodoric was killed in the fighting, and Aëtius failed to press his advantage, according to Edward Gibbon and Edward Creasy, because he feared the consequences of an overwhelming Visigothic triumph as much as he did a defeat. From Aëtius' point of view, the best outcome was what occurred: Theodoric died, Attila was in retreat and disarray, and the Romans had the benefit of appearing victorious.
Attila returned in 452 to renew his marriage claim with Honoria, invading and ravaging Italy along the way. Communities became established in what would later become Venice as a result of these attacks when the residents fled to small islands in the Venetian Lagoon. His army sacked numerous cities and razed Aquileia so completely that it was afterwards hard to recognize its original site. Aëtius lacked the strength to offer battle, but managed to harass and slow Attila's advance with only a shadow force. Attila finally halted at the River Po. By this point, disease and starvation may have taken hold in Attila's camp, thus hindering his war efforts and potentially contributing to the cessation of invasion.
Emperor Valentinian III sent three envoys, the high civilian officers Gennadius Avienus and Trigetius, as well as the Bishop of Rome Leo I, who met Attila at Mincio in the vicinity of Mantua and obtained from him the promise that he would withdraw from Italy and negotiate peace with the Emperor. Prosper of Aquitaine gives a short description of the historic meeting, but gives all the credit to Leo for the successful negotiation. Priscus reports that superstitious fear of the fate of Alaric gave Attila pause, as Alaric had died shortly after sacking Rome in 410.
Italy had suffered from a terrible famine in 451, and its crops fared little better in 452. Attila's devastating invasion of the plains of northern Italy that year did not improve the harvest. To advance on Rome would have required supplies that were not available in Italy, and taking the city would not have improved Attila's supply situation. It was therefore more profitable for Attila to conclude peace and retreat to his homeland.
Furthermore, an East Roman force had crossed the Danube under the command of another officer also named Aetius—who had participated in the Council of Chalcedon the previous year—and proceeded to defeat the Huns who had been left behind by Attila to safeguard their home territories. Attila, hence, faced heavy human and natural pressures to retire "from Italy without ever setting foot south of the Po". As Hydatius writes in his "Chronica Minora":
Marcian was the successor of Theodosius, and he had ceased paying tribute to the Huns in late 450 while Attila was occupied in the west. Multiple invasions by the Huns and others had left the Balkans with little to plunder.
After Attila left Italy and returned to his palace across the Danube, he planned to strike at Constantinople again and reclaim the tribute which Marcian had stopped. However, he died in the early months of 453.
The conventional account from Priscus says that Attila was at a feast celebrating his latest marriage, this time to the beautiful young Ildico (the name suggests Gothic or Ostrogoth origins). In the midst of the revels, however, he suffered a severe nosebleed and choked to death in a stupor. An alternative theory is that he succumbed to internal bleeding after heavy drinking, possibly caused by a condition called esophageal varices, in which dilated veins in the lower part of the esophagus rupture, leading to death by hemorrhage.
Another account of his death was first recorded 80 years after the events by Roman chronicler Marcellinus Comes. It reports that "Attila, King of the Huns and ravager of the provinces of Europe, was pierced by the hand and blade of his wife". Most scholars reject these accounts as no more than hearsay, preferring instead the account given by Attila's contemporary Priscus. Priscus' version, however, has recently come under renewed scrutiny by Michael A. Babcock. Based on detailed philological analysis, Babcock concludes that the account of natural death given by Priscus was an ecclesiastical "cover story", and that Emperor Marcian (who ruled the Eastern Roman Empire from 450 to 457) was the political force behind Attila's death. Jordanes recounts:
Attila's sons Ellac, Dengizich and Ernak, "in their rash eagerness to rule they all alike destroyed his empire". They "were clamoring that the nations should be divided among them equally and that warlike kings with their peoples should be apportioned to them by lot like a family estate". Against such treatment as "slaves of the basest condition", a Germanic alliance led by the Gepid ruler Ardaric (who was noted for his great loyalty to Attila) revolted and fought the Huns in Pannonia at the Battle of Nedao in 454 AD. Attila's eldest son Ellac was killed in that battle. Attila's sons, "regarding the Goths as deserters from their rule, came against them as though they were seeking fugitive slaves", and attacked the Ostrogothic co-ruler Valamir (who had fought alongside Ardaric and Attila at the Catalaunian Plains), but were repelled, and a group of Huns moved to Scythia (probably those of Ernak). Ellac's brother Dengizich attempted a renewed invasion across the Danube in 468 AD, but was defeated at the Battle of Bassianae by the Ostrogoths. Dengizich was killed by the Roman-Gothic general Anagast the following year, after which the Hunnic dominion ended.
Attila's many children and relatives are known by name and some even by deeds, but soon valid genealogical sources all but dried up, and there seems to be no verifiable way to trace Attila's descendants. This has not stopped many genealogists from attempting to reconstruct a valid line of descent for various medieval rulers. One of the most credible claims has been that of the "Nominalia of the Bulgarian khans" for mythological Avitohol and Irnik from the Dulo clan of the Bulgars.
Attila himself is said to have claimed the titles "Descendant of the Great Nimrod", and "King of the Huns, the Goths, the Danes, and the Medes"—the last two peoples being mentioned to show the extent of his control over subject nations even on the peripheries of his domain.
Jordanes embellished the report of Priscus, reporting that Attila had possessed the "Holy War Sword of the Scythians", which was given to him by Mars and made him a "prince of the entire world".
By the end of the 12th century the royal court of Hungary proclaimed their descent from Attila. Lampert of Hersfeld's contemporary chronicles report that shortly before the year 1071, the Sword of Attila had been presented to Otto of Nordheim by the exiled queen of Hungary, Anastasia of Kiev. This sword, a cavalry sabre now in the Kunsthistorisches Museum in Vienna, appears to be the work of Hungarian goldsmiths of the ninth or tenth century.
An anonymous chronicler of the medieval period represented the meeting of Pope Leo and Attila as attended also by Saint Peter and Saint Paul, "a miraculous tale calculated to meet the taste of the time". This apotheosis was later portrayed artistically by the Renaissance artist Raphael and the sculptor Algardi, whom eighteenth-century historian Edward Gibbon praised for establishing "one of the noblest legends of ecclesiastical tradition".
According to a version of this narrative related in the Chronicon Pictum, a mediaeval Hungarian chronicle, the Pope promised Attila that if he left Rome in peace, one of his successors would receive a holy crown (which has been understood as referring to the Holy Crown of Hungary).
Some histories and chronicles describe him as a great and noble king, and he plays major roles in three Norse sagas: "Atlakviða", "Volsunga saga", and "Atlamál". The "Polish Chronicle" represents Attila's name as "Aquila".
Frutolf of Michelsberg and Otto of Freising dismissed as "vulgar fables" some songs that made Theoderic the Great, Attila and Ermanaric contemporaries, when any reader of Jordanes knew that this was not the case. This refers to the so-called historical poems about Dietrich von Bern (Theoderic), in which Etzel (Attila) is Dietrich's refuge in exile from his wicked uncle Ermenrich (Ermanaric). Etzel is most prominent in the poems "Dietrichs Flucht" and the "Rabenschlacht". Etzel also appears as Kriemhild's second noble husband in the "Nibelungenlied", in which Kriemhild causes the destruction of both the Hunnish kingdom and that of her Burgundian relatives.
In 1812, Ludwig van Beethoven conceived the idea of writing an opera about Attila and approached August von Kotzebue to write the libretto. It was, however, never written. In 1846, Giuseppe Verdi wrote the opera "Attila", loosely based on episodes from Attila's invasion of Italy.
In World War I, Allied propaganda referred to Germans as the "Huns", based on a 1900 speech by Emperor Wilhelm II praising Attila the Hun's military prowess, according to Jawaharlal Nehru's "Glimpses of World History". "Der Spiegel" commented on November 6, 1948, that the Sword of Attila was hanging menacingly over Austria.
American writer Cecelia Holland wrote "The Death of Attila" (1973), a historical novel in which Attila appears as a powerful background figure whose life and death deeply impact the protagonists, a young Hunnic warrior and a Germanic one.
The name has many variants in several languages: Atli and Atle in Old Norse; Etzel in Middle High German (Nibelungenlied); Ætla in Old English; Attila, Atilla, and Etele in Hungarian (Attila is the most popular); Attila, Atilla, Atilay, or Atila in Turkish; and Adil and Edil in Kazakh or Adil ("same/similar") or Edil ("to use") in Mongolian.
In modern Hungary and in Turkey, "Attila" and its Turkish variation "Atilla" are commonly used as a male first name. In Hungary, several public places are named after Attila; for instance, in Budapest there are 10 Attila Streets, one of which is an important street behind the Buda Castle. When the Turkish Armed Forces invaded Cyprus in 1974, the operations were named after Attila ("The Attila Plan").
The 1954 Universal International film "Sign of the Pagan" starred Jack Palance as Attila.
Aegean Sea
The Aegean Sea is an elongated embayment of the Mediterranean Sea located between the Greek and Anatolian peninsulas. The sea has an area of some 215,000 square kilometres. In the north, the Aegean is connected to the Marmara Sea and the Black Sea by the straits of the Dardanelles and Bosphorus. The Aegean Islands are located within the sea and some bound it on its southern periphery, including Crete and Rhodes. The sea reaches a maximum depth of 3,544 meters, to the east of Crete.
The Aegean Islands can be divided into several island groups, including the Dodecanese, the Cyclades, the Sporades, the Saronic Islands and the North Aegean Islands, as well as Crete and its surrounding islands. The Dodecanese, located to the southeast, includes the islands of Rhodes, Kos, and Patmos; the islands of Delos and Naxos are within the Cyclades to the south of the sea. Lesbos is part of the North Aegean Islands. Euboea, the second largest island in Greece, is located in the Aegean, despite being administered as part of Central Greece. Nine of the twelve Administrative regions of Greece border the sea, along with the Turkish provinces of Edirne, Çanakkale, Balıkesir, İzmir, Aydın and Muğla to the east of the sea. Turkish islands in the sea include Imbros, Tenedos, Cunda Island, and the Foça Islands.
The Aegean Sea has been historically important, especially with regard to the civilization of Ancient Greece, whose people inhabited the area around the coast of the Aegean and the Aegean islands. The Aegean islands facilitated contact between the people of the area and between Europe and Asia. Along with the Greeks, Thracians lived along the northern coast. The Romans conquered the area under the Roman Empire, and later the Byzantine Empire held it against advances by the First Bulgarian Empire. The Fourth Crusade weakened Byzantine control of the area, and it was eventually conquered by the Ottoman Empire, with the exception of Crete, which was a Venetian colony until 1669. The Greek War of Independence allowed a Greek state on the coast of the Aegean from 1829 onwards. The Ottoman Empire held a presence over the sea for over 500 years, until it was replaced by modern Turkey.
The rocks making up the floor of the Aegean are mainly limestone, though often greatly altered by volcanic activity that has convulsed the region in relatively recent geologic times. Of particular interest are the richly coloured sediments in the region of the islands of Santorini and Milos, in the south Aegean. Notable cities on the Aegean coastline include Athens, Thessaloniki, Volos, Kavala and Heraklion in Greece, and İzmir and Bodrum in Turkey.
A number of issues concerning sovereignty within the Aegean Sea are disputed between Greece and Turkey. The Aegean dispute has had a large effect on Greek-Turkish relations since the 1970s. Issues include the delimitation of territorial waters, national airspace, exclusive economic zones and flight information regions.
Late Latin authors referred the name "Aegaeus" to Aegeus, who was said to have jumped into that sea (instead of jumping from the Athenian acropolis, as told by some Greek authors). He was the father of Theseus, the mythical king and founder-hero of Athens. Aegeus had told Theseus to put up white sails when returning if he was successful in killing the Minotaur. When Theseus returned, he forgot these instructions, and Aegeus, thinking his son had died, drowned himself in the sea.
The sea was known in Latin as "Aegaeum mare" under the control of the Roman Empire. The Venetians, who ruled many Greek islands in the High and Late Middle Ages, popularized the name "Archipelago" (Greek: αρχιπέλαγος, meaning "main sea" or "chief sea"), a name that held on in many European countries until the early modern period. In the South Slavic languages, the Aegean is known as the "White Sea". The Turkish name for the sea is "Ege Denizi," derived from the Greek name.
The Aegean Sea is an elongated embayment of the Mediterranean Sea, covering about 215,000 square kilometres in area. The sea's maximum depth is 3,544 metres, located at a point east of Crete. The Aegean Islands are found within its waters, with the following islands delimiting the sea on the south, generally from west to east: Kythera, Antikythera, Crete, Kasos, Karpathos and Rhodes. The Anatolian peninsula marks the eastern boundary of the sea, while the Greek mainland marks the west. Several seas are contained within the Aegean Sea; the Thracian Sea is a section of the Aegean located to the north, the Icarian Sea to the east, the Myrtoan Sea to the west, while the Sea of Crete is the southern section.
The Greek regions that border the sea, in alphabetical order, are Attica, Central Greece, Central Macedonia, Crete, Eastern Macedonia and Thrace, North Aegean, Peloponnese, South Aegean, and Thessaly. The historical region of Macedonia also borders the sea, to the north.
The Aegean Islands, which almost all belong to Greece, can be divided into seven groups:
Many of the Aegean islands, or chains of islands, are geographically extensions of the mountains on the mainland. One chain extends across the sea to Chios, another extends across Euboea to Samos, and a third extends across the Peloponnese and Crete to Rhodes, dividing the Aegean from the Mediterranean.
The bays and gulfs of the Aegean, beginning in the south and moving clockwise, include, on Crete, the Mirabello, Almyros, Souda and Chania bays or gulfs; on the mainland, the Myrtoan Sea to the west with the Argolic Gulf; the Saronic Gulf northwestward; the Petalies Gulf, which connects with the South Euboic Sea; the Pagasetic Gulf, which connects with the North Euboic Sea; the Thermaic Gulf northwestward; the Chalkidiki Peninsula, including the Cassandra and Singitic Gulfs; and, northward, the Strymonian Gulf and the Gulf of Kavala. The rest are in Turkey: the Saros Gulf, Edremit Gulf, Dikili Gulf, Gulf of Çandarlı, Gulf of İzmir, Gulf of Kuşadası, Gulf of Gökova and Güllük Gulf.
The Aegean Sea is connected to the Sea of Marmara by the Dardanelles, also known from Classical Antiquity as the Hellespont. The Dardanelles are located to the northeast of the sea. It ultimately connects with the Black Sea through the Bosphorus strait, upon which lies the city of Istanbul. The Dardanelles and the Bosphorus are known as the Turkish Straits.
According to the International Hydrographic Organization, the limits of the Aegean Sea are as follows:
Aegean surface water circulates in a counterclockwise gyre, with hypersaline Mediterranean water moving northward along the west coast of Turkey before being displaced by less dense Black Sea outflow. The dense Mediterranean water sinks below the Black Sea inflow, then flows through the Dardanelles Strait and into the Sea of Marmara. The Black Sea outflow moves westward along the northern Aegean Sea, then flows southward along the east coast of Greece.
The physical oceanography of the Aegean Sea is controlled mainly by the regional climate, the fresh water discharge from major rivers draining southeastern Europe, and the seasonal variations in the Black Sea surface water outflow through the Dardanelles Strait.
Analysis of the Aegean during 1991 and 1992 revealed three distinct water masses:
The climate of the Aegean Sea largely reflects the climate of Greece and Western Turkey, which is to say, predominantly Mediterranean. According to the Köppen climate classification, most of the Aegean is classified as Hot-summer Mediterranean ("Csa"), with hotter and drier summers along with milder and wetter winters. However, high temperatures during summers are generally not quite as high as those in arid or semiarid climates due to the presence of a large body of water. This is most predominant on the west and east coasts of the Aegean, and within the Aegean islands. In the north of the Aegean Sea, the climate is instead classified as Cold semi-arid ("BSk"), which features cooler summers than Hot-summer Mediterranean climates.
The Etesian winds are a dominant weather influence in the Aegean Basin.
The table below lists climate conditions of some major Aegean cities:
Numerous Greek and Turkish settlements are located along their mainland coasts, as well as in towns on the Aegean islands. The largest cities are Athens and Thessaloniki in Greece and İzmir in Turkey. The most populated of the Aegean islands is Crete, followed by Euboea and Rhodes.
Greece has established several marine protected areas along its coasts. According to the Network of Managers of Marine Protected Areas in the Mediterranean (MedPAN), four Greek MPAs are participating in the Network. These include the Alonnisos Marine Park; the Missolonghi–Aitoliko Lagoons and the island of Zakynthos are not on the Aegean.
The current coastline dates back to about 4000 BC. Before that time, at the peak of the last ice age (about 18,000 years ago) sea levels everywhere were 130 metres lower, and there were large well-watered coastal plains instead of much of the northern Aegean. When they were first occupied, the present-day islands including Milos with its important obsidian production were probably still connected to the mainland. The present coastal arrangement appeared around 9,000 years ago, with post-ice age sea levels continuing to rise for another 3,000 years after that.
The subsequent Bronze Age civilizations of Greece and the Aegean Sea have given rise to the general term "Aegean civilization". In ancient times, the sea was the birthplace of two ancient civilizations – the Minoans of Crete and the Mycenaeans of the Peloponnese.
The Minoan civilization was a Bronze Age civilization on the island of Crete and other Aegean islands, flourishing from around 2700 to 1450 BC before a period of decline, finally ending at around 1100 BC. It represented the first advanced civilization in Europe, leaving behind massive building complexes, tools, stunning artwork, writing systems, and a massive network of trade. The Minoan period saw extensive trade between Crete, Aegean, and Mediterranean settlements, particularly the Near East. The most notable Minoan palace is that of Knossos, followed by that of Phaistos.
After the decline of the Minoan civilization, the Mycenaean Greeks arose, becoming the first advanced civilization in mainland Greece, which lasted from approximately 1600 to 1100 BC. It is believed that the site of Mycenae, which sits close to the Aegean coast, was the center of Mycenaean civilization. The Mycenaeans introduced several innovations in the fields of engineering, architecture and military infrastructure, while trade over vast areas of the Mediterranean, including the Aegean, was essential for the Mycenaean economy. Their syllabic script, Linear B, offers the first written records of the Greek language, and their religion already included several deities that can also be found in the Olympic Pantheon. Mycenaean Greece was dominated by a warrior elite society and consisted of a network of palace-centered states that developed rigid hierarchical, political, social and economic systems. At the head of this society was the king, known as "wanax".
The civilization of the Mycenaean Greeks perished with the collapse of Bronze Age culture in the eastern Mediterranean, to be followed by the so-called Greek Dark Ages. It is undetermined what caused the collapse of the Mycenaeans. During the Greek Dark Ages, writing in the Linear B script ceased, vital trade links were lost, and towns and villages were abandoned.
The Archaic period followed the Greek Dark Ages in the 8th century BC. Greece became divided into small self-governing communities, and adopted the Phoenician alphabet, modifying it to create the Greek alphabet. By the 6th century BC several cities had emerged as dominant in Greek affairs: Athens, Sparta, Corinth, and Thebes, of which Athens, Sparta, and Corinth were closest to the Aegean Sea. Each of them had brought the surrounding rural areas and smaller towns under their control, and Athens and Corinth had become major maritime and mercantile powers as well. In the 8th and 7th centuries BC many Greeks emigrated to form colonies in Magna Graecia (Southern Italy and Sicily), Asia Minor and further afield.
The Aegean Sea would later come to be under the control, albeit briefly, of the Kingdom of Macedonia. Philip II and his son Alexander the Great led a series of conquests that led not only to the unification of the Greek mainland and the control of the Aegean Sea under his rule, but also the destruction of the Achaemenid Empire. After Alexander the Great's death, his empire was divided among his generals. Cassander became king of the Hellenistic kingdom of Macedon, which held territory along the western coast of the Aegean, roughly corresponding to modern-day Greece. The Kingdom of Lysimachus had control over the sea's eastern coast. Greece had entered the Hellenistic period.
The Macedonian Wars were a series of conflicts fought by the Roman Republic and its Greek allies in the eastern Mediterranean against several different major Greek kingdoms. They resulted in Roman control or influence over the eastern Mediterranean basin, including the Aegean, in addition to their hegemony in the western Mediterranean after the Punic Wars. During Roman rule, the land around the Aegean Sea fell under the provinces of Achaea, Macedonia, Thracia, Asia and Creta et Cyrenaica (the island of Crete).
The Fall of the Western Roman Empire allowed its successor state, the Byzantine Empire, to continue Roman control over the Aegean Sea. However, their territory would later be threatened by the Early Muslim conquests initiated by Muhammad in the 7th century. Although the Rashidun Caliphate did not manage to obtain land along the coast of the Aegean Sea, its conquest of the eastern Anatolian peninsula as well as Egypt, the Levant, and North Africa left the Byzantine Empire weakened. The Umayyad Caliphate expanded the territorial gains of the Rashidun Caliphate, conquering much of North Africa, and threatened the Byzantine Empire's control of Western Anatolia, where it meets the Aegean Sea.
During the 820s, Crete was conquered by a group of Andalusian exiles led by Abu Hafs Umar al-Iqritishi, and it became an independent Islamic state. The Byzantine Empire launched a campaign that took most of the island back in 842 and 843 under Theoktistos, but the reconquest was not completed and was soon reversed. Later attempts by the Byzantine Empire to recover the island were without success. For the approximately 135 years of its existence, the emirate of Crete was one of the major foes of Byzantium. Crete commanded the sea lanes of the Eastern Mediterranean and functioned as a forward base and haven for Muslim corsair fleets that ravaged the Byzantine-controlled shores of the Aegean Sea. Crete returned to Byzantine rule under Nikephoros Phokas, who launched a huge campaign against the Emirate of Crete in 960 to 961.
Meanwhile, the Bulgarian Empire threatened Byzantine control of Northern Greece and the Aegean coast to the south. Under Presian I and his successor Boris I, the Bulgarian Empire managed to obtain a small portion of the northern Aegean coast. Simeon I of Bulgaria led Bulgaria to its greatest territorial expansion, conquering much of the northern and western coasts of the Aegean. The Byzantines later regained control. The Second Bulgarian Empire achieved similar success along the same coasts under Ivan Asen II of Bulgaria.
The Seljuq Turks, under the Seljuk Empire, invaded the Byzantine Empire in 1068 and annexed almost all of Anatolia, including the east coast of the Aegean Sea, during the reign of Alp Arslan, the second Sultan of the Seljuk Empire. After the death of his successor, Malik Shah I, the empire was divided; Malik Shah was succeeded in Anatolia by Kilij Arslan I, who founded the Sultanate of Rum. The Byzantines yet again recaptured the eastern coast of the Aegean.
After Constantinople was occupied by Western European and Venetian forces during the Fourth Crusade, the area around the Aegean Sea was fragmented into multiple entities, including the Latin Empire, the Kingdom of Thessalonica, the Empire of Nicaea, the Principality of Achaea, and the Duchy of Athens. The Venetians created the maritime state of the Duchy of the Archipelago, which included all the Cyclades except Mykonos and Tinos. The Empire of Nicaea, a Byzantine rump state, managed to recapture Constantinople from the Latins in 1261 and defeat Epirus. Byzantine successes were not to last; the Ottomans would conquer the area around the Aegean coast, though before their expansion the Byzantine Empire had already been weakened by internal conflict. By the late 14th century the Byzantine Empire had lost all control of the coast of the Aegean Sea and could exercise power only around their capital, Constantinople. The Ottoman Empire then gained control of all the Aegean coast with the exception of Crete, which was a Venetian colony until 1669.
The Greek War of Independence allowed the establishment of a Greek state on the coast of the Aegean from 1829 onward. The Ottoman Empire had maintained a presence on the sea for over 500 years until its dissolution following World War I, when it was replaced by modern Turkey. During the war, Greece gained control over the area around the northern coast of the Aegean. By the 1930s, Greece and Turkey had largely assumed their present-day borders.
In the Italo-Turkish War of 1912, Italy captured the Dodecanese islands and occupied them thereafter, reneging on the 1919 Venizelos–Tittoni agreement to cede them to Greece. The Greco-Italian War took place from October 1940 to April 1941 as part of the Balkans Campaign of World War II. The Italian war aim was to establish a Greek puppet state, which would permit the Italian annexation of the Sporades and the Cyclades islands in the Aegean Sea, to be administered as a part of the Italian Aegean Islands. The German invasion resulted in the Axis occupation of Greece. The German troops evacuated Athens on 12 October 1944, and by the end of the month, they had withdrawn from mainland Greece. Greece was then liberated by Allied troops.
Many of the islands in the Aegean have safe harbours and bays. In ancient times, navigation through the sea was easier than travelling across the rough terrain of the Greek mainland, and to some extent, the coastal areas of Anatolia. Many of the islands are volcanic, and marble and iron are mined on other islands. The larger islands have some fertile valleys and plains.
Of the main islands in the Aegean Sea, two belong to Turkey – Bozcaada (Tenedos) and Gökçeada (Imbros); the rest belong to Greece. Between the two countries, there are political disputes over several aspects of political control over the Aegean space, including the size of territorial waters, air control and the delimitation of economic rights to the continental shelf. These issues are known as the Aegean dispute.
Multiple ports are located along the Greek and Turkish coasts of the Aegean Sea. The port of Piraeus in Athens is the chief port in Greece, the largest passenger port in Europe and the third largest in the world, servicing about 20 million passengers annually. With a throughput of 1.4 million TEUs, Piraeus is placed among the top ten ports in container traffic in Europe and the top container port in the Eastern Mediterranean. Piraeus is also the commercial hub of Greek shipping. Piraeus bi-annually acts as the focus for a major shipping convention, known as Posidonia, which attracts maritime industry professionals from all over the world. Piraeus is currently Greece's third-busiest port in terms of tons of goods transported, behind Aghioi Theodoroi and Thessaloniki. The central port serves ferry routes to almost every island in the eastern portion of Greece, the island of Crete, the Cyclades, the Dodecanese, and much of the northern and the eastern Aegean Sea, while the western part of the port is used for cargo services.
As of 2007, the Port of Thessaloniki was the second-largest container port in Greece after the port of Piraeus, making it one of the busiest ports in Greece. In 2007, the Port of Thessaloniki handled 14,373,245 tonnes of cargo and 222,824 TEUs. Paloukia, on the island of Salamis, is a major passenger port.
Fish are Greece's second largest agricultural export, and Greece has Europe's largest fishing fleet. Fish captured include sardines, mackerel, grouper, grey mullets, sea bass, and seabream. There is a considerable difference between fish catches between the pelagic and demersal zones; with respect to pelagic fisheries, the catches from the northern, central and southern Aegean area groupings are dominated, respectively, by anchovy, horse mackerels, and boops. For demersal fisheries, the catches from the northern and southern Aegean area groupings are dominated by grey mullets and pickerel ("Spicara smaris") respectively.
The industry has been impacted by the Great Recession. Overfishing and habitat destruction are also concerns, threatening grouper and seabream populations and resulting in perhaps a 50% decline of fish catch. To address these concerns, Greek fishermen have been offered compensation by the government. Although some species are defined as protected or threatened under EU legislation, illegally harvested species such as the molluscs "Pinna nobilis", "Charonia tritonis" and "Lithophaga lithophaga" can be bought in restaurants and fish markets around Greece.
The Aegean islands within the Aegean Sea are significant tourist destinations. Tourism to the Aegean islands contributes a significant portion of tourism in Greece, especially since the second half of the 20th century. A total of five UNESCO World Heritage sites are located in the Aegean Islands; these include the Monastery of Saint John the Theologian and the Cave of the Apocalypse on Patmos, the Pythagoreion and Heraion of Samos in Samos, the Nea Moni of Chios, the island of Delos, and the Medieval City of Rhodes.
Greece is one of the most visited countries in Europe and the world, with over 33 million visitors in 2018, and the tourism industry accounts for around a quarter of Greece's Gross Domestic Product. The islands of Santorini, Crete, Lesbos, Delos, and Mykonos are common tourist destinations. An estimated 2 million tourists visit Santorini annually. However, concerns relating to overtourism have arisen in recent years, such as issues of inadequate infrastructure and overcrowding. Alongside Greece, Turkey has also been successful in developing resort areas and attracting large numbers of tourists, contributing to tourism in Turkey. The phrase "Blue Cruise" refers to recreational voyages along the Turkish Riviera, including across the Aegean. The ancient city of Troy, a World Heritage Site, is on the Turkish coast of the Aegean.
Greece and Turkey both take part in the Blue Flag beach certification programme of the Foundation for Environmental Education. The certification is awarded for beaches and marinas meeting strict quality standards including environmental protection, water quality, safety and services criteria. As of 2015, the Blue Flag has been awarded to 395 beaches and 9 marinas in Greece. Southern Aegean beaches on the Turkish coast include Muğla, with 102 beaches awarded with the blue flag, along with İzmir and Aydın, which have 49 and 30 awarded beaches respectively.
A Clockwork Orange (novel)
A Clockwork Orange is a dystopian satirical black comedy novel by English writer Anthony Burgess, published in 1962. It is set in a near-future society that has a youth subculture of extreme violence. The teenage protagonist, Alex, narrates his violent exploits and his experiences with state authorities intent on reforming him. The book is partially written in a Russian-influenced argot called "Nadsat", which takes its name from the Russian suffix that is equivalent to '-teen' in English. According to Burgess, it was a "jeu d'esprit" written in just three weeks.
In 2005, "A Clockwork Orange" was included on "Time" magazine's list of the 100 best English-language novels written since 1923, and it was named by Modern Library and its readers as one of the 100 best English-language novels of the 20th century. The original manuscript of the book has been located at McMaster University's William Ready Division of Archives and Research Collections in Hamilton, Ontario, Canada since the institution purchased the documents in 1971.
Alex is a 15-year-old living in a near-future dystopian city who leads his gang on a night of opportunistic, random "ultra-violence". Alex's friends ("droogs" in the novel's Anglo-Russian slang, "Nadsat") are Dim, a slow-witted bruiser, who is the gang's muscle; Georgie, an ambitious second-in-command; and Pete, who mostly plays along as the droogs indulge their taste for ultra-violence. Characterised as a sociopath and hardened juvenile delinquent, Alex also displays intelligence, quick wit, and a predilection for classical music; he is particularly fond of Beethoven, referred to as "Lovely Ludwig Van".
The novella begins with the droogs sitting in their favourite hangout, the Korova Milk Bar, and drinking "milk-plus" – a beverage consisting of milk laced with the customer's drug of choice – to prepare for a night of mayhem. They assault a scholar walking home from the public library; rob a store, leaving the owner and his wife bloodied and unconscious; beat up a beggar; then scuffle with a rival gang. Joyriding through the countryside in a stolen car, they break into an isolated cottage and terrorise the young couple living there, beating the husband and raping his wife. In a metafictional touch, the husband is a writer working on a manuscript called "A Clockwork Orange", and Alex contemptuously reads out a paragraph that states the novel's main theme before shredding the manuscript. Back at the Korova, Alex strikes Dim for his crude response to a woman's singing of an operatic passage, and strains within the gang become apparent. At home in his parents' futuristic flat, Alex plays classical music at top volume, which he describes as giving him orgasmic bliss before falling asleep.
Alex coyly feigns illness to his parents to stay out of school the next day. Following an unexpected visit from P.R. Deltoid, his "post-corrective adviser", Alex visits a record store, where he meets two pre-teen girls. He invites them back to the flat, where he drugs and rapes them. That night after a nap, Alex finds his droogs in a mutinous mood, waiting downstairs in the torn-up and graffitied lobby. Georgie challenges Alex for leadership of the gang, demanding that they pull a "man-sized" job. Alex quells the rebellion by slashing Dim's hand and fighting with Georgie. Then, in a show of generosity, he takes them to a bar, where Alex insists on following through on Georgie's idea to burgle the home of a wealthy elderly woman. Alex breaks in and knocks the woman unconscious; but, when he opens the door to let the others in, Dim strikes him in payback for the earlier fight. The gang abandons Alex on the front step to be arrested by the police; while in custody, he learns that the woman has died from her injuries.
Alex is convicted of murder and sentenced to 14 years in Wandsworth Prison. His parents visit one day to inform him that Georgie has been killed in a botched robbery. Two years into his term, he has obtained a job in one of the prison chapels, playing music on the stereo to accompany the Sunday Christian services. The chaplain mistakes Alex's Bible studies for stirrings of faith; in reality, Alex is only reading Scripture for the violent passages. After his fellow inmates blame him for beating a troublesome cellmate to death, he is chosen to undergo an experimental behaviour modification treatment called the Ludovico Technique in exchange for having the remainder of his sentence commuted. The technique is a form of aversion therapy, in which Alex is injected with nausea-inducing drugs while watching graphically violent films, eventually conditioning him to become severely ill at the mere thought of violence. As an unintended consequence, the soundtrack to one of the films, Beethoven's Ninth Symphony, renders Alex unable to enjoy his beloved classical music as before.
The effectiveness of the technique is demonstrated to a group of VIPs, who watch as Alex collapses before a bully and abases himself before a scantily clad young woman whose presence has aroused his predatory sexual inclinations. Although the prison chaplain accuses the state of stripping Alex of free will, the government officials on the scene are pleased with the results and Alex is released from prison.
Alex returns to his parents' flat, only to find that they are letting his room to a lodger. Now homeless, he wanders the streets and enters a public library, hoping to learn of a painless method for committing suicide. The old scholar whom Alex had assaulted in Part 1 finds him and beats him, with the help of several friends. Two policemen come to Alex's rescue, but they turn out to be Dim and Billyboy, a former rival gang leader. They take Alex outside of town, brutalise him, and abandon him there. Alex collapses at the door of an isolated cottage, realising too late that it is the one he and his droogs invaded in Part 1. The writer, F. Alexander, still lives there, but his wife has since died of injuries she sustained in the rape. He does not recognise Alex but gives him shelter and questions him about the conditioning he has undergone. Alexander and his colleagues, all highly critical of the government, plan to use Alex as a symbol of state brutality and thus prevent the incumbent government from being re-elected. Alex inadvertently reveals that he was the ringleader of the home invasion; he is removed from the cottage and locked in an upper-storey bedroom as a relentless barrage of classical music plays over speakers. He attempts suicide by leaping from the window.
Alex wakes up in a hospital, where he is courted by government officials anxious to counter the bad publicity created by his suicide attempt. Placed in a mental institution, Alex is offered a well-paying job if he agrees to side with the government. A round of tests reveals that his old violent impulses have returned, indicating that the hospital doctors have undone the effects of his conditioning. As photographers snap pictures, Alex daydreams of orgiastic violence and reflects, "I was cured all right."
In the final chapter, Alex finds himself halfheartedly preparing for yet another night of crime with a new gang (Lenn, Rick, Bully). After a chance encounter with Pete, who has reformed and married, Alex finds himself taking less and less pleasure in acts of senseless violence. He begins contemplating giving up crime himself to become a productive member of society and start a family of his own, while reflecting on the notion that his own children could possibly end up being just as destructive as he has been, if not more so.
The book has three parts, each with seven chapters. Burgess has stated that the total of 21 chapters was an intentional nod to the age of 21 being recognised as a milestone in human maturation. The 21st chapter was omitted from the editions published in the United States prior to 1986. In the introduction to the updated American text (these newer editions include the missing 21st chapter), Burgess explains that when he first brought the book to an American publisher, he was told that U.S. audiences would never go for the final chapter, in which Alex sees the error of his ways, decides he has lost all energy for and thrill from violence and resolves to turn his life around (a moment of metanoia).
At the American publisher's insistence, Burgess allowed their editors to cut the redeeming final chapter from the U.S. version, so that the tale would end on a darker note, with Alex succumbing to his violent, reckless nature – an ending which the publisher insisted would be "more realistic" and appealing to a US audience. The film adaptation, directed by Stanley Kubrick, is based on the American edition of the book (which Burgess considered to be "badly flawed"). Kubrick called Chapter 21 "an extra chapter" and claimed that he had not read the original version until he had virtually finished the screenplay, and that he had never given serious consideration to using it. In Kubrick's opinion – as in the opinion of other readers, including the original American editor – the final chapter was unconvincing and inconsistent with the book.
"A Clockwork Orange" was written in Hove, then a senescent seaside town. Burgess had arrived back in Britain after his stint abroad to see that much had changed. A youth culture had grown, including coffee bars, pop music and teenage gangs. England was gripped by fears over juvenile delinquency. Burgess stated that the novel's inspiration was his first wife Lynne's beating by a gang of drunk American servicemen stationed in England during World War II. She subsequently miscarried. In its investigation of free will, the book's target is ostensibly the concept of behaviourism, pioneered by such figures as B. F. Skinner.
Burgess later stated that he wrote the book in three weeks.
Burgess has offered several clarifications about the meaning and origin of its title:
The saying "as queer as..." followed by an improbable object: "...a clockwork orange", or "...a four speed walking stick" or "...a left handed corkscrew" etc. predates Burgess' novel. An early example, "as queer as Dick's hatband", appeared in 1796, and was alluded to in 1757.
This title alludes to the protagonist's negative emotional responses to feelings of evil which prevent the exercise of his free will subsequent to the administration of the Ludovico Technique. To induce this conditioning, Alex is forced to watch scenes of violence on a screen that are systematically paired with negative physical stimulation. The negative physical stimulation takes the form of nausea and "feelings of terror," which are caused by an emetic medicine administered just before the presentation of the films.
The book, narrated by Alex, contains many words in a slang argot which Burgess invented for the book, called Nadsat. It is a mix of modified Slavic words, rhyming slang and derived Russian (like "baboochka"). For instance, these terms have the following meanings in Nadsat: "droog" = friend; "moloko" = milk; "gulliver" ("golova") = head; "malchick" or "malchickiwick" = boy; "soomka" = sack or bag; "Bog" = God; "horrorshow" ("khorosho") = good; "prestoopnick" = criminal; "rooker" ("rooka") = hand; "cal" = crap; "veck" ("chelloveck") = man or guy; "litso" = face; "malenky" = little; and so on. Some words Burgess invented himself or just adapted from pre-existing languages. Compare Polari.
One of Alex's doctors explains the language to a colleague as "odd bits of old rhyming slang; a bit of gypsy talk, too. But most of the roots are Slav propaganda. Subliminal penetration." Some words are not derived from anything, but merely easy to guess, e.g. "in-out, in-out" or "the old in-out" means sexual intercourse. "Cutter", however, means "money", because "cutter" rhymes with "bread-and-butter"; this is rhyming slang, which is intended to be impenetrable to outsiders (especially eavesdropping policemen). Additionally, slang like "appypolly loggy" ("apology") seems to derive from school boy slang. This reflects Alex's age of 15.
In the first edition of the book, no key was provided, and the reader was left to interpret the meaning from the context. In his appendix to the restored edition, Burgess explained that the slang would keep the book from seeming dated, and served to muffle "the raw response of pornography" from the acts of violence.
The term "ultraviolence", referring to excessive or unjustified violence, was coined by Burgess in the book, which includes the phrase "do the ultra-violent". The term's association with aesthetic violence has led to its use in the media.
In 1976, "A Clockwork Orange" was removed from an Aurora, Colorado high school because of "objectionable language". A year later in 1977 it was removed from high school classrooms in Westport, Massachusetts over similar concerns with "objectionable" language. In 1982, it was removed from two Anniston, Alabama libraries, later to be reinstated on a restricted basis. Also, in 1973 a bookseller was arrested for selling the novel. The charges were later dropped. However, each of these instances came after the release of Stanley Kubrick's popular 1971 film adaptation of "A Clockwork Orange", itself the subject of much controversy.
"The Sunday Telegraph" review was positive, and described the book as "entertaining ... even profound". "The Sunday Times" review was negative, and described the book as "a very ordinary, brutal and psychologically shallow story". "The Times" also reviewed the book negatively, describing it as "a somewhat clumsy experiment with science fiction [with] clumsy cliches about juvenile delinquency". The violence was criticised as "unconvincing in detail".
Burgess dismissed "A Clockwork Orange" as "too didactic to be artistic". He claimed that the violent content of the novel "nauseated" him.
In 1985, Burgess published "Flame into Being: The Life and Work of D. H. Lawrence" and while discussing "Lady Chatterley's Lover" in his biography, Burgess compared that novel's notoriety with "A Clockwork Orange": "We all suffer from the popular desire to make the known notorious. The book I am best known for, or only known for, is a novel I am prepared to repudiate: written a quarter of a century ago, a "jeu d'esprit" knocked off for money in three weeks, it became known as the raw material for a film which seemed to glorify sex and violence. The film made it easy for readers of the book to misunderstand what it was about, and the misunderstanding will pursue me until I die. I should not have written the book because of this danger of misinterpretation, and the same may be said of Lawrence and "Lady Chatterley's Lover"."
"A Clockwork Orange" was chosen by "Time" magazine as one of the 100 best English-language books from 1923 to 2005.
A 1965 film by Andy Warhol entitled "Vinyl" was an adaptation of Burgess's novel.
The best known adaptation of the novella to other forms is the 1971 film "A Clockwork Orange" by Stanley Kubrick, featuring Malcolm McDowell as Alex. In 1987 Burgess published a stage play titled "A Clockwork Orange: A Play with Music". The play includes songs, written by Burgess, which are inspired by Beethoven and Nadsat slang.
In 1988, a German adaptation of "A Clockwork Orange" at the intimate theatre of Bad Godesberg featured a musical score by the German punk rock band Die Toten Hosen which, combined with orchestral clips of Beethoven's Ninth Symphony and "other dirty melodies" (so stated by the subtitle), was released on the album "Ein kleines bisschen Horrorschau". The track "Hier kommt Alex" became one of the band's signature songs.
In February 1990, another musical version was produced at the Barbican Theatre in London by the Royal Shakespeare Company. Titled "A Clockwork Orange: 2004", it received mostly negative reviews, with John Peter of "The Sunday Times" of London calling it "only an intellectual "Rocky Horror Show"", and John Gross of "The Sunday Telegraph" calling it "a clockwork lemon". Even Burgess himself, who wrote the script based on his novel, was disappointed. According to "The Evening Standard", he called the score, written by Bono and The Edge of the rock group U2, "neo-wallpaper." Burgess had originally worked alongside the director of the production, Ron Daniels, and envisioned a musical score that was entirely classical. Unhappy with the decision to abandon that score, he heavily criticised the band's experimental mix of hip hop, liturgical and gothic music. Lise Hand of "The Irish Independent" reported The Edge as saying that Burgess's original conception was "a score written by a novelist rather than a songwriter". Calling it "meaningless glitz", Jane Edwardes of "20/20 Magazine" said that watching this production was "like being invited to an expensive French Restaurant – and being served with a Big Mac."
In 1994, Chicago's Steppenwolf Theater put on a production of "A Clockwork Orange" directed by Terry Kinney. The American premiere of novelist Anthony Burgess's own adaptation of his "A Clockwork Orange" starred K. Todd Freeman as Alex. In 2001, UNI Theatre (Mississauga, Ontario) presented the Canadian premiere of the play under the direction of Terry Costa.
In 2002, Godlight Theatre Company presented the New York Premiere adaptation of "A Clockwork Orange" at Manhattan Theatre Source. The production went on to play at the SoHo Playhouse (2002), Ensemble Studio Theatre (2004), 59E59 Theaters (2005) and the Edinburgh Festival Fringe (2005). While at Edinburgh, the production received rave reviews from the press while playing to sold-out audiences. The production was directed by Godlight's Artistic Director, Joe Tantalo.
In 2003, Los Angeles director Brad Mays and the ARK Theatre Company staged a multi-media adaptation of "A Clockwork Orange", which was named "Pick Of The Week" by the "LA Weekly" and nominated for three of the 2004 LA Weekly Theater Awards: Direction, Revival Production (of a 20th-century work), and Leading Female Performance. Vanessa Claire Smith won Best Actress for her gender-bending portrayal of Alex, the music-loving teenage sociopath. This production utilised three separate video streams outputted to seven onstage video monitors – six 19-inch and one 40-inch. In order to preserve the first-person narrative of the book, a pre-recorded video stream of Alex, "your humble narrator", was projected onto the 40-inch monitor, thereby freeing the onstage character during passages which would have been awkward or impossible to sustain in the breaking of the fourth wall.
An adaptation of the work, based on the original novel, the film and Burgess's own stage version, was performed by The SiLo Theatre in Auckland, New Zealand in early 2007.
Amsterdam
Amsterdam (, ; ) is the capital and most populous city of the Netherlands with a population of 872,680 within the city proper, 1,380,872 in the urban area and 2,410,960 in the metropolitan area. Located in the province of North Holland, Amsterdam is colloquially referred to as the "Venice of the North", owing to the large number of canals which form a UNESCO World Heritage Site.
Amsterdam's name derives from "Amstelredamme", indicative of the city's origin around a dam in the river Amstel. Originating as a small fishing village in the late 12th century, Amsterdam became one of the most important ports in the world during the Dutch Golden Age of the 17th century, and the leading centre for finance and trade. In the 19th and 20th centuries, the city expanded, and many new neighbourhoods and suburbs were planned and built. The 17th-century canals of Amsterdam and the 19–20th century Defence Line of Amsterdam are on the UNESCO World Heritage List. Sloten, annexed in 1921 by the municipality of Amsterdam, is the oldest part of the city, dating to the 9th century.
Amsterdam's main attractions include its historic canals, the Rijksmuseum, the Van Gogh Museum, the Stedelijk Museum, Hermitage Amsterdam, the Concertgebouw, the Anne Frank House, the Scheepvaartmuseum, the Amsterdam Museum, the Heineken Experience, the Royal Palace of Amsterdam, Natura Artis Magistra, Hortus Botanicus Amsterdam, NEMO, the red-light district and many cannabis coffee shops, drawing more than 5 million international visitors in 2014. The city is also well known for its nightlife and festival activity, with several of its nightclubs (Melkweg, Paradiso) among the world's most famous. Amsterdam is primarily known for its artistic heritage, elaborate canal system and narrow houses with gabled façades, well-preserved legacies of the city's 17th-century Golden Age. These characteristics are arguably responsible for attracting millions of visitors to Amsterdam annually. Cycling is key to the city's character, and there are numerous bike paths.
The Amsterdam Stock Exchange is considered the oldest "modern" securities market stock exchange in the world. As the commercial capital of the Netherlands and one of the top financial centres in Europe, Amsterdam is considered an alpha-world city by the Globalization and World Cities (GaWC) study group. The city is also the cultural capital of the Netherlands. Many large Dutch institutions have their headquarters in the city, including the Philips conglomerate, AkzoNobel, Booking.com, TomTom, and ING. Moreover, many of the world's largest companies are based in Amsterdam or have established their European headquarters in the city, such as leading technology companies Uber, Netflix and Tesla. In 2012, Amsterdam was ranked the second best city to live in by the Economist Intelligence Unit (EIU) and 12th globally on quality of living for environment and infrastructure by Mercer. The city was ranked 4th place globally as top tech hub in the Savills Tech Cities 2019 report (2nd in Europe), and 3rd in innovation by Australian innovation agency 2thinknow in their Innovation Cities Index 2009. The Port of Amsterdam is the fifth largest in Europe. The KLM hub and Amsterdam's main airport, Schiphol, is the Netherlands' busiest airport as well as the fourth busiest in Europe. The Dutch capital is considered one of the most multicultural cities in the world, with at least 177 nationalities represented.
A few of Amsterdam's notable residents throughout history include: painters Rembrandt and Van Gogh, the diarist Anne Frank, and philosopher Baruch Spinoza.
After the floods of 1170 and 1173, locals near the river Amstel built a bridge over the river and a dam across it, giving its name to the village: "Aemstelredamme". The earliest recorded use of that name is in a document dated 27 October 1275, which exempted inhabitants of the village from paying bridge tolls to Count Floris V. This allowed the inhabitants of the village of Aemstelredamme to travel freely through the County of Holland, paying no tolls at bridges, locks and dams. The certificate describes the inhabitants as "homines manentes apud Amestelledamme" (people residing near Amestelledamme). By 1327, the name had developed into "Aemsterdam".
Amsterdam is much younger than Dutch cities such as Nijmegen, Rotterdam, and Utrecht. In October 2008, historical geographer Chris de Bont suggested that the land around Amsterdam was being reclaimed as early as the late 10th century. This does not necessarily mean that there was already a settlement then, since reclamation of land may not have been for farming—it may have been for peat, for use as fuel.
Amsterdam was granted city rights in either 1300 or 1306. From the 14th century on, Amsterdam flourished, largely from trade with the Hanseatic League. In 1345, an alleged Eucharistic miracle in the Kalverstraat rendered the city an important place of pilgrimage until the adoption of the Protestant faith. The Miracle devotion went underground but was kept alive. In the 19th century, especially after the jubilee of 1845, the devotion was revitalised and became an important national point of reference for Dutch Catholics. The "Stille Omgang"—a silent walk or procession in civil attire—has been the expression of the pilgrimage within the Protestant Netherlands since the late 19th century. In the heyday of the Silent Walk, up to 90,000 pilgrims came to Amsterdam. In the 21st century this has declined to about 5,000.
In the 16th century, the Dutch rebelled against Philip II of Spain and his successors. The main reasons for the uprising were the imposition of new taxes, the tenth penny, and the religious persecution of Protestants by the newly introduced Inquisition. The revolt escalated into the Eighty Years' War, which ultimately led to Dutch independence. Strongly pushed by Dutch Revolt leader William the Silent, the Dutch Republic became known for its relative religious tolerance. Jews from the Iberian Peninsula, Huguenots from France, prosperous merchants and printers from Flanders, and economic and religious refugees from the Spanish-controlled parts of the Low Countries found safety in Amsterdam. The influx of Flemish printers and the city's intellectual tolerance made Amsterdam a centre for the European free press.
The 17th century is considered Amsterdam's "Golden Age", during which it became the wealthiest city in the western world. Ships sailed from Amsterdam to the Baltic Sea, North America, and Africa, as well as present-day Indonesia, India, Sri Lanka, and Brazil, forming the basis of a worldwide trading network. Amsterdam's merchants had the largest share in both the Dutch East India Company and the Dutch West India Company. These companies acquired overseas possessions that later became Dutch colonies.
Amsterdam was Europe's most important point for the shipment of goods and was the leading financial centre of the western world. In 1602, the Amsterdam office of the international trading Dutch East India Company became the world's first stock exchange by trading in its own shares. The Bank of Amsterdam started operations in 1609, acting as a full-service bank for Dutch merchant bankers and as a reserve bank.
Amsterdam's prosperity declined during the 18th and early 19th centuries. The wars of the Dutch Republic with England and France took their toll on Amsterdam. During the Napoleonic Wars, Amsterdam's significance reached its lowest point, with Holland being absorbed into the French Empire. However, the later establishment of the United Kingdom of the Netherlands in 1815 marked a turning point.
The end of the 19th century is sometimes called Amsterdam's second Golden Age. New museums, a railway station, and the Concertgebouw were built; in this same time, the Industrial Revolution reached the city. The Amsterdam–Rhine Canal was dug to give Amsterdam a direct connection to the Rhine, and the North Sea Canal was dug to give the port a shorter connection to the North Sea. Both projects dramatically improved commerce with the rest of Europe and the world. In 1906, Joseph Conrad gave a brief description of Amsterdam as seen from the seaside, in "The Mirror of the Sea".
Shortly before the First World War, the city started to expand again, and new suburbs were built. Even though the Netherlands remained neutral in this war, Amsterdam suffered a food shortage, and heating fuel became scarce. The shortages sparked riots in which several people were killed. These riots are known as the "Aardappeloproer" (Potato rebellion). People started looting stores and warehouses in order to get supplies, mainly food.
On 1 January 1921, after a flood in 1916, the depleted municipalities of Durgerdam, Holysloot, Zunderdorp and Schellingwoude, all lying north of Amsterdam, were, at their own request, annexed to the city. Between the wars, the city continued to expand, most notably to the west of the Jordaan district in the Frederik Hendrikbuurt and surrounding neighbourhoods.
Nazi Germany invaded the Netherlands on 10 May 1940 and took control of the country. Some Amsterdam citizens sheltered Jews, thereby exposing themselves and their families to a high risk of being imprisoned or sent to concentration camps. More than 100,000 Dutch Jews were deported to Nazi concentration camps, of whom some 60,000 lived in Amsterdam. In response to the raids, the Dutch Communist Party organised the February strike, attended by 300,000 people. Perhaps the most famous deportee was the young Jewish girl Anne Frank, who died in the Bergen-Belsen concentration camp. At the end of the Second World War, communication with the rest of the country broke down, and food and fuel became scarce. Many citizens travelled to the countryside to forage. Dogs, cats, raw sugar beets, and tulip bulbs—cooked to a pulp—were consumed to stay alive. Many trees in Amsterdam were cut down for fuel, and wood was taken from the houses, apartments and other buildings of deported Jews.
Many new suburbs, such as Osdorp, Slotervaart, Slotermeer and Geuzenveld, were built in the years after the Second World War.
These suburbs contained many public parks and wide open spaces, and the new buildings provided improved housing conditions with larger and brighter rooms, gardens, and balconies. Because of the war and other events of the 20th century, almost the entire city centre had fallen into disrepair. As society was changing, politicians and other influential figures made plans to redesign large parts of it. There was an increasing demand for office buildings, and also for new roads, as the automobile became available to most people. A metro started operating in 1977 between the new suburb of Bijlmermeer in the city's Zuidoost (southeast) exclave and the centre of Amsterdam. Further plans were to build a new highway above the metro to connect Amsterdam Centraal and the city centre with other parts of the city.
The required large-scale demolitions began in Amsterdam's former Jewish neighbourhood. Smaller streets, such as the Jodenbreestraat and Weesperstraat, were widened and almost all houses and buildings were demolished. At the peak of the demolition, the "Nieuwmarktrellen" (Nieuwmarkt Riots) broke out; the rioters expressed their fury about the demolition caused by the restructuring of the city.
As a result, the demolition was stopped and the highway into the city's centre was never fully built; only the metro was completed. Only a few streets remained widened. The new city hall was built on the almost completely demolished Waterlooplein. Meanwhile, large private organisations, such as "Stadsherstel Amsterdam", were founded with the aim of restoring the entire city centre. Although the success of this struggle is visible today, efforts for further restoration are still ongoing. The entire city centre has reattained its former splendour and, as a whole, is now a protected area. Many of its buildings have become monuments, and in July 2010 the Grachtengordel (the three concentric canals: Herengracht, Keizersgracht, and Prinsengracht) was added to the UNESCO World Heritage List.
In the early years of the 21st century, the Amsterdam city centre has attracted large numbers of tourists: between 2012 and 2015, the annual number of visitors rose from 10 million to 17 million. Real estate prices have surged, and local shops are making way for tourist-oriented ones, making the centre unaffordable for the city's inhabitants. These developments have evoked comparisons with Venice, a city thought to be overwhelmed by the tourist influx.
Construction of a metro line connecting the part of the city north of the river (or lake) IJ to the centre was started in 2003. The project was controversial because its cost had exceeded its budget by a factor of three by 2008, because of fears of damage to buildings in the centre, and because construction had to be halted and restarted multiple times. The metro line was completed in 2018.
Since 2014, renewed focus has been given to urban regeneration and renewal, especially in areas directly bordering the city centre, such as Frederik Hendrikbuurt. This urban renewal and expansion of the traditional centre of the city—with the construction on artificial islands of the new eastern IJburg neighbourhood—is part of the Structural Vision Amsterdam 2040 initiative.
Amsterdam is located in the Western Netherlands, in the province of North Holland, although it is not the provincial capital, which is Haarlem. The river Amstel ends in the city centre and connects to a large number of canals that eventually terminate in the IJ. Amsterdam lies about two metres below sea level. The surrounding land is flat, as it is formed of large polders. A man-made forest, the Amsterdamse Bos, is in the southwest. Amsterdam is connected to the North Sea through the North Sea Canal.
Amsterdam is intensely urbanised, as is the Amsterdam metropolitan area surrounding the city. The city proper has a population density of 4,457 inhabitants per km2 and 2,275 houses per km2. Parks and nature reserves make up 12% of Amsterdam's land area.
Amsterdam has an extensive network of canals, most of which are navigable by boat. The city's three main canals are the Prinsengracht, Herengracht, and Keizersgracht.
In the Middle Ages, Amsterdam was surrounded by a moat, called the Singel, which now forms the innermost ring in the city, and gives the city centre a horseshoe shape. The city is also served by a seaport. It has been compared with Venice, due to its division into about 90 islands, which are linked by more than 1,200 bridges.
Amsterdam has an oceanic climate (Köppen "Cfb") strongly influenced by its proximity to the North Sea to the west, with prevailing westerly winds. While winters are cool and summers warm, temperatures vary year by year. There can occasionally be cold snowy winters and hot humid summers.
Amsterdam, like most of the province of North Holland, lies in USDA Hardiness zone 8b. Frosts mainly occur during spells of easterly or northeasterly winds from the inner European continent. Even then, because Amsterdam is surrounded on three sides by large bodies of water and has a significant heat-island effect, nights rarely get as cold as in inland towns such as Hilversum, to the southeast.
Summers are moderately warm, with a small number of hot days each month. Very high temperatures are measured on average on only 2.5 days per year, placing Amsterdam in AHS Heat Zone 2.
Days with measurable precipitation are common, on average 133 days per year. A large part of Amsterdam's annual precipitation falls as light rain or brief showers. Cloudy and damp days are common during the cooler months of October through March.
In 1300, Amsterdam's population was around 1,000 people. While many towns in Holland experienced population decline during the 15th and 16th centuries, Amsterdam's population grew, mainly due to the rise of the profitable Baltic maritime trade after the Burgundian victory in the Dutch–Hanseatic War. Still, the population of Amsterdam was only modest compared to the towns and cities of Flanders and Brabant, which comprised the most urbanised area of the Low Countries.
This changed when, during the Dutch Revolt, many people from the Southern Netherlands fled to the North, especially after Antwerp fell to Spanish forces in 1585. Jewish people from Spain, Portugal and Eastern Europe similarly settled in Amsterdam, as did Germans and Scandinavians. Amsterdam's population more than doubled in the thirty years between 1585 and 1610, reaching around 50,000 by 1600. During the 1660s, it reached 200,000. The city's growth then levelled off, and the population stabilised at around 240,000 for most of the 18th century.
In 1750, Amsterdam was the fourth largest city in western Europe, behind London (676,000), Paris (560,000) and Naples (324,000). This was all the more remarkable as Amsterdam was neither the capital city nor the seat of government of the Dutch Republic, which itself was a much smaller state than England, France or the Ottoman Empire. In contrast to those other metropolises, Amsterdam was also surrounded by large towns such as Leiden (about 67,000), Rotterdam (45,000), Haarlem (38,000), and Utrecht (30,000).
The city's population declined in the early 19th century, dipping under 200,000 in 1820. By the second half of the 19th century, industrialisation spurred renewed growth. Amsterdam's population hit an all-time high of 872,000 in 1959, before declining in the following decades due to government-sponsored suburbanisation to so-called "groeikernen" (growth centres) such as Purmerend and Almere. Between 1970 and 1980, Amsterdam experienced its sharpest population decline, peaking at a net loss of 25,000 people in 1973. By 1985 the city had only 675,570 residents. This was soon followed by reurbanisation and gentrification, leading to renewed population growth in the 2010s. Also in the 2010s, much of Amsterdam's population growth was due to immigration to the city. Amsterdam's population is expected to top its previous high in 2019, reaching 873,000.
In the 16th and 17th centuries, non-Dutch immigrants to Amsterdam were mostly Huguenots, Flemings, Sephardi Jews and Westphalians. Huguenots came after the Edict of Fontainebleau in 1685, while the Flemish Protestants came during the Eighty Years' War. The Westphalians came to Amsterdam mostly for economic reasons; their influx continued through the 18th and 19th centuries. Before the Second World War, 10% of the city's population was Jewish. Just twenty per cent of them survived the Shoah.
The first wave of mass immigration in the 20th century was of people from Indonesia, who came to Amsterdam after the independence of the Dutch East Indies in the 1940s and 1950s. In the 1960s, guest workers from Turkey, Morocco, Italy and Spain immigrated to Amsterdam. After the independence of Suriname in 1975, a large wave of Surinamese settled in Amsterdam, mostly in the Bijlmer area. Other immigrants, including refugees, asylum seekers and illegal immigrants, came from Europe, America, Asia, and Africa. In the 1970s and 1980s, many 'old' Amsterdammers moved to 'new' cities like Almere and Purmerend, prompted by the third planological bill of the Dutch government. This bill promoted suburbanisation and arranged for new developments in so-called "groeikernen", literally "cores of growth". Young professionals and artists moved into the de Pijp and Jordaan neighbourhoods abandoned by these Amsterdammers. The non-Western immigrants settled mostly in the social housing projects in Amsterdam-West and the Bijlmer. Today, people of non-Western origin make up approximately one-third of the population of Amsterdam, and more than 50% of the city's children. Ethnic Dutch (as defined by the Dutch census) now make up a minority of the total population, although by far the largest one. Only one in three inhabitants under 15 is an "autochtoon", a person with two parents of Dutch origin. Segregation along ethnic lines is clearly visible, with people of non-Western origin, considered a separate group by Statistics Netherlands, concentrated in specific neighbourhoods, especially in Nieuw-West, Zeeburg, Bijlmer and certain areas of Amsterdam-Noord.
In 2000, Christians formed the largest religious group in the city (17% of the population). The next largest religion was Islam (14%), most of whose followers were Sunni.
In 1578, the largely Roman Catholic city of Amsterdam joined the revolt against Spanish rule, late in comparison to other major northern Dutch cities. Roman Catholic priests were driven out of the city. Following the Dutch takeover, all churches were converted to Protestant worship. Calvinism was declared the main religion, and although Catholicism was not forbidden and priests were allowed to serve, the Catholic hierarchy was prohibited. This led to the establishment of "schuilkerken", covert religious buildings hidden in pre-existing structures. Catholics, some Jews, and dissenting Protestants worshipped in such buildings. A large influx of foreigners of many religions came to 17th-century Amsterdam, in particular Sephardic Jews from Spain and Portugal, Huguenots from France, Lutherans, Mennonites, and Protestants from across the Netherlands. This led to the establishment of many non-Dutch-speaking churches. In 1603, Jews received permission to practise their religion. In 1639, the first synagogue was consecrated. The Jews came to call the town Jerusalem of the West.
As they became established in the city, other Christian denominations used converted Catholic chapels to conduct their own services. The oldest English-language church congregation in the world outside the United Kingdom is found at the Begijnhof. Regular services there are still offered in English under the auspices of the Church of Scotland. Being Calvinists, the Huguenots soon integrated into the Dutch Reformed Church, though often retaining their own congregations. Some, commonly referred to by the moniker 'Walloon', are recognisable today as they offer occasional services in French.
In the second half of the 17th century, Amsterdam experienced an influx of Ashkenazim, Jews from Central and Eastern Europe, many of them fleeing pogroms in those areas. The first Ashkenazis who arrived in Amsterdam were refugees from the Khmelnytsky Uprising in Ukraine and the Thirty Years' War, which devastated much of Central Europe. They not only founded their own synagogues, but also had a strong influence on the 'Amsterdam dialect', adding a large Yiddish vocabulary to it.
Despite an absence of an official Jewish ghetto, most Jews preferred to live in the eastern part of the old medieval heart of the city. The main street of this Jewish neighbourhood was the "Jodenbreestraat". The neighbourhood comprised the "Waterlooplein" and the Nieuwmarkt. Buildings in this neighbourhood fell into disrepair after the Second World War, and a large section of the neighbourhood was demolished during the construction of the subway. This led to riots, and as a result the original plans for large-scale reconstruction were abandoned. The neighbourhood was rebuilt with smaller-scale residence buildings on the basis of its original layout.
Catholic churches in Amsterdam have been constructed since the restoration of the episcopal hierarchy in 1853. One of the principal architects behind the city's Catholic churches, Cuypers, was also responsible for the Amsterdam Central station and the Rijksmuseum.
In 1924, the Roman Catholic Church of the Netherlands hosted the International Eucharistic Congress in Amsterdam, and numerous Catholic prelates visited the city, where festivities were held in churches and stadiums. Catholic processions on the public streets, however, were still forbidden under law at the time. Only in the 20th century was Amsterdam's relationship with Catholicism normalised, but despite the city's far larger population, the episcopal see was placed in the provincial town of Haarlem.
In recent times, religious demographics in Amsterdam have been changed by immigration from former colonies. Hinduism has been introduced by the Hindu diaspora from Suriname, and several distinct branches of Islam have been brought from various parts of the world. Islam is now the largest non-Christian religion in Amsterdam. The large community of Ghanaian immigrants has established African churches, often in parking garages in the Bijlmer area.
Amsterdam experienced an influx of religions and cultures after the Second World War. With 180 different nationalities, Amsterdam is home to one of the widest varieties of nationalities of any city in the world. The proportion of the population of immigrant origin in the city proper is about 50% and 88% of the population are Dutch citizens.
Amsterdam has been one of the municipalities in the Netherlands to provide immigrants with extensive and free Dutch-language courses, which have benefited many immigrants.
Amsterdam fans out south from the Amsterdam Centraal railway station and Damrak, the main street off the station. The oldest area of the town is known as De Wallen (English: "The Quays"). It lies to the east of Damrak and contains the city's famous red light district. To the south of De Wallen is the old Jewish quarter of Waterlooplein.
The medieval and colonial-age canals of Amsterdam, known as "grachten", embrace the heart of the city, where many houses have distinctive gables. Beyond the Grachtengordel are the former working-class areas of Jordaan and de Pijp. The Museumplein with the city's major museums, the Vondelpark, a 19th-century park named after the Dutch writer Joost van den Vondel, and the Plantage neighbourhood, with the zoo, are also located outside the Grachtengordel.
Several parts of the city and the surrounding urban area are polders. This can be recognised by the suffix "-meer" which means "lake", as in Aalsmeer, Bijlmermeer, Haarlemmermeer, and Watergraafsmeer.
The Amsterdam canal system is the result of conscious city planning. In the early 17th century, when immigration was at a peak, a comprehensive plan was developed that was based on four concentric half-circles of canals with their ends emerging at the IJ bay. Known as the Grachtengordel, three of the canals were mostly for residential development: the Herengracht (where "Heren" refers to "Heren Regeerders van de stad Amsterdam" (ruling lords of Amsterdam), and "gracht" means canal, so the name can be roughly translated as "Canal of the Lords"), Keizersgracht (Emperor's Canal), and Prinsengracht (Prince's Canal). The fourth and outermost canal is the Singelgracht, which is often not mentioned on maps, because it is a collective name for all canals in the outer ring. The Singelgracht should not be confused with the oldest and innermost canal, the Singel.
The canals served for defence, water management and transport. The defences took the form of a moat and earthen dikes, with gates at transit points, but otherwise no masonry superstructures. The original plans have been lost, so historians, such as Ed Taverne, need to speculate on the original intentions: it is thought that the considerations of the layout were purely practical and defensive rather than ornamental.
Construction started in 1613 and proceeded from west to east, across the breadth of the layout, like a gigantic windshield wiper as the historian Geert Mak calls it – and not from the centre outwards, as a popular myth has it. The canal construction in the southern sector was completed by 1656. Subsequently, the construction of residential buildings proceeded slowly. The eastern part of the concentric canal plan, covering the area between the Amstel river and the IJ bay, has never been implemented. In the following centuries, the land was used for parks, senior citizens' homes, theatres, other public facilities, and waterways without much planning. Over the years, several canals have been filled in, becoming streets or squares, such as the Nieuwezijds Voorburgwal and the Spui.
After the development of Amsterdam's canals in the 17th century, the city did not grow beyond its borders for two centuries. During the 19th century, Samuel Sarphati devised a plan based on the grandeur of Paris and London at that time. The plan envisaged the construction of new houses, public buildings and streets just outside the Grachtengordel. The main aim of the plan, however, was to improve public health. Although the plan did not expand the city, it did produce some of the largest public buildings to date, like the "Paleis voor Volksvlijt".
Following Sarphati, civil engineers Jacobus van Niftrik and Jan Kalff designed an entire ring of 19th-century neighbourhoods surrounding the city's centre, with the city preserving the ownership of all land outside the 17th-century limit, thus firmly controlling development. Most of these neighbourhoods became home to the working class.
In response to overcrowding, two plans were designed at the beginning of the 20th century which were very different from anything Amsterdam had ever seen before: "Plan Zuid", designed by the architect Berlage, and "Plan West". These plans involved the development of new neighbourhoods consisting of housing blocks for all social classes.
After the Second World War, large new neighbourhoods were built in the western, southeastern, and northern parts of the city. These new neighbourhoods were built to relieve the city's shortage of living space and give people affordable houses with modern conveniences. The neighbourhoods consisted mainly of large housing blocks situated among green spaces, connected to wide roads, making the neighbourhoods easily accessible by motor car. The western suburbs which were built in that period are collectively called the Westelijke Tuinsteden. The area to the southeast of the city built during the same period is known as the Bijlmer.
Amsterdam has a rich architectural history. The oldest building in Amsterdam is the Oude Kerk (English: Old Church), at the heart of the Wallen, consecrated in 1306. The oldest wooden building is "Het Houten Huys" at the Begijnhof. It was constructed around 1425 and is one of only two existing wooden buildings. It is also one of the few examples of Gothic architecture in Amsterdam. The oldest stone building of the Netherlands, The Moriaan, was built in 's-Hertogenbosch.
In the 16th century, wooden buildings were razed and replaced with brick ones. During this period, many buildings were constructed in the architectural style of the Renaissance. Buildings of this period are easily recognised by their stepped gable façades, typical of the common Dutch Renaissance style. Amsterdam quickly developed its own Renaissance architecture. These buildings were built according to the principles of the architect Hendrick de Keyser. One of the most striking buildings designed by Hendrick de Keyser is the Westerkerk. In the 17th century, baroque architecture became very popular, as it was elsewhere in Europe. This roughly coincided with Amsterdam's Golden Age. The leading architects of this style in Amsterdam were Jacob van Campen, Philips Vingboons and Daniel Stalpaert.
Philips Vingboons designed splendid merchants' houses throughout the city. A famous building in baroque style in Amsterdam is the Royal Palace on Dam Square. Throughout the 18th century, Amsterdam was heavily influenced by French culture. This is reflected in the architecture of that period. Around 1815, architects broke with the baroque style and started building in different neo-styles. Most Gothic-style buildings date from that era and are therefore said to be built in a neo-gothic style. At the end of the 19th century, the Jugendstil or Art Nouveau style became popular, and many new buildings were constructed in this architectural style. Since Amsterdam expanded rapidly during this period, new buildings adjacent to the city centre were also built in this style. The houses in the vicinity of the Museum Square in Amsterdam Oud-Zuid are an example of Jugendstil. The last style that was popular in Amsterdam before the modern era was Art Deco. Amsterdam had its own version of the style, which was called the Amsterdamse School. Whole districts were built in this style, such as the "Rivierenbuurt". A notable feature of the façades of buildings designed in the Amsterdamse School style is that they are highly decorated and ornate, with oddly shaped windows and doors.
The old city centre is the focal point of all the architectural styles from before the end of the 19th century. Jugendstil and Georgian are mostly found outside the city centre, in the neighbourhoods built in the early 20th century, although there are also some striking examples of these styles in the city centre. Most historic buildings in the city centre and nearby are houses, such as the famous merchants' houses lining the canals.
Amsterdam has many parks, open spaces, and squares throughout the city. The Vondelpark, the largest park in the city, is located in the Oud-Zuid neighbourhood and is named after the 17th-century Amsterdam author Joost van den Vondel. Yearly, the park has around 10 million visitors. In the park are an open-air theatre, a playground and several food and drink venues. In the Zuid borough is the Beatrixpark, named after Queen Beatrix. Between Amsterdam and Amstelveen is the Amsterdamse Bos ("Amsterdam Forest"), the largest recreational area in Amsterdam. Annually, almost 4.5 million people visit the park, which has a size of 1,000 hectares and is approximately three times the size of Central Park. The Amstelpark in the Zuid borough houses the Rieker windmill, which dates to 1636. Other parks include the Sarphatipark in the De Pijp neighbourhood, the Oosterpark in the Oost borough and the Westerpark in the Westerpark neighbourhood. The city has three beaches: Nemo Beach, Citybeach "Het stenen hoofd" (Silodam) and Blijburg, all located in the Centrum borough.
The city has many open squares ("plein" in Dutch). The namesake of the city as the site of the original dam, Dam Square, is the main city square and has the Royal Palace and National Monument. Museumplein hosts various museums, including the Rijksmuseum, Van Gogh Museum, and Stedelijk Museum. Other squares include Rembrandtplein, Muntplein, Nieuwmarkt, Leidseplein, Spui, and Waterlooplein. Also, near to Amsterdam is the Nekkeveld estate conservation project.
Amsterdam is the financial and business capital of the Netherlands.
According to the 2007 European Cities Monitor (ECM), an annual location survey of Europe's leading companies carried out by the global real estate consultant Cushman & Wakefield, Amsterdam is one of the top European cities in which to locate an international business, ranking fifth behind London, Paris, Frankfurt and Barcelona.
A substantial number of large corporations and banks' headquarters are located in the Amsterdam area, including: AkzoNobel, Heineken International, ING Group, ABN AMRO, TomTom, Delta Lloyd Group, Booking.com and Philips.
Although many small offices remain along the historic canals, centrally based companies have increasingly relocated outside Amsterdam's city centre. Consequently, the Zuidas (English: South Axis) has become the new financial and legal hub of Amsterdam, home to the country's five largest law firms, a number of subsidiaries of large consulting firms such as Boston Consulting Group and Accenture, and the World Trade Centre Amsterdam. In addition to the Zuidas, there are three smaller financial districts in Amsterdam.
The adjoining municipality of Amstelveen is the location of KPMG International's global headquarters. Other non-Dutch companies have chosen to settle in communities surrounding Amsterdam since they allow freehold property ownership, whereas Amsterdam retains ground rent.
The Port of Amsterdam is the fourth largest port in Europe, the 38th largest port in the world and the second largest port in the Netherlands by metric tons of cargo. In 2014 the Port of Amsterdam had a cargo throughput of 97.4 million tons, mostly bulk cargo.
Amsterdam has the biggest cruise port in the Netherlands with more than 150 cruise ships every year.
In 2019 the new lock in IJmuiden will open; the port will then be able to grow to 125 million tonnes in capacity.
The Amsterdam Stock Exchange (AEX), now part of Euronext, is the world's oldest stock exchange and is one of Europe's largest bourses. It is near Dam Square in the city centre.
Together with Eindhoven (Brainport) and Rotterdam (Seaport), Amsterdam (Airport) forms the foundation of the Dutch economy.
Amsterdam is one of the most popular tourist destinations in Europe, receiving more than 4.63 million international visitors annually, excluding the 16 million day-trippers visiting the city every year. The number of visitors has been growing steadily over the past decade, which can be attributed to an increasing number of European visitors. Two-thirds of the hotels are located in the city's centre. Hotels with 4 or 5 stars contribute 42% of the total beds available and 41% of the overnight stays in Amsterdam. The room occupation rate was 85% in 2017, up from 78% in 2006. The majority of tourists (74%) originate from Europe. The largest group of non-European visitors come from the United States, accounting for 14% of the total. Certain years have a theme in Amsterdam to attract extra tourists. For example, the year 2006 was designated "Rembrandt 400", to celebrate the 400th birthday of Rembrandt van Rijn. Some hotels offer special arrangements or activities during these years. The average number of guests per year staying at the four campsites around the city ranges from 12,000 to 65,000.
De Wallen, also known as Walletjes or Rosse Buurt, is a designated area for legalised prostitution and is Amsterdam's largest and most well known red-light district. This neighbourhood has become a famous attraction for tourists. It consists of a network of roads and alleys containing several hundred small, one-room apartments rented by sex workers who offer their services from behind a window or glass door, typically illuminated with red lights. In recent years the city government has been closing and repurposing the famous red light district windows in an effort to clean up the area and reduce the amount of party and sex tourism.
Shops in Amsterdam range from large high-end department stores such as De Bijenkorf, founded in 1870, to small specialty shops. Amsterdam's high-end shops are found in the streets P.C. Hooftstraat and Cornelis Schuytstraat, which are located in the vicinity of the Vondelpark. One of Amsterdam's busiest high streets is the narrow, medieval Kalverstraat in the heart of the city. Other shopping areas include the "Negen Straatjes" and the Haarlemmerdijk and Haarlemmerstraat. The "Negen Straatjes" are nine narrow streets within the "Grachtengordel", the concentric canal system of Amsterdam, and differ from other shopping districts in their large diversity of privately owned shops. The Haarlemmerstraat and Haarlemmerdijk were voted the best shopping street in the Netherlands in 2011. Like the "Negen Straatjes", these streets have a large diversity of privately owned shops; but whereas the "Negen Straatjes" are dominated by fashion stores, the Haarlemmerstraat and Haarlemmerdijk offer a very wide variety of stores, with specialties including candy and other food-related stores, lingerie, sneakers, wedding clothing, interior shops, books, Italian delis, racing and mountain bikes, and skatewear.
The city also features a large number of open-air markets such as the Albert Cuyp Market, Westerstraat-markt, Ten Katemarkt, and Dappermarkt. Some of these markets are held on a daily basis, like the Albert Cuypmarkt and the Dappermarkt. Others, like the Westerstraatmarkt, are held on a weekly basis.
Several fashion brands and designers are based in Amsterdam. Fashion designers include Iris van Herpen, Mart Visser, Viktor & Rolf, Marlies Dekkers and Frans Molenaar. Fashion models like Yfke Sturm, Doutzen Kroes and Kim Noorda started their careers in Amsterdam. Amsterdam has its garment centre in the World Fashion Center. Fashion photographers Inez van Lamsweerde and Vinoodh Matadin were born in Amsterdam.
During the later part of the 16th century, Amsterdam's Rederijkerskamer (Chamber of rhetoric) organised contests between different chambers in the reading of poetry and drama. In 1637 the Schouwburg, the first theatre in Amsterdam, was built, opening on 3 January 1638. The first ballet performances in the Netherlands were given in the Schouwburg in 1642 with the "Ballet of the Five Senses". In the 18th century, French theatre became popular. While Amsterdam was under the influence of German music in the 19th century, there were few national opera productions; the Hollandse Opera of Amsterdam was built in 1888 for the specific purpose of promoting Dutch opera. In the 19th century, popular culture was centred on the Nes area in Amsterdam (mainly vaudeville and music hall). An improved metronome was invented in 1812 by Dietrich Nikolaus Winkel. The Rijksmuseum (1885) and Stedelijk Museum (1895) were built and opened. In 1888, the Concertgebouworkest orchestra was established. With the 20th century came cinema, radio and television. Though most studios are located in Hilversum and Aalsmeer, Amsterdam's influence on programming is very strong, and many people who work in the television industry live in Amsterdam. The headquarters of the Dutch SBS Broadcasting Group is also located in Amsterdam.
The most important museums of Amsterdam are located on the Museumplein (Museum Square), at the southwestern side of the Rijksmuseum. It was created in the last quarter of the 19th century on the grounds of the former World's Fair. The northeastern part of the square is bordered by the very large Rijksmuseum. In front of the Rijksmuseum on the square itself is a long, rectangular pond, which is transformed into an ice rink in winter. The northwestern part of the square is bordered by the Van Gogh Museum, Stedelijk Museum, House of Bols Cocktail & Genever Experience and Coster Diamonds. The southwestern border of the Museum Square is the Van Baerlestraat, a major thoroughfare in this part of Amsterdam. The Concertgebouw is situated across this street from the square. To the southeast of the square are a number of large houses, one of which contains the American consulate. A parking garage can be found underneath the square, as well as a supermarket. The Museumplein is covered almost entirely with a lawn, except for the northeastern part, which is covered with gravel. The current appearance of the square was realised in 1999, when it was remodelled. The square itself is the most prominent site in Amsterdam for festivals and outdoor concerts, especially in the summer. Plans were made in 2008 to remodel the square again, because many inhabitants of Amsterdam are not happy with its current appearance.
The Rijksmuseum possesses the largest and most important collection of classical Dutch art.
It opened in 1885. Its collection consists of nearly one million objects. The artist most associated with Amsterdam is Rembrandt, whose work, and the work of his pupils, is displayed in the Rijksmuseum. Rembrandt's masterpiece "The Night Watch" is one of the museum's best-known works. It also houses paintings from artists like Bartholomeus van der Helst, Johannes Vermeer, Frans Hals, Ferdinand Bol, Albert Cuyp, Jacob van Ruisdael and Paulus Potter. Aside from paintings, the collection consists of a large variety of decorative art, ranging from Delftware to giant doll-houses from the 17th century. The architect of the Gothic Revival building was P.J.H. Cuypers. The museum underwent a 10-year, 375 million euro renovation starting in 2003. The full collection was reopened to the public on 13 April 2013, and the Rijksmuseum has remained the most visited museum in Amsterdam, with 2.2 million visitors in 2016 and 2.16 million in 2017.
Van Gogh lived in Amsterdam for a short while, and there is a museum dedicated to his work. The museum is housed in one of the few modern buildings in this area of Amsterdam, designed by Gerrit Rietveld, where the permanent collection is displayed. A new building was added to the museum in 1999. Known as the performance wing and designed by the Japanese architect Kisho Kurokawa, it houses the museum's temporary exhibitions. Some of Van Gogh's most famous paintings, like "The Potato Eaters" and "Sunflowers", are in the collection. The Van Gogh Museum is the second most visited museum in Amsterdam, not far behind the Rijksmuseum, with approximately 2.1 million visitors in 2016.
Next to the Van Gogh museum stands the Stedelijk Museum, Amsterdam's most important museum of modern art. The museum is as old as the square it borders and was opened in 1895. The permanent collection consists of works of art from artists like Piet Mondriaan, Karel Appel, and Kazimir Malevich. After renovations lasting several years, the museum reopened in September 2012 with a new composite extension that has been called 'The Bathtub' due to its resemblance to one.
Amsterdam contains many other museums throughout the city. They range from small museums such as the Verzetsmuseum (Resistance Museum), the Anne Frank House, and the Rembrandt House Museum, to the very large, like the Tropenmuseum (Museum of the Tropics), Amsterdam Museum (formerly known as Amsterdam Historical Museum), Hermitage Amsterdam (a dependency of the Hermitage Museum in Saint Petersburg) and the Joods Historisch Museum (Jewish Historical Museum). The modern-styled Nemo is dedicated to child-friendly science exhibitions.
Amsterdam's musical culture includes a large collection of songs that treat the city nostalgically and lovingly. The 1949 song "Aan de Amsterdamse grachten" ("On the canals of Amsterdam") was performed and recorded by many artists, including John Kraaijkamp Sr.; the best-known version is probably that by Wim Sonneveld (1962). In the 1950s Johnny Jordaan rose to fame with "Geef mij maar Amsterdam" ("I prefer Amsterdam"), which praises the city above all others (explicitly Paris); Jordaan sang especially about his own neighbourhood, the Jordaan ("Bij ons in de Jordaan"). Colleagues and contemporaries of Johnny include Tante Leen and Manke Nelis. Another notable Amsterdam song is "Amsterdam" by Jacques Brel (1964). In a 2011 poll by the Amsterdam newspaper "Het Parool", Trio Bier's "Oude Wolf" was voted the "Amsterdams lijflied" (the city's signature song). Notable Amsterdam bands from the modern era include the Osdorp Posse and The Ex.
AFAS Live (formerly known as the Heineken Music Hall) is a concert hall located near the Johan Cruyff Arena (known as the Amsterdam Arena until 2018). Its main purpose is to serve as a podium for pop concerts for big audiences. Many famous international artists have performed there. Two other notable venues, Paradiso and the Melkweg are located near the Leidseplein. Both focus on broad programming, ranging from indie rock to hip hop, R&B, and other popular genres. Other more subcultural music venues are OCCII, OT301, De Nieuwe Anita, Winston Kingdom and Zaal 100. Jazz has a strong following in Amsterdam, with the Bimhuis being the premier venue. In 2012, Ziggo Dome was opened, also near Amsterdam Arena, a state-of-the-art indoor music arena.
AFAS Live is also host to many electronic dance music festivals, alongside many other venues. Armin van Buuren and Tiësto, two of the world's leading trance DJs, hail from the Netherlands and perform frequently in Amsterdam. Each year in October, the city hosts the Amsterdam Dance Event (ADE), one of the leading electronic music conferences and one of the biggest club festivals for electronic music in the world, attracting over 350,000 visitors each year. Another popular dance festival is 5daysoff, which takes place in the venues Paradiso and Melkweg. In summertime there are several big outdoor dance parties in or near Amsterdam, such as Awakenings, Dance Valley, Mystery Land, Loveland, A Day at the Park, Welcome to the Future, and Valtifest.
Amsterdam has a world-class symphony orchestra, the Royal Concertgebouw Orchestra. Its home is the Concertgebouw, across the Van Baerlestraat from the Museum Square, which is considered by critics to be a concert hall with some of the best acoustics in the world. The building contains three halls: the Grote Zaal, Kleine Zaal, and Spiegelzaal. Some nine hundred concerts and other events per year take place in the Concertgebouw, for a public of over 700,000, making it one of the most-visited concert halls in the world. The opera house of Amsterdam is situated adjacent to the city hall; the two buildings combined are therefore often called the Stopera (a word originally coined by protesters against its very construction: "Stop the Opera[-house]"). This huge modern complex, opened in 1986, lies in the former Jewish neighbourhood at Waterlooplein next to the river Amstel. The Stopera is the home base of Dutch National Opera, Dutch National Ballet and the Holland Symfonia. Muziekgebouw aan 't IJ is a concert hall situated on the IJ near the central station; its programme consists mostly of modern classical music. Adjacent to it is the Bimhuis, a concert hall for improvised music and jazz.
Amsterdam has three main theatre buildings.
The Stadsschouwburg at the Leidseplein is the home base of Toneelgroep Amsterdam. The current building dates from 1894. Most plays are performed in the Grote Zaal (Great Hall). The normal programme of events encompasses all sorts of theatrical forms. The Stadsschouwburg is currently being renovated and expanded. A third theatre space, to be operated jointly with the Melkweg next door, will open in late 2009 or early 2010.
The Dutch National Opera and Ballet (formerly known as "Het Muziektheater"), dating from 1986, is the principal opera house and home to Dutch National Opera and Dutch National Ballet. Royal Theatre Carré was built as a permanent circus theatre in 1887 and is currently mainly used for musicals, cabaret performances and pop concerts.
The recently re-opened DeLaMar Theater houses the more commercial plays and musicals. A new theatre also joined the Amsterdam scene in 2014: Theater Amsterdam, situated in the western part of the city on the Danzigerkade, is housed in a modern building with a panoramic view over the harbour. The theatre is the first purpose-built venue to showcase a single play, "ANNE", based on the life of Anne Frank.
On the east side of town, there is a small theatre in a converted bath house, the Badhuistheater. The theatre often has English programming.
The Netherlands has a tradition of cabaret or "kleinkunst", which combines music, storytelling, commentary, theatre and comedy. Cabaret dates back to the 1930s, and artists like Wim Kan, Wim Sonneveld and Toon Hermans were pioneers of this art form in the Netherlands. Amsterdam is home to the Kleinkunstacademie (English: Cabaret Academy). Contemporary popular artists are Youp van 't Hek, Freek de Jonge, Herman Finkers, Hans Teeuwen, Theo Maassen, Herman van Veen, Najib Amhali, Raoul Heertje, Jörgen Raymann, Brigitte Kaandorp and Comedytrain. The English-language comedy scene was established with the founding of Boom Chicago in 1993, which has its own theatre at the Leidseplein.
Amsterdam is famous for its vibrant and diverse nightlife. The city has many "cafés" (bars), ranging from large and modern to small and cosy. The typical "bruine kroeg" (brown café) has a more old-fashioned atmosphere, with dimmed lights, candles, and a somewhat older clientele; these brown cafés mostly offer a wide range of local and international artisanal beers. Most cafés have terraces in summertime, and a common sight on the Leidseplein during summer is a square full of terraces packed with people drinking beer or wine. Many restaurants can be found in Amsterdam as well. Since Amsterdam is a multicultural city, a lot of different ethnic restaurants can be found, ranging from rather luxurious and expensive to ordinary and affordable. Amsterdam also possesses many discothèques. The two main nightlife areas for tourists are the Leidseplein and the Rembrandtplein. The Paradiso, Melkweg and Sugar Factory are cultural centres which turn into discothèques on some nights. Examples of discothèques near the Rembrandtplein are the Escape, Air, John Doe and Club Abe. Also noteworthy are Panama, Hotel Arena (East), TrouwAmsterdam and Studio 80. In recent years '24-hour' clubs have opened their doors, most notably Radion, De School, Shelter and Marktkantine. The Bimhuis, located near the Central Station, with its rich programming hosting the best in the field, is considered one of the best jazz clubs in the world. The Reguliersdwarsstraat is the main street for the LGBT community and nightlife.
In 2008, there were 140 festivals and events in Amsterdam.
Famous festivals and events in Amsterdam include "Koningsdag" (King's Day; named "Koninginnedag", Queen's Day, until the crowning of King Willem-Alexander in 2013); the Holland Festival for the performing arts; the yearly Prinsengrachtconcert (a classical concert on the Prinsengracht) in August; the 'Stille Omgang' (a silent Roman Catholic evening procession held every March); Amsterdam Gay Pride; the Cannabis Cup; and the Uitmarkt. On Koningsdag, held each year on 27 April, hundreds of thousands of people travel to Amsterdam to celebrate with the city's residents. The entire city becomes overcrowded with people buying products from the "freemarket" or visiting one of the many music concerts.
The yearly Holland Festival attracts international artists and visitors from all over Europe. Amsterdam Gay Pride is a yearly local LGBT parade of boats in Amsterdam's canals, held on the first Saturday in August. The annual Uitmarkt is a three-day cultural event at the start of the cultural season in late August. It offers previews of many different artists, such as musicians and poets, who perform on podia.
Amsterdam is home to the "Eredivisie" football club AFC Ajax. Ajax's home ground, the Johan Cruyff Arena, is located in the south-east of the city, next to the Amsterdam Bijlmer ArenA railway station. Before moving to their current location in 1996, Ajax played their regular matches in the now demolished De Meer Stadion in the eastern part of the city or in the Olympic Stadium.
In 1928, Amsterdam hosted the Summer Olympics. The Olympic Stadium built for the occasion has been completely restored and is now used for cultural and sporting events, such as the Amsterdam Marathon. In 1920, Amsterdam assisted in hosting some of the sailing events for the Summer Olympics held in neighbouring Antwerp, Belgium by hosting events at Buiten IJ.
The city holds the Dam to Dam Run, a race from Amsterdam to Zaandam, as well as the Amsterdam Marathon. The ice hockey team Amstel Tijgers play in the Jaap Eden ice rink. The team competes in the Dutch ice hockey premier league. Speed skating championships have been held on the 400-metre lane of this ice rink.
Amsterdam holds two American football franchises: the Amsterdam Crusaders and the Amsterdam Panthers. The Amsterdam Pirates baseball team competes in the Dutch Major League. There are three field hockey teams: Amsterdam, Pinoké and Hurley, who play their matches around the Wagener Stadium in the nearby city of Amstelveen. The basketball team MyGuide Amsterdam competes in the Dutch premier division and play their games in the Sporthallen Zuid.
There is one rugby club in Amsterdam, which also hosts sports training classes such as the RTC (Rugby Talenten Centrum, or Rugby Talent Centre), and the city is home to the national rugby stadium.
Since 1999, the city of Amsterdam has honoured its best sportsmen and women at the Amsterdam Sports Awards. Boxer Raymond Joval and field hockey midfielder Carole Thate were the first to receive the awards, in 1999.
Amsterdam hosted the World Gymnaestrada in 1991 and will do so again in 2023.
The city of Amsterdam is a municipality under the Dutch Municipalities Act. It is governed by a directly elected municipal council, a municipal executive board and a mayor. Since 1981, the municipality of Amsterdam has gradually been divided into semi-autonomous boroughs, called "stadsdelen" or 'districts'. Over time, a total of 15 boroughs were created. In May 2010, under a major reform, the number of Amsterdam boroughs was reduced to eight: Amsterdam-Centrum covering the city centre including the canal belt, Amsterdam-Noord consisting of the neighbourhoods north of the IJ lake, Amsterdam-Oost in the east, Amsterdam-Zuid in the south, Amsterdam-West in the west, Amsterdam Nieuw-West in the far west, Amsterdam Zuidoost in the southeast, and Westpoort covering the Port of Amsterdam area.
As with all Dutch municipalities, Amsterdam is governed by a directly elected municipal council, a municipal executive board and a government-appointed mayor ("burgemeester"). The mayor is a member of the municipal executive board, but also has individual responsibilities in maintaining public order. On 27 June 2018, Femke Halsema (member of the House of Representatives for GroenLinks from 1998 to 2011) was appointed by the King's Commissioner of North Holland as the first woman to be Mayor of Amsterdam, after being nominated by the Amsterdam municipal council, and began serving her six-year term on 12 July 2018. She succeeded Eberhard van der Laan (Labour Party), who was Mayor of Amsterdam from 2010 until his death in October 2017. After the 2014 municipal council elections, a governing majority of D66, VVD and SP was formed – the first coalition without the Labour Party since World War II. Next to the mayor, the municipal executive board consists of eight "wethouders" ('alderpersons') appointed by the municipal council: four D66 alderpersons, two VVD alderpersons and two SP alderpersons.
On 18 September 2017, it was announced by Eberhard van der Laan in an open letter to Amsterdam citizens that Kajsa Ollongren would take up his office as acting Mayor of Amsterdam with immediate effect due to ill health. Ollongren was succeeded as acting Mayor by Eric van der Burg on 26 October 2017 and by Jozias van Aartsen on 4 December 2017.
Unlike most other Dutch municipalities, Amsterdam is subdivided into eight boroughs, called "stadsdelen" or 'districts', a system that was implemented gradually in the 1980s to improve local governance. The boroughs are responsible for many activities that had previously been run by the central city. By 2010, the number of Amsterdam boroughs had reached fifteen. Fourteen of those had their own district council ("deelraad"), elected by popular vote. The fifteenth, Westpoort, covers the harbour of Amsterdam and had very few residents; it was therefore governed by the central municipal council.
Under the borough system, municipal decisions are made at borough level, except for those affairs pertaining to the whole city such as major infrastructure projects, which are the jurisdiction of the central municipal authorities. In 2010, the borough system was restructured, in which many smaller boroughs merged into larger boroughs. In 2014, under a reform of the Dutch Municipalities Act, the Amsterdam boroughs lost much of their autonomous status, as their district councils were abolished.
The municipal council of Amsterdam voted to maintain the borough system by replacing the district councils with smaller, but still directly elected district committees ("bestuurscommissies"). Under a municipal ordinance, the new district committees were granted responsibilities through delegation of regulatory and executive powers by the central municipal council.
"Amsterdam" is usually understood to refer to the municipality of Amsterdam. Colloquially, some areas within the municipality, such as the town of Durgerdam, may not be considered part of Amsterdam.
Statistics Netherlands uses three other definitions of Amsterdam: metropolitan agglomeration Amsterdam ("Grootstedelijke Agglomeratie Amsterdam", not to be confused with "Grootstedelijk Gebied Amsterdam", a synonym of "Groot Amsterdam"), Greater Amsterdam ("Groot Amsterdam", a COROP region) and the urban region Amsterdam ("Stadsgewest Amsterdam"). The Amsterdam Department for Research and Statistics uses a fourth conurbation, namely the "Stadsregio Amsterdam" ('City Region of Amsterdam'). The city region is similar to Greater Amsterdam but includes the municipalities of Zaanstad and Wormerland. It excludes Graft-De Rijp.
The smallest of these areas is the municipality of Amsterdam, with a population of 802,938 in 2013. The conurbation had a population of 1,096,042 in 2013; in addition to the municipality of Amsterdam, it includes only the municipalities of Zaanstad, Wormerland, Oostzaan, Diemen and Amstelveen. Greater Amsterdam includes 15 municipalities and had a population of 1,293,208 in 2013. Though much larger in area, Greater Amsterdam has only a slightly larger population, because the definition excludes the relatively populous municipality of Zaanstad. The largest area by population, the Amsterdam Metropolitan Area (Dutch: "Metropoolregio Amsterdam"), has a population of 2.33 million. It includes, for instance, Zaanstad, Wormerland, Muiden, Abcoude, Haarlem, Almere and Lelystad, but excludes Graft-De Rijp. Amsterdam is part of the conglomerate metropolitan area Randstad, with a total population of 6,659,300 inhabitants.
Of these various metropolitan area configurations, only the "Stadsregio Amsterdam" (City Region of Amsterdam) has a formal governmental status. Its responsibilities include regional spatial planning and the metropolitan public transport concessions.
Under the Dutch Constitution, Amsterdam is the capital of the Netherlands. Since the 1983 constitutional revision, the constitution mentions "Amsterdam" and "capital" in chapter 2, article 32: The king's confirmation by oath and his coronation take place in "the capital Amsterdam" (""de hoofdstad Amsterdam""). Previous versions of the constitution only mentioned "the city of Amsterdam" (""de stad Amsterdam""). For a royal investiture, therefore, the States General of the Netherlands (the Dutch Parliament) meets for a ceremonial joint session in Amsterdam. The ceremony traditionally takes place at the Nieuwe Kerk on Dam Square, immediately after the former monarch has signed the act of abdication at the nearby Royal Palace of Amsterdam. Normally, however, the Parliament sits in The Hague, the city which has historically been the seat of the Dutch government, the Dutch monarchy, and the Dutch supreme court. Foreign embassies are also located in The Hague.
The coat of arms of Amsterdam is composed of several historical elements. First and centre are three St Andrew's crosses, aligned in a vertical band on the city's shield (although Amsterdam's patron saint was Saint Nicholas). These St Andrew's crosses can also be found on the city shields of the neighbouring Amstelveen and Ouder-Amstel. This part of the coat of arms is the basis of the flag of Amsterdam, flown by the city government, but also as civil ensign for ships registered in Amsterdam. Second is the Imperial Crown of Austria. In 1489, out of gratitude for services and loans, Maximilian I awarded Amsterdam the right to adorn its coat of arms with the king's crown. Then, in 1508, this was replaced with Maximilian's imperial crown when he was crowned Holy Roman Emperor. In the early years of the 17th century, Maximilian's crown in Amsterdam's coat of arms was again replaced, this time with the crown of Emperor Rudolph II, a crown that became the Imperial Crown of Austria. The lions date from the late 16th century, when city and province became part of the Republic of the Seven United Netherlands. Last came the city's official motto: "Heldhaftig, Vastberaden, Barmhartig" ("Heroic, Determined, Merciful"), bestowed on the city in 1947 by Queen Wilhelmina, in recognition of the city's bravery during the Second World War.
Currently, there are sixteen tram routes and five metro routes. All are operated by municipal public transport operator Gemeentelijk Vervoerbedrijf (GVB), which also runs the city bus network.
Four fare-free GVB ferries carry pedestrians and cyclists across the IJ lake to the borough of Amsterdam-Noord, and two fare-charging ferries run east and west along the harbour. There are also privately operated water taxis, a water bus, a boat-sharing operation, electric rental boats and canal cruises that transport people along Amsterdam's waterways.
Regional buses, and some suburban buses, are operated by Connexxion and EBS. International coach services are provided by Eurolines from Amsterdam Amstel railway station, IDBUS from Amsterdam Sloterdijk railway station, and Megabus from the Zuiderzeeweg in the east of the city.
In order to facilitate easier transport to the centre of Amsterdam, the city has various P+R locations where people can park their car at an affordable price and transfer to one of the numerous public transport lines.
Amsterdam was intended in 1932 to be the hub, a kind of Kilometre Zero, of the highway system of the Netherlands, with freeways numbered One to Eight planned to originate from the city. The outbreak of the Second World War and shifting priorities led to the current situation, where only roads A1, A2, and A4 originate from Amsterdam according to the original plan. The A3 to Rotterdam was cancelled in 1970 in order to conserve the Groene Hart. Road A8, leading north to Zaandam and the A10 Ringroad were opened between 1968 and 1974. Besides the A1, A2, A4 and A8, several freeways, such as the A7 and A6, carry traffic mainly bound for Amsterdam.
The A10 ringroad surrounding the city connects Amsterdam with the Dutch national network of freeways. Interchanges on the A10 allow cars to enter the city by transferring to one of the 18 "city roads", numbered S101 through to S118. These city roads are regional roads without grade separation, and sometimes without a central reservation. Most are accessible by cyclists. The S100 "Centrumring" is a smaller ringroad circumnavigating the city's centre.
In the city centre, driving a car is discouraged. Parking fees are expensive, and many streets are closed to cars or are one-way. The local government sponsors carsharing and carpooling initiatives such as "Autodelen" and "Meerijden.nu". The local government has also started removing parking spaces in the city, with the goal of removing 10,000 spaces (roughly 1,500 per year) by 2025.
Amsterdam is served by ten stations of the Nederlandse Spoorwegen (Dutch Railways). Five are intercity stops: Sloterdijk, Zuid, Amstel, Bijlmer ArenA and Amsterdam Centraal. The stations for local services are: Lelylaan, RAI, Holendrecht, Muiderpoort and Science Park. Amsterdam Centraal is also an international railway station. From the station there are regular services to destinations such as Austria, Belarus, Belgium, Czechia, Denmark, France, Germany, Hungary, Poland, Russia, Switzerland and the United Kingdom. Among these trains are international trains of the Nederlandse Spoorwegen (Amsterdam-Berlin), the Eurostar (Amsterdam-Brussels-London), Thalys (Amsterdam-Brussels-Paris/Lille), and Intercity-Express (Amsterdam–Cologne–Frankfurt).
Amsterdam Airport Schiphol is less than 20 minutes by train from Amsterdam Centraal station and is served by domestic and international intercity trains, such as Thalys, Eurostar and Intercity Brussel. Schiphol is the largest airport in the Netherlands, the third largest in Europe, and the 14th-largest in the world in terms of passengers. It handles over 68 million passengers per year and is the home base of four airlines: KLM, Transavia, Martinair and Arkefly. Schiphol has ranked as the fifth busiest airport in the world measured by international passenger numbers. The airport lies 4 metres below sea level. Although Schiphol is internationally known as Amsterdam Schiphol Airport, it actually lies in the neighbouring municipality of Haarlemmermeer, southwest of the city.
Amsterdam is one of the most bicycle-friendly large cities in the world and is a centre of bicycle culture, with good facilities for cyclists such as bike paths, bike racks, and several guarded bicycle storage garages ("fietsenstalling").
According to the most recent figures published by the Central Bureau of Statistics (CBS), in 2015 the 442,693 households (850,000 residents) in Amsterdam together owned 847,000 bicycles, or 1.91 bicycles per household. Previously, wildly different figures were arrived at using a wisdom-of-the-crowd approach. Theft is widespread; in 2011, about 83,000 bicycles were stolen in Amsterdam. Bicycles are used by all socio-economic groups because of their convenience, Amsterdam's small size, the extensive network of bike paths, the flat terrain, and the inconvenience of driving an automobile.
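As a quick sanity check (not part of the CBS source, just the figures quoted above), the bicycles-per-household ratio can be reproduced directly:

```python
# Figures quoted in the text above (CBS, Amsterdam, 2015).
households = 442_693
bicycles = 847_000

# Ratio of bicycles to households, as reported.
bikes_per_household = bicycles / households
print(f"{bikes_per_household:.2f} bicycles per household")  # prints "1.91 bicycles per household"
```

The computed ratio (about 1.913) rounds to the 1.91 quoted in the article.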
Amsterdam has two universities: the University of Amsterdam ("Universiteit van Amsterdam", UvA), and the "Vrije Universiteit Amsterdam" (VU). Other institutions for higher education include an art school – Gerrit Rietveld Academie, a university of applied sciences – the Hogeschool van Amsterdam, and the Amsterdamse Hogeschool voor de Kunsten. Amsterdam's International Institute of Social History is one of the world's largest documentary and research institutions concerning social history, and especially the history of the labour movement. Amsterdam's Hortus Botanicus, founded in the early 17th century, is one of the oldest botanical gardens in the world, with many old and rare specimens, among them the coffee plant that served as the parent for the entire coffee culture in Central and South America.
There are over 200 primary schools in Amsterdam. Some of these base their teaching on particular pedagogic theories, like the various Montessori schools. The biggest Montessori high school in Amsterdam is the Montessori Lyceum Amsterdam. Many schools, however, are religiously affiliated. This used to be primarily Roman Catholicism and various Protestant denominations, but with the influx of Muslim immigrants there has been a rise in the number of Islamic schools. Jewish schools can be found in the southern suburbs of Amsterdam.
Amsterdam is noted for having five independent grammar schools (Dutch: gymnasia) – the Vossius Gymnasium, Barlaeus Gymnasium, St. Ignatius Gymnasium, Het 4e Gymnasium and the Cygnus Gymnasium – where a classical curriculum including Latin and classical Greek is taught. Though until recently many believed the gymnasium to be an anachronistic and elitist concept that would soon die out, the gymnasia have experienced a revival, leading to the formation of the fourth and fifth grammar schools, in which the three older schools participate. Most secondary schools in Amsterdam offer a variety of different levels of education in the same school. The city also has various colleges, ranging from art and design to politics and economics, most of which are also open to students from other countries.
Schools for foreign nationals in Amsterdam include the Amsterdam International Community School, British School of Amsterdam, Albert Einstein International School Amsterdam, Lycée Vincent van Gogh La Haye-Amsterdam primary campus (French school), International School of Amsterdam, and the Japanese School of Amsterdam.
Amsterdam is a prominent centre for national and international media. Some locally based newspapers include "Het Parool", a national daily paper; "De Telegraaf", the largest Dutch daily newspaper; the daily newspapers "Trouw", "de Volkskrant" and "NRC Handelsblad"; "De Groene Amsterdammer", a weekly newspaper; the free newspapers "Metro" and "The Holland Times" (printed in English).
Amsterdam is home to the second-largest Dutch commercial TV group SBS Broadcasting Group, consisting of TV-stations SBS 6, Net 5 and Veronica. However, Amsterdam is not considered 'the media city of the Netherlands'. The town of Hilversum, south-east of Amsterdam, has been crowned with this unofficial title. Hilversum is the principal centre for radio and television broadcasting in the Netherlands. Radio Netherlands, heard worldwide via shortwave radio since the 1920s, is also based there. Hilversum is home to an extensive complex of audio and television studios belonging to the national broadcast production company NOS, as well as to the studios and offices of all the Dutch public broadcasting organisations and many commercial TV production companies.
In 2012, the music video of Far East Movement, 'Live My Life', was filmed in various parts of Amsterdam.
Several films have also been shot in Amsterdam, such as the James Bond film Diamonds Are Forever, Ocean's Twelve, Girl with a Pearl Earring and The Hitman's Bodyguard. Amsterdam is also featured in John Green's book "The Fault in Our Stars", which has been adapted into a film partly set in Amsterdam.
The housing market is heavily regulated. The increased influx of migrants, especially since the start of the Syrian Civil War in 2011, has been burdensome, economically and culturally, but the government handles housing cases for citizens and migrants equally. According to the Organisation for Economic Co-operation and Development (OECD), "60% of housing stock is controlled by housing corporations. No different treatment for migrant groups".
From the late 1960s onwards, many buildings in Amsterdam have been squatted, both for housing and for use as social centres. A number of these squats have been legalised and become well known, such as OCCII, OT301, Paradiso and Vrankrijk.
Audi
Audi AG () is a German automobile manufacturer that designs, engineers, produces, markets and distributes luxury vehicles. Audi is a member of the Volkswagen Group and has its roots at Ingolstadt, Bavaria, Germany. Audi-branded vehicles are produced in nine production facilities worldwide.
The origins of the company are complex, going back to the early 20th century and the initial enterprises (Horch and the "Audiwerke") founded by engineer August Horch, and to two other manufacturers (DKW and Wanderer), leading to the foundation of Auto Union in 1932. The modern era of Audi essentially began in the 1960s, when Auto Union was acquired by Volkswagen from Daimler-Benz. After relaunching the Audi brand with the 1965 introduction of the Audi F103 series, Volkswagen merged Auto Union with NSU Motorenwerke in 1969, thus creating the present-day form of the company.
The company name is based on the Latin translation of the surname of the founder, August Horch. "Horch", meaning "listen" in German, becomes "audi" in Latin. The four rings of the Audi logo each represent one of four car companies that banded together to create Audi's predecessor company, Auto Union. Audi's slogan is "Vorsprung durch Technik", meaning "Being Ahead through Technology". In the United States, Audi used the slogan "Truth in Engineering" from 2007 to 2016. Audi, along with fellow German marques BMW and Mercedes-Benz, is among the best-selling luxury automobile brands in the world.
The automobile company Wanderer was originally established in 1885, later becoming a branch of Audi AG. Another company, NSU, which also later merged into Audi, was founded around the same time and went on to supply the chassis for Gottlieb Daimler's four-wheeler.
On 14 November 1899, August Horch (1868–1951) established the company A. Horch & Cie. in the Ehrenfeld district of Cologne. In 1902, he moved with his company to Reichenbach im Vogtland. On 10 May 1904, he founded the August Horch & Cie. Motorwagenwerke AG, a joint-stock company in Zwickau (State of Saxony).
After a dispute with the Horch chief financial officer, August Horch left Motorwagenwerke and, on 16 July 1909, founded his second company in Zwickau, the August Horch Automobilwerke GmbH. His former partners sued him for trademark infringement, and the German Reichsgericht (Supreme Court) in Leipzig eventually determined that the Horch brand belonged to his former company.
Since August Horch was prohibited from using "Horch" as a trade name in his new car business, he called a meeting with close business friends, Paul and Franz Fikentscher from Zwickau. At the apartment of Franz Fikentscher, they discussed how to come up with a new name for the company. During this meeting, Franz's son was quietly studying Latin in a corner of the room. Several times he looked like he was on the verge of saying something but would just swallow his words and continue working, until he finally blurted out, "Father – "audiatur et altera pars"... wouldn't it be a good idea to call it "audi" instead of "horch"?" "Horch!" in German means "Hark!" or "Listen!"; its Latin equivalent is "audi", the singular imperative of "audire", "to listen". The idea was enthusiastically accepted by everyone attending the meeting. On 25 April 1910, the Audi Automobilwerke GmbH Zwickau (from 1915 on, Audiwerke AG Zwickau) was entered in the company register of the Zwickau registration court.
The first Audi automobile, the Audi Type A 10/ Sport-Phaeton, was produced in the same year, followed by the successor Type B 10/28 PS.
Audi started with a 2,612 cc inline-four engine model Type A, followed by a 3,564 cc model, as well as 4,680 cc and 5,720 cc models. These cars were successful even in sporting events. The first six-cylinder model Type M, 4,655 cc appeared in 1924.
August Horch left the "Audiwerke" in 1920 for a senior position at the ministry of transport, but he remained involved with Audi as a member of the board of trustees. In September 1921, Audi became the first German car manufacturer to present a production car with left-hand drive, the Audi Type K. Left-hand drive spread and established dominance during the 1920s because it provided a better view of oncoming traffic, making overtaking safer when driving on the right.
In August 1928, Jørgen Rasmussen, the owner of Dampf-Kraft-Wagen (DKW), acquired the majority of shares in Audiwerke AG. In the same year, Rasmussen bought the remains of the U.S. automobile manufacturer Rickenbacker, including the manufacturing equipment for 8-cylinder engines. These engines were used in "Audi Zwickau" and "Audi Dresden" models that were launched in 1929. At the same time, 6-cylinder and 4-cylinder (the "four" with a Peugeot engine) models were manufactured. Audi cars of that era were luxurious cars equipped with special bodywork.
In 1932, Audi merged with Horch, DKW and Wanderer to form Auto Union AG, Chemnitz. It was during this period that the company offered the Audi Front, the first European car to combine a six-cylinder engine with front-wheel drive. It used a powertrain shared with the Wanderer, but turned 180 degrees, so that the drive shaft faced the front.
Before World War II, Auto Union used the four interlinked rings that make up the Audi badge today, representing these four brands. However, this badge was used only on Auto Union racing cars in that period, while the member companies used their own names and emblems. Technological development became increasingly concentrated, and some Audi models were propelled by Horch- or Wanderer-built engines.
Reflecting the economic pressures of the time, Auto Union concentrated increasingly on smaller cars through the 1930s, so that by 1938 the company's DKW brand accounted for 17.9% of the German car market, while Audi held only 0.1%. After the final few Audis were delivered in 1939 the "Audi" name disappeared completely from the new car market for more than two decades.
Like most German manufacturers, Auto Union retooled its plants for military production at the onset of World War II; as a result they were a target for Allied bombing, which left them damaged.
The factories were overrun by the Soviet Army in 1945 and dismantled as part of war reparations on the orders of the Soviet military administration. Following this, the company's entire assets were expropriated without compensation. On 17 August 1948, Auto Union AG of Chemnitz was deleted from the commercial register. These actions had the effect of liquidating Germany's Auto Union AG. The remains of the Audi plant of Zwickau became the VEB (for "People Owned Enterprise") Automobilwerk Zwickau, or AWZ (in English: Automobile Works Zwickau).
With no prospect of continuing production in Soviet-controlled East Germany, Auto Union executives began the process of relocating what was left of the company to West Germany. A site was chosen in Ingolstadt, Bavaria, to start a spare parts operation in late 1945, which would eventually serve as the headquarters of the reformed Auto Union in 1949.
The former Audi factory in Zwickau restarted assembly of the pre-war models in 1949. These DKW models were renamed IFA F8 and IFA F9 and were similar to the West German versions. Both West and East German models were equipped with the traditional and renowned DKW two-stroke engines. The Zwickau plant manufactured the infamous Trabant until 1991, when it came under Volkswagen control, effectively bringing it under the same umbrella as Audi for the first time since 1945.
A new West German-headquartered Auto Union was launched in Ingolstadt with loans from the Bavarian state government and Marshall Plan aid. The reformed company was launched on 3 September 1949 and continued DKW's tradition of producing front-wheel-drive vehicles with two-stroke engines, including a small but sturdy 125 cc motorcycle and a DKW delivery van, the DKW F89 L, at Ingolstadt. The Ingolstadt site was large, consisting of an extensive complex of formerly military buildings suitable for administration as well as vehicle warehousing and distribution, but at this stage there was no dedicated plant at Ingolstadt suitable for mass production of automobiles: to manufacture the company's first post-war mass-market passenger car, plant capacity in Düsseldorf was rented from Rheinmetall-Borsig. It was only ten years later, after the company had attracted an investor, that funds became available for the construction of a major car plant at the Ingolstadt head office site.
In 1958, in response to pressure from Friedrich Flick, then the company's largest single shareholder, Daimler-Benz took an 87% holding in the Auto Union company, increased to a 100% holding in 1959. However, small two-stroke cars were not the focus of Daimler-Benz's interests, and while the early 1960s saw major investment in new Mercedes models and in a state-of-the-art factory for Auto Union, the company's ageing model range did not benefit from the economic boom of the early 1960s to the same extent as competitors such as Volkswagen and Opel. The decision to dispose of the Auto Union business was based on its lack of profitability. Ironically, by the time Daimler-Benz sold the business, it also included a large new factory and a near-production-ready modern four-stroke engine, which would enable the Auto Union business, under a new owner, to embark on a period of profitable growth, producing not Auto Unions or DKWs but cars under the "Audi" name, resurrected in 1965 after a 25-year gap.
In 1964, Volkswagen acquired a 50% holding in the business, which included the new factory in Ingolstadt, the DKW and Audi brands, and the rights to the new engine design that had been funded by Daimler-Benz; in return, Daimler-Benz retained the dormant Horch trademark and the Düsseldorf factory, which became a Mercedes-Benz van assembly plant. Eighteen months later, Volkswagen bought complete control of Ingolstadt, and by 1966 was using the spare capacity of the Ingolstadt plant to assemble an additional 60,000 Volkswagen Beetles per year. Two-stroke engines became less popular during the 1960s as customers were more attracted to the smoother four-stroke engines. In September 1965, the DKW F102 was fitted with a four-stroke engine and a facelift for the car's front and rear. Volkswagen dropped the DKW brand because of its associations with two-stroke technology and, having classified the model internally as the F103, sold it simply as the "Audi". Later developments of the model were named after their horsepower ratings and sold as the Audi 60, 75, 80 and Super 90, selling until 1972. Initially, Volkswagen was hostile to the idea of Auto Union as a standalone entity producing its own models, having acquired the company merely to boost its own production capacity through the Ingolstadt assembly plant, to the point where Volkswagen executives ordered that the Auto Union name and flags bearing the four rings be removed from the factory buildings. Then-VW chief Heinz Nordhoff explicitly forbade Auto Union from any further product development. Fearing that Volkswagen had no long-term ambition for the Audi brand, Auto Union engineers under the leadership of Ludwig Kraus developed the first Audi 100 in secret, without Nordhoff's knowledge. When presented with a finished prototype, Nordhoff was so impressed he authorised the car for production; launched in 1968, it went on to be a huge success.
With this, the resurrection of the Audi brand was now complete, this being followed by the first generation Audi 80 in 1972, which would in turn provide a template for VW's new front-wheel-drive water-cooled range which debuted from the mid-1970s onward.
In 1969, Auto Union merged with NSU, based in Neckarsulm, near Stuttgart. In the 1950s, NSU had been the world's largest manufacturer of motorcycles, but had moved on to produce small cars like the NSU Prinz, the TT and TTS versions of which are still popular as vintage race cars. NSU then focused on new rotary engines based on the ideas of Felix Wankel. In 1967, the new NSU Ro 80 was a car well ahead of its time in technical details such as aerodynamics, light weight, and safety. However, teething problems with the rotary engines put an end to the independence of NSU. The Neckarsulm plant is now used to produce the larger Audi models A6 and A8. The Neckarsulm factory is also home of the "quattro GmbH" (from November 2016 "Audi Sport GmbH"), a subsidiary responsible for development and production of Audi high-performance models: the R8 and the RS model range.
The new merged company was incorporated on 1 January 1969 and was known as Audi NSU Auto Union AG, with its headquarters at NSU's Neckarsulm plant, and saw the emergence of Audi as a separate brand for the first time since the pre-war era. Volkswagen introduced the Audi brand to the United States for the 1970 model year. That same year, the mid-sized car that NSU had been working on, the K70, originally intended to slot between the rear-engined Prinz models and the futuristic NSU Ro 80, was instead launched as a Volkswagen.
After the launch of the Audi 100 of 1968, the Audi 80/Fox (which formed the basis for the 1973 Volkswagen Passat) followed in 1972 and the Audi 50 (later rebadged as the Volkswagen Polo) in 1974. The Audi 50 was a seminal design because it was the first incarnation of the Golf/Polo concept, one that led to a hugely successful world car. Ultimately, the Audi 80 and 100 (progenitors of the A4 and A6, respectively) became the company's biggest sellers, whilst little investment was made in the fading NSU range; the Prinz models were dropped in 1973 whilst the fatally flawed NSU Ro80 went out of production in 1977, spelling the effective end of the NSU brand. Production of the Audi 100 had been steadily moved from Ingolstadt to Neckarsulm as the 1970s had progressed, and by the appearance of the second generation C2 version in 1976, all production was now at the former NSU plant. Neckarsulm from that point onward would produce Audi's higher end models.
The Audi image at this time was a conservative one, and so, a proposal from chassis engineer Jörg Bensinger was accepted to develop the four-wheel drive technology in Volkswagen's Iltis military vehicle for an Audi performance car and rally racing car. The performance car, introduced in 1980, was named the "Audi Quattro", a turbocharged coupé which was also the first German large-scale production vehicle to feature permanent all-wheel drive through a centre differential. Commonly referred to as the "Ur-Quattro" (the "Ur-" prefix is a German augmentative used, in this case, to mean "original" and is also applied to the first generation of Audi's S4 and S6 Sport Saloons, as in "UrS4" and "UrS6"), few of these vehicles were produced (all hand-built by a single team), but the model was a great success in rallying. Prominent wins proved the viability of all-wheel-drive racecars, and the Audi name became associated with advances in automotive technology.
In 1985, with the Auto Union and NSU brands effectively dead, the company's official name was shortened to simply Audi AG. At the same time, the company's headquarters moved back to Ingolstadt, and two new wholly owned subsidiaries, "Auto Union GmbH" and "NSU GmbH", were formed to own and manage the historical trademarks and intellectual property of the original constituent companies (the exception being Horch, which had been retained by Daimler-Benz after the VW takeover) and to operate Audi's heritage operations.
In 1986, as the Passat-based Audi 80 was beginning to develop a "grandfather's car" image, the completely new "type 89" was introduced and sold extremely well. However, its modern and dynamic exterior belied the low performance of its base engine, and its base package was quite spartan (even the passenger-side mirror was an option). In 1987, Audi put forward a new and very elegant Audi 90, which had a much superior set of standard features. In the early 1990s, sales began to slump for the Audi 80 series, and some basic construction problems started to surface.
In the early part of the 21st century, Audi set out on German racetracks to claim and maintain several world records, such as top-speed endurance records. This effort was in line with the company's heritage from the 1930s racing-era Silver Arrows.
Through the early 1990s, Audi began to shift its target market upscale to compete against German automakers Mercedes-Benz and BMW. This began with the release of the Audi V8 in 1990. It was essentially a new engine fitted to the Audi 100/200, but with noticeable bodywork differences. Most obvious was the new grille that was now incorporated in the bonnet.
By 1991, Audi had the four-cylinder Audi 80, the 5-cylinder Audi 90 and Audi 100, the turbocharged Audi 200 and the Audi V8. There was also a coupé version of the 80/90 with both four- and five-cylinder engines.
Although the five-cylinder engine was a successful and robust powerplant, it was still a little too different for the target market. With the introduction of an all-new Audi 100 in 1992, Audi introduced a 2.8L V6 engine. This engine was also fitted to a face-lifted Audi 80 (all 80 and 90 models were now badged 80 except for the USA), giving this model a choice of four-, five-, and six-cylinder engines, in saloon, coupé and convertible body styles.
The five-cylinder was soon dropped as a major engine choice; however, a turbocharged version remained. The engine, initially fitted to the 200 quattro 20V of 1991, was a derivative of the engine fitted to the Sport Quattro. It was fitted to the Audi Coupé, named the S2, and also to the Audi 100 body, and named the S4. These two models were the beginning of the mass-produced S series of performance cars.
Sales in the United States fell after a series of recalls from 1982 to 1987 of Audi 5000 models associated with reported incidents of sudden unintended acceleration linked to six deaths and 700 accidents. At the time, NHTSA was investigating 50 car models from 20 manufacturers for sudden surges of power.
A "60 Minutes" report aired 23 November 1986, featuring interviews with six people who had sued Audi after reporting unintended acceleration, showing an Audi 5000 ostensibly suffering a problem when the brake pedal was pushed. Subsequent investigation revealed that "60 Minutes" had engineered the failure – fitting a canister of compressed air on the passenger-side floor, linked via a hose to a hole drilled into the transmission.
Audi contended, prior to findings by outside investigators, that the problems were caused by driver error, specifically pedal misapplication. Subsequently, the National Highway Traffic Safety Administration (NHTSA) concluded that the majority of unintended acceleration cases, including all the ones that prompted the "60 Minutes" report, were caused by driver error such as confusion of pedals. CBS did not acknowledge the test results of involved government agencies, but did acknowledge the similar results of another study.
In a review study published in 2012, NHTSA summarized its past findings about the Audi unintended acceleration problems: "Once an unintended acceleration had begun, in the Audi 5000, due to a failure in the idle-stabilizer system (producing an initial acceleration of 0.3g), pedal misapplication resulting from panic, confusion, or unfamiliarity with the Audi 5000 contributed to the severity of the incident."
This summary is consistent with the conclusions of NHTSA's most technical analysis at the time: "Audi idle-stabilization systems were prone to defects which resulted in excessive idle speeds and brief unanticipated accelerations of up to 0.3g [which is similar in magnitude to an emergency stop in a subway car]. These accelerations could not be the sole cause of [(long-duration) sudden acceleration incidents (SAI)], but might have triggered some SAIs by startling the driver. The defective idle-stabilization system performed a type of electronic throttle control. Significantly, multiple intermittent malfunctions of the electronic control unit were observed and recorded ... and [were also observed and] reported by Transport Canada."
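To put the 0.3g figure in everyday units (a rough illustration, not part of the NHTSA report; standard gravity of 9.81 m/s² is assumed):

```python
# Convert the 0.3 g acceleration quoted in the NHTSA analysis
# into SI units, and estimate how long a car would take to gain
# 10 km/h at that rate.
G = 9.81  # m/s^2, standard gravity (assumed value)

accel = 0.3 * G  # m/s^2
print(f"0.3 g is about {accel:.2f} m/s^2")

delta_v = 10 / 3.6  # 10 km/h expressed in m/s
print(f"roughly {delta_v / accel:.1f} s to gain 10 km/h")
```

At about 2.9 m/s², the car would gain 10 km/h in roughly a second, which is consistent with the report's description of a brief but startling surge rather than a sustained runaway.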
With a series of recall campaigns, Audi made several modifications; the first adjusted the distance between the brake and accelerator pedal on automatic-transmission models. Later repairs, of 250,000 cars dating back to 1978, added a device requiring the driver to press the brake pedal before shifting out of park. A legacy of the Audi 5000 and other reported cases of sudden unintended acceleration are intricate gear stick patterns and brake interlock mechanisms to prevent inadvertent shifting into forward or reverse. It is unclear how the defects in the idle-stabilization system were addressed.
Audi's U.S. sales, which had reached 74,061 in 1985, dropped to 12,283 in 1991 and remained level for three years, with resale values falling dramatically. Audi subsequently offered increased warranty protection and renamed the affected models, the "5000" becoming the "100" and "200" in 1989, and reached the same sales levels again only by model year 2000.
A 2010 "BusinessWeek" article – outlining possible parallels between Audi's experience and 2009–2010 Toyota vehicle recalls – noted a class-action lawsuit filed in 1987 by about 7,500 Audi 5000-model owners remains unsettled and remains contested in Chicago's Cook County after appeals at the Illinois state and U.S. federal levels.
In the mid-to-late 1990s, Audi introduced new technologies, including the use of aluminium construction. Produced from 1999 to 2005, the Audi A2 was a futuristic supermini, born from the Al2 concept, with many features that helped regain consumer confidence, such as the aluminium space frame, a first in production car design. In the A2, Audi further expanded its TDI technology through the use of frugal three-cylinder engines. The A2 was extremely aerodynamic and was developed with extensive wind-tunnel testing. It was criticised for its high price and was never really a sales success, but it established Audi as a cutting-edge manufacturer. The model, a Mercedes-Benz A-Class competitor, sold relatively well in Europe. However, the A2 was discontinued in 2005 and Audi decided not to develop an immediate replacement.
The next major model change came in 1995 when the Audi A4 replaced the Audi 80. The new nomenclature scheme was applied to the Audi 100 to become the Audi A6 (with a minor facelift). This also meant the S4 became the S6 and a new S4 was introduced in the A4 body. The S2 was discontinued. The Audi Cabriolet continued on (based on the Audi 80 platform) until 1999, gaining the engine upgrades along the way. A new A3 hatchback model (sharing the Volkswagen Golf Mk4's platform) was introduced to the range in 1996, and the radical Audi TT coupé and roadster were debuted in 1998 based on the same underpinnings.
The engines available throughout the range were now a 1.4 L, 1.6 L and 1.8 L four-cylinder, 1.8 L four-cylinder turbo, 2.6 L and 2.8 L V6, 2.2 L turbo-charged five-cylinder and the 4.2 L V8 engine. The V6s were replaced by new 2.4 L and 2.8 L 30V V6s in 1998, with marked improvement in power, torque and smoothness. Further engines were added along the way, including a 3.7 L V8 and 6.0 L W12 engine for the A8.
Audi's sales grew strongly in the 2000s, with deliveries to customers increasing from 653,000 in 2000 to 1,003,000 in 2008. The largest sales increases came from Eastern Europe (+19.3%), Africa (+17.2%) and the Middle East (+58.5%). China in particular has become a key market, representing 108,000 out of 705,000 cars delivered in the first three quarters of 2009. One factor for its popularity in China is that Audis have become the car of choice for purchase by the Chinese government for officials, and purchases by the government are responsible for 20% of its sales in China. As of late 2009, Audi's operating profit of €1.17 billion ($1.85 billion) made it the biggest contributor to parent Volkswagen Group's nine-month operating profit of €1.5 billion, while the other marques in Group such as Bentley and SEAT had suffered considerable losses. May 2011 saw record sales for Audi of America with the new Audi A7 and Audi A3 TDI Clean Diesel. In May 2012, Audi reported a 10% increase in its sales—from 408 units to 480 in the last year alone.
Audi manufactures vehicles in seven plants around the world, some of which are shared with other VW Group marques although many sub-assemblies such as engines and transmissions are manufactured within other Volkswagen Group plants.
Audi's two principal assembly plants are:
Outside of Germany, Audi produces vehicles at:
In September 2012, Audi announced the construction of its first North American manufacturing plant in Puebla, Mexico. This plant became operative in 2016 and produces the second generation Q5.
From 2002 up to 2003, Audi headed the Audi Brand Group, a subdivision of the Volkswagen Group's Automotive Division consisting of Audi, Lamborghini and SEAT, that was focused on sporty values, with the marques' product vehicles and performance being under the higher responsibility of the Audi brand.
In January 2014, Audi, along with the Wireless Power Consortium, operated a booth at the Consumer Electronics Show (CES) demonstrating a phone compartment using the Qi open interface standard. In May, most Audi dealers in the UK falsely claimed that the Audi A7, A8 and R8 had been Euro NCAP safety tested, each achieving five out of five stars. In fact, none had been tested.
In 2015, Audi admitted that at least 2.1 million Audi cars had been involved in the Volkswagen emissions testing scandal in which software installed in the cars manipulated emissions data to fool regulators and allow the cars to pollute at higher than government-mandated levels. The A1, A3, A4, A5, A6, TT, Q3 and Q5 models were implicated in the scandal. Audi promised to quickly find a technical solution and upgrade the cars so they can function within emissions regulations. Ulrich Hackenberg, the head of research and development at Audi, was suspended in relation to the scandal. Despite widespread media coverage about the scandal through the month of September, Audi reported that U.S. sales for the month had increased by 16.2%. Audi's parent company Volkswagen announced on 18 June 2018 that Audi chief executive Rupert Stadler had been arrested.
In November 2015, the U.S. Environmental Protection Agency implicated the 3-liter diesel engine versions of the 2016 Audi A6 Quattro, A7 Quattro, A8, A8L and the Q5 as further models that had emissions regulation defeat-device software installed. Thus, these models emitted nitrogen oxide at up to nine times the legal limit when the car detected that it was not hooked up to emissions testing equipment.
In November 2016, Audi expressed an intention to establish an assembly factory in Pakistan, with the company's local partner acquiring land for a plant in Korangi Creek Industrial Park in Karachi. Approval of the plan would lead to an investment of $30 million in the new plant. Audi planned to cut 9,500 jobs in Germany from 2020 to 2025 to fund electric vehicles and digital working.
In February 2020, Volkswagen AG announced that it plans to take over all Audi shares it does not own (totalling 0.36%) via a squeeze-out according to German stock corporation law, thus making Audi a fully owned subsidiary of the Volkswagen Group.
Audi AI is a driver assist feature offered by Audi. The company's stated intent is to offer fully autonomous driving at a future time, acknowledging that legal, regulatory and technical hurdles must be overcome to achieve this goal. On June 4, 2017, Audi stated that its new A8 will be fully self-driving for speeds up to 60 km/h using its Audi AI. Contrary to other cars, the driver will not have to do safety checks such as touching the steering wheel every 15 seconds to use this feature. The Audi A8 will therefore be the first production car to reach level 3 autonomous driving, meaning that the driver can safely turn their attention away from driving tasks, e.g. the driver can text or watch a movie. Audi will also be the first manufacturer to use a 3D LIDAR system in addition to cameras and ultrasonic sensors for their AI.
Audi produces 100% galvanised cars to prevent corrosion, and was the first mass-market manufacturer to do so, following introduction of the process by Porsche, c. 1975. Along with other precautionary measures, the full-body zinc coating has proved to be very effective in preventing rust. The body's resulting durability even surpassed Audi's own expectations, causing the manufacturer to extend its original 10-year warranty against corrosion perforation to the current 12 years (except for aluminium bodies, which do not rust).
Audi introduced a new series of vehicles in the mid-1990s and continues to pursue new technology and high performance. An all-aluminium car was brought forward by Audi, and in 1994 the Audi A8 was launched, which introduced aluminium space frame technology (called "Audi Space Frame" or ASF) which saves weight and improves torsion rigidity compared to a conventional steel frame. Prior to that effort, Audi used examples of the Type 44 chassis fabricated out of aluminium as test-beds for the technique. The disadvantage of the aluminium frame is that it is very expensive to repair and requires a specialized aluminium bodyshop. The weight reduction is somewhat offset by the quattro four-wheel drive system which is standard in most markets. Nonetheless, the A8 is usually the lightest all-wheel drive car in the full-size luxury segment, also having best-in-class fuel economy. The Audi A2, Audi TT and Audi R8 also use Audi Space Frame designs.
For most of its lineup (excluding the A3, A1, and TT models), Audi has not adopted the transverse engine layout which is typically found in economy cars (such as Peugeot and Citroën), since that would limit the type and power of engines that can be installed. To be able to mount powerful engines (such as a V8 engine in the Audi S4 and Audi RS4, as well as the W12 engine in the Audi A8L W12), Audi has usually engineered its more expensive cars with a longitudinally front-mounted engine, in an "overhung" position, over the front wheels in front of the axle line - this layout dates back to the DKW and Auto Union saloons from the 1950s. But while this allows for the easy adoption of all-wheel drive, it goes against the ideal 50:50 weight distribution.
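The trade-off between an overhung front engine and the ideal 50:50 balance can be illustrated with a simple static moment calculation. This is only an illustrative sketch; all numbers below (vehicle mass, wheelbase, centre-of-mass position) are hypothetical, chosen to show the effect of a front-heavy layout, and are not figures from Audi:

```python
def axle_loads(total_kg, cg_from_front_axle_m, wheelbase_m):
    """Static front/rear axle loads from the centre-of-mass position.

    Taking moments about the front axle:
        rear_load * wheelbase = total * distance(cg, front axle)
    An engine overhung ahead of the front axle pulls the centre of
    mass forward, pushing the front-axle share above 50%.
    """
    rear = total_kg * cg_from_front_axle_m / wheelbase_m
    front = total_kg - rear
    return front, rear

# Hypothetical 1800 kg saloon, 2.9 m wheelbase, centre of mass
# 1.25 m behind the front axle (i.e. forward of the midpoint):
front, rear = axle_loads(1800, 1.25, 2.9)
print(f"front {front / 1800:.0%}, rear {rear / 1800:.0%}")
```

With the centre of mass forward of the wheelbase midpoint, the front axle carries well over half the weight, which is the imbalance the longitudinal overhung layout accepts in exchange for easy all-wheel-drive packaging.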
In all its post Volkswagen-era models, Audi has firmly refused to adopt the traditional rear-wheel drive layout favored by its two archrivals Mercedes-Benz and BMW, favoring either front-wheel drive or all-wheel drive. The majority of Audi's lineup in the United States features all-wheel drive standard on most of its expensive vehicles (only the entry-level trims of the A4 and A6 are available with front-wheel drive), in contrast to Mercedes-Benz and BMW whose lineup treats all-wheel drive as an option. BMW did not offer all-wheel drive on its V8-powered cars (as opposed to crossover SUVs) until the 2010 BMW 7 Series and 2011 BMW 5 Series, while the Audi A8 has had all-wheel drive available/standard since the 1990s. Regarding high-performance variants, Audi S and RS models have always had all-wheel drive, unlike their direct rivals from BMW M and Mercedes-AMG whose cars are rear-wheel drive only (although their performance crossover SUVs are all-wheel drive).
Audi has recently applied the "quattro" badge to models such as the A3 and TT which do not use the Torsen-based system as in prior years with a mechanical center differential, but with the Haldex Traction electro-mechanical clutch AWD system.
Prior to the introduction of the Audi 80 and Audi 50 in 1972 and 1974, respectively, Audi had led the development of the "EA111" and "EA827" inline-four engine families. These new power units underpinned the water-cooled revival of parent company Volkswagen (in the Polo, Golf, Passat and Scirocco), whilst the many derivatives and descendants of these two basic engine designs have appeared in every generation of VW Group vehicles right up to the present day.
In the 1980s, Audi, along with Volvo, was the champion of the inline five-cylinder 2.1/2.2 L engine as a longer-lasting alternative to more traditional six-cylinder engines. This engine was used not only in production cars but also in their race cars; the 2.1 L inline five-cylinder served as the base for the rally cars of the era, producing considerably more power after modification. Before 1990, engines were produced with displacements between 2.0 L and 2.3 L, a range that allowed for both fuel economy and power.
For the ultra-luxury version of its Audi A8 fullsize luxury flagship sedan, the Audi A8L W12, Audi uses the Volkswagen Group W12 engine instead of the conventional V12 engine favored by rivals Mercedes-Benz and BMW. The W12 engine configuration (also known as a "WR12") is created by forming two imaginary narrow-angle 15° VR6 engines at an angle of 72°; the narrow angle of each set of cylinders allows just two overhead camshafts to drive each pair of banks, so just four are needed in total. The advantage of the W12 engine is its compact packaging, allowing Audi to build a 12-cylinder sedan with all-wheel drive, whereas a conventional V12 engine could have only a rear-wheel drive configuration as it would leave no space in the engine bay for a differential and the other components required to power the front wheels. In fact, the 6.0 L W12 in the Audi A8L W12 is smaller in overall dimensions than the 4.2 L V8 that powers the Audi A8 4.2 variants. The 2011 Audi A8 debuted a revised 6.3-litre version of the W12 (WR12) engine.
New models of the A3, A4, A6 and A8 have been introduced, with the ageing 1.8-litre engine now having been replaced by new Fuel Stratified Injection (FSI) engines. Nearly every petroleum burning model in the range now incorporates this fuel-saving technology.
In 2003 Volkswagen introduced the Direct-Shift Gearbox (DSG), a type of dual-clutch transmission. It is a type of automatic transmission, drivable like a conventional torque converter automatic transmission. Based on the gearbox found in the Group B S1, the system includes dual electro-hydraulically controlled clutches instead of a torque converter. This is implemented in some VW Golfs, Audi A3, Audi A4 and TT models where DSG is called S-Tronic.
Beginning in 2005, Audi has implemented white LED technology as daytime running lights (DRL) in their products. The distinctive shape of the DRLs has become a trademark of sorts. LEDs were first introduced on the Audi A8 W12, the world's first production car to have LED DRLs, and have since spread throughout the entire model range. The LEDs are present on some Audi billboards.
Since 2010, Audi has also offered the LED technology in low- and high-beam headlights.
Starting with the 2003 Audi A8, Audi has used a centralised control interface for its on-board infotainment systems, called Multi Media Interface (MMI). It is essentially a rotating control knob and 'segment' buttons – designed to control all in-car entertainment devices (radio, CD changer, iPod, TV tuner), satellite navigation, heating and ventilation, and other car controls with a screen.
The availability of MMI has gradually filtered down the Audi lineup, and following its introduction on the third generation A3 in 2011, MMI is now available across the entire range. It has been generally well received, as it requires less menu-surfing with its segment buttons around a central knob, along with 'main function' direct access buttons – with shortcuts to the radio or phone functions. The colour screen is mounted on the upright dashboard, and on the A4 (new), A5, A6, A8, and Q7, the controls are mounted horizontally.
Audi has assisted with technology to produce synthetic diesel from water and carbon dioxide. Audi calls the synthetic diesel E-diesel. It is also working on synthetic gasoline (which it calls E-gasoline).
Audi uses scanning gloves for parts registration during assembly, and automatic robots to transfer cars from factory to rail cars.
The following tables list Audi production vehicles that are sold as of 2018:
Audi is planning an alliance with the Japanese electronics giant Sanyo to develop a pilot hybrid electric project for the Volkswagen Group. The alliance could result in Sanyo batteries and other electronic components being used in future models of the Volkswagen Group. Concept electric vehicles unveiled to date include the Audi A1 Sportback Concept, Audi A4 TDI Concept E, and the fully electric Audi e-tron Concept Supercar.
In December 2018, Audi announced that it would invest €14 billion ($15.9 billion) in e-mobility and self-driving cars.
Audi has competed in various forms of motorsport. Audi's tradition in motorsport began with its former company Auto Union in the 1930s. In the 1990s, Audi found success in the Touring and Super Touring categories of motor racing, after earlier success in circuit racing in North America.
In 1980, Audi released the Quattro, a four-wheel drive (4WD) turbocharged car that went on to win rallies and races worldwide. It is considered one of the most significant rally cars of all time, because it was one of the first to take advantage of the then-recently changed rules which allowed the use of four-wheel drive in competition racing. Many critics doubted the viability of four-wheel drive racers, thinking them too heavy and complex, yet the Quattro became a successful car. It went off the road while leading its first rally, but the rally world had been put on notice: 4WD was the future. The Quattro went on to achieve much success in the World Rally Championship, winning the 1983 (Hannu Mikkola) and 1984 (Stig Blomqvist) drivers' titles and bringing Audi the manufacturers' title in 1982 and 1984.
In 1984, Audi launched the short-wheelbase Sport Quattro, which dominated the rallies in Monte Carlo and Sweden, with Audi taking all podium places, but succumbed to problems further into WRC contention. In 1985, after another season mired in mediocre finishes, Walter Röhrl finished the season in his Sport Quattro S1 and helped place Audi second in the manufacturers' points. Audi also received rally honours in the Hong Kong to Beijing rally that same year. Michèle Mouton, a driver for Audi and the only female driver to win a round of the World Rally Championship, took the Sport Quattro S1, now simply called the "S1", and raced in the Pikes Peak International Hill Climb, a race to the summit of Pikes Peak mountain in Colorado. In 1985, Mouton set a new record of 11:25.39, becoming the first woman to set a Pikes Peak record. In 1986, Audi formally left international rally racing following an accident in Portugal involving driver Joaquim Santos in his Ford RS200: Santos swerved to avoid spectators in the road and went off the track into the crowd at the side, killing three and injuring 30. Bobby Unser used an Audi that same year to claim a new Pikes Peak Hill Climb record of 11:09.22.
In 1987, Walter Röhrl claimed the title for Audi, setting a new Pikes Peak International Hill Climb record of 10:47.85 in his Audi S1, a car which had been retired from the WRC two years earlier. The Audi S1 employed Audi's time-tested inline five-cylinder turbocharged engine, mated to a six-speed gearbox and running on Audi's famous four-wheel drive system. All of Audi's top drivers drove this car: Hannu Mikkola, Stig Blomqvist, Walter Röhrl and Michèle Mouton. This Audi S1 started the range of Audi 'S' cars, which now represents an increased level of sports-performance equipment within the mainstream Audi model range.
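The succession of Pikes Peak records quoted in this period (11:25.39 in 1985, 11:09.22 in 1986, 10:47.85 in 1987) can be compared with a small helper; the times come from the text above, while the script itself is purely illustrative:

```python
def to_seconds(t):
    """Convert an 'mm:ss.ss' hill-climb time string to seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + float(seconds)

# Record runs as reported in the text.
records = [
    ("1985 Mouton", "11:25.39"),
    ("1986 Unser",  "11:09.22"),
    ("1987 Rohrl",  "10:47.85"),
]

prev = None
for driver, t in records:
    s = to_seconds(t)
    gain = f" ({prev - s:.2f}s faster)" if prev is not None else ""
    print(f"{driver}: {s:.2f}s{gain}")
    prev = s
```

Each successive record cut a substantial margin from the previous one, with Röhrl's 1987 run more than half a minute quicker than Mouton's 1985 time.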
As Audi moved away from rallying and into circuit racing, they chose to move first into America with the Trans-Am in 1988.
In 1989, Audi moved to International Motor Sports Association (IMSA) GTO with the Audi 90; however, because they avoided the two major endurance events (Daytona and Sebring) despite winning on a regular basis, they lost out on the title.
In 1990, having completed their objective to market cars in North America, Audi returned to Europe, turning first to the Deutsche Tourenwagen Meisterschaft (DTM) series with the Audi V8, and then in 1993, being unwilling to build cars for the new formula, they turned their attention to the fast-growing Super Touring series, which are a series of national championships. Audi first entered in the French Supertourisme and Italian Superturismo. In the following year, Audi would switch to the German Super Tourenwagen Cup (known as STW), and then to British Touring Car Championship (BTCC) the year after that.
The Fédération Internationale de l'Automobile (FIA), having difficulty regulating the quattro four-wheel drive system, and the impact it had on the competitors, would eventually ban all four-wheel drive cars from competing in 1998, but by then, Audi switched all their works efforts to sports car racing.
By 2000, Audi would still compete in the US with their RS4 in the SCCA Speed World GT Challenge, through dealer/team Champion Racing, competing against Corvettes, Vipers, and smaller BMWs (it is one of the few series to permit 4WD cars). In 2003, Champion Racing entered an RS6. Once again, the quattro four-wheel drive was superior, and Champion Audi won the championship. They returned in 2004 to defend their title, but a newcomer, Cadillac, with the new Omega-chassis CTS-V, gave them a run for their money. After four victories in a row, the Audis were handicapped with several changes that deeply affected the car's performance: added ballast weight, Champion Audi's decision to switch to different tyres, and reduced turbocharger boost pressure.
In 2004, after years of competing with the TT-R in the revitalised DTM series, with privateer team Abt Racing/Christian Abt taking the 2002 title with Laurent Aïello, Audi returned as a full factory effort to touring car racing by entering two factory supported Joest Racing A4 DTM cars.
Audi began racing prototype sportscars in 1999, debuting at the Le Mans 24 hour. Two car concepts were developed and raced in their first season - the Audi R8R (open-cockpit 'roadster' prototype) and the Audi R8C (closed-cockpit 'coupé' GT-prototype). The R8R scored a credible podium on its racing debut at Le Mans and was the concept which Audi continued to develop into the 2000 season due to favourable rules for open-cockpit prototypes.
However, most of the competitors (such as BMW, Toyota, Mercedes and Nissan) retired at the end of 1999.
The factory-supported Joest Racing team won at Le Mans three times in a row with the Audi R8 (2000–2002), as well as winning every race in the American Le Mans Series in its first year. Audi also sold the car to customer teams such as Champion Racing.
In 2003, two Bentley Speed 8s, with engines designed by Audi, and driven by Joest drivers "loaned" to the fellow Volkswagen Group company, competed in the GTP class, and finished the race in the top two positions, while the Champion Racing R8 finished third overall, and first in the LMP900 class. Audi returned to the winner's podium at the 2004 race, with the top three finishers all driving R8s: Audi Sport Japan Team Goh finished first, Audi Sport UK Veloqx second, and Champion Racing third.
At the 2005 24 Hours of Le Mans, Champion Racing entered two R8s, along with an R8 from the Audi PlayStation Team Oreca. The R8s (which were built to the old LMP900 regulations) received a narrower air-inlet restrictor, reducing power, and additional ballast weight compared to the newer LMP1 chassis. On average, the R8s were about 2–3 seconds off the pace of the Pescarolo–Judd. But with excellent drivers and experience, both Champion R8s were able to take first and third, while the Oreca team took fourth. The Champion team was also the first American team to win Le Mans since the Gulf Ford GTs in 1967. This also ended the long era of the R8; its replacement for 2006, the Audi R10 TDI, was unveiled on 13 December 2005.
The R10 TDI employed many new and innovative features, the most notable being its twin-turbocharged direct-injection diesel engine. It was first raced in the 2006 12 Hours of Sebring as a race-test in preparation for the 2006 24 Hours of Le Mans, which it later went on to win; the Sebring victory made it the first diesel sports car to win there (the car was developed with a diesel engine due to ACO regulations that favor diesels). As well as winning the 24 Hours of Le Mans in 2006, the R10 TDI beat the Peugeot 908 HDi FAP in the following two editions (however, Peugeot won the 24h in 2009), before the R15 TDI Plus took a further win with a podium clean-sweep (all four 908 entries retired) while breaking the distance record set by the Porsche 917K of Martini Racing.
Audi's sports car racing success would continue with the Audi R18's victory at the 2011 24 Hours of Le Mans. Audi Sport Team Joest's Benoît Tréluyer earned Audi their first pole position in five years while the team's sister car locked out the front row. Early accidents eliminated two of Audi's three entries, but the sole remaining Audi R18 TDI of Tréluyer, Marcel Fässler, and André Lotterer held off the trio of Peugeot 908s to claim victory by a margin of 13.8 seconds.
Audi entered a factory racing team run by Joest Racing into the American Le Mans Series under the Audi Sport North America name in 2000. This was a successful operation with the team winning on its debut in the series at the 2000 12 Hours of Sebring. Factory backed Audi R8s were the dominant car in ALMS taking 25 victories between 2000 and the end of the 2002 season. In 2003 Audi sold customer cars to Champion Racing as well as continuing to race the factory Audi Sport North America team. Champion Racing won many races as a private team running Audi R8s and eventually replaced Team Joest as the Audi Sport North America between 2006 and 2008. Since 2009 Audi has not taken part in full American Le Mans Series Championships, but has competed in the series opening races at Sebring, using the 12-hour race as a test for Le Mans, and also as part of the 2012 FIA World Endurance Championship season calendar.
Audi participated in the 2003 1000km of Le Mans which was a one-off sports car race in preparation for the 2004 European Le Mans Series. The factory team Audi Sport UK won races and the championship in the 2004 season but Audi was unable to match their sweeping success of Audi Sport North America in the American Le Mans Series, partly due to the arrival of a factory competitor in LMP1, Peugeot. The French manufacturer's 908 HDi FAP became the car to beat in the series from 2008 onwards with 20 LMP wins. However, Audi were able to secure the championship in 2008 even though Peugeot scored more race victories in the season.
In 2012, the FIA sanctioned a World Endurance Championship which would be organised by the ACO as a continuation of the ILMC. Audi won the first WEC race at Sebring and followed this up with three further successive wins, including the 2012 24 Hours of Le Mans. Audi scored a final fifth victory of the 2012 WEC in Bahrain and won the inaugural WEC Manufacturers' Championship.
As defending champions, Audi once again entered the Audi R18 e-tron quattro chassis into the 2013 WEC and the team won the first five consecutive races, including the 2013 24 Hours of Le Mans. The victory at Round 5, Circuit of the Americas, was of particular significance as it marked the 100th win for Audi in Le Mans prototypes. Audi secured their second consecutive WEC Manufacturers' Championship at Round 6 after taking second place and half points in the red-flagged Fuji race.
For the 2014 season Audi entered a redesigned and upgraded R18 e-tron quattro which featured a 2 MJ energy recovery system. As defending champions, Audi would once again face a challenge in LMP1 from Toyota, and additionally from Porsche who returned to endurance racing after a 16-year absence. The season opening 6hrs of Silverstone was a disaster for Audi who saw both cars retire from the race, marking the first time that an Audi car has failed to score a podium in a World Endurance Championship race.
Audi provides factory support to Abt Sportsline in the FIA Formula E Championship. The team competed under the title Audi Sport Abt Formula E Team in the inaugural 2014–15 Formula E season. On 13 February 2014 the team announced its driver line-up of Daniel Abt and World Endurance Championship driver Lucas di Grassi.
Audi has been linked to Formula One in recent years but has always resisted, holding that the sport is not relevant to road cars. The adoption of hybrid power-unit technology into the sport has swayed that view, encouraging research into a programme by former Ferrari team principal Stefano Domenicali.
The Audi emblem is four overlapping rings that represent the four marques of Auto Union. The Audi emblem symbolises the amalgamation of Audi with DKW, Horch and Wanderer: the first ring from the left represents Audi, the second represents DKW, third is Horch, and the fourth and last ring Wanderer.
The design is popularly believed to have been the idea of Klaus von Oertzen, director of sales at Wanderer: with Berlin chosen as the host city for the 1936 Summer Olympics, a logo recalling the Olympic rings symbolized the newly established Auto Union's desire to succeed. Somewhat ironically, the International Olympic Committee later sued Audi in the International Trademark Court in 1995, and lost.
The original "Audi" script, with the distinctive slanted tails on the "A" and "d" was created for the historic Audi company in 1920 by the famous graphic designer Lucian Bernhard, and was resurrected when Volkswagen revived the brand in 1965. Following the demise of NSU in 1977, less prominence was given to the four rings, in preference to the "Audi" script encased within a black (later red) ellipse, and was commonly displayed next to the Volkswagen roundel when the two brands shared a dealer network under the V.A.G banner. The ellipse (known as the Audi Oval) was phased out after 1994, when Audi formed its own independent dealer network, and prominence was given back to the four rings – at the same time Audi Sans (a derivative of Univers) was adopted as the font for all marketing materials, corporate communications and was also used in the vehicles themselves.
As part of Audi's centennial celebration in 2009, the company updated the logo, changing the font to left-aligned Audi Type, and altering the shading for the overlapping rings. The revised logo was designed by Rayan Abdullah.
Audi developed a Corporate Sound concept, with Audi Sound Studio designed for producing the Corporate Sound. The Corporate Sound project began with sound agency Klangerfinder GmbH & Co KG and s12 GmbH. Audio samples were created in Klangerfinder's sound studio in Stuttgart, becoming part of Audi Sound Studio collection. Other Audi Sound Studio components include The Brand Music Pool, The Brand Voice. Audi also developed Sound Branding Toolkit including certain instruments, sound themes, rhythm and car sounds which all are supposed to reflect the AUDI sound character.
Audi started using a beating heart sound trademark beginning in 1996. An updated heartbeat sound logo, developed by agencies KLANGERFINDER GmbH & Co KG of Stuttgart and S12 GmbH of Munich, was first used in 2010 in an Audi A8 commercial with the slogan "The Art of Progress."
Audi's corporate tagline is "Vorsprung durch Technik", meaning "Progress through Technology". The German-language tagline is used in many European countries, including the United Kingdom, and in other markets, such as Latin America, Oceania, Africa and parts of Asia including Japan. For some years the North American tagline was "Innovation through technology", though in Canada the German tagline "Vorsprung durch Technik" was used in advertising. Since 2007, Audi has used the slogan "Truth in Engineering" in the U.S. However, since the Audi emissions testing scandal came to light in September 2015, this slogan has been lambasted for being discordant with reality. In fact, just hours after disgraced Volkswagen CEO Martin Winterkorn admitted to cheating on emissions data, an advertisement during the 2015 Primetime Emmy Awards promoted Audi's latest advances in low-emissions technology with Kermit the Frog stating, "It's not that easy being green."
It was first used in English-language advertising after Sir John Hegarty of the Bartle Bogle Hegarty advertising agency visited the Audi factory in 1982. In the original British television commercials, the phrase was voiced by Geoffrey Palmer. After its repeated use in advertising campaigns, the phrase found its way into popular culture, including the British comedy "Only Fools and Horses", the U2 song "Zooropa" and the Blur song "Parklife". Similar-sounding phrases have also been used, including as the punchline for a joke in the movie "Lock, Stock, and Two Smoking Barrels" and in the British TV series "Peep Show".
Audi Sans (based on Univers Extended) was originally created in 1997 by Ole Schäfer for MetaDesign. MetaDesign was later commissioned for a new corporate typeface called Audi Type, designed by Paul van der Laan and Pieter van Rosmalen of Bold Monday. The font began to appear in Audi's 2009 products and marketing materials.
Audi is a strong partner of different kinds of sports. In football, long partnerships exist between Audi and domestic clubs including Bayern Munich, Hamburger SV, 1. FC Nürnberg, Hertha BSC, and Borussia Mönchengladbach, and international clubs including Chelsea, Real Madrid, FC Barcelona, A.C. Milan, AFC Ajax and Persepolis. Audi also sponsors winter sports: the Audi FIS Alpine Ski World Cup is named after the company. Additionally, Audi supports the German Ski Association (DSV) as well as the alpine skiing national teams of Switzerland, Sweden, Finland, France, Liechtenstein, Italy, Austria and the U.S. For almost two decades, Audi has fostered golf, for example with the Audi quattro Cup and the HypoVereinsbank Ladies German Open presented by Audi. In sailing, Audi is engaged in the Medcup regatta, supports the team Luna Rossa during the Louis Vuitton Pacific Series and is the primary sponsor of the Melges 20 sailboat. Further, Audi sponsors the regional teams ERC Ingolstadt (hockey) and FC Ingolstadt 04 (soccer). In 2009, the year of Audi's 100th anniversary, the company organized the Audi Cup for the first time. Audi also sponsors the New York Yankees. In October 2010 Audi agreed to a three-year sponsorship deal with Everton. Audi also sponsors the England Polo Team and holds the Audi Polo Awards.
Since the start of the Marvel Cinematic Universe, Audi has had a deal to sponsor, promote and provide vehicles for several films, including Iron Man, Iron Man 2, Iron Man 3 and later entries in the series. The R8 supercar became the personal vehicle of Tony Stark (played by Robert Downey Jr.) in six of these films. The e-tron vehicles were promoted in Endgame and Far From Home. Several commercials were co-produced by Marvel and Audi to promote new concepts and some of the latest vehicles, such as the A8, SQ7 and the e-tron fleet.
In 2001, Audi promoted the new multitronic continuously variable transmission with television commercials throughout Europe, featuring an impersonator of musician and actor Elvis Presley. A prototypical dashboard figure – later named "Wackel-Elvis" ("Wobble Elvis" or "Wobbly Elvis") – appeared in the commercials to demonstrate the smooth ride in an Audi equipped with the multitronic transmission. The figure was originally intended for use in the commercials only, but after they aired, demand for Wackel-Elvis grew among fans, and the figure was mass-produced in China and marketed by Audi in their factory outlet store.
As part of Audi's attempt to promote its Diesel technology in 2009, the company began Audi Mileage Marathon. The driving tour featured a fleet of 23 Audi TDI vehicles from 4 models (Audi Q7 3.0 TDI, Audi Q5 3.0 TDI, Audi A4 3.0 TDI, Audi A3 Sportback 2.0 TDI with S tronic transmission) travelling across the American continent from New York to Los Angeles, passing major cities like Chicago, Dallas and Las Vegas during the 13 daily stages, as well as natural wonders including the Rocky Mountains, Death Valley and the Grand Canyon.
The next phase of technology Audi is developing is the e-tron electric drive powertrain system. They have shown several concept cars, each with different levels of size and performance. The original e-tron concept, shown at the 2009 Frankfurt motor show, is based on the platform of the R8 and has been scheduled for limited production; power is provided by electric motors at all four wheels. The second concept was shown at the 2010 Detroit Motor Show; power is provided by two electric motors at the rear axle. This concept is also considered to be the direction for a future mid-engined gas-powered 2-seat performance coupe. The Audi A1 e-tron concept, based on the Audi A1 production model, is a hybrid vehicle with a range-extending Wankel rotary engine that provides power after the initial charge of the battery is depleted. It is the only concept of the three to have range-extending capability. The car is powered through the front wheels, always using electric power.
The A1 e-tron was set to be displayed at the Auto Expo 2012 in New Delhi, India, from 5 January. Powered by a 1.4-litre engine, it can cover up to 54 km on a single charge. The e-tron was also shown in the 2013 blockbuster film Iron Man 3, driven by Tony Stark (Iron Man).
Audi has supported the European version of PlayStation Home, the PlayStation 3's online community-based service, by releasing a dedicated Home space. Audi was the first carmaker to develop such a space for Home. On 17 December 2009, Audi released two spaces: the Audi Home Terminal and the Audi Vertical Run. The Audi Home Terminal features an Audi TV channel delivering video content, an Internet browser, and a view of a city. The Audi Vertical Run is where users can access Vertical Run, a futuristic mini-game featuring Audi's e-tron concept; players collect energy and race for the highest possible speeds, and the fastest players earn a place in the Audi apartments located in a large tower in the centre of the Audi Space. In both spaces there are teleports allowing users to move back and forth between the two. Audi had stated that additional content would be added in 2010. On 31 March 2015, Sony shut down the PlayStation Home service, rendering all content for it inaccessible.
Aircraft
An aircraft is a vehicle that is able to fly by gaining support from the air. It counters the force of gravity by using either static lift or the dynamic lift of an airfoil, or in a few cases the downward thrust from jet engines. Common examples of aircraft include airplanes, helicopters, airships (including blimps), gliders, paramotors and hot air balloons.
The human activity that surrounds aircraft is called "aviation". The science of aviation, including designing and building aircraft, is called "aeronautics." Crewed aircraft are flown by an onboard pilot, but unmanned aerial vehicles may be remotely controlled or self-controlled by onboard computers. Aircraft may be classified by different criteria, such as lift type, aircraft propulsion, usage and others.
Flying model craft and stories of manned flight go back many centuries; however, the first manned ascent — and safe descent — in modern times was made in large hot-air balloons developed in the 18th century. Each of the two World Wars led to great technical advances. Consequently, the history of aircraft can be divided into five eras:
Aerostats use buoyancy to float in the air in much the same way that ships float on the water. They are characterized by one or more large cells or canopies, filled with a relatively low-density gas such as helium, hydrogen, or hot air, which is less dense than the surrounding air. When the weight of the lifting gas is added to the weight of the aircraft structure, it equals the weight of the air that the craft displaces.
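This balance of weights is Archimedes' principle. The back-of-the-envelope calculation below is an illustrative sketch (not from the source); the density figures are approximate sea-level values, and the function name is ours.

```python
# Illustrative buoyancy calculation for an aerostat: the craft floats when
# the weight of the air it displaces equals the weight of the structure
# plus its lifting gas. Densities are approximate sea-level values.

RHO_AIR = 1.225     # kg/m^3, ambient air at ~15 C
RHO_HELIUM = 0.179  # kg/m^3

def net_lift_kg(volume_m3, rho_gas):
    """Mass (kg) an envelope of the given volume can support,
    before subtracting the weight of the aircraft structure."""
    return (RHO_AIR - rho_gas) * volume_m3

# A 1000 m^3 helium envelope supports roughly a tonne:
print(round(net_lift_kg(1000, RHO_HELIUM)))  # -> 1046
```

Hot air gives far less lift per unit volume than helium, which is why hot-air balloons need such large envelopes.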
Small hot-air balloons, called sky lanterns, were invented in ancient China prior to the 3rd century BC and used primarily in cultural celebrations. They were only the second type of aircraft to fly, the first being kites, which were invented in ancient China over two thousand years ago (see Han Dynasty).
A balloon was originally any aerostat, while the term airship was used for large, powered aircraft designs — usually fixed-wing — though none had yet been built. In 1919 Frederick Handley Page was reported as referring to "ships of the air," with smaller passenger types as "Air yachts." In the 1930s, large intercontinental flying boats were also sometimes referred to as "ships of the air" or "flying-ships". The advent of powered balloons, called dirigible balloons, and later of rigid hulls allowing a great increase in size, began to change the way these words were used. Huge powered aerostats, characterized by a rigid outer framework and separate aerodynamic skin surrounding the gas bags, were produced, the Zeppelins being the largest and most famous. There were still no fixed-wing aircraft or non-rigid balloons large enough to be called airships, so "airship" came to be synonymous with these aircraft. Then several accidents, such as the Hindenburg disaster in 1937, led to the demise of these airships. Nowadays a "balloon" is an unpowered aerostat and an "airship" is a powered one.
A powered, steerable aerostat is called a "dirigible". Sometimes this term is applied only to non-rigid balloons, and sometimes "dirigible balloon" is regarded as the definition of an airship (which may then be rigid or non-rigid). Non-rigid dirigibles are characterized by a moderately aerodynamic gasbag with stabilizing fins at the back. These soon became known as "blimps". During World War II, this shape was widely adopted for tethered balloons; in windy weather, this both reduces the strain on the tether and stabilizes the balloon. The nickname "blimp" was adopted along with the shape. In modern times, any small dirigible or airship is called a blimp, though a blimp may be unpowered as well as powered.
Heavier-than-air aircraft, such as airplanes, must find some way to push air or gas downwards, so that a reaction occurs (by Newton's laws of motion) to push the aircraft upwards. This dynamic movement through the air is the origin of the term "aerodyne". There are two ways to produce dynamic upthrust — aerodynamic lift, and powered lift in the form of engine thrust.
Aerodynamic lift involving wings is the most common, with fixed-wing aircraft being kept in the air by the forward movement of wings, and rotorcraft by spinning wing-shaped rotors sometimes called rotary wings. A wing is a flat, horizontal surface, usually shaped in cross-section as an aerofoil. To fly, air must flow over the wing and generate lift. A "flexible wing" is a wing made of fabric or thin sheet material, often stretched over a rigid frame. A "kite" is tethered to the ground and relies on the speed of the wind over its wings, which may be flexible or rigid, fixed, or rotary.
With powered lift, the aircraft directs its engine thrust vertically downward. V/STOL aircraft, such as the Harrier Jump Jet and Lockheed Martin F-35B take off and land vertically using powered lift and transfer to aerodynamic lift in steady flight.
A pure rocket is not usually regarded as an aerodyne, because it does not depend on the air for its lift (and can even fly into space); however, many aerodynamic lift vehicles have been powered or assisted by rocket motors. Rocket-powered missiles that obtain aerodynamic lift at very high speed due to airflow over their bodies are a marginal case.
The forerunner of the fixed-wing aircraft is the kite. Whereas a fixed-wing aircraft relies on its forward speed to create airflow over the wings, a kite is tethered to the ground and relies on the wind blowing over its wings to provide lift. Kites were the first kind of aircraft to fly, and were invented in China around 500 BC. Much aerodynamic research was done with kites before test aircraft, wind tunnels, and computer modelling programs became available.
The first heavier-than-air craft capable of controlled free-flight were gliders. A glider designed by George Cayley carried out the first true manned, controlled flight in 1853.
The practical, powered, fixed-wing aircraft (the airplane or aeroplane) was invented by Wilbur and Orville Wright. Besides the method of propulsion, fixed-wing aircraft are in general characterized by their wing configuration. The most important wing characteristics are:
A variable geometry aircraft can change its wing configuration during flight.
A "flying wing" has no fuselage, though it may have small blisters or pods. The opposite of this is a "lifting body", which has no wings, though it may have small stabilizing and control surfaces.
Wing-in-ground-effect vehicles are not considered aircraft. They "fly" efficiently close to the surface of the ground or water, like conventional aircraft during takeoff. An example is the Russian ekranoplan (nicknamed the "Caspian Sea Monster"). Man-powered aircraft also rely on ground effect to remain airborne with minimal pilot power, but this is only because they are so underpowered; in fact, the airframe is capable of flying higher.
Rotorcraft, or rotary-wing aircraft, use a spinning rotor with aerofoil section blades (a "rotary wing") to provide lift. Types include helicopters, autogyros, and various hybrids such as gyrodynes and compound rotorcraft.
"Helicopters" have a rotor turned by an engine-driven shaft. The rotor pushes air downward to create lift. By tilting the rotor forward, the downward flow is tilted backward, producing thrust for forward flight. Some helicopters have more than one rotor and a few have rotors turned by gas jets at the tips.
"Autogyros" have unpowered rotors, with a separate power plant to provide thrust. The rotor is tilted backward. As the autogyro moves forward, air blows upward across the rotor, making it spin. This spinning increases the speed of airflow over the rotor, to provide lift. Rotor kites are unpowered autogyros, which are towed to give them forward speed or tethered to a static anchor in high-wind for kited flight.
"Cyclogyros" rotate their wings about a horizontal axis.
"Compound rotorcraft" have wings that provide some or all of the lift in forward flight. They are nowadays classified as "powered lift" types and not as rotorcraft. "Tiltrotor" aircraft (such as the Bell Boeing V-22 Osprey), tiltwing, tail-sitter, and coleopter aircraft have their rotors/propellers horizontal for vertical flight and vertical for forward flight.
The smallest aircraft are toys/recreational items, and even smaller, nano-aircraft.
The largest aircraft by dimensions and volume (as of 2016) is the 302-foot-long (about 92 meters) British Airlander 10, a hybrid blimp with helicopter and fixed-wing features, reportedly capable of speeds up to 90 mph (about 145 km/h) and an airborne endurance of two weeks with a payload of up to 22,050 pounds (11 tons).
The largest aircraft by weight, and the largest regular fixed-wing aircraft ever built, is the Antonov An-225 "Mriya". This Ukrainian-built six-engine transport of the 1980s is 84 meters (276 feet) long, with an 88-meter (289-foot) wingspan. It holds the world payload record, after transporting 428,834 pounds (200 tons) of goods, and has recently flown 100-ton loads commercially. With a maximum loaded weight between 1.1 and 1.4 million pounds (550–700 tons), it is also the heaviest aircraft built to date. It can cruise at 500 mph.
The largest military airplanes are the Ukrainian/Russian Antonov An-124 "Ruslan" (world's second-largest airplane, also used as a civilian transport), and American Lockheed C-5 Galaxy transport, weighing, loaded, over 765,000 pounds (over 380 tons). The 8-engine, piston/propeller Hughes H-4 "Hercules" "Spruce Goose" — an American World War II wooden flying boat transport with a greater wingspan (94 meters / 260 feet) than any current aircraft and a tail height equal to the tallest (Airbus A380-800 at 24.1 meters / 78 feet) — flew only one short hop in the late 1940s and never flew out of ground effect.
The largest civilian airplanes, apart from the above-noted An-225 and An-124, are the Airbus Beluga cargo transport derivative of the Airbus A300 jet airliner, the Boeing Dreamlifter cargo transport derivative of the Boeing 747 jet airliner/transport (the 747-200B was, at its creation in the 1960s, the heaviest aircraft ever built, with a maximum weight of 836,000 pounds (over 400 tons)), and the double-decker Airbus A380 "super-jumbo" jet airliner (the world's largest passenger airliner).
The fastest recorded powered aircraft flight and fastest recorded aircraft flight of an air-breathing powered aircraft was of the NASA X-43A "Pegasus", a scramjet-powered, hypersonic, lifting body experimental research aircraft, at Mach 9.6 (nearly 7,000 mph). The X-43A set that new mark, and broke its own world record of Mach 6.3, nearly 5,000 mph, set in March 2004, on its third and final flight on 16 November 2004.
Prior to the X-43A, the fastest recorded powered airplane flight (and still the record for the fastest manned, powered airplane / fastest manned, non-spacecraft aircraft) was of the North American X-15A-2, rocket-powered airplane at 4,520 mph (7,274 km/h), Mach 6.72, on 3 October 1967. On one flight it reached an altitude of 354,300 feet.
The fastest known, production aircraft (other than rockets and missiles) currently or formerly operational (as of 2016) are:
"Lockheed SR-71A," display notes, 29 May 2015, National Museum of the United States Air Force retrieved 2 December 2016
Bender, Jeremy and Amanda Macias, "The 9 fastest piloted planes in the world," 18 September 2015, "Business Insider", retrieved 3 December 2016
Gliders are heavier-than-air aircraft that do not employ propulsion once airborne. Take-off may be by launching forward and downward from a high location, or by pulling into the air on a tow-line, either by a ground-based winch or vehicle, or by a powered "tug" aircraft. For a glider to maintain its forward air speed and lift, it must descend in relation to the air (but not necessarily in relation to the ground). Many gliders can "soar", "i.e.", gain height from updrafts such as thermal currents. The first practical, controllable example was designed and built by the British scientist and pioneer George Cayley, whom many recognise as the first aeronautical engineer. Common examples of gliders are sailplanes, hang gliders and paragliders.
Balloons drift with the wind, though normally the pilot can control the altitude, either by heating the air or by releasing ballast, giving some directional control (since the wind direction changes with altitude). A wing-shaped hybrid balloon can glide directionally when rising or falling; but a spherically shaped balloon does not have such directional control.
Kites are aircraft that are tethered to the ground or other object (fixed or mobile) that maintains tension in the tether or kite line; they rely on virtual or real wind blowing over and under them to generate lift and drag. Kytoons are balloon-kite hybrids that are shaped and tethered to obtain kiting deflections, and can be lighter-than-air, neutrally buoyant, or heavier-than-air.
Powered aircraft have one or more onboard sources of mechanical power, typically aircraft engines although rubber and manpower have also been used. Most aircraft engines are either lightweight reciprocating engines or gas turbines. Engine fuel is stored in tanks, usually in the wings but larger aircraft also have additional fuel tanks in the fuselage.
Propeller aircraft use one or more propellers (airscrews) to create thrust in a forward direction. The propeller is usually mounted in front of the power source in "tractor configuration" but can be mounted behind in "pusher configuration". Variations of propeller layout include "contra-rotating propellers" and "ducted fans".
Many kinds of power plant have been used to drive propellers. Early airships used man power or steam engines. The more practical internal combustion piston engine was used for virtually all fixed-wing aircraft until World War II and is still used in many smaller aircraft. Some types use turbine engines to drive a propeller in the form of a turboprop or propfan. Human-powered flight has been achieved, but has not become a practical means of transport. Unmanned aircraft and models have also used power sources such as electric motors and rubber bands.
Jet aircraft use airbreathing jet engines, which take in air, burn fuel with it in a combustion chamber, and accelerate the exhaust rearwards to provide thrust.
Different jet engine configurations include the turbojet and turbofan, sometimes with the addition of an afterburner. Those with no rotating turbomachinery include the pulsejet and ramjet. These mechanically simple engines produce no thrust when stationary, so the aircraft must be launched to flying speed using a catapult, like the V-1 flying bomb, or a rocket, for example. Other engine types include the motorjet and the dual-cycle Pratt & Whitney J58.
Compared to engines using propellers, jet engines can provide much higher thrust, higher speeds and, above a certain speed, greater efficiency. They are also much more fuel-efficient than rockets. As a consequence nearly all large, high-speed or high-altitude aircraft use jet engines.
Some rotorcraft, such as helicopters, have a powered rotary wing or "rotor", where the rotor disc can be angled slightly forward so that a proportion of its lift is directed forwards. The rotor may, like a propeller, be powered by a variety of methods such as a piston engine or turbine. Experiments have also used jet nozzles at the rotor blade tips.
Aircraft are designed according to many factors such as customer and manufacturer demand, safety protocols and physical and economic constraints. For many types of aircraft the design process is regulated by national airworthiness authorities.
The key parts of an aircraft are generally divided into three categories:
The approach to structural design varies widely between different types of aircraft. Some, such as paragliders, comprise only flexible materials that act in tension and rely on aerodynamic pressure to hold their shape. A balloon similarly relies on internal gas pressure, but may have a rigid basket or gondola slung below it to carry its payload. Early aircraft, including airships, often employed flexible doped aircraft fabric covering to give a reasonably smooth aeroshell stretched over a rigid frame. Later aircraft employed semi-monocoque techniques, where the skin of the aircraft is stiff enough to share much of the flight loads. In a true monocoque design there is no internal structure left.
The key structural parts of an aircraft depend on what type it is.
Lighter-than-air types are characterised by one or more gasbags, typically with a supporting structure of flexible cables or a rigid framework called its hull. Other elements such as engines or a gondola may also be attached to the supporting structure.
Heavier-than-air types are characterised by one or more wings and a central fuselage. The fuselage typically also carries a tail or empennage for stability and control, and an undercarriage for takeoff and landing. Engines may be located on the fuselage or wings. On a fixed-wing aircraft the wings are rigidly attached to the fuselage, while on a rotorcraft the wings are attached to a rotating vertical shaft. Smaller designs sometimes use flexible materials for part or all of the structure, held in place either by a rigid frame or by air pressure. The fixed parts of the structure comprise the airframe.
The avionics comprise the aircraft flight control systems and related equipment, including the cockpit instrumentation, navigation, radar, monitoring, and communications systems.
The flight envelope of an aircraft refers to its approved design capabilities in terms of airspeed, load factor and altitude. The term can also refer to other assessments of aircraft performance such as maneuverability. When an aircraft is abused, for instance by diving it at too high a speed, it is said to be flown "outside the envelope", something considered foolhardy since the aircraft has been taken beyond the design limits established by the manufacturer. Going beyond the envelope may have a known outcome such as flutter or entry to a non-recoverable spin (possible reasons for the boundary).
The range is the distance an aircraft can fly between takeoff and landing, as limited by the time it can remain airborne.
For a powered aircraft the time limit is determined by the fuel load and rate of consumption.
For an unpowered aircraft, the maximum flight time is limited by factors such as weather conditions and pilot endurance. Many aircraft types are restricted to daylight hours, while balloons are limited by their supply of lifting gas. The range can be seen as the average ground speed multiplied by the maximum time in the air.
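The rule of thumb above — range as average ground speed multiplied by maximum time in the air — can be sketched numerically. The figures below are made-up illustrative values, not data from the source:

```python
# Rough range estimate: average ground speed times maximum endurance.
# Headwinds or tailwinds change ground speed, and hence range, directly.

def estimated_range_km(avg_ground_speed_kmh, max_endurance_h):
    """Distance coverable before the time-aloft limit is reached."""
    return avg_ground_speed_kmh * max_endurance_h

# e.g. a light aircraft cruising at 200 km/h with 5 hours of fuel:
print(estimated_range_km(200, 5))  # -> 1000
```

The same arithmetic applies to unpowered types, with endurance set by weather and pilot limits rather than fuel.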
The Airbus A350 is now the longest range airliner.
Flight dynamics is the science of air vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation around three axes which pass through the vehicle's center of gravity, known as "pitch", "roll," and "yaw".
Flight dynamics is concerned with the stability and control of an aircraft's rotation about each of these axes.
An aircraft that is unstable tends to diverge from its intended flight path and so is difficult to fly. A very stable aircraft tends to stay on its flight path and is difficult to maneuver. Therefore, it is important for any design to achieve the desired degree of stability. Since the widespread use of digital computers, it is increasingly common for designs to be inherently unstable and rely on computerised control systems to provide artificial stability.
A fixed wing is typically unstable in pitch, roll, and yaw. Pitch and yaw stability in conventional fixed-wing designs requires horizontal and vertical stabilisers, which act similarly to the feathers on an arrow. These stabilising surfaces allow an equilibrium of aerodynamic forces and stabilise the flight dynamics of pitch and yaw. They are usually mounted on the tail section (empennage), although in the canard layout the main aft wing replaces the canard foreplane as pitch stabiliser. Tandem-wing and tailless aircraft rely on the same general rule to achieve stability, the aft surface being the stabilising one.
A rotary wing is typically unstable in yaw, requiring a vertical stabiliser.
A balloon is typically very stable in pitch and roll due to the way the payload is slung underneath the center of lift.
Flight control surfaces enable the pilot to control an aircraft's flight attitude and are usually part of the wing or mounted on, or integral with, the associated stabilizing surface. Their development was a critical advance in the history of aircraft, which had until that point been uncontrollable in flight.
Aerospace engineers develop control systems for a vehicle's orientation (attitude) about its center of mass. The control systems include actuators, which exert forces in various directions, and generate rotational forces or moments about the aerodynamic center of the aircraft, and thus rotate the aircraft in pitch, roll, or yaw. For example, a pitching moment is a vertical force applied at a distance forward or aft from the aerodynamic center of the aircraft, causing the aircraft to pitch up or down. Control systems are also sometimes used to increase or decrease drag, for example to slow the aircraft to a safe speed for landing.
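The moment described above is just force times lever arm. The numbers and the function name below are hypothetical, chosen only to illustrate the idea:

```python
# A moment (N*m) is a force applied at a distance from the aerodynamic
# centre; a vertical force forward or aft of that centre pitches the
# aircraft up or down.

def pitching_moment(force_n, arm_m):
    """Moment produced by a vertical force at the given lever arm."""
    return force_n * arm_m

# 500 N of tail down-force acting 6 m aft of the aerodynamic centre:
print(pitching_moment(500, 6.0))  # -> 3000.0
```

Doubling either the force or the arm doubles the moment, which is why small control surfaces mounted far from the centre of mass can still rotate a large aircraft.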
The two main aerodynamic forces acting on any aircraft are lift supporting it in the air and drag opposing its motion. Control surfaces or other techniques may also be used to affect these forces directly, without inducing any rotation.
Aircraft permit long distance, high speed travel and may be a more fuel efficient mode of transportation in some circumstances. Aircraft have environmental and climate impacts beyond fuel efficiency considerations, however. They are also relatively noisy compared to other forms of travel and high altitude aircraft generate contrails, which experimental evidence suggests may alter weather patterns.
Aircraft are produced in several different types optimized for various uses: military aircraft, which include not just combat types but many types of supporting aircraft; and civil aircraft, which include all non-military types, experimental and model.
A military aircraft is any aircraft that is operated by a legal or insurrectionary armed service of any type. Military aircraft can be either combat or non-combat:
Most military aircraft are powered heavier-than-air types. Other types, such as gliders and balloons, have also been used as military aircraft; for example, balloons were used for observation during the American Civil War and World War I, and military gliders were used during World War II to land troops.
Civil aircraft divide into "commercial" and "general" types; however, there is some overlap.
Commercial aircraft include types designed for scheduled and charter airline flights, carrying passengers, mail and other cargo. The larger passenger-carrying types are the airliners, the largest of which are wide-body aircraft. Some of the smaller types are also used in general aviation, and some of the larger types are used as VIP aircraft.
General aviation is a catch-all covering other kinds of private (where the pilot is not paid for time or expenses) and commercial use, and involving a wide range of aircraft types such as business jets (bizjets), trainers, homebuilt, gliders, warbirds and hot air balloons to name a few. The vast majority of aircraft today are general aviation types.
An experimental aircraft is one that has not been fully proven in flight, or that carries a Special Airworthiness Certificate, called an Experimental Certificate in United States parlance. This often implies that the aircraft is testing new aerospace technologies, though the term also refers to amateur-built and kit-built aircraft, many of which are based on proven designs.
A model aircraft is a small unmanned type made to fly for fun, for static display, for aerodynamic research or for other purposes. A scale model is a replica of some larger design. | https://en.wikipedia.org/wiki?curid=849 |
Alfred Nobel
Alfred Bernhard Nobel ( , ; 21 October 1833 – 10 December 1896) was a Swedish chemist, engineer, inventor, businessman, and philanthropist. He held 355 different patents, dynamite being the most famous. The synthetic element nobelium was named after him. He owned Bofors, which he redirected from its previous role as primarily an iron and steel producer to a major manufacturer of cannon and other armaments. Having read a premature obituary which condemned him for profiting from the sales of arms, he bequeathed his fortune to institute the Nobel Prize. His name also survives in companies such as Dynamit Nobel and AkzoNobel, which are descendants of mergers with companies that Nobel established.
Born in Stockholm, Alfred Nobel was the third son of Immanuel Nobel (1801–1872), an inventor and engineer, and Karolina Andriette (Ahlsell) Nobel (1805–1889). The couple married in 1827 and had eight children. The family was impoverished, and only Alfred and his three brothers survived past childhood. Through his father, Alfred Nobel was a descendant of the Swedish scientist Olaus Rudbeck (1630–1702), and the boy in his turn was interested in engineering, particularly explosives, learning the basic principles from his father at a young age. Alfred Nobel's interest in technology was inherited from his father, an alumnus of the Royal Institute of Technology in Stockholm.
Following various business failures, Nobel's father moved to Saint Petersburg in 1837 and grew successful there as a manufacturer of machine tools and explosives. He invented the veneer lathe (which allowed the production of modern plywood) and started work on the torpedo. In 1842, the family joined him in the city. Now prosperous, his parents were able to send Nobel to private tutors and the boy excelled in his studies, particularly in chemistry and languages, achieving fluency in English, French, German and Russian. For 18 months, from 1841 to 1842, Nobel went to the only school he ever attended as a child, in Stockholm.
As a young man, Nobel studied with chemist Nikolai Zinin; then, in 1850, went to Paris to further the work. There he met Ascanio Sobrero, who had invented nitroglycerin three years before. Sobrero strongly opposed the use of nitroglycerin, as it was unpredictable, exploding when subjected to heat or pressure. But Nobel became interested in finding a way to control and use nitroglycerin as a commercially usable explosive, as it had much more power than gunpowder. At age 18, he went to the United States for one year to study, working for a short period under Swedish-American inventor John Ericsson, who designed the American Civil War ironclad "USS Monitor". Nobel filed his first patent, an English patent for a gas meter, in 1857, while his first Swedish patent, which he received in 1863, was on 'ways to prepare gunpowder'.
The family factory produced armaments for the Crimean War (1853–1856), but had difficulty switching back to regular domestic production when the fighting ended and they filed for bankruptcy. In 1859, Nobel's father left his factory in the care of the second son, Ludvig Nobel (1831–1888), who greatly improved the business. Nobel and his parents returned to Sweden from Russia and Nobel devoted himself to the study of explosives, and especially to the safe manufacture and use of nitroglycerin. Nobel invented a detonator in 1863, and in 1865 designed the blasting cap.
On 3 September 1864, a shed used for preparation of nitroglycerin exploded at the factory in Heleneborg, Stockholm, Sweden, killing five people, including Nobel's younger brother Emil. Dogged and unfazed by more minor accidents, Nobel went on to build further factories, focusing on improving the stability of the explosives he was developing. Nobel invented dynamite in 1867, a substance easier and safer to handle than the more unstable nitroglycerin. Dynamite was patented in the US and the UK and was used extensively in mining and the building of transport networks internationally. In 1875 Nobel invented gelignite, more stable and powerful than dynamite, and in 1887 patented ballistite, a predecessor of cordite.
Nobel was elected a member of the Royal Swedish Academy of Sciences in 1884, the same institution that would later select laureates for two of the Nobel prizes, and he received an honorary doctorate from Uppsala University in 1893.
Nobel's brothers Ludvig and Robert exploited oilfields along the Caspian Sea and became hugely rich in their own right. Nobel invested in these and amassed great wealth through the development of these new oil regions. During his life Nobel was issued 355 patents internationally and by his death his business had established more than 90 armaments factories, despite his apparently pacifist character.
In 1888, the death of his brother Ludvig caused several newspapers to publish obituaries of Alfred in error. One French newspaper published an obituary titled "Le marchand de la mort est mort" ("The merchant of death is dead"). Nobel read the obituary and was appalled at the idea that he would be remembered in this way. His decision to posthumously donate the majority of his wealth to found the Nobel Prize has been credited at least in part to him wanting to leave behind a better legacy.
Nobel found that when nitroglycerin was incorporated in an absorbent inert substance like "kieselguhr" (diatomaceous earth) it became safer and more convenient to handle, and this mixture he patented in 1867 as "dynamite". Nobel demonstrated his explosive for the first time that year, at a quarry in Redhill, Surrey, England. In order to help reestablish his name and improve the image of his business from the earlier controversies associated with the dangerous explosives, Nobel had also considered naming the highly powerful substance "Nobel's Safety Powder", but settled with Dynamite instead, referring to the Greek word for "power" (δύναμις, "dynamis").
Nobel later combined nitroglycerin with various nitrocellulose compounds, similar to collodion, but settled on a more efficient recipe combining another nitrate explosive, and obtained a transparent, jelly-like substance, which was a more powerful explosive than dynamite. 'Gelignite', or blasting gelatine, as it was named, was patented in 1876; and was followed by a host of similar combinations, modified by the addition of potassium nitrate and various other substances. Gelignite was more stable, transportable and conveniently formed to fit into bored holes, like those used in drilling and mining, than the previously used compounds and was adopted as the standard technology for mining in the "Age of Engineering" bringing Nobel a great amount of financial success, though at a significant cost to his health. An offshoot of this research resulted in Nobel's invention of ballistite, the precursor of many modern smokeless powder explosives and still used as a rocket propellant.
In 1888, Alfred's brother, Ludvig, died while visiting Cannes, and a French newspaper mistakenly published Alfred's obituary. It condemned him for his invention of military explosives (not, as is commonly quoted, dynamite, which was mainly used for civilian applications) and is said to have brought about his decision to leave a better legacy after his death. The obituary stated, "Le marchand de la mort est mort" ("The merchant of death is dead") and went on to say, "Dr. Alfred Nobel, who became rich by finding ways to kill more people faster than ever before, died yesterday." Alfred (who never had a wife or children) was disappointed with what he read and concerned with how he would be remembered.
On 27 November 1895, at the Swedish-Norwegian Club in Paris, Nobel signed his last will and testament and set aside the bulk of his estate to establish the Nobel Prizes, to be awarded annually without distinction of nationality. After taxes and bequests to individuals, Nobel's will allocated 94% of his total assets, 31,225,000 Swedish kronor, to establish the five Nobel Prizes. This converted to £1,687,837 (GBP) at the time. In 2012, the capital was worth around SEK 3.1 billion (US$472 million, EUR 337 million), which is almost twice the amount of the initial capital, taking inflation into account.
The first three of these prizes are awarded for eminence in physical science, in chemistry and in medical science or physiology; the fourth is for literary work "in an ideal direction" and the fifth prize is to be given to the person or society that renders the greatest service to the cause of international fraternity, in the suppression or reduction of standing armies, or in the establishment or furtherance of peace congresses.
The formulation for the literary prize being given for a work "in an ideal direction" is cryptic and has caused much confusion. For many years, the Swedish Academy interpreted "ideal" as "idealistic" and used it as a reason not to give the prize to important but less romantic authors, such as Henrik Ibsen and Leo Tolstoy. This interpretation has since been revised, and the prize has been awarded to, for example, Dario Fo and José Saramago, who do not belong to the camp of literary idealism.
There was room for interpretation by the bodies he had named for deciding on the physical sciences and chemistry prizes, given that he had not consulted them before making the will. In his one-page testament, he stipulated that the money go to discoveries or inventions in the physical sciences and to discoveries or improvements in chemistry. He had opened the door to technological awards, but had not left instructions on how to deal with the distinction between science and technology. Since the deciding bodies he had chosen were more concerned with the former, the prizes went to scientists more often than engineers, technicians or other inventors.
Sweden's central bank Sveriges Riksbank celebrated its 300th anniversary in 1968 by donating a large sum of money to the Nobel Foundation to be used to set up a sixth prize in the field of economics in honour of Alfred Nobel. In 2001, Alfred Nobel's great-great-nephew, Peter Nobel (b. 1931), asked the Bank of Sweden to differentiate its award to economists given "in Alfred Nobel's memory" from the five other awards. This request added to the controversy over whether the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel is actually a legitimate "Nobel Prize".
Nobel was accused of high treason against France for selling Ballistite to Italy, so he moved from Paris to Sanremo, Italy in 1891. On 10 December 1896, he suffered a stroke and died. He had left most of his wealth in trust, unbeknown to his family, in order to fund the Nobel Prize awards. He is buried in Norra begravningsplatsen in Stockholm.
Nobel was Lutheran and regularly attended the Church of Sweden Abroad during his Paris years, led by pastor Nathan Söderblom who received the Nobel Peace Prize in 1930. He became an agnostic in youth and was an atheist later in life, though still donated generously to the Church.
Nobel travelled for much of his business life, maintaining companies in Europe and America while keeping a home in Paris from 1873 to 1891. He remained a solitary character, given to periods of depression. He remained unmarried, although his biographers note that he had at least three loves, the first in Russia with a girl named Alexandra who rejected his proposal. In 1876, Austro-Bohemian Countess Bertha Kinsky became his secretary, but she left him after a brief stay to marry her previous lover Baron Arthur Gundaccar von Suttner. Her contact with Nobel was brief, yet she corresponded with him until his death in 1896, and it is believed that she was a major influence in his decision to include a peace prize in his will. She was awarded the 1905 Nobel Peace prize "for her sincere peace activities". Nobel's longest-lasting relationship was with Sofija Hess from Celje whom he met in 1876. The liaison lasted for 18 years.
Nobel gained proficiency in Swedish, French, Russian, English, German, and Italian. He also developed sufficient literary skill to write poetry in English. His "Nemesis" is a prose tragedy in four acts about Beatrice Cenci. It was printed while he was dying, but the entire stock, regarded as scandalous and blasphemous, was destroyed immediately after his death except for three copies. It was published in Sweden in 2003 and has been translated into Slovenian and French.
The "Monument to Alfred Nobel" in Saint Petersburg is located along the Bolshaya Nevka River on Petrogradskaya Embankment. It was dedicated in 1991 to mark the 90th anniversary of the first Nobel Prize presentation. Diplomat Thomas Bertelman and Professor Arkady Melua were the initiators of the monument's creation (1989). Professor A. Melua provided funds for the establishment of the monument (J.S.Co. "Humanistica", 1990–1991). The abstract metal sculpture was designed by local artists Sergey Alipov and Pavel Shevchenko, and appears to be an explosion or branches of a tree. Petrogradskaya Embankment is the street where Nobel's family lived until 1859.
Criticism of Nobel focuses on his leading role in weapons manufacturing and sales, and some question his motives in creating his prizes, suggesting they were intended to improve his reputation.
Anal sex
Anal sex or anal intercourse is generally the insertion and thrusting of the erect penis into a person's anus, or anus and rectum, for sexual pleasure. Other forms of anal sex include fingering, the use of sex toys for anal penetration, oral sex performed on the anus (anilingus), and pegging. Although "anal sex" most commonly means penile–anal penetration, sources sometimes use "anal intercourse" to exclusively denote penile–anal penetration, and "anal sex" to denote any form of anal sexual activity, especially between pairings as opposed to anal masturbation.
While anal sex is commonly associated with male homosexuality, research shows that not all gay males engage in anal sex and that it is not uncommon in heterosexual relationships. Types of anal sex can also be a part of lesbian sexual practices. People may experience pleasure from anal sex by stimulation of the anal nerve endings, and orgasm may be achieved through anal penetration – by indirect stimulation of the prostate in men, indirect stimulation of the clitoris or an area of the vagina (sometimes called "the G-spot") in women, and other sensory nerves (especially the pudendal nerve). However, people may also find anal sex painful, sometimes extremely so, which may be primarily due to psychological factors in some cases.
As with most forms of sexual activity, anal sex participants risk contracting sexually transmitted infections (STIs). Anal sex is considered a high-risk sexual practice because of the vulnerability of the anus and rectum. The anal and rectal tissues are delicate and do not provide lubrication like the vagina does, so they can easily tear and permit disease transmission, especially if a personal lubricant is not used. Anal sex without protection of a condom is considered the riskiest form of sexual activity, and therefore health authorities such as the World Health Organization (WHO) recommend safe sex practices for anal sex.
Strong views are often expressed about anal sex. It is controversial in various cultures, especially with regard to religious prohibitions. This is commonly due to prohibitions against anal sex among males or teachings about the procreative purpose of sexual activity. It may be considered taboo or unnatural, and is a criminal offense in some countries, punishable by corporal or capital punishment. By contrast, anal sex may also be considered a natural and valid form of sexual activity as fulfilling as other desired sexual expressions, and can be an enhancing or primary element of a person's sex life.
The abundance of nerve endings in the anal region and rectum can make anal sex pleasurable for men or women. The internal and external sphincter muscles control the opening and closing of the anus; these muscles, which are sensitive membranes made up of many nerve endings, facilitate pleasure or pain during anal sex. "Human Sexuality: An Encyclopedia" states that "the inner third of the anal canal is less sensitive to touch than the outer two-thirds, but is more sensitive to pressure" and that "the rectum is a curved tube about eight or nine inches long and has the capacity, like the anus, to expand".
Research indicates that anal sex occurs significantly less frequently than other sexual behaviors, but its association with dominance and submission, as well as taboo, makes it an appealing stimulus to people of all sexual orientations. In addition to sexual penetration by the penis, people may use sex toys such as a dildo, a butt plug or anal beads, engage in fingering, anilingus, pegging, anal masturbation or fisting for anal sexual activity, and different sex positions may also be included. Fisting is the least practiced of the activities, partly because it is uncommon that people can relax enough to accommodate an object as big as a fist being inserted into the anus.
In a male receptive partner, being anally penetrated can produce a pleasurable sensation due to the inserted penis rubbing or brushing against the prostate through the anal wall. This can result in pleasurable sensations and can lead to an orgasm in some cases. Prostate stimulation can produce a deeper orgasm, sometimes described by men as more widespread and intense, longer-lasting, and allowing for greater feelings of ecstasy than orgasm elicited by penile stimulation only. The prostate is located next to the rectum and is the larger, more developed male homologue (variation) to the female Skene's glands. It is also typical for a man to not reach orgasm as a receptive partner solely from anal sex.
General statistics indicate that 70–80% of women require direct clitoral stimulation to achieve orgasm. The vaginal walls contain significantly fewer nerve endings than the clitoris (which has many nerve endings specifically intended for orgasm), and therefore intense sexual pleasure, including orgasm, from vaginal sexual stimulation is less likely to occur than from direct clitoral stimulation in the majority of women. The clitoris is composed of more than the externally visible glans (head). The vagina, for example, is flanked on each side by the clitoral crura, the internal legs of the clitoris, which are highly sensitive and become engorged with blood when sexually aroused. Indirect stimulation of the clitoris through anal penetration may be caused by the shared sensory nerves, especially the pudendal nerve, which gives off the inferior anal nerves and divides into the perineal nerve and the dorsal nerve of the clitoris. Although the anus has many nerve endings, their purpose is not specifically for inducing orgasm, and so a woman achieving orgasm solely by anal stimulation is rare.
The Gräfenberg spot, or G-spot, is a debated area of female anatomy, particularly among doctors and researchers, but it is typically described as being located behind the female pubic bone surrounding the urethra and accessible through the anterior wall of the vagina; it and other areas of the vagina are considered to have tissue and nerves that are related to the clitoris. Direct stimulation of the clitoris, a G-spot area, or both, while engaging in anal sex can help some women enjoy the activity and reach orgasm during it.
Stimulation from anal sex can additionally be affected by popular perception or portrayals of the activity, such as erotica or pornography. In pornography, anal sex is commonly portrayed as a desirable, painless routine that does not require personal lubricant; this can result in couples performing anal sex without care, and men and women believing that it is unusual for women, as receptive partners, to find discomfort or pain instead of pleasure from the activity. By contrast, each person's sphincter muscles react to penetration differently, the anal sphincters have tissues that are more prone to tearing, and the anus and rectum do not provide lubrication for sexual penetration like the vagina does. Researchers say adequate application of a personal lubricant, relaxation, and communication between sexual partners are crucial to avoid pain or damage to the anus or rectum. Additionally, ensuring that the anal area is clean and the bowel is empty, for both aesthetics and practicality, may be desired by participants.
The anal sphincters are usually tighter than the pelvic muscles of the vagina, which can enhance the sexual pleasure for the inserting male during male-to-female anal intercourse because of the pressure applied to the penis. Men may also enjoy the penetrative role during anal sex because of its association with dominance, because it is made more alluring by a female partner or society in general insisting that it is forbidden, or because it presents an additional option for penetration.
While some women find being a receptive partner during anal intercourse painful or uncomfortable, or only engage in the act to please a male sexual partner, other women find the activity pleasurable or prefer it to vaginal intercourse.
In a 2010 clinical review article of heterosexual anal sex, "anal intercourse" is used to specifically denote penile-anal penetration, and "anal sex" is used to denote any form of anal sexual activity. The review suggests that anal sex is exotic among the sexual practices of some heterosexuals and that "for a certain number of heterosexuals, anal intercourse is pleasurable, exciting, and perhaps considered more intimate than vaginal sex".
Anal intercourse is sometimes used as a substitute for vaginal intercourse during menstruation. The likelihood of pregnancy occurring during anal sex is greatly reduced, as anal sex alone cannot lead to pregnancy unless sperm is somehow transported to the vaginal opening. Because of this, some couples practice anal intercourse as a form of contraception, often in the absence of a condom.
Male-to-female anal sex is commonly viewed as a way of preserving female virginity because it is non-procreative and does not tear the hymen; a person, especially a teenage girl or woman, who engages in anal sex or other sexual activity with no history of having engaged in vaginal intercourse is often regarded among heterosexuals and researchers as not having yet experienced virginity loss. This is sometimes called "technical virginity." Heterosexuals may view anal sex as "fooling around" or as foreplay; scholar Laura M. Carpenter stated that this view "dates to the late 1600s, with explicit 'rules' appearing around the turn of the twentieth century, as in marriage manuals defining petting as 'literally every caress known to married couples but does not include complete sexual intercourse.'"
Because most research on anal intercourse addresses men who have sex with men, little data exists on the prevalence of anal intercourse among heterosexual couples. In Kimberly R. McBride's 2010 clinical review on heterosexual anal intercourse and other forms of anal sexual activity, it is suggested that changing norms may affect the frequency of heterosexual anal sex. McBride and her colleagues investigated the prevalence of non-intercourse anal sex behaviors among a sample of men (n=1,299) and women (n=1,919) compared to anal intercourse experience and found that 51% of men and 43% of women had participated in at least one act of oral–anal sex, manual–anal sex, or anal sex toy use. The report states the majority of men (n=631) and women (n=856) who reported heterosexual anal intercourse in the past 12 months were in exclusive, monogamous relationships: 69% and 73%, respectively. The review added that because "relatively little attention [is] given to anal intercourse and other anal sexual behaviors between heterosexual partners", this means that it is "quite rare" to have research "that specifically differentiates the anus as a sexual organ or addresses anal sexual function or dysfunction as legitimate topics. As a result, we do not know the extent to which anal intercourse differs qualitatively from coitus."
According to a 2010 study from the National Survey of Sexual Health and Behavior (NSSHB) that was authored by Debby Herbenick et al., although anal intercourse is reported by fewer women than other partnered sex behaviors, partnered women aged 18–49 are significantly more likely to report having anal sex in the past 90 days. Women engaged in anal intercourse less commonly than men. Vaginal intercourse was practiced more than insertive anal intercourse among men, but 13% to 15% of men aged 25 to 49 practiced insertive anal intercourse.
With regard to adolescents, limited data also exists. This may be because of the taboo nature of anal sex and that teenagers and caregivers subsequently avoid talking to one another about the topic. It is also common for subject review panels and schools to avoid the subject. A 2000 study found that 22.9% of college students who self-identified as non-virgins had anal sex. They used condoms during anal sex 20.9% of the time as compared with 42.9% of the time with vaginal intercourse.
Anal sex being more common among heterosexuals today than it was previously has been linked to the increase in consumption of anal pornography among men, especially among those who view it on a regular basis. Seidman et al. argued that "cheap, accessible and, especially, interactive media have enabled many more people to produce as well as consume pornography", and that this modern way of producing pornography, in addition to the buttocks and anus having become more eroticized, has led to a significant interest in or obsession with anal sex among men.
Historically, anal sex has been commonly associated with male homosexuality. However, many gay men and men who have sex with men in general (those who identify as gay, bisexual, heterosexual or have not identified their sexual identity) do not engage in anal sex. Among men who have anal sex with other men, the insertive partner may be referred to as the "top" and the one being penetrated may be referred to as the "bottom". Those who enjoy either role may be referred to as "versatile".
Gay men who prefer anal sex may view it as their version of intercourse and a natural expression of intimacy that is capable of providing pleasure. The notion that it might resonate with gay men with the same emotional significance that vaginal sex resonates with heterosexuals has also been considered. Some men who have sex with men, however, believe that being a receptive partner during anal sex questions their masculinity.
Men who have sex with men may also prefer to engage in frot or other forms of mutual masturbation because they find it more pleasurable or more affectionate, to preserve technical virginity, or as safe sex alternatives to anal sex, while other frot advocates denounce anal sex as degrading to the receptive partner and unnecessarily risky.
Reports regarding the prevalence of anal sex among gay men and other men who have sex with men vary. A survey in "The Advocate" in 1994 indicated that 46% of gay men preferred to penetrate their partners, while 43% preferred to be the receptive partner. Other sources suggest that roughly three-fourths of gay men have had anal sex at one time or another, with an equal percentage participating as tops and bottoms. A 2012 NSSHB sex survey in the U.S. suggests high lifetime participation in anal sex among gay men: 83.3% report ever taking part in anal sex in the insertive position and 90% in the receptive position, even if only between a third and a quarter self-report very recent engagement in the practice, defined as 30 days or less.
Oral sex and mutual masturbation are more common than anal stimulation among men in sexual relationships with other men. According to Weiten et al., anal intercourse is generally more popular among gay male couples than among heterosexual couples, but "it ranks behind oral sex and mutual masturbation" among both sexual orientations in prevalence. Wellings et al. reported that "the equation of 'homosexual' with 'anal' sex among men is common among lay and health professionals alike" and that "yet an Internet survey of 180,000 MSM across Europe (EMIS, 2011) showed that oral sex was most commonly practised, followed by mutual masturbation, with anal intercourse in third place".
Women may sexually stimulate a man's anus by fingering the exterior or interior areas of the anus; they may also stimulate the perineum (which, for males, is between the base of the scrotum and the anus), massage the prostate or engage in anilingus. Sex toys, such as a dildo, may also be used. The practice of a woman penetrating a man's anus with a strap-on dildo for sexual activity is called pegging.
Commonly, heterosexual men reject the idea of being receptive partners during anal sex because they believe it is a feminine act, can make them vulnerable, or contradicts their sexual orientation (for example, that it is indicative that they are gay). The "BMJ" stated in 1999, however:
Reece et al. reported in 2010 that receptive anal intercourse is infrequent among men overall, stating that "an estimated 7% of men 14 to 94 years old reported being a receptive partner during anal intercourse".
With regard to lesbian sexual practices, anal sex includes fingering, use of a dildo or other sex toys, or anilingus.
There is less research on anal sexual activity among women who have sex with women compared to couples of other sexual orientations. In 1987, a non-scientific study (Munson) was conducted of more than 100 members of a lesbian social organization in Colorado. When asked what techniques they used in their last ten sexual encounters, lesbians in their 30s were twice as likely as other age groups to engage in anal stimulation (with a finger or dildo). A 2014 study of partnered lesbian women in Canada and the U.S. found that 7% engaged in anal stimulation or penetration at least once a week; about 10% did so monthly and 70% did not at all. Anilingus is also less often practiced among female same-sex couples.
Anal sex can expose its participants to two principal dangers: infections due to the high number of infectious microorganisms not found elsewhere on the body, and physical damage to the anus and rectum due to their fragility. Unprotected penile-anal penetration, colloquially known as "barebacking", carries a higher risk of passing on sexually transmitted infections (STIs) because the anal sphincter is a delicate, easily torn tissue that can provide an entry for pathogens. Use of condoms, ample lubrication to reduce the risk of tearing, and safer sex practices in general, reduce the risk of STI transmission. However, a condom can break or otherwise come off during anal sex, and this is more likely to happen with anal sex than with other sex acts because of the tightness of the anal sphincters during friction.
Unprotected receptive anal sex (with an HIV positive partner) is the sex act most likely to result in HIV transmission. Other infections that can be transmitted by unprotected anal sex are human papillomavirus (HPV) (which can increase risk of anal cancer); typhoid fever; amoebiasis; chlamydia; cryptosporidiosis; "E. coli" infections; giardiasis; gonorrhea; hepatitis A; hepatitis B; hepatitis C; herpes simplex; Kaposi's sarcoma-associated herpesvirus (HHV-8); lymphogranuloma venereum; "Mycoplasma hominis"; "Mycoplasma genitalium"; pubic lice; salmonellosis; shigella; syphilis; tuberculosis; and "Ureaplasma urealyticum".
As with other sexual practices, people without sound knowledge about the sexual risks involved are susceptible to STIs. Because of the view that anal sex is not "real sex" and therefore does not result in virginity loss, or pregnancy, teenagers and other young people may consider vaginal intercourse riskier than anal intercourse and believe that an STI can only result from vaginal intercourse. It may be because of these views that condom use with anal sex is often reported to be low and inconsistent across all groups in various countries.
Although anal sex alone does not lead to pregnancy, pregnancy can still occur with anal sex or other forms of sexual activity if the penis is near the vagina (such as during intercrural sex or other genital-genital rubbing) and its sperm is deposited near the vagina's entrance and travels along the vagina's lubricating fluids; the risk of pregnancy can also occur without the penis being near the vagina because sperm may be transported to the vaginal opening by the vagina coming in contact with fingers or other non-genital body parts that have come in contact with semen.
There are a variety of factors that make male-to-female anal intercourse riskier than vaginal intercourse for women, including the risk of HIV transmission being higher for anal intercourse than for vaginal intercourse. The risk of injury to the woman during anal intercourse is also significantly higher than the risk of injury to her during vaginal intercourse because of the durability of the vaginal tissues compared to the anal tissues. Additionally, if a man moves from anal intercourse immediately to vaginal intercourse without a condom or without changing it, infections can arise in the vagina (or urinary tract) due to bacteria present within the anus; these infections can also result from switching between vaginal sex and anal sex by the use of fingers or sex toys.
Pain during receptive anal sex among gay men (or men who have sex with men) is formally known as "anodyspareunia." In one study, 61% of gay or bisexual men said they experienced painful receptive anal sex and that it was the most frequent sexual difficulty they had experienced. By contrast, 24% of gay or bisexual men stated that they always experienced some degree of pain during anal sex, and about 12% of gay men find it too painful to pursue receptive anal sex; it was concluded that the perception of anal sex as painful is as likely to be psychologically or emotionally based as it is to be physically based. Factors predictive of pain during anal sex include inadequate lubrication, feeling tense or anxious, lack of stimulation, as well as lack of social ease with being gay and being closeted. Research has found that psychological factors can in fact be the primary contributors to the experience of pain during anal intercourse and that adequate communication between sexual partners can prevent it, countering the notion that pain is always inevitable during anal sex.
Unprotected anal sex is a risk factor for formation of antisperm antibodies (ASA) in the recipient. In some people, ASA may cause autoimmune infertility. Antisperm antibodies impair fertilization, negatively affect the implantation process and impair growth and development of the embryo.
Anal sex can exacerbate hemorrhoids and therefore result in bleeding; in other cases, the formation of a hemorrhoid is attributed to anal sex. If bleeding occurs as a result of anal sex, it may also be because of a tear in the anal or rectal tissues (an anal fissure) or perforation (a hole) in the colon, the latter of which is a serious medical issue that should be remedied by immediate medical attention. Because of the rectum's lack of elasticity, the anal mucous membrane being thin, and small blood vessels being present directly beneath the mucous membrane, tiny tears and bleeding in the rectum usually result from penetrative anal sex, though the bleeding is usually minor and therefore usually not visible. By contrast to other anal sexual behaviors, anal fisting poses a more serious danger of damage due to the deliberate stretching of the anal and rectal tissues; anal fisting injuries include anal sphincter lacerations and rectal and sigmoid colon (rectosigmoid) perforation, which might result in death.
Repetitive penetrative anal sex may result in the anal sphincters becoming weakened, which may cause rectal prolapse or affect the ability to hold in feces (a condition known as fecal incontinence). Rectal prolapse is relatively uncommon, however, especially in men, and its causes are not well understood. Kegel exercises have been used to strengthen the anal sphincters and overall pelvic floor, and may help prevent or remedy fecal incontinence.
Most cases of anal cancer are related to infection with the human papilloma virus (HPV). Anal sex alone does not cause anal cancer; the risk of anal cancer through anal sex is attributed to HPV infection, which is often contracted through unprotected anal sex. Anal cancer is relatively rare, and significantly less common than cancer of the colon or rectum (colorectal cancer); the American Cancer Society states that it affects approximately 7,060 people (4,430 in women and 2,630 in men) and results in approximately 880 deaths (550 in women and 330 in men) in the United States, and that, though anal cancer has been on the rise for many years, it is mainly diagnosed in adults, "with an average age being in the early 60s" and it "affects women somewhat more often than men." Though anal cancer is serious, treatment for it is "often very effective" and most anal cancer patients can be cured of the disease; the American Cancer Society adds that "receptive anal intercourse also increases the risk of anal cancer in both men and women, particularly in those younger than the age of 30. Because of this, men who have sex with men have a high risk of this cancer."
Different cultures have had different views on anal sex throughout human history, with some cultures more positive about the activity than others. Historically, anal sex has been restricted or condemned, especially with regard to religious beliefs; it has also commonly been used as a form of domination, usually with the active partner (the one who is penetrating) representing masculinity and the passive partner (the one who is being penetrated) representing femininity. A number of cultures have especially recorded the practice of anal sex between males, and anal sex between males has been especially stigmatized or punished. In some societies, if discovered to have engaged in the practice, the individuals involved were put to death, such as by decapitation, burning, or even mutilation.
Anal sex has been more accepted in modern times; it is often considered a natural, pleasurable form of sexual expression. Some people, men in particular, are only interested in anal sex for sexual satisfaction, which has been partly attributed to the buttocks and anus being more eroticized in modern culture, including via pornography. Engaging in anal sex is still, however, punished in some societies. For example, regarding LGBT rights in Iran, Iran's Penal Code states in Article 109 that "both men involved in same-sex penetrative (anal) or non-penetrative sex will be punished" and "Article 110 states that those convicted of engaging in anal sex will be executed and that the manner of execution is at the discretion of the judge".
From the earliest records, the ancient Sumerians had very relaxed attitudes toward sex and did not regard anal sex as taboo. "Entu" priestesses were forbidden from producing offspring and frequently engaged in anal sex as a method of birth control. Anal sex is also obliquely alluded to by a description of an omen in which a man "keeps saying to his wife: 'Bring your backside.'" Other Sumerian texts refer to homosexual anal intercourse. The "gala", a set of priests who worked in the temples of the goddess Inanna, where they performed elegies and lamentations, were especially known for their homosexual proclivities. The Sumerian sign for "gala" was a ligature of the signs for "penis" and "anus". One Sumerian proverb reads: "When the "gala" wiped off his ass [he said], 'I must not arouse that which belongs to my mistress [i.e., Inanna].'"
The term "Greek love" has long been used to refer to anal intercourse, and in modern times, "doing it the Greek way" is sometimes used as slang for anal sex. Male-male anal sex was not a universally accepted practice in Ancient Greece; it was the target of jokes in some Athenian comedies. Aristophanes, for instance, mockingly alludes to the practice, claiming, "Most citizens are "europroktoi" (wide-arsed) now." The terms "kinaidos", "europroktoi", and "katapygon" were used by Greek residents to categorize men who chronically practiced passive anal intercourse. Pederastic practices in ancient Greece (sexual activity between men and adolescent boys), at least in Athens and Sparta, were expected to avoid penetrative sex of any kind. Greek artwork of sexual interaction between men and boys usually depicted fondling or intercrural sex, which was not condemned for violating or feminizing boys, while male-male anal intercourse was usually depicted between males of the same age-group. Intercrural sex was not considered penetrative and two males engaging in it was considered a "clean" act. Some sources explicitly state that anal sex between men and boys was criticized as shameful and seen as a form of hubris. Evidence suggests, however, that the younger partner in pederastic relationships (i.e., the "eromenos") did engage in receptive anal intercourse so long as no one accused him of being 'feminine'.
In later Roman-era Greek poetry, anal sex became a common literary convention, represented as taking place with "eligible" youths: those who had attained the proper age but had not yet become adults. Seducing those not of proper age (for example, non-adolescent children) into the practice was considered very shameful for the adult, and having such relations with a male who was no longer adolescent was considered more shameful for the young male than for the one mounting him. Greek courtesans, or hetaerae, are said to have frequently practiced male-female anal intercourse as a means of preventing pregnancy.
A male citizen taking the passive (or receptive) role in anal intercourse ("paedicatio" in Latin) was condemned in Rome as an act of "impudicitia" (immodesty or unchastity); free men, however, could take the active role with a young male slave, known as a "catamite" or "puer delicatus". The latter was allowed because anal intercourse was considered equivalent to vaginal intercourse in this way; men were said to "take it like a woman" (muliebria pati, "to undergo womanly things") when they were anally penetrated, but when a man performed anal sex on a woman, she was thought of as playing the boy's role. Likewise, women were believed to only be capable of anal sex or other sex acts with women if they possessed an exceptionally large clitoris or a dildo. The passive partner in any of these cases was always considered a woman or a boy because being the one who penetrates was characterized as the only appropriate way for an adult male citizen to engage in sexual activity, and he was therefore considered unmanly if he was the one who was penetrated; slaves could be considered "non-citizen". Although Roman men often availed themselves of their own slaves or others for anal intercourse, Roman comedies and plays presented Greek settings and characters for explicit acts of anal intercourse, and this may be indicative that the Romans thought of anal sex as something specifically "Greek".
In Japan, records (including detailed shunga) show that some males engaged in penetrative anal intercourse with males. Evidence suggestive of widespread male-female anal intercourse in a pre-modern culture can be found in the erotic vases, or stirrup-spout pots, made by the Moche people of Peru; in a survey of a collection of these pots, 31 percent were found to depict male-female anal intercourse, significantly more than any other sex act. Moche pottery of this type belonged to the world of the dead, which was believed to be a reversal of life; therefore, the reverse of common practices was often portrayed. The Larco Museum houses an erotic gallery in which this pottery is showcased.
In many Western countries, anal sex has generally been taboo since the Middle Ages, when heretical movements were sometimes attacked by accusations that their members practiced anal sex among themselves. At that time, celibate members of the Christian clergy were accused of engaging in "sins against nature", including anal sex.
The term "buggery" originated in medieval Europe as an insult used to describe the rumored same-sex sexual practices of the heretics from a sect originating in Bulgaria, where its followers were called "bogomils"; when they spread out of the country, they were called "buggres" (from the ethnonym "Bulgars"). A more archaic term for the practice is "pedicate", from the Latin "pedicare", with the same meaning.
The Renaissance poet Pietro Aretino advocated anal sex in his "Sonetti Lussuriosi" (Lust Sonnets). While men who engaged in homosexual relationships were generally suspected of engaging in anal sex, many such individuals did not. Among these, in recent times, have been André Gide, who found it repulsive; and Noël Coward, who had a horror of disease, and asserted when young that "I'd never do anything – well the disgusting thing they do – because I know I could get something wrong with me".
The Mishneh Torah, a text considered authoritative by Orthodox Jewish sects, states "since a man's wife is permitted to him, he may act with her in any manner whatsoever. He may have intercourse with her whenever he so desires and kiss any organ of her body he wishes, and he may have intercourse with her naturally or unnaturally [traditionally, "unnaturally" refers to anal and oral sex], provided that he does not expend semen to no purpose. Nevertheless, it is an attribute of piety that a man should not act in this matter with levity and that he should sanctify himself at the time of intercourse."
Christian texts may sometimes euphemistically refer to anal sex as the "peccatum contra naturam" (the sin against nature, after Thomas Aquinas) or "Sodomitica luxuria" (sodomitical lusts, in one of Charlemagne's ordinances), or "peccatum illud horribile, inter christianos non nominandum" (that horrible sin that among Christians is not to be named).
"Liwat", or the sin of Lot's people, which has come to be interpreted as referring generally to same-sex sexual activity, is commonly officially prohibited by Islamic sects; there are parts of the Quran which talk about smiting on Sodom and Gomorrah, and this is thought to be a reference to unnatural sex, and so there are hadith and Islamic laws which prohibit it. In Islamic belief, it is considered objectionable to use the words "al-Liwat" and "luti" to refer to homosexuality, as this is blasphemy toward the prophet of Allah; the terms "sodomy" and "homosexuality" are therefore preferred. Nevertheless, same-sex male practitioners of anal sex are called "luti" (plural "lutiyin") and are seen as criminals in the same way that a thief is a criminal.
Aarau
Aarau (, ) is a town, a municipality, and the capital of the northern Swiss canton of Aargau. The town is also the capital of the district of Aarau. It is German-speaking and predominantly Protestant. Aarau is situated on the Swiss plateau, in the valley of the Aare, on the river's right bank, and at the southern foot of the Jura Mountains, and is west of Zürich, and northeast of Bern. The municipality borders directly on the canton of Solothurn to the west. It is the largest town in Aargau. At the beginning of 2010 Rohr became a district of Aarau.
The official language of Aarau is (the Swiss variety of Standard) German, but the main spoken language is the local variant of the Alemannic Swiss German dialect.
The old city of Aarau is situated on a rocky outcrop at a narrowing of the Aare river valley, at the southern foot of the Jura mountains. Newer districts of the city lie to the south and east of the outcrop, as well as higher up the mountain, and in the valley on both sides of the Aare.
The neighboring municipalities are Küttigen to the north and Buchs to the east, Suhr to the south-east, Unterentfelden to the south, and Eppenberg-Wöschnau and Erlinsbach to the west.
Aarau and the nearby neighboring municipalities have grown together and now form an interconnected agglomeration. The only exception is Unterentfelden, whose settlements are separated from Aarau by the extensive forests of Gönhard and Zelgli.
Approximately nine-tenths of the city is south of the Aare, and one tenth is to the north. It has an area, , of . Of this area, 6.3% is used for agricultural purposes, while 34% is forested. Of the rest of the land, 55.2% is settled (buildings or roads) and the remainder (4.5%) is non-productive (rivers or lakes). The lowest elevation, , is found at the banks of the Aar, and the highest elevation, at , is the Hungerberg on the border with Küttigen.
A few artifacts from the Neolithic period were found in Aarau. Near the location of the present train station, the ruins of a settlement from the Bronze Age (about 1000 BC) have been excavated. The Roman road between Salodurum (Solothurn) and Vindonissa passed through the area, along the route now covered by the Bahnhofstrasse. In 1976 divers in the Aare found part of a seven-meter-wide wooden bridge from late Roman times.
Aarau was founded around AD 1240 by the counts of Kyburg. Aarau is first mentioned in 1248 as "Arowe". Around 1250 it was mentioned as "Arowa". However, the first mention of a city-sized settlement was in 1256. The town was ruled from the "Rore" tower, which has been incorporated into the modern city hall.
In 1273 the counts of Kyburg died out. Agnes of Kyburg, who had no male relations, sold the family's lands to King Rudolf I von Habsburg. He granted Aarau its city rights in 1283. In the 14th century the city was expanded in two stages, and a second defensive wall was constructed. A deep ditch separated the city from its "suburb;" its location is today marked by a wide street named "Graben" (meaning Ditch).
In 1415 Bern invaded lower Aargau with the help of Solothurn. Aarau capitulated after a short resistance, and was forced to swear allegiance to the new rulers. In the 16th century, the rights of the lower classes were abolished.
In March 1528 the citizens of Aarau allowed the introduction of Protestantism at the urging of the Bernese. A growth in population during the 16th Century led to taller buildings and denser construction methods. Early forms of industry developed at this time; however, unlike in other cities, no guilds were formed in Aarau.
On 11 August 1712, the Peace of Aarau was signed. This granted each canton the right to choose its own religion, thereby ending Catholic control. Starting in the early 18th century, the textile industry was established in Aarau. German immigrants contributed to the city's favorable conditions by introducing the cotton and silk factories. These highly educated immigrants were also responsible for educational reform and the enlightened, revolutionary spirit that developed in Aarau.
On 27 December 1797, the last Tagsatzung of the Old Swiss Confederacy was held in Aarau. Two weeks later a French envoy continued to foment the revolutionary opinions of the city. The contrast between a high level of education and a low level of political rights was particularly great in Aarau, and the city refused to send troops to defend the Bernese border. By mid-March 1798 Aarau was occupied by French troops.
On 22 March 1798 Aarau was declared the capital of the Helvetic Republic, making it the first capital of a unified Switzerland. Parliament met in the city hall. On 20 September, the capital was moved to Lucerne.
In 1803, Napoleon ordered the fusion of the cantons of Aargau, Baden and Fricktal. Aarau was declared the capital of the new, enlarged canton of Aargau. In 1820 the city wall was torn down, with the exception of the individual towers and gates, and the defensive ditches were filled in.
The wooden bridge, dating from the Middle Ages, across the Aare was destroyed by floods three times in thirty years, and was replaced with a steel suspension bridge in 1851. This was replaced by a concrete bridge in 1952. The city was linked up to the Swiss Central Railway in 1856.
The textile industry in Aarau broke down in about 1850 because of the protectionist tariff policies of neighboring states. Other industries had developed by that time to replace it, including the production of mathematical instruments, shoes and cement. Beginning in 1900, numerous electrical enterprises developed. By the 1960s, more citizens worked in service industries or for the canton-level government than in manufacturing. During the 1980s many of the industries left Aarau completely.
In 1802 the Canton School was established; it was the first non-parochial high school in Switzerland. It developed a good reputation, and was home to Nobel Prize winners Albert Einstein, Paul Karrer, and Werner Arber, as well as several Swiss politicians and authors.
The purchase of a manuscript collection in 1803 laid the foundation for what would become the Cantonal Library, which contains a Bible annotated by Huldrych Zwingli, along with manuscripts and incunabula. More newspapers developed in the city, maintaining the revolutionary atmosphere of Aarau. Since 1820, Aarau has been a refuge for political refugees.
The urban educational and cultural opportunities of Aarau were extended through numerous new institutions. A Theatre and Concert Hall was constructed in 1883, which was renovated and expanded in 1995–96. The Aargau Nature Museum opened in 1922. A former cloth warehouse was converted into a small theatre in 1974, and the alternative culture center KIFF (Culture in the fodder factory) was established in a former animal fodder factory.
The earliest use of the place name was in 1248 (in the form Arowe), and probably referred to the settlement in the area before the founding of the city. It comes, along with the name of the River Aare (which was called Arula, Arola, and Araris in early times), from the German word "Au", meaning floodplain.
The historic old town forms an irregular square, consisting of four parts (called "Stöcke"). To the south lies the Laurenzenvorstadt, that is, the part of the town formerly outside the city wall. One characteristic of the city is its painted gables, for which Aarau is sometimes called the "City of beautiful Gables". The old town, Laurenzenvorstadt, government building, cantonal library, state archive and art museum are all listed as heritage sites of national significance.
The buildings in the old city originate, on the whole, from building projects during the 16th century, when nearly all the Middle Age period buildings were replaced or expanded. The architectural development of the city ended in the 18th century, when the city began to expand beyond its (still existing) wall. Most of the buildings in the "suburb" date from this time.
The "Schlössli" (small castle), Rore Tower and the upper gate tower have remained nearly unchanged since the 13th century. The "Schlössli" is the oldest building in the city. It was founded around the time the city was established, shortly after 1200; the exact date is not known. City hall was built around Rore Tower in 1515.
The upper gate tower stands beside the southern gate in the city wall, along the road to Lucerne and Bern. The jail has been housed in it since the Middle Ages. A carillon was installed in the tower in the middle of the 20th century, the bells for which were provided by the centuries-old bell manufacturers of Aarau.
The town church was built between 1471 and 1478. During the Reformation, in 1528, its twelve altars and accompanying pictures were destroyed. The "Justice fountain" (Gerechtigkeitsbrunnen) was built in 1634, and is made of French limestone; it includes a statue of Lady Justice made of sandstone, hence the name. It was originally in the street in front of city hall, but was moved to its present location in front of the town church in 1905 due to increased traffic.
, Aarau had an unemployment rate of 2.35%. , there were 48 people employed in the primary economic sector and about 9 businesses involved in this sector. 4,181 people are employed in the secondary sector and there are 164 businesses in this sector. 20,186 people are employed in the tertiary sector, with 1,461 businesses in this sector. This is a total of over 24,000 jobs; since Aarau's population is about 16,000, the town draws workers from many surrounding communities. there were 8,050 total workers who lived in the municipality. Of these, 4,308 or about 53.5% of the residents worked outside Aarau, while 17,419 people commuted into the municipality for work. There were a total of 21,161 jobs (of at least 6 hours per week) in the municipality.
The largest employer in Aarau is the cantonal government, the offices of which are distributed across the entire city at numerous locations. One of the two head offices of the "Aargauer Zeitung", Switzerland's fifth largest newspaper, is located in Aarau, as are the Tele M1 television channel studios, and several radio stations.
Kern & Co., founded in 1819, was an internationally known geodetic instrument manufacturer based in Aarau. However, it was taken over by Wild Leitz in 1988, and was closed in 1991.
More than half of the workers in Aarau live in the city's suburbs, or farther away in the surrounding area. This leads to a busy rush hour, and regular traffic jams. Statistically, Aarau has the most jobs per capita of any Swiss city.
Aarau's small area means that its growth continually pushes against the municipal borders. The urban center lies in the middle of the "Golden Triangle" between Zürich, Bern, and Basel, and Aarau is having increasing difficulty in maintaining the independence of its economic base from the neighboring large cities. The idea of merging Aarau with its neighboring suburbs has recently been discussed in the hope of arresting these slowly progressing losses.
Manufactured goods include bells, mathematical instruments, electrical goods, cotton textiles, cutlery, chemicals, shoes, and other products. Aarau is famous for the quality of its instruments, cutlery and bells.
Every Saturday morning there is a vegetable market in the "Graben" at the edge of the Old City, supplied with regional products. In the last week of September the MAG (Market of Aarauer Tradesmen) takes place there, with regional companies selling their products. The "Rüeblimärt", a carrot fair, is held in the same place on the first Wednesday in November. The Aarau fair is held at the ice skating rink in the spring.
Aarau railway station is a terminus of the S-Bahn Zürich on the line S3.
The town is also served with public transport provided by Busbetrieb Aarau AG.
The population of Aarau grew continuously from 1800 until about 1960, when the city reached a peak population of 17,045, more than five times its population in 1800. However, since 1960 the population has fallen by 8%. There are three reasons for this population loss: firstly, since the completion of Telli (a large apartment complex), the city has not had any more considerable land developments. Secondly, the number of people per household has fallen; thus, the existing dwellings do not hold as many people. Thirdly, population growth was absorbed by neighboring municipalities in the regional urban area, and numerous citizens of Aarau moved into the countryside. This trend might have stopped since the turn of the 21st century. Existing industrial developments are being used for new purposes instead of standing empty.
Aarau has a population (as of ) of . , 19.8% of the population was made up of foreign nationals. Over the last 10 years the population has grown at a rate of 1%. Most of the population () speaks German (84.5%), with Italian being second most common ( 3.3%) and Serbo-Croatian being third ( 2.9%).
The age distribution, , in Aarau is: 1,296 children or 8.1% of the population are between 0 and 9 years old and 1,334 teenagers or 8.4% are between 10 and 19. Of the adult population, 2,520 people or 15.8% of the population are between 20 and 29 years old. 2,518 people or 15.8% are between 30 and 39, 2,320 people or 14.6% are between 40 and 49, and 1,987 people or 12.5% are between 50 and 59. Of the senior population, 1,588 people or 10.0% are between 60 and 69 years old, 1,219 people or 7.7% are between 70 and 79, 942 people or 5.9% are between 80 and 89, and 180 people or 1.1% are 90 and older.
, there were 1,365 homes with 1 or 2 persons in the household, 3,845 homes with 3 or 4 persons in the household, and 2,119 homes with 5 or more persons in the household. The average number of people per household was 1.99 individuals. there were 1,594 single family homes (or 18.4% of the total) out of a total of 8,661 homes and apartments.
In Aarau about 74.2% of the population (between ages 25–64) have completed either non-mandatory upper secondary education or additional higher education (either university or a "Fachhochschule"). Of the school-age population (), 861 students attend primary school, 280 attend secondary school, 455 attend tertiary or university-level schooling, and 35 are seeking a job after school in the municipality.
The football club FC Aarau play at the Stadion Brügglifeld. They played in the top tier of the Swiss football league system from 1981 until 2010, when they were relegated to the Swiss Challenge League. In the 2013–14 season they climbed back to the highest tier, only to be relegated again. In the 2016–17 season they will play in the Swiss Challenge League. They won the Swiss Cup in 1985 and were Swiss football champions three times: in 1912, 1914 and 1993.
Argovia Stars play in the MySports League, the third highest league of Swiss ice hockey. They play their home games in the 3,000-seat KeBa Aarau Arena.
Aarau is home to a number of sites that are listed as Swiss heritage sites of national significance. The list includes three churches; the Christian Catholic parish house, the Catholic parish house, and the Reformed "City Church". There are five government buildings on the list; the Cantonal Library, which contains many pieces important to the nation's history, and Art Gallery, the old Cantonal School, the Legislature, the Cantonal Administration building, and the archives. Three gardens or parks are on the list; "Garten Schmidlin", "Naturama Aargau" and the "Schlossgarten". The remaining four buildings on the list are; the former Rickenbach Factory, the Crematorium, the "Haus zum Erker" at Rathausgasse 10 and the "Restaurant Zunftstube" at Pelzgasse.
The Bally Shoe company has a unique shoe museum in the city. There is also the Trade Museum, which contains stained glass windows from Muri Convent and paintings.
Each May, Aarau plays host to the annual Jazzaar Festival attracting the world's top Jazz musicians.
From the , 4,473 or 28.9% were Roman Catholic, while 6,738 or 43.6% belonged to the Swiss Reformed Church. Of the rest of the population, 51 individuals (or about 0.33% of the population) belonged to the Christian Catholic, i.e. Old Catholic, faith.
In place of a town meeting, a town assembly ("Einwohnerrat") of 50 members is elected by the citizens under proportional representation. It is responsible for approving tax levels, the annual accounts, and the business report. In addition, it can issue regulations. The term of office is four years. In the last two elections the parties had the following representation:
At the district level, some elements of the government remain a direct democracy. There are optional and obligatory referendums, and the population retains the right to establish an initiative.
The executive authority is the town council ("Stadtrat"). The term of office is four years, and its members are elected by a plurality voting system. It leads and represents the municipality. It carries out the resolutions of the assembly, and those requested by the canton and national level governments.
The seven members (and their party) are:
In the 2007 federal election the most popular party was the SP which received 27.9% of the vote. The next three most popular parties were the SVP (22.1%), the FDP (17.5%) and the Green Party (11.8%).
The blazon of the municipal coat of arms is "Argent an Eagle displayed Sable beaked langued and membered Gules and a Chief of the last."
Aarau is twinned with:
Canton of Aargau
The canton of Aargau ( ; sometimes Latinized as ; see also other names) is one of the more northerly cantons of Switzerland. It is situated by the lower course of the Aare River, which is why the canton is called "Aar-gau" (meaning "Aare province"). It is one of the most densely populated regions of Switzerland.
The area of Aargau and the surrounding regions was controlled by the Helvetians, a Celtic people, as far back as 200 BC. It was eventually occupied by the Romans and then, by the 6th century, by the Franks. The Romans built a major settlement called Vindonissa, near the present location of Brugg.
The reconstructed Old High German name of Aargau is "Argowe", first unambiguously attested (in the spelling "Argue") in 795. The term described a territory only loosely equivalent to that of the modern canton: the region between the Aare and Reuss rivers, including Pilatus and Napf, i.e. parts of the modern cantons of Bern (Bernese Aargau, Emmental, parts of the Bernese Oberland), Solothurn, Basel-Landschaft, Lucerne, Obwalden and Nidwalden, but not the parts of the modern canton east of the Reuss (Baden District), which were part of Zürichgau.
Within the Frankish Empire (8th to 10th centuries), the area was a disputed border region between the duchies of Alamannia and Burgundy. A line of the von Wetterau (Conradines) intermittently held the countship of Aargau from 750 until about 1030, when they lost it (having in the meantime taken the name von Tegerfelden). This division became the ill-defined (and sparsely settled) outer border of the early Holy Roman Empire at its formation in the second half of the 10th century. Most of the region came under the control of the ducal house of Zähringen and the comital houses of Habsburg and Kyburg by about 1200.
In the second half of the 13th century, the territory became divided between the territories claimed by the imperial cities of Bern, Lucerne and Solothurn and the Swiss canton of Unterwalden.
The remaining portion, largely corresponding to the modern canton of Aargau, remained under the control of the Habsburgs until the "conquest of Aargau" by the Old Swiss Confederacy in 1415.
Habsburg Castle itself, the original seat of the House of Habsburg, was taken by Bern in April 1415.
The Habsburgs had founded a number of monasteries (with some structures enduring, e.g., in Wettingen and Muri), the closing of which by the government in 1841 was a contributing factor to the outbreak of the Swiss civil war – the "Sonderbund War" – in 1847.
When Frederick IV of Habsburg sided with Antipope John XXIII at the Council of Constance, Emperor Sigismund placed him under the Imperial ban. In July 1414, the Pope visited Bern and received assurances that Bern would move against the Habsburgs. A few months later the Swiss Confederation denounced the Treaty of 1412. Shortly thereafter, in 1415, Bern and the rest of the Swiss Confederation used the ban as a pretext to invade the Aargau. The Confederation was able to quickly conquer the towns of Aarau, Lenzburg, Brugg and Zofingen, along with most of the Habsburg castles. Bern kept the southwest portion (Zofingen, Aarburg, Aarau, Lenzburg, and Brugg), northward to the confluence of the Aare and Reuss. The important city of Baden was taken by a united Swiss army and governed by all 8 members of the Confederation. Some districts, named the "Freie Ämter" ("free bailiwicks") – Mellingen, Muri, Villmergen, and Bremgarten, with the countship of Baden – were governed as "subject lands" by all or some of the Confederates. Shortly after the conquest of the Aargau by the Swiss, Frederick humbled himself to the Pope, who reconciled with him and ordered all of the taken lands to be returned. The Swiss refused, and years later, after no serious attempts at re-acquisition, the Duke officially relinquished his rights to the Swiss.
Bern's portion of the Aargau came to be known as the Unteraargau, though can also be called the Berner or Bernese Aargau. In 1514 Bern expanded north into the Jura and so came into possession of several strategically important mountain passes into the Austrian Fricktal. This land was added to the Unteraargau and was directly ruled from Bern. It was divided into seven rural bailiwicks and four administrative cities, Aarau, Zofingen, Lenzburg and Brugg. While the Habsburgs were driven out, many of their minor nobles were allowed to keep their lands and offices, though over time they lost power to the Bernese government. The bailiwick administration was based on a very small staff of officials, mostly made up of Bernese citizens, but with a few locals.
When Bern converted during the Protestant Reformation in 1528, the Unteraargau also converted. At the beginning of the 16th century a number of anabaptists migrated into the upper Wynen and Rueder valleys from Zürich. Despite pressure from the Bernese authorities in the 16th and 17th centuries anabaptism never entirely disappeared from the Unteraargau.
Bern used the Aargau bailiwicks mostly as a source of grain for the rest of the city-state. The administrative cities remained economically only of regional importance. However, in the 17th and 18th centuries Bern encouraged industrial development in Unteraargau and by the late 18th century it was the most industrialized region in the city-state. The high industrialization led to high population growth in the 18th century, for example between 1764 and 1798, the population grew by 35%, far more than in other parts of the canton. In 1870 the proportion of farmers in Aarau, Lenzburg, Kulm, and Zofingen districts was 34–40%, while in the other districts it was 46–57%.
The rest of the Freie Ämter were collectively administered as subject territories by the rest of the Confederation. Muri "Amt" was assigned to Zürich, Lucerne, Schwyz, Unterwalden, Zug and Glarus, while the "Ämter" of Meienberg, Richensee and Villmergen were first given to Lucerne alone. The final boundary was set in 1425 by an arbitration tribunal and Lucerne had to give the three "Ämter" to be collectively ruled. The four "Ämter" were then consolidated under a single Confederation bailiff into what was known in the 15th century as the "Waggental" Bailiwick (). In the 16th century, it came to be known as the "Vogtei der Freien Ämter". While the "Freien Ämter" often had independent lower courts, they were forced to accept the Confederation's sovereignty. Finally, in 1532, the canton of Uri became part of the collective administration of the Freien Ämter.
At the time of the Protestant Reformation, the majority of the Ämter converted to the new faith. In 1529, a wave of iconoclasm swept through the area and wiped away much of the old religion. After the defeat of Zürich in the second Battle of Kappel in 1531, the victorious five Catholic cantons marched their troops into the Freie Ämter and reconverted them to Catholicism.
In the First War of Villmergen, in 1656, and the Toggenburg War (or Second War of Villmergen), in 1712, the Freie Ämter became the staging ground for the warring Reformed and Catholic armies. While the peace after the 1656 war did not change the status quo, the fourth Peace of Aarau in 1712 brought about a reorganization of power relations. The victory gave Zürich the opportunity to force the Catholic cantons out of the government in the county of Baden and the adjacent area of the Freie Ämter. The Freie Ämter were then divided in two by a line drawn from the gallows in Fahrwangen to the Oberlunkhofen church steeple. The northern part, the so-called Unteren Freie Ämter (lower Freie Ämter), which included the districts of Boswil (in part) and Hermetschwil and the Niederamt, were ruled by Zürich, Bern and Glarus. The southern part, the Oberen Freie Ämter (upper Freie Ämter), were ruled by the previous seven cantons but Bern was added to make an eighth.
During the Helvetic Republic (1798–1803), the county of Baden, the Freie Ämter and the area known as the Kelleramt were combined into the canton of Baden.
The County of Baden was a shared condominium of the entire Old Swiss Confederacy. After the Confederacy's conquest in 1415, they retained much of the Habsburg legal structure, which caused a number of problems. The local nobility had the right to hold the low court in only about one fifth of the territory. There were over 30 different nobles who had the right to hold courts scattered around the surrounding lands. All these overlapping jurisdictions caused numerous conflicts, but gradually the Confederation was able to acquire these rights in the County. The cities of Baden, Bremgarten and Mellingen became the administrative centers and held the high courts. Together with the courts, the three administrative centers had considerable local autonomy, but were ruled by a governor who was appointed by the "Acht Orte" every two years. After the Protestant victory at the Second Battle of Villmergen, the administration of the County changed slightly. Instead of the "Acht Orte" appointing a bailiff together, Zürich and Bern each appointed the governor for 7 out of 16 years, while Glarus appointed him for the remaining two years.
The chaotic legal structure and fragmented land ownership, combined with a tradition of dividing the land among all the heirs in an inheritance, prevented any large scale reforms. In the 18th century the governors tried to reform and standardize laws and ownership across the County, but with limited success. With an ever-changing administration, the County lacked a coherent long-term economic policy or support for reforms. By the end of the 18th century there were no factories or mills and only a few small cottage industries along the border with Zürich. Road construction first became a priority after 1750, when Zürich and Bern began appointing a governor for seven years.
During the Protestant Reformation, some of the municipalities converted to the new faith. However, starting in 1531, some of the old parishes were converted back to the old faith. The governors were appointed from both Catholic and Protestant cantons and since they changed every two years, neither faith gained a majority in the County.
After the French invasion, on 19 March 1798, the governments of Zürich and Bern agreed to the creation of the short-lived canton of Baden in the Helvetic Republic. With the Act of Mediation in 1803, the canton of Baden was dissolved. Portions of the lands of the former County of Baden now became the District of Baden in the newly created canton of Aargau. After World War II, this formerly agrarian region saw striking growth and became the district with the largest and densest population in the canton (110,000 in 1990, 715 persons per km2).
The contemporary canton of Aargau was formed in 1803 as a canton of the Swiss Confederation under the Act of Mediation. It was a combination of three short-lived cantons of the Helvetic Republic: Aargau (1798–1803), Baden (1798–1803) and Fricktal (1802–1803). Its creation is therefore rooted in the Napoleonic era. In 2003, the canton of Aargau celebrated its 200th anniversary.
French forces occupied the Aargau from 10 March to 18 April 1798; thereafter the Bernese portion became the canton of Aargau and the remainder formed the canton of Baden. Aborted plans to merge the two halves came in 1801 and 1802, and they were eventually united under the name Aargau, which was then admitted as a full member of the reconstituted Confederation following the Act of Mediation. Some parts of the canton of Baden at this point were transferred to other cantons: the "Amt" of Hitzkirch to Lucerne, whilst Hüttikon, Oetwil an der Limmat, Dietikon and Schlieren went to Zürich. In return, Lucerne's "Amt" of Merenschwand was transferred to Aargau (district of Muri).
The Fricktal, ceded in 1802 by Austria via Napoleonic France to the Helvetic Republic, was briefly a separate canton of the Helvetic Republic (the canton of Fricktal) under a "Statthalter" ('Lieutenant'), but on 19 March 1803 (following the Act of Mediation) was incorporated into the canton of Aargau.
The former cantons of Baden and Fricktal can still be identified with the contemporary districts – the canton of Baden is covered by the districts of Zurzach, Baden, Bremgarten, and Muri (albeit with the gains and losses of 1803 detailed above); the canton of Fricktal by the districts of Rheinfelden and Laufenburg (except for Hottwil which was transferred to that district in 2010).
The chief magistracy of Aargau changed its style repeatedly:
In the 17th century, Aargau was the only federal condominium where Jews were tolerated. In 1774, they were restricted to just two towns, Endingen and Lengnau. While the rural upper class pressed incessantly for the expulsion of the Jews, the financial interests of the authorities prevented it. They imposed special taxes on peddling and cattle trading, the primary Jewish professions. The Protestant authorities also took satisfaction in the discomfort that the presence of the Jewish community caused the local Catholics. The Jews were directly subordinate to the governor; from 1696, they were compelled to renew a letter of protection from him every 16 years.
During this period, Jews and Christians were not allowed to live under the same roof, neither were Jews allowed to own land or houses. They were taxed at a much higher rate than others and, in 1712, the Lengnau community was "pillaged." In 1760, they were further restricted regarding marriages and procreation. An exorbitant tax was levied on marriage licenses; oftentimes, they were outright refused. This remained the case until the 19th century. In 1799, all special tolls were abolished, and, in 1802, the poll tax was removed. On 5 May 1809, they were declared citizens and given broad rights regarding trade and farming. They were still restricted to Endingen and Lengnau until 7 May 1846, when their right to move and reside freely within the canton of Aargau was granted. On 24 September 1856, the Swiss Federal Council granted them full political rights within Aargau, as well as broad business rights; however the majority Christian population did not abide by these new liberal laws fully. In 1860 the cantonal government voted to grant them suffrage in all local matters and to give their communities autonomy. Before the law could take effect, however, it was repealed amid vocal opposition led by the Ultramontane Party. Finally, in July 1863, the federal authorities granted all Jews full rights of citizens. However, they did not receive all of these rights in Endingen and Lengnau until a resolution of the Grand Council, on 15 May 1877, granted citizens' rights to the members of the Jewish communities of those places, giving them charters under the names of New Endingen and New Lengnau. The "Swiss Jewish Kulturverein" was instrumental in this fight from its founding in 1862 until it was dissolved 20 years later. During this period of diminished rights, they were not even allowed to bury their dead in Swiss soil and had to bury their dead on an island called "Judenäule" (Jews' Isle) on the Rhine near Waldshut.
Beginning in 1603, the deceased Jews of the Surbtal communities were buried on the river island which was leased by the Jewish community. As the island was repeatedly flooded and devastated, in 1750 the Surbtal Jews asked the "Tagsatzung" to establish the Endingen cemetery in the vicinity of their communities.
The capital of the canton is Aarau, which is located on its western border, on the Aare. The canton borders Germany (Baden-Württemberg) to the north, the Rhine forming the border. To the west lie the Swiss cantons of Basel-Landschaft, Solothurn and Bern; the canton of Lucerne lies to the south, and Zürich and Zug to the east. Its total area is . Both large rivers, the Aare and the Reuss, flow through it.
The canton of Aargau is one of the least mountainous Swiss cantons, forming part of a great table-land, to the north of the Alps and the east of the Jura, above which rise low hills. The surface of the country is diversified with undulating tracts and well-wooded hills, alternating with fertile valleys watered mainly by the Aare and its tributaries. Slightly over one-third of the canton is wooded (), while nearly half is used for farming (). About 2.4% of the canton is considered unproductive, mostly lakes (notably Lake Hallwil) and streams. With a population density of 450/km2 (1,200/sq mi), the canton has a relatively high amount of land used for human development, with about 15% of the canton developed for housing or transportation.
It contains the hot sulphur springs of Baden and Schinznach-Bad, while at Rheinfelden there are very extensive saline springs. Just below Brugg the Reuss and the Limmat join the Aare, while around Brugg are the ruined castle of Habsburg, the old convent of Königsfelden (with fine painted medieval glass) and the remains of the Roman settlement of "Vindonissa" (Windisch).
Fahr Monastery forms a small exclave of the canton, otherwise surrounded by the canton of Zürich, and since 2008 is part of the Aargau municipality of Würenlos.
Aargau is divided into 11 districts:
The most recent change in district boundaries occurred in 2010 when Hottwil transferred from Brugg to Laufenburg, following its merger with other municipalities, all of which were in Laufenburg.
There are (as of 2014) 213 municipalities in the canton of Aargau. As with most Swiss cantons there has been a trend since the early 2000s for municipalities to merge, though mergers in Aargau have so far been less radical than in other cantons.
The blazon of the coat of arms is "Per pale, dexter: sable, a fess wavy argent, charged with two cotises wavy azure; sinister: sky blue, three mullets of five argent."
The flag and arms of the canton of Aargau date to 1803 and are an original design by Samuel Ringier-Seelmatter; the current official design, specifying the stars as five-pointed, dates to 1930.
Aargau has a population () of . , 21.5% of the population are resident foreign nationals. Over the last 10 years (2000–2010) the population has changed at a rate of 11%. Migration accounted for 8.7%, while births and deaths accounted for 2.8%. Most of the population () speaks German (477,093 or 87.1%) as their first language, Italian is the second most common (17,847 or 3.3%) and Serbo-Croatian is the third (10,645 or 1.9%). There are 4,151 people who speak French and 618 people who speak Romansh.
Of the population in the canton, 146,421 or about 26.7% were born in Aargau and lived there in 2000. There were 140,768 or 25.7% who were born in the same canton, while 136,865 or 25.0% were born somewhere else in Switzerland, and 107,396 or 19.6% were born outside of Switzerland.
, children and teenagers (0–19 years old) make up 24.3% of the population, while adults (20–64 years old) make up 62.3% and seniors (over 64 years old) make up 13.4%.
, there were 227,656 people who were single and never married in the canton. There were 264,939 married individuals, 27,603 widows or widowers and 27,295 individuals who are divorced.
, there were 224,128 private households in the canton, and an average of 2.4 persons per household. There were 69,062 households that consist of only one person and 16,254 households with five or more people. , the construction rate of new housing units was 6.5 new units per 1000 residents. The vacancy rate for the canton, , was 1.54%.
The majority of the population is centered on one of three areas: the Aare Valley, the side branches of the Aare Valley, or along the Rhine.
The historical population is given in the following chart:
In the 2011 federal election, the most popular party was the SVP which received 34.7% of the vote. The next three most popular parties were the SP/PS (18.0%), the FDP (11.5%) and the CVP (10.6%).
The SVP received about the same percentage of the vote as they did in the 2007 Federal election (36.2% in 2007 vs 34.7% in 2011). The SPS retained about the same popularity (17.9% in 2007), the FDP retained about the same popularity (13.6% in 2007) and the CVP retained about the same popularity (13.5% in 2007).
The Grand Council of the canton of Aargau is called the Grosser Rat. It is the canton's legislature, with 140 seats; members are elected every four years.
From the , 219,800 or 40.1% were Roman Catholic, while 189,606 or 34.6% belonged to the Swiss Reformed Church. Of the rest of the population, there were 11,523 members of an Orthodox church (or about 2.10% of the population), there were 3,418 individuals (or about 0.62% of the population) who belonged to the Christian Catholic Church, and there were 29,580 individuals (or about 5.40% of the population) who belonged to another Christian church. There were 342 individuals (or about 0.06% of the population) who were Jewish, and 30,072 (or about 5.49% of the population) who were Muslim. There were 1,463 individuals who were Buddhist, 2,089 individuals who were Hindu and 495 individuals who belonged to another church. 57,573 (or about 10.52% of the population) belonged to no church, are agnostic or atheist, and 15,875 individuals (or about 2.90% of the population) did not answer the question.
In Aargau about 212,069 or (38.7%) of the population have completed non-mandatory upper secondary education, and 70,896 or (12.9%) have completed additional higher education (either university or a "Fachhochschule"). Of the 70,896 who completed tertiary schooling, 63.6% were Swiss men, 20.9% were Swiss women, 10.4% were non-Swiss men and 5.2% were non-Swiss women.
, Aargau had an unemployment rate of 3.6%. , there were 11,436 people employed in the primary economic sector and about 3,927 businesses involved in this sector. 95,844 people were employed in the secondary sector and there were 6,055 businesses in this sector. 177,782 people were employed in the tertiary sector, with 21,530 businesses in this sector.
Of the working population, 19.5% used public transportation to get to work, and 55.3% used a private car. Public transportation – bus and train – is provided by Busbetrieb Aarau AG.
The farmland of the canton of Aargau is some of the most fertile in Switzerland. Dairy farming, cereal and fruit farming are among the canton's main economic activities. The canton is also industrially developed, particularly in the fields of electrical engineering, precision instruments, iron, steel, cement and textiles.
Three of Switzerland's five nuclear power plants are in the canton of Aargau (Beznau I + II and Leibstadt). Additionally, the many rivers supply enough water for numerous hydroelectric power plants throughout the canton. The canton of Aargau is often called "the energy canton".
A significant number of people commute into the financial center of the city of Zürich, which is just across the cantonal border. As such the per capita cantonal income (in 2005) is 49,209 CHF.
Tourism is significant, particularly for the hot springs at Baden and Schinznach-Bad, the ancient castles, the landscape, and the many old museums in the canton. Hillwalking is another tourist attraction but is of only limited significance. | https://en.wikipedia.org/wiki?curid=2467 |
American Quarter Horse
The American Quarter Horse, or Quarter Horse, is an American breed of horse that excels at sprinting short distances. Its name is derived from its ability to outrun other horse breeds in races of a quarter mile or less; some have been clocked at speeds up to 55 mph (88.5 km/h). The development of the Quarter Horse traces to the 1600s.
The American Quarter Horse is the most popular breed in the United States today, and the American Quarter Horse Association is the largest breed registry in the world, with almost 3 million living American Quarter Horses registered in 2014. The American Quarter Horse is well known both as a race horse and for its performance in rodeos, horse shows and as a working ranch horse.
The compact body of the American Quarter Horse is well-suited for the intricate and quick maneuvers required in reining, cutting, working cow horse, barrel racing, calf roping, and other western riding events, especially those involving live cattle. The American Quarter Horse is also used in English disciplines, driving, show jumping, dressage, hunting, and many other equestrian activities.
In the 1600s, colonists on the Eastern seaboard of what today is the United States began to breed imported English Thoroughbred horses with assorted "native" horses.
One of the most famous of these early imports was Janus, a Thoroughbred who was the grandson of the Godolphin Arabian. He was foaled in 1746, and imported to colonial Virginia in 1756. The influence of Thoroughbreds like Janus contributed genes crucial to the development of the colonial "Quarter Horse". The resulting horse was small, hardy, quick, and was used as a work horse during the week and a race horse on the weekends.
As flat racing became popular with the colonists, the Quarter Horse gained even more popularity as a sprinter over courses that, by necessity, were shorter than the classic racecourses of England. These courses were often no more than a straight stretch of road or flat piece of open land. When competing against a Thoroughbred, local sprinters often won. As the Thoroughbred breed became established in America, many colonial Quarter Horses were included in the original American stud books. This began a long association between the Thoroughbred breed and what would later become officially known as the "Quarter Horse", named after the race distance at which it excelled.
In the 19th century, pioneers heading West needed a hardy, willing horse. On the Great Plains, settlers encountered horses that descended from the Spanish stock Hernán Cortés and other Conquistadors had introduced into the viceroyalty of New Spain, which today includes the Southwestern United States and Mexico.
The horses of the West included herds of feral animals known as Mustangs, as well as horses domesticated by Native Americans, including the Comanche, Shoshoni and Nez Perce tribes. As the colonial Quarter Horse was crossed with these western horses, the pioneers found that the new crossbred had innate "cow sense", a natural instinct for working with cattle, making it popular with cattlemen on ranches.
Early foundation sires of Quarter horse type included Steel Dust, foaled 1843; Shiloh (or Old Shiloh), foaled 1844; Old Cold Deck (1862); Lock's Rondo, one of many "Rondo" horses, foaled in 1880; Old Billy—again, one of many "Billy" horses—foaled circa 1880; Traveler, a stallion of unknown breeding, known to have been in Texas by 1889; and Peter McCue, foaled 1895, registered as a Thoroughbred but of disputed pedigree. Another early foundation sire for the breed was Copperbottom, foaled in 1828, who tracks his lineage through the Byerley Turk, a foundation sire of the Thoroughbred horse breed.
The main duty of the ranch horse in the American West was working cattle. Even after the invention of the automobile, horses were still irreplaceable for handling livestock on the range. Thus, major Texas cattle ranches, such as the King Ranch, the 6666 (Four Sixes) Ranch, and the Waggoner Ranch played a significant role in the development of the modern Quarter Horse. The skills required by cowboys and their horses became the foundation of the rodeo, a contest which began with informal competition between cowboys and expanded to become a major competitive event throughout the west. To this day, the Quarter Horse dominates in events that require speed as well as the ability to handle cattle.
Sprint races were also popular weekend entertainment and racing became a source of economic gain for breeders. As a result, more Thoroughbred blood was added into the developing American Quarter Horse breed. The American Quarter Horse also benefitted from the addition of Arabian, Morgan, and even Standardbred bloodlines.
In 1940, the American Quarter Horse Association (AQHA) was formed by a group of horsemen and ranchers from the Southwestern United States dedicated to preserving the pedigrees of their ranch horses. After winning the 1941 Fort Worth Exposition and Fat Stock Show grand champion stallion, the horse honored with the first registration number, P-1, was Wimpy, a descendant of the King Ranch foundation sire Old Sorrel. Other sires alive at the founding of the AQHA were given the earliest registration numbers Joe Reed P-3, Chief P-5, Oklahoma Star P-6, Cowboy P-12, and Waggoner's Rainy Day P-13. The Thoroughbred race horse Three Bars, alive in the early years of the AQHA, is recognized by the American Quarter Horse Hall of Fame as one of the significant foundation sires for the Quarter Horse breed. Other significant Thoroughbred sires seen in early AQHA pedigrees include Rocket Bar, Top Deck and Depth Charge.
Since the American Quarter Horse was formally established as a breed, the AQHA stud book has remained open to additional Thoroughbred blood via a performance standard. An "Appendix" American Quarter Horse is a first generation cross between a registered Thoroughbred and an American Quarter Horse or a cross between a "numbered" American Quarter Horse and an "appendix" American Quarter Horse. The resulting offspring is registered in the "appendix" of the American Quarter Horse Association's studbook, hence the nickname. Horses listed in the appendix may be entered in competition, but offspring are not initially eligible for full AQHA registration. If the Appendix horse meets certain conformational criteria and is shown or raced successfully in sanctioned AQHA events, the horse can earn its way from the appendix into the permanent studbook, making its offspring eligible for AQHA registration.
Since Quarter Horse/Thoroughbred crosses continue to enter the official registry of the American Quarter Horse breed, this creates a continual gene flow from the Thoroughbred breed into the American Quarter Horse breed, which has altered many of the characteristics that typified the breed in the early years of its formation. Some breeders argue that the continued addition of Thoroughbred bloodlines are beginning to compromise the integrity of the breed standard. Some favor the earlier style of horse and have created several separate organizations to promote and register "Foundation" Quarter Horses.
The American Quarter Horse is best known today as a show horse, race horse, reining and cutting horse, rodeo competitor, ranch horse, and all-around family horse. Quarter Horses are commonly used in rodeo events such as barrel racing, calf roping and team roping; and gymkhana or O-Mok-See. Other stock horse events such as cutting and reining are open to all breeds but are dominated by American Quarter Horses.
The breed is not only well-suited for western riding and cattle work. Many race tracks offer Quarter Horses a wide assortment of pari-mutuel horse racing with earnings in the millions. Quarter Horses have also been trained to compete in dressage and show jumping. They are also used for recreational trail riding and in mounted police units.
The American Quarter Horse has also been exported worldwide. European nations such as Germany and Italy have imported large numbers of Quarter Horses. Next to the American Quarter Horse Association (which also encompasses Quarter Horses from Canada), the second largest registry of Quarter Horses is in Brazil, followed by Australia. In the UK the breed is also becoming very popular, especially with the two Western riding Associations, the Western Horse Association and The Western Equestrian Society. The British American Quarter Horse breed society is the AQHA-UK. With the internationalization of the discipline of reining and its acceptance as one of the official seven events of the World Equestrian Games, there is a growing international interest in Quarter Horses.
The Quarter Horse has a small, short, refined head with a straight profile, and a strong, well-muscled body, featuring a broad chest and powerful, rounded hindquarters. They usually stand between high, although some Halter-type and English hunter-type horses may grow as tall as .
There are two main body types: the stock type and the hunter or racing type. The stock horse type is shorter, more compact, stocky and well-muscled, yet agile. The racing and hunter type Quarter Horses are somewhat taller and smoother muscled than the stock type, more closely resembling the Thoroughbred.
Quarter Horses come in nearly all colors. The most common color is sorrel, a brownish red, part of the color group called chestnut by most other breed registries. Other recognized colors include bay, black, brown, buckskin, palomino, gray, dun, red dun, grullo (also occasionally referred to as blue dun), red roan, blue roan, bay roan, perlino, cremello, and white. In the past, spotted color patterns were excluded, but now with the advent of DNA testing to verify parentage, the registry accepts all colors as long as both parents are registered.
A stock horse is a horse of a type that is well suited for working with livestock, particularly cattle. Reining and cutting horses are smaller in stature, with quick, agile movements and very powerful hindquarters. Western pleasure show horses are often slightly taller, with slower movements, smoother gaits, and a somewhat more level topline – though still featuring the powerful hindquarters characteristic of the Quarter Horse.
Horses shown in-hand in Halter competition are larger yet, with a very heavily muscled appearance, while retaining small heads with wide jowls and refined muzzles. There is controversy amongst owners, breeder and veterinarians regarding the health effects of the extreme muscle mass that is currently fashionable in the specialized halter horse, which typically is and weighs in at over when fitted for halter competition. Not only are there concerns about the weight to frame ratio on the horse's skeletal system, but the massive build is also linked to hyperkalemic periodic paralysis (HYPP) in descendants of the stallion Impressive (see Genetic diseases below).
Quarter Horse race horses are bred to sprint short distances ranging from 220 to 870 yards. Thus, they have long legs and are leaner than their stock type counterparts, but are still characterized by muscular hindquarters and powerful legs. Quarter Horses race primarily against other Quarter Horses, and their sprinting ability has earned them the nickname, "the world's fastest athlete." The show hunter type is slimmer, even more closely resembling a Thoroughbred, usually reflecting a higher percentage of appendix breeding. They are shown in hunter/jumper classes at both breed shows and in open USEF-rated horse show competition.
There are several genetic diseases of concern to Quarter Horse breeders: | https://en.wikipedia.org/wiki?curid=2472 |
Abacá
Abacá ( ; ), binomial name Musa textilis, is a species of banana native to the Philippines, grown as a commercial crop in the Philippines, Ecuador, and Costa Rica. The plant, also known as Manila hemp, has great economic importance, being harvested for its fiber, also called Manila hemp, extracted from the leaf-stems. Abacá is also the traditional source of lustrous fiber hand-loomed into various indigenous textiles in the Philippines like "t'nalak", as well as colonial-era sheer luxury fabrics known as "nipis". They are also the source of fibers for "sinamay", a loosely woven stiff material used for textiles as well as in traditional Philippine millinery.
The plant grows to , and averages about . The fiber was originally used for making twines and ropes; now most is pulped and used in a variety of specialized paper products including tea bags, filter paper and banknotes. It is classified as a hard fiber, along with coir, henequen and sisal.
The abacá plant is stoloniferous, meaning that the plant produces runners or shoots along the ground that then root at each segment. Cutting and transplanting rooted runners is the primary technique for creating new plants, since seed growth is substantially slower. Abacá has a "false trunk" or pseudostem about in diameter. The leaf stalks (petioles) are expanded at the base to form sheaths that are tightly wrapped together to form the pseudostem. There are from 12 to 25 leaves, dark green on the top and pale green on the underside, sometimes with large brown patches. They are oblong in shape with a deltoid base. They grow in succession. The petioles grow to at least in length.
When the plant is mature, the flower stalk grows up inside the pseudostem. The male flower has five petals, each about long. The leaf sheaths contain the valuable fiber. After harvesting, the coarse fibers range in length from long. They are composed primarily of cellulose, lignin, and pectin.
The fruit, which is inedible and is rarely seen as harvesting occurs before the plant fruits, grows to about in length and in diameter. It has black turbinate seeds that are in diameter.
The abacá plant belongs to the banana family, Musaceae; it resembles the closely related wild seeded bananas, "Musa acuminata" and "Musa balbisiana". Its scientific name is "Musa textilis". Within the genus "Musa", it is placed in section "Callimusa" (now including the former section "Australimusa"), members of which have a diploid chromosome number of 2n = 20.
Before synthetic textiles came into use, "M. textilis" was a major source of high quality fiber: soft, silky and fine. Ancestors of the modern abacá are thought to have originated from the Eastern Philippines where there is significant rainfall throughout the year. Wild varieties of abacá can still be found in the interior forests of Catanduanes Island, away from cultivated areas.
Today, Catanduanes has many other modern varieties of abacá which are more competitive. For many years, breeders from various research institutions have made the cultivated varieties of Catanduanes Island even more competitive in local and international markets. As a result, the island has consistently had the highest abacá production in the archipelago.
Europeans first came into contact with abacá fibre when Magellan made land in the Philippines in 1521; the natives were already cultivating it and using it in bulk for textiles. Throughout the Spanish colonial era it was referred to as "medriñaque" cloth. By 1897, the Philippines were exporting almost 100,000 tons of abacá, and it was one of the three biggest cash crops, along with tobacco and sugar. In fact, from 1850 through the end of the 19th century, sugar or abacá alternated with each other as the biggest export crop of the Philippines. This 19th-century trade was predominantly with the United States, and the making of ropes was done mainly in New England, although in time the rope-making was moved back to the Philippines.
Excluding the Philippines, abacá was first cultivated on a large scale in Sumatra in 1925 under the Dutch, who had observed its cultivation in the Philippines for cordage since the nineteenth century, followed up by plantings in Central America in 1929 sponsored by the U.S. Department of Agriculture. It also was transplanted into India and Guam. Commercial planting began in 1930 in British North Borneo; with the commencement of World War II, the supply from the Philippines was eliminated by the Japanese.
In the early 1900s, a train running from Danao to Argao transported Philippine abacá from the plantations to Cebu City for export. The train and tracks were destroyed during the Second World War; the abacá plantations continue, however, and their produce is now transported to Cebu by road.
After the war, the U.S. Department of Agriculture started production in Panama, Costa Rica, Honduras, and Guatemala. Today, abacá is produced primarily in the Philippines and Ecuador. The Philippines produces between 85% and 95% of the world's abacá, and the production employs 1.5 million people. Production has declined because of virus diseases.
Due to its strength, abacá is a sought-after product and is the strongest of the natural fibers. It is used by the paper industry for specialty uses such as tea bags, banknotes and decorative papers. It can also be used to make handicrafts such as bags, carpets, clothing and furniture.
Abacá rope is very durable, flexible and resistant to salt water damage, allowing its use in hawsers, ship's lines and fishing nets. A rope can require to break. Abacá fiber was once used primarily for rope, but this application is now of minor significance. Lupis is the finest quality of abacá. Sinamay is woven chiefly from abacá.
The inner fibers are used in the making of hats, including the "Manila hats," hammocks, matting, cordage, ropes, coarse twines, and types of canvas. Abacá cloth is found in museum collections around the world, like the Boston Museum of Fine Arts and the Textile Museum of Canada.
Philippine indigenous tribes still weave abacá-based textiles like "t'nalak", made by the Tiboli tribe of South Cotabato, and "dagmay", made by the Bagobo people.
The plant is normally grown in well-drained loamy soil, using rhizomes planted at the start of the rainy season. In addition, new plants can be started from seed. Growers harvest abacá fields every three to eight months after an initial growth period of 12–25 months. Harvesting is done by removing the leaf-stems after flowering but before fruit appears. The plant loses productivity between 15 and 40 years of age. The slopes of volcanoes provide a preferred growing environment. Harvesting generally includes several operations involving the leaf sheaths: tuxying (separating the sheaths), stripping (extracting the fiber), and drying.
When the processing is complete, the bundles of fiber are pale and lustrous with a length of .
In Costa Rica, more modern harvest and drying techniques are being developed to accommodate the very high yields obtained there.
According to the Philippine Fiber Industry Development Authority, the Philippines provided 87.4% of the world's abacá in 2014, earning the country US$111.33 million; the remainder came from Ecuador (12.5%) and Costa Rica (0.1%). Demand still exceeds supply. The Bicol region of the Philippines produced 27,885 metric tons of abacá in 2014, the most of any Philippine region.
The Philippine Rural Development Program (PRDP) and the Department of Agriculture reported that in 2009–2013 the Bicol Region accounted for 39% of Philippine abacá production, an overwhelming 92% of which came from Catanduanes Island. Eastern Visayas, the second-largest producer, accounted for 24%, and the Davao Region, the third-largest, for 11% of total production. Around 42 percent of the Philippines' total abacá fiber shipments went to the United Kingdom in 2014, making it the top importer. Germany took 37.1 percent of the Philippines' abacá pulp exports, importing around 7,755 metric tons (MT). Sales of abacá cordage surged 20 percent in 2014, to 5,093 MT from 4,240 MT, with the United States holding around 68 percent of that market.
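The production and trade figures above can be sanity-checked with a little arithmetic. The following sketch (illustrative only; the dictionary and variable names are ours, not from any cited source) confirms that the country shares sum to the whole market and that the cordage-sales change matches the reported 20 percent surge:

```python
# 2014 world abacá production shares by country, in percent,
# as quoted from the Philippine Fiber Industry Development Authority.
shares = {"Philippines": 87.4, "Ecuador": 12.5, "Costa Rica": 0.1}
total_share = round(sum(shares.values()), 1)  # 100.0: the three sources cover the market

# Cordage sales rose from 4,240 MT to 5,093 MT.
old_mt, new_mt = 4240, 5093
growth_pct = round((new_mt - old_mt) / old_mt * 100, 1)  # 20.1, i.e. the "20 percent" surge

print(total_share, growth_pct)
```

Running this prints `100.0 20.1`, consistent with the rounded figures in the text.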
Abacá is vulnerable to a number of pathogens, notably abaca bunchy top virus and abaca bract mosaic virus. | https://en.wikipedia.org/wiki?curid=2473 |
Abadeh
Abadeh (, also Romanized as Ābādeh) is a city and capital of Abadeh County, in Fars Province, Iran. Abadeh is situated at an elevation of in a fertile plain on the high road between Isfahan and Shiraz, from the former and from the latter. At the 2006 census, its population was 52,042, in 14,184 families. As of 2009, the population was estimated to be 59,042.
It is the largest city in the Northern Fars Region (South Central-Iran), which is famed for its carved wood-work, made of the wood of pear and box trees. Sesame oil, castor oil, grain, and various fruits are also produced there. The area is famous for its Abadeh rugs.
By road, Abadeh is closer to four other provincial capitals, Isfahan (193 km), Yasuj (197 km), Yazd (217 km), and Shahrekord (237 km), than to Shiraz (260 km), the capital of its own province.
Abadeh's historical monuments include the Emirate Kolah Farangi, the Tymcheh Sarafyan, and the Khaje tomb, located in the Khoja mountains.
Abadeh's crafts include cotton embroidery. The town also produces Abadeh rugs.
The rugs tend to be based on a cotton warp and have a thin, tightly knotted pile. Most Abadeh rugs are closely cut, making them very flat. Although some of the older Abadehs vary in style, many of the new designs are easily recognisable. These new designs, known as Heybatlu, consist of a single diamond-shaped medallion in the centre with smaller medallions on each corner. The pattern is typically geometrical flowers or animals, and the main colours are light reds or burnt orange on top of a dark blue background with strong green details. The corners or borders are generally ivory in colour. Although some Abadeh and Shiraz rugs appear similar, Abadehs can normally be distinguished by their higher knot counts and by the fact that the warp is invariably cotton. The rugs are almost always medium in size, and an average Abadeh has a knot density of around 90 KPSI (knots per square inch). As in the rug world generally, you get what you pay for; overall, however, Abadeh rugs are well made and fairly popular, particularly in modern interiors or those with a Mediterranean or North African style.
Expressway 65 passes through Abadeh, which helps the city improve its position relative to the neighboring city of Eqlid. Road 78 connects Abadeh to Abarkuh, Yazd, Eqlid, and Yasuj; it has a junction with the Abadeh–Shiraz expressway 24 km south of the city. Road 55 runs from the Abadeh ring road to Soqad and Semirom.
The railroad from Isfahan to Shiraz passes through Abadeh, and there are train services at Abadeh Railway Station to Shiraz, Isfahan, Tehran, and Mashhad. Abadeh Airport (OISA) was planned to be built in the mid-1990s.
Abadeh's main sport, as in the rest of the country, is football. The main stadium is Takhti Stadium, located in Mo'allem Square. The main team in Abadeh is Behineh Rahbar Abadeh F.C., which currently plays in Iran's 3rd Division football league after finishing first in the Fars Provincial League (FPL) last year. It played in the 2010–11 Hazfi Cup, reaching the fourth round.
In 2012, Iran announced that it had started construction of an air defense site in the city of Abadeh. The site is planned to be the largest in the country and will house 6,000 personnel for a variety of duties, including educational ones.
Abadeh features a cold semi-arid climate (Köppen climate classification "BSk"), with hot, dry summers and cold (at times extreme), wet winters, and large variations between daytime and nighttime temperatures throughout the year. The cold is accentuated by the city's high elevation. | https://en.wikipedia.org/wiki?curid=2475 |
Abakan
Abakan (; Khakas: or ) is the capital city of the Republic of Khakassia, Russia, located in the central part of Minusinsk Depression, at the confluence of the Yenisei and Abakan Rivers. As of the 2010 Census, it had a population of 165,214—a slight increase over 165,197 recorded during the 2002 Census and a further increase from 154,092 recorded during the 1989 Census.
Abakansky "ostrog" (), also known as Abakansk (), was built at the mouth of the Abakan River in 1675. In the 1780s, the "selo" of Ust-Abakanskoye () was established in this area. It was granted town status and given its current name on 30 April 1931.
In 1940, Russian construction workers found ancient ruins during the construction of a highway between Abakan and Askiz. When the site was excavated by Soviet archaeologists in 1941–1945, they realized that they had discovered a building absolutely unique for the area: a large (1500 square meters) Chinese-style, likely Han Dynasty era (206 BC–220 AD) palace. The identity of the high-ranking personage who lived luxuriously in Chinese style, far outside of the borders of the Han Empire, has remained a matter for discussion ever since. Russian archaeologist L.A. Yevtyukhova surmised, based on circumstantial evidence, that the palace may have been the residence of Li Ling, a Chinese general who had been defeated by the Xiongnu in 99 BCE, and defected to them as a result. While this opinion has remained popular, other views have been expressed as well. More recently, for example, it was claimed by A.A. Kovalyov as the residence of Lu Fang (), a Han throne pretender from the Guangwu era.
In the late 18th and during the 19th century, Lithuanian participants in the 1794, 1830–1831, and 1863 rebellions against Russian rule were exiled to Abakan. A group of camps was established where prisoners were forced to work in the coal mines. After Stalin's death, Lithuanian exiles from the nearby settlements moved in.
Abakan is the capital of the republic. Within the framework of administrative divisions, it is incorporated as the City of Abakan, an administrative unit with status equal to that of the districts. As a municipal division, the City of Abakan is incorporated as Abakan Urban Okrug.
The city has a river port, industry enterprises, Katanov State University of Khakasia, and three theatres. Furthermore, it has a commercial center that produces footwear, foodstuffs, and metal products.
Abakan (together with Taishet) was a terminus of the major Abakan-Taishet Railway. It is now an important railway junction.
The city is served by the Abakan International Airport.
The 100th Air Assault Brigade of the Russian Airborne Troops was based in the city until circa 1996.
Abakan's sites of interest include:
Bandy, similar to ice hockey, is one of the most popular sports in the city. Sayany-Khakassia played in the top-tier Super League in the 2012–13 season but was relegated for the 2013–14 season and has played in the Russian Bandy Supreme League ever since. The Russian Government Cup was played here in 1988 and in 2012.
Abakan has a climate on the border between humid continental (Köppen climate classification "Dwb") and cold semi-arid (Köppen "BSk"). Temperature differences between seasons are extreme, which is typical for Siberia. Precipitation is concentrated in the summer and is otherwise scarce because of the rain shadow of nearby mountains. | https://en.wikipedia.org/wiki?curid=2477 |
Arc de Triomphe
The Arc de Triomphe de l'Étoile (, , ; ) is one of the most famous monuments in Paris, France, standing at the western end of the Champs-Élysées at the centre of Place Charles de Gaulle, formerly named Place de l'Étoile—the "étoile" or "star" of the juncture formed by its twelve radiating avenues. The location of the arc and the plaza is shared between three arrondissements, 16th (south and west), 17th (north) and 8th (east). The Arc de Triomphe honours those who fought and died for France in the French Revolutionary and Napoleonic Wars, with the names of all French victories and generals inscribed on its inner and outer surfaces. Beneath its vault lies the Tomb of the Unknown Soldier from World War I.
As the central cohesive element of the "Axe historique" (historic axis, a sequence of monuments and grand thoroughfares on a route running from the courtyard of the Louvre to the Grande Arche de la Défense), the Arc de Triomphe was designed by Jean Chalgrin in 1806; its iconographic programme pits heroically nude French youths against bearded Germanic warriors in chain mail. It set the tone for public monuments with triumphant patriotic messages. Inspired by the Arch of Titus in Rome, Italy, the Arc de Triomphe has an overall height of , width of and depth of , while its large vault is high and wide. The smaller transverse vaults are high and wide. Three weeks after the Paris victory parade in 1919 (marking the end of hostilities in World War I), Charles Godefroy flew his Nieuport biplane under the arch's primary vault, with the event captured on newsreel.
Paris's Arc de Triomphe was the tallest triumphal arch until the completion of the Monumento a la Revolución in Mexico City in 1938, which is high. The Arch of Triumph in Pyongyang, completed in 1982, is modelled on the Arc de Triomphe and is slightly taller at . La Grande Arche in La Défense near Paris is 110 metres high; although it is not named a triumphal arch, it was designed on the same model and in the perspective of the Arc de Triomphe, and it qualifies as the world's tallest arch.
The Arc de Triomphe is located on the right bank of the Seine at the centre of a dodecagonal configuration of twelve radiating avenues. It was commissioned in 1806 after the victory at Austerlitz by Emperor Napoleon at the peak of his fortunes. Laying the foundations alone took two years and, in 1810, when Napoleon entered Paris from the west with his bride Archduchess Marie-Louise of Austria, he had a wooden mock-up of the completed arch constructed. The architect, Jean Chalgrin, died in 1811 and the work was taken over by Jean-Nicolas Huyot.
During the Bourbon Restoration, construction was halted and it would not be completed until the reign of King Louis-Philippe, between 1833 and 1836, by the architects Goust, then Huyot, under the direction of Héricart de Thury. On 15 December 1840, brought back to France from Saint Helena, Napoleon's remains passed under it on their way to the Emperor's final resting place at the Invalides. Prior to burial in the Panthéon, the body of Victor Hugo was displayed under the Arc during the night of 22 May 1885.
The sword carried by the "Republic" in the "Marseillaise" relief broke off on the day, it is said, that the Battle of Verdun began in 1916. The relief was immediately hidden by tarpaulins to conceal the accident and avoid any undesired ominous interpretations. On 7 August 1919, Charles Godefroy successfully flew his biplane under the Arc. Jean Navarre was the pilot who was tasked to make the flight, but he died on 10 July 1919 when he crashed near Villacoublay while training for the flight.
Following its construction, the Arc de Triomphe became the rallying point of French troops parading after successful military campaigns and for the annual Bastille Day military parade. Famous victory marches around or under the Arc have included the Germans in 1871, the French in 1919, the Germans in 1940, and the French and Allies in 1944 and 1945. A United States postage stamp of 1945 shows the "Arc de Triomphe" in the background as victorious American troops march down the Champs-Élysées and U.S. airplanes fly overhead on 29 August 1944. After the interment of the Unknown Soldier, however, all military parades (including the aforementioned post-1919) have avoided marching through the actual arch. The route taken is up to the arch and then around its side, out of respect for the tomb and its symbolism. Both Hitler in 1940 and de Gaulle in 1944 observed this custom.
By the early 1960s, the monument had grown very blackened from coal soot and automobile exhaust, and during 1965–1966 it was cleaned through bleaching. In the prolongation of the Avenue des Champs-Élysées, a new arch, the Grande Arche de la Défense, was built in 1982, completing the line of monuments that forms Paris's "Axe historique". After the "Arc de Triomphe du Carrousel" and the "Arc de Triomphe de l'Étoile", the "Grande Arche" is the third arch built on the same perspective.
In 1995, the Armed Islamic Group of Algeria placed a bomb near the Arc de Triomphe which wounded 17 people as part of a campaign of bombings.
In late 2018, the Arc de Triomphe suffered acts of vandalism as part of the Yellow vests movement protests.
The astylar design is by Jean Chalgrin (1739–1811), in the Neoclassical version of ancient Roman architecture. Major academic sculptors of France are represented in the sculpture of the "Arc de Triomphe": Jean-Pierre Cortot; François Rude; Antoine Étex; James Pradier and Philippe Joseph Henri Lemaire. The main sculptures are not integral friezes but are treated as independent trophies applied to the vast ashlar masonry masses, not unlike the gilt-bronze appliqués on Empire furniture. The four sculptural groups at the base of the Arc are "The Triumph of 1810" (Cortot), "Resistance" and "Peace" (both by Antoine Étex) and the most renowned of them all, "Departure of the Volunteers of 1792" commonly called "La Marseillaise" (François Rude). The face of the allegorical representation of France calling forth her people on this last was used as the belt buckle for the honorary rank of Marshal of France. Since the fall of Napoleon (1815), the sculpture representing "Peace" is interpreted as commemorating the Peace of 1815.
In the attic above the richly sculptured frieze of soldiers are 30 shields engraved with the names of major French victories in the French Revolution and Napoleonic Wars. The inside walls of the monument list the names of 660 people, among which are 558 French generals of the First French Empire; the names of those generals killed in battle are underlined. Also inscribed, on the shorter sides of the four supporting columns, are the names of the major French victories in the Napoleonic Wars. The battles that took place in the period between the departure of Napoleon from Elba to his final defeat at Waterloo are not included.
From 1882 to 1886, a monumental sculpture by Alexandre Falguière topped the arch. Titled "Le triomphe de la Révolution" ("The Triumph of the Revolution"), it depicted a chariot drawn by horses preparing "to crush Anarchy and Despotism". It remained in place only four years before falling into ruin.
Inside the monument, a permanent exhibition conceived by the artist Maurice Benayoun and the architect Christophe Girault opened in February 2007. The steel and new media installation interrogates the symbolism of the national monument, questioning the balance of its symbolic message during the last two centuries, oscillating between war and peace.
Beneath the Arc is the Tomb of the Unknown Soldier from World War I. Interred on Armistice Day 1920, it has the first eternal flame lit in Western and Eastern Europe since the Vestal Virgins' fire was extinguished in the fourth century. It burns in memory of the dead who were never identified (now in both world wars).
A ceremony is held at the Tomb of the Unknown Soldier every 11 November on the anniversary of the Armistice of 11 November 1918 signed by the Entente Powers and Germany in 1918. It was originally decided on 12 November 1919 to bury the unknown soldier's remains in the Panthéon, but a public letter-writing campaign led to the decision to bury him beneath the Arc de Triomphe. The coffin was put in the chapel on the first floor of the Arc on 10 November 1920, and put in its final resting place on 28 January 1921. The slab on top bears the inscription ICI REPOSE UN SOLDAT FRANÇAIS MORT POUR LA PATRIE 1914–1918 ("Here lies a French soldier who died for the fatherland 1914–1918").
In 1961, U.S. President John F. Kennedy and First Lady Jacqueline Kennedy paid their respects at the Tomb of the Unknown Soldier, accompanied by President Charles de Gaulle. After the 1963 assassination of President Kennedy, Mrs Kennedy remembered the eternal flame at the Arc de Triomphe and requested that an eternal flame be placed next to her husband's grave at Arlington National Cemetery in Virginia. President Charles de Gaulle went to Washington to attend the state funeral, and witnessed Jacqueline Kennedy lighting the eternal flame that had been inspired by her visit to France.
The "Arc de Triomphe" is accessible by the RER and Métro, with exit at the Charles de Gaulle—Étoile station. Because of heavy traffic on the roundabout of which the Arc is the centre, it is recommended that pedestrians use one of two underpasses located at the "Champs Élysées" and the "Avenue de la Grande Armée". A lift will take visitors almost to the top – to the attic, where there is a small museum which contains large models of the Arc and tells its story from the time of its construction. Another 40 steps remain to climb in order to reach the top, the "terrasse", from where one can enjoy a panoramic view of Paris.
The location of the arc, as well as the Place de l'Étoile, is shared between three arrondissements, 16th (south and west), 17th (north), and 8th (east). | https://en.wikipedia.org/wiki?curid=2482 |
Amazonite
Amazonite, also known as amazonstone, is a green tectosilicate mineral, a variety of the potassium feldspar called microcline. Its chemical formula is KAlSi3O8, and it is polymorphous with orthoclase.
Its name is taken from that of the Amazon River, from which green stones were formerly obtained, though it is unknown whether those stones were amazonite. Although it has been used for over two thousand years, as attested by archaeological finds in Egypt and Mesopotamia, no ancient or medieval authority mentions it. It was first described as a distinct mineral only in the 18th century.
Green and greenish-blue varieties of potassium feldspars that are predominantly triclinic are designated as amazonite. It has been described as a "beautiful crystallized variety of a bright verdigris-green" and as possessing a "lively green colour." It is occasionally cut and used as a gemstone.
Amazonite is a mineral of limited occurrence. Formerly it was obtained almost exclusively from the area of Miass in the Ilmensky Mountains, 50 miles southwest of Chelyabinsk, Russia, where it occurs in granitic rocks.
Amazonite is now known to occur in various places around the globe, among them:
China:
Libya:
Mongolia:
South Africa:
United States:
For many years, the source of amazonite's color was a mystery. Some assumed the color was due to copper, because copper compounds often have blue and green colors. A 1985 study suggested that the blue-green color results from small quantities of lead and water in the feldspar. Subsequent theoretical studies by A. Julg in 1998 expanded on the potential role of aliovalent lead in the color of microcline.
Other studies suggest the colors are associated with increasing contents of lead, rubidium, and thallium, in amounts ranging between 0.00X and 0.0X in the feldspars, with even extremely high contents of PbO (lead monoxide) of 1% or more known from the literature. A 2010 study also implicated divalent iron in the green coloration. These studies and associated hypotheses indicate the complex nature of the color in amazonite: that is, the aggregate effect of several mutually inclusive and necessary factors. | https://en.wikipedia.org/wiki?curid=2487 |
Anthroposophy
Anthroposophy is a philosophy founded in the early 20th century by the esotericist Rudolf Steiner that postulates the existence of an objective, intellectually comprehensible spiritual world, accessible to human experience. Followers of anthroposophy aim to develop mental faculties of spiritual discovery through a mode of thought independent of sensory experience. They also aim to present their ideas in a manner verifiable by rational discourse and specifically seek a precision and clarity in studying the spiritual world mirroring that obtained by scientists investigating the physical world.
The philosophy has its roots in German idealist and mystical philosophies. Steiner chose the term "anthroposophy" (from "anthropo-", human, and "Sophia", wisdom) to emphasize his philosophy's humanistic orientation. Anthroposophical ideas have been employed in alternative movements in many areas including education (both in Waldorf schools and in the Camphill movement), agriculture, medicine, banking, organizational development, and the arts. The main organization for advocacy of Steiner's ideas, the Anthroposophical Society, is headquartered at the Goetheanum in Dornach, Switzerland.
Anthroposophy's supporters include Hilma af Klint, Pulitzer Prize winner and Nobel laureate Saul Bellow, Nobel laureate Selma Lagerlöf, Andrei Bely, Joseph Beuys, Owen Barfield, architect Walter Burley Griffin, Wassily Kandinsky, Andrei Tarkovsky, Bruno Walter, Right Livelihood Award winners Sir George Trevelyan and Ibrahim Abouleish, child psychiatrist Eva Frommer, Fortune magazine editor Russell Davenport, Romuva (Lithuanian pagan) religious founder Vydūnas, and former president of Georgia, Zviad Gamsakhurdia. Albert Schweitzer was a friend of Steiner's and was supportive of his ideals for cultural renewal. The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history." However, many scientists and physicians, including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh, have criticized anthroposophy's applications in the areas of medicine, biology, agriculture, and education as dangerous and pseudoscientific.
The early work of the founder of anthroposophy, Rudolf Steiner, culminated in his "Philosophy of Freedom" (also translated as "The Philosophy of Spiritual Activity" and "Intuitive Thinking as a Spiritual Path"). Here, Steiner developed a concept of free will based on inner experiences, especially those that occur in the creative activity of independent thought.
By the beginning of the twentieth century, Steiner's interests turned almost exclusively to spirituality. His work began to interest others interested in spiritual ideas; among these was the Theosophical Society. From 1900 on, thanks to the positive reception his ideas received from Theosophists, Steiner focused increasingly on his work with the Theosophical Society, becoming the secretary of its section in Germany in 1902. During his leadership, membership increased dramatically, from just a few individuals to sixty-nine lodges.
By 1907, a split between Steiner and the Theosophical Society became apparent. While the Society was oriented toward an Eastern and especially Indian approach, Steiner was trying to develop a path that embraced Christianity and natural science. The split became irrevocable when Annie Besant, then president of the Theosophical Society, presented the child Jiddu Krishnamurti as the reincarnated Christ. Steiner strongly objected and considered any comparison between Krishnamurti and Christ to be nonsense; many years later, Krishnamurti also repudiated the assertion. Steiner's continuing differences with Besant led him to separate from the Theosophical Society Adyar. He was subsequently followed by the great majority of the Theosophical Society's German members, as well as many members of other national sections.
By this time, Steiner had reached considerable stature as a spiritual teacher and expert in the occult. He spoke about what he considered to be his direct experience of the Akashic Records (sometimes called the "Akasha Chronicle"), thought to be a spiritual chronicle of the history, pre-history, and future of the world and mankind. In a number of works, Steiner described a path of inner development he felt would let anyone attain comparable spiritual experiences. In Steiner's view, sound vision could be developed, in part, by practicing rigorous forms of ethical and cognitive self-discipline, concentration, and meditation. In particular, Steiner believed a person's spiritual development could occur only after a period of moral development.
In 1912, the Anthroposophical Society was founded. After World War I, the Anthroposophical movement took on new directions. Followers of Steiner's ideas soon began applying them to create counter-cultural movements in traditional and special education, farming, and medicine.
By 1923, a schism had formed between older members focused on inner development and younger members eager to become active in contemporary social transformations. In response, Steiner attempted to bridge the gap by establishing an overall School for "Spiritual Science". As a spiritual basis for the reborn movement, Steiner wrote a "" which remains a central touchstone of anthroposophical ideas.
Steiner died just over a year later, in 1925. The Second World War temporarily hindered the anthroposophical movement in most of Continental Europe, as the Anthroposophical Society and most of its practical counter-cultural applications were banned by the Nazi government. Though at least one prominent member of the Nazi Party, Rudolf Hess, was a strong supporter of anthroposophy, very few anthroposophists belonged to the National Socialist Party.
By 2007, national branches of the Anthroposophical Society had been established in fifty countries and about 10,000 institutions around the world were working on the basis of anthroposophical ideas.
"Anthroposophy" is an amalgam of the Greek terms ("anthropos" = "human") and ("sophia" = "wisdom"). An early English usage is recorded by Nathan Bailey (1742) as meaning "the knowledge of the nature of man."
The first known use of the term "anthroposophy" occurs within "Arbatel de magia veterum, summum sapientiae studium", a book published anonymously in 1575 and attributed to Heinrich Cornelius Agrippa. The work describes anthroposophy (as well as theosophy) variously as an understanding of goodness, nature, or human affairs. In 1648, the Welsh philosopher Thomas Vaughan published his "Anthroposophia Theomagica, or a discourse of the nature of man and his state after death."
The term began to appear with some frequency in philosophical works of the mid- and late-nineteenth century. In the early part of that century, Ignaz Troxler used the term "anthroposophy" to refer to philosophy deepened to self-knowledge, which he suggested allows deeper knowledge of nature as well. He spoke of human nature as a mystical unity of God and world. Immanuel Hermann Fichte used the term "anthroposophy" to refer to "rigorous human self-knowledge," achievable through thorough comprehension of the human spirit and of the working of God in this spirit, in his 1856 work "Anthropology: The Study of the Human Soul". In 1872, the philosopher of religion Gideon Spicker used the term "anthroposophy" to refer to self-knowledge that would unite God and world: "the true study of the human being is the human being, and philosophy's highest aim is self-knowledge, or Anthroposophy."
In 1882, the philosopher Robert Zimmermann published the treatise, "An Outline of Anthroposophy: Proposal for a System of Idealism on a Realistic Basis," proposing that idealistic philosophy should employ logical thinking to extend empirical experience. Steiner attended lectures by Zimmermann at the University of Vienna in the early 1880s, thus at the time of this book's publication.
In the early 1900s, Steiner began using the term "anthroposophy" (i.e. human wisdom) as an alternative to the term "theosophy" (i.e. divine wisdom).
Anthroposophical proponents aim to extend the clarity of the scientific method to phenomena of human soul-life and spiritual experiences. Steiner believed this required developing new faculties of objective spiritual perception, which he maintained was still possible for contemporary humans. The steps of this process of inner development he identified as consciously achieved "imagination", "inspiration", and "intuition". Steiner believed results of this form of spiritual research should be expressed in a way that can be understood and evaluated on the same basis as the results of natural science.
Steiner hoped to form a spiritual movement that would free the individual from any external authority. For Steiner, the human capacity for rational thought would allow individuals to comprehend spiritual research on their own and bypass the danger of dependency on an authority such as himself.
Steiner contrasted the anthroposophical approach with both conventional mysticism, which he considered lacking the clarity necessary for exact knowledge, and natural science, which he considered arbitrarily limited to what can be seen, heard, or felt with the outward senses.
In "Theosophy", Steiner suggested that human beings unite a physical body of substances gathered from and returning to the inorganic world; a life body (also called the etheric body), in common with all living creatures (including plants); a bearer of sentience or consciousness (also called the astral body), in common with all animals; and the ego, which anchors the faculty of self-awareness unique to human beings.
Anthroposophy describes a broad evolution of human consciousness. Early stages of human evolution possess an intuitive perception of reality, including a clairvoyant perception of spiritual realities. Humanity has progressively evolved an increasing reliance on intellectual faculties and a corresponding loss of intuitive or clairvoyant experiences, which have become atavistic. The increasing intellectualization of consciousness, initially a progressive direction of evolution, has led to an excessive reliance on abstraction and a loss of contact with both natural and spiritual realities. However, to go further requires new capacities that combine the clarity of intellectual thought with the imagination and with consciously achieved inspiration and intuitive insights.
Anthroposophy speaks of the reincarnation of the human spirit: that the human being passes between stages of existence, incarnating into an earthly body, living on earth, leaving the body behind, and entering into the spiritual worlds before returning to be born again into a new life on earth. After the death of the physical body, the human spirit recapitulates the past life, perceiving its events as they were experienced by the objects of its actions. A complex transformation takes place between the review of the past life and the preparation for the next life. The individual's karmic condition eventually leads to a choice of parents, physical body, disposition, and capacities that provide the challenges and opportunities that further development requires, which includes karmically chosen tasks for the future life.
Steiner described some conditions that determine the interdependence of a person's lives, or karma.
The anthroposophical view of evolution considers all animals to have evolved from an early, unspecialized form. As the least specialized animal, human beings have maintained the closest connection to the archetypal form; contrary to the Darwinian conception of human evolution, all other animals "devolve" from this archetype. The spiritual archetype originally created by spiritual beings was devoid of physical substance; only later did this descend into material existence on Earth. In this view, human evolution has accompanied the Earth's evolution throughout the existence of the Earth.
Anthroposophy adapted Theosophy's complex system of cycles of world development and human evolution. The evolution of the world is said to have occurred in cycles. The first phase of the world consisted only of heat. In the second phase, a more active condition, light, and a more condensed, gaseous state separate out from the heat. In the third phase, a fluid state arose, as well as a sounding, forming energy. In the fourth (current) phase, solid physical matter first exists. This process is said to have been accompanied by an evolution of consciousness which led up to present human culture.
The anthroposophical view is that good is found in the balance between two polar influences on world and human evolution. These are often described through their mythological embodiments as spiritual adversaries which endeavour to tempt and corrupt humanity, Lucifer and his counterpart Ahriman. These have both positive and negative aspects. Lucifer is the light spirit, which "plays on human pride and offers the delusion of divinity", but also motivates creativity and spirituality; Ahriman is the dark spirit that tempts human beings to "...deny [their] link with divinity and to live entirely on the material plane", but that also stimulates intellectuality and technology. Both figures exert a negative effect on humanity when their influence becomes misplaced or one-sided, yet their influences are necessary for human freedom to unfold.
Each human being has the task to find a balance between these opposing influences, and each is helped in this task by the mediation of the "Representative of Humanity", also known as the Christ being, a spiritual entity who stands between and harmonizes the two extremes.
The applications of anthroposophy to practical fields include:
This is a pedagogical movement with over 1000 Steiner or Waldorf schools (the latter name stems from the first such school, founded in Stuttgart in 1919) located in some 60 countries; the great majority of these are independent (private) schools. Sixteen of the schools have been affiliated with the United Nations' UNESCO Associated Schools Project Network, which sponsors education projects that foster improved quality of education throughout the world. Waldorf schools receive full or partial governmental funding in some European nations, Australia and in parts of the United States (as Waldorf method public or charter schools) and Canada.
The schools have been founded in a variety of communities, ranging from the "favelas" of São Paulo to wealthy suburbs of major cities, in countries including India, Egypt, Australia, the Netherlands, Mexico and South Africa. Though most of the early Waldorf schools were teacher-founded, the schools today are usually initiated and later supported by a parent community. Waldorf schools are among the most visible anthroposophical institutions.
Biodynamic agriculture, the first intentional form of organic farming, began in 1924, when Rudolf Steiner gave a series of lectures published in English as "The Agriculture Course". Steiner is considered one of the founders of the modern organic farming movement.
Steiner gave several series of lectures to physicians and medical students. Out of those grew an alternative medical movement intending to "extend the knowledge gained through the methods of the natural sciences of the present age with insights from spiritual science." This movement now includes hundreds of M.D.s, chiefly in Europe and North America, and has its own clinics, hospitals, and medical schools.
One of the most studied applications has been the use of mistletoe extracts in cancer therapy, but research has found no evidence of benefit.
In 1922, Ita Wegman founded an anthroposophical center for special needs education, the Sonnenhof, in Switzerland. In 1940, Karl König founded the Camphill Movement in Scotland. The latter in particular has spread widely, and there are now over a hundred Camphill communities and other anthroposophical homes for children and adults in need of special care in about 22 countries around the world. Karl König, Thomas Weihs and others have written extensively on the ideas underlying special education.
Steiner designed around thirteen buildings in an organic—expressionist architectural style. Foremost among these are his designs for the two Goetheanum buildings in Dornach, Switzerland. Thousands of further buildings have been built by later generations of anthroposophic architects.
Architects who have been strongly influenced by the anthroposophic style include Imre Makovecz in Hungary, Hans Scharoun and Joachim Eble in Germany, Erik Asmussen in Sweden, Kenji Imai in Japan, Thomas Rau, Anton Alberts and Max van Huut in the Netherlands, Christopher Day and Camphill Architects in the UK, Thompson and Rose in America, Denis Bowman in Canada, and Walter Burley Griffin and Gregory Burgess in Australia.
ING House in Amsterdam is a contemporary building by an anthroposophical architect which has received awards for its ecological design and approach to a self-sustaining ecology as an autonomous building and example of sustainable architecture.
Together with Marie von Sivers, Steiner developed eurythmy, a performance art combining dance, speech, and music.
Around the world today are a number of banks, companies, charities, and schools for developing co-operative forms of business using Steiner's ideas about economic associations, aiming at harmonious and socially responsible roles in the world economy. The first anthroposophic bank was the "Gemeinschaftsbank für Leihen und Schenken" in Bochum, Germany, founded in 1974.
Socially responsible banks founded out of anthroposophy in the English-speaking world include Triodos Bank, founded in 1980 and active in the UK, Netherlands, Germany, Belgium, Spain and France.
Cultura Sparebank dates from 1982 when a group of Norwegian anthroposophists began an initiative for ethical banking but only began to operate as a savings bank in Norway in the late 90s.
La Nef in France and RSF Social Finance in San Francisco are other examples.
Harvard Business School historian Geoffrey Jones traced the considerable impact both Steiner and later anthroposophical entrepreneurs had on the creation of many businesses in organic food, ecological architecture and sustainable finance.
Bernard Lievegoed, a psychiatrist, founded a new method of individual and institutional development oriented towards humanizing organizations and linked with Steiner's ideas of the threefold social order. This work is represented by the NPI Institute for Organizational Development in the Netherlands and sister organizations in many other countries. Various forms of biographic and counselling work have been developed on the basis of anthroposophy.
There are also anthroposophical movements to renew speech and drama, the most important of which are based in the work of Marie Steiner-von Sivers ("speech formation", also known as "Creative Speech") and the "Chekhov Method" originated by Michael Chekhov (nephew of Anton Chekhov).
Anthroposophic painting, a style inspired by Rudolf Steiner, featured prominently in the first Goetheanum's cupola. The technique frequently begins by filling the surface to be painted with colour, out of which forms are gradually developed, often images with symbolic-spiritual significance. Paints that allow for many transparent layers are preferred, and often these are derived from plant materials. Rudolf Steiner appointed the English sculptor Edith Maryon as head of the School of Fine Art at the Goetheanum. Together they carved the 9-metre-tall sculpture "The Representative of Man", which is on display at the Goetheanum.
Other applications include:
For a period after World War I, Steiner was extremely active and well known in Germany, in part because he lectured widely proposing social reforms. Steiner was a sharp critic of nationalism, which he saw as outdated, and a proponent of achieving social solidarity through individual freedom. A petition proposing a radical change in the German constitution and expressing his basic social ideas (signed by Herman Hesse, among others) was widely circulated. His main book on social reform is "Toward Social Renewal".
Anthroposophy continues to aim at reforming society through maintaining and strengthening the independence of the spheres of cultural life, human rights and the economy. It emphasizes a particular ideal in each of these three realms of society: liberty in cultural life, equality of rights in political life, and fraternity in economic life.
According to Steiner, a real spiritual world exists, evolving along with the material one. Steiner held that the spiritual world can be researched in the right circumstances through direct experience, by persons practicing rigorous forms of ethical and cognitive self-discipline. Steiner described many exercises he said were suited to strengthening such self-discipline; the most complete exposition of these is found in his book "How To Know Higher Worlds". The aim of these exercises is to develop higher levels of consciousness through meditation and observation. Details about the spiritual world, Steiner suggested, could on such a basis be discovered and reported, though no more infallibly than the results of natural science.
Steiner regarded his research reports as being important aids to others seeking to enter into spiritual experience. He suggested that a combination of spiritual exercises (for example, concentrating on an object such as a seed), moral development (control of thought, feelings and will combined with openness, tolerance and flexibility) and familiarity with other spiritual researchers' results would best further an individual's spiritual development. He consistently emphasised that any inner, spiritual practice should be undertaken in such a way as not to interfere with one's responsibilities in outer life. Steiner distinguished between what he considered were true and false paths of spiritual investigation.
In anthroposophy, artistic expression is also treated as a potentially valuable bridge between spiritual and material reality.
Steiner's stated prerequisites to beginning on a spiritual path include a willingness to take up serious cognitive studies, a respect for factual evidence, and a responsible attitude. Central to progress on the path itself is a harmonious cultivation of the following qualities:
Steiner sees meditation as a concentration and enhancement of the power of thought. By focusing consciously on an idea, feeling or intention the meditant seeks to arrive at pure thinking, a state exemplified by but not confined to pure mathematics. In Steiner's view, conventional sensory-material knowledge is achieved through relating perception and concepts. The anthroposophic path of esoteric training articulates three further stages of supersensory knowledge, which do not necessarily follow strictly sequentially in any single individual's spiritual progress.
Steiner described numerous exercises he believed would bring spiritual development; other anthroposophists have added many others. A central principle is that "for every step in spiritual perception, three steps are to be taken in moral development." According to Steiner, moral development reveals the extent to which one has achieved control over one's inner life and can exercise it in harmony with the spiritual life of other people; it shows the real progress in spiritual development, the fruits of which are given in spiritual perception. It also guarantees the capacity to distinguish between false perceptions or illusions (which are possible in perceptions of both the outer world and the inner world) and true perceptions: i.e., the capacity to distinguish in any perception between the influence of subjective elements (i.e., viewpoint) and objective reality.
Steiner built upon Goethe's conception of an imaginative power capable of synthesizing the sense-perceptible form of a thing (an image of its outer appearance) and the concept we have of that thing (an image of its inner structure or nature). Steiner added to this the conception that a further step in the development of thinking is possible when the thinker observes his or her own thought processes. "The organ of observation and the observed thought process are then identical, so that the condition thus arrived at is simultaneously one of perception through thinking and one of thought through perception."
Thus, in Steiner's view, we can overcome the subject-object divide through inner activity, even though all human experience begins by being conditioned by it. In this connection, Steiner examines the step from thinking determined by outer impressions to what he calls sense-free thinking. He characterizes thoughts he considers without sensory content, such as mathematical or logical thoughts, as free deeds. Steiner believed he had thus located the origin of free will in our thinking, and in particular in sense-free thinking.
Some of the epistemic basis for Steiner's later anthroposophical work is contained in the seminal work "Philosophy of Freedom". In his early works, Steiner sought to overcome what he perceived as the dualism of Cartesian idealism and Kantian subjectivism by developing Goethe's conception of the human being as a natural-supernatural entity, that is: natural in that humanity is a product of nature, supernatural in that through our conceptual powers we extend nature's realm, allowing it to achieve a reflective capacity in us as philosophy, art and science. Steiner was one of the first European philosophers to overcome the subject-object split in Western thought. Though not well known among philosophers, his philosophical work was taken up by Owen Barfield (and through him influenced the Inklings, an Oxford group of Christian writers that included J. R. R. Tolkien and C. S. Lewis).
Christian and Jewish mystical thought have also influenced the development of anthroposophy.
Steiner believed in the possibility of applying the clarity of scientific thinking to spiritual experience, which he saw as deriving from an objectively existing spiritual world. Steiner identified mathematics, which attains certainty through thinking itself, thus through inner experience rather than empirical observation, as the basis of his epistemology of spiritual experience.
Steiner's writing, though appreciative of all religions and cultural developments, emphasizes Western tradition as having evolved to meet contemporary needs. He describes Christ and his mission on earth of bringing individuated consciousness as having a particularly important place in human evolution.
Thus, anthroposophy considers there to be a being who unifies all religions, and who is not represented by any particular religious faith. This being is, according to Steiner, not only the Redeemer of the Fall from Paradise, but also the unique pivot and meaning of earth's evolutionary processes and of human history. To describe this being, Steiner periodically used terms such as the "Representative of Humanity" or the "good spirit" rather than any denominational term.
Steiner's views of Christianity diverge from conventional Christian thought in key places and include gnostic elements.
Rudolf Steiner wrote and lectured on Judaism and Jewish issues over much of his adult life. He was a fierce opponent of popular antisemitism, but asserted that there was no justification for the existence of Judaism and Jewish culture in the modern world, a radical assimilationist perspective which saw the Jews completely integrating into the larger society. He also supported Émile Zola's position in the Dreyfus affair. Steiner emphasized Judaism's central importance to the constitution of the modern era in the West but suggested that to appreciate the spirituality of the future it would need to overcome its tendency toward abstraction.
In his later life, Steiner was accused by the Nazis of being a Jew, and Adolf Hitler called anthroposophy "Jewish methods". The anthroposophical institutions in Germany were banned during Nazi rule and several anthroposophists sent to concentration camps.
Important early anthroposophists who were Jewish included two central members on the executive boards of the precursors to the modern Anthroposophical Society, and Karl König, the founder of the Camphill movement, who had converted to Christianity. Martin Buber and Hugo Bergmann, who viewed Steiner's social ideas as a solution to the Arab–Jewish conflict, were also influenced by anthroposophy.
There are numerous anthroposophical organisations in Israel, including the anthroposophical kibbutz Harduf, founded by Jesaiah Ben-Aharon, as well as forty Waldorf kindergartens and seventeen Waldorf schools (as of 2018). A number of these organizations strive to foster positive relationships between the Arab and Jewish populations: the Harduf Waldorf school includes both Jewish and Arab faculty and students, and has extensive contact with the surrounding Arab communities, while the first joint Arab-Jewish kindergarten was a Waldorf program in Hilf near Haifa.
Towards the end of Steiner's life, a group of theology students (primarily Lutheran, with some Roman Catholic members) approached Steiner for help in reviving Christianity, in particular "to bridge the widening gulf between modern science and the world of spirit". They approached a notable Lutheran pastor, Friedrich Rittelmeyer, who was already working with Steiner's ideas, to join their efforts. Out of their co-operative endeavor, the "Movement for Religious Renewal", now generally known as The Christian Community, was born. Steiner emphasized that he considered this movement, and his role in creating it, to be independent of his anthroposophical work, as he wished anthroposophy to be independent of any particular religion or religious denomination.
Anthroposophy's supporters include Pulitzer Prize winner and Nobel laureate Saul Bellow, Nobel laureate Selma Lagerlöf, Andrei Bely, Joseph Beuys, Owen Barfield, architect Walter Burley Griffin, Wassily Kandinsky, Andrei Tarkovsky, Bruno Walter, Right Livelihood Award winners Sir George Trevelyan and Ibrahim Abouleish, and child psychiatrist Eva Frommer. Albert Schweitzer was a friend of Steiner's and was supportive of his ideals for cultural renewal.
The historian of religion Olav Hammer has termed anthroposophy "the most important esoteric society in European history." Authors, scientists, and physicians including Michael Shermer, Michael Ruse, Edzard Ernst, David Gorski, and Simon Singh have criticized anthroposophy's application in the areas of medicine, biology, agriculture, and education as dangerous and pseudoscientific. Others, including former Waldorf pupil Dan Dugan and historian Geoffrey Ahern, have criticized anthroposophy itself as a dangerous quasi-religious movement that is fundamentally anti-rational and anti-scientific.
Though Rudolf Steiner studied natural science at the Vienna Technical University at the undergraduate level, his doctorate was in epistemology and very little of his work is directly concerned with the empirical sciences. In his mature work, when he did refer to science it was often to present phenomenological or Goethean science as an alternative to what he considered the materialistic science of his contemporaries.
Steiner's primary interest was in applying the methodology of science to realms of inner experience and the spiritual worlds (his appreciation that the essence of science is its method of inquiry is unusual among esotericists), and Steiner called anthroposophy "Geisteswissenschaft" (science of the mind, cultural/spiritual science), a term generally used in German to refer to the humanities and social sciences.
Whether this is a sufficient basis for anthroposophy to be considered a spiritual science has been a matter of controversy. As Freda Easton explained in her study of Waldorf schools, "Whether one accepts anthroposophy as a science depends upon whether one accepts Steiner's interpretation of a science that extends the consciousness and capacity of human beings to experience their inner spiritual world."
Sven Ove Hansson has disputed anthroposophy's claim to a scientific basis, stating that its ideas are not empirically derived and neither reproducible nor testable. Carlo Willmann points out that as, on its own terms, anthroposophical methodology offers no possibility of being falsified except through its own procedures of spiritual investigation, no intersubjective validation is possible by conventional scientific methods; it thus cannot stand up to empiricist critics. Peter Schneider describes such objections as untenable, asserting that if a non-sensory, non-physical realm exists, then according to Steiner the experiences of pure thinking possible within the normal realm of consciousness would already be experiences of that, and it would be impossible to exclude the possibility of empirically grounded experiences of other supersensory content.
Olav Hammer suggests that anthroposophy carries scientism "to lengths unparalleled in any other Esoteric position" due to its dependence upon claims of clairvoyant experience and its subsuming of natural science under "spiritual science". Hammer also asserts that the development of what he calls "fringe" sciences such as anthroposophic medicine and biodynamic agriculture is justified partly on the basis of the ethical and ecological values they promote, rather than purely on a scientific basis.
Though Steiner saw that spiritual vision itself is difficult for others to achieve, he recommended open-mindedly exploring and rationally testing the results of such research; he also urged others to follow a spiritual training that would allow them directly to apply his methods to achieve comparable results.
Anthony Storr stated about Rudolf Steiner's Anthroposophy: "His belief system is so eccentric, so unsupported by evidence, so manifestly bizarre, that rational skeptics are bound to consider it delusional... But, whereas Einstein's way of perceiving the world by thought became confirmed by experiment and mathematical proof, Steiner's remained intensely subjective and insusceptible of objective confirmation."
As an explicitly spiritual movement, anthroposophy has sometimes been called a religious philosophy. In 1998 People for Legal and Non-Sectarian Schools (PLANS) started a lawsuit alleging that anthroposophy is a religion for Establishment Clause purposes and therefore several California school districts should not be chartering Waldorf schools; the lawsuit was dismissed in 2012 for failure to show anthroposophy was a religion. In 2000, a French court ruled that a government minister's description of anthroposophy as a cult was defamatory.
Anthroposophical ideas have been criticized from both sides in the race debate.
In response to such critiques, the Anthroposophical Society in America published a statement clarifying its stance:
We explicitly reject any racial theory that may be construed to be part of Rudolf Steiner's writings. The Anthroposophical Society in America is an open, public society and it rejects any purported spiritual or scientific theory on the basis of which the alleged superiority of one race is justified at the expense of another race.
Aurochs
The aurochs (pl. aurochs, or rarely aurochsen or aurochses), also known as the urus or ure ("Bos primigenius"), is an extinct species of large wild cattle that inhabited Asia, Europe, and North Africa. It is the ancestor of domestic cattle. The species survived in Europe until 1627, when the last recorded aurochs died in the Jaktorów Forest, Poland.
During the Neolithic Revolution, which occurred during the early Holocene, at least two aurochs domestication events occurred: one related to the Indian subspecies, leading to zebu cattle, and the other related to the Eurasian subspecies, leading to taurine cattle. Other species of wild bovines were also domesticated, namely the wild water buffalo, gaur, wild yak and banteng. In modern cattle, many breeds share characteristics of the aurochs, such as a dark colour in the bulls with a light eel stripe along the back (the cows being lighter), or a typical aurochs-like horn shape.
The aurochs was variously classified as "Bos primigenius", "Bos taurus", or, in old sources, "Bos urus". However, in 2003, the International Commission on Zoological Nomenclature "conserved the usage of 17 specific names based on wild species, which are predated by or contemporary with those based on domestic forms", confirming "Bos primigenius" for the aurochs. Taxonomists who consider domesticated cattle a subspecies of the wild aurochs should use "B. primigenius taurus"; those who consider domesticated cattle to be a separate species may use the name "B. taurus", which the Commission has kept available for that purpose.
The words aurochs, urus, and wisent have all been used synonymously in English, but the extinct aurochs/urus is a completely separate species from the still-extant wisent, also known as the European bison. The two were often confused, and some 16th-century illustrations of aurochs and wisent have hybrid features. The word "urus" (plural "uri") is a Latin word, but was borrowed into Latin from Germanic (cf. Old English/Old High German "ūr", Old Norse "úr"). In German, OHG "ūr" "primordial" was compounded with "ohso" "ox", giving "ūrohso", which became the early modern "Aurochs". The modern form is "Auerochse".
The word "aurochs" was borrowed from early modern German, replacing archaic "urochs", also from an earlier form of German. The word is invariable in number in English, though sometimes a back-formed singular "auroch" and/or innovated plural "aurochses" occur. The use in English of the plural form "aurochsen" is nonstandard, but mentioned in "The Cambridge Encyclopedia of the English Language". It is directly parallel to the German plural "Ochsen" (singular "Ochse") and recreates by analogy the same distinction as English "ox" (singular) and "oxen" (plural).
During the Pliocene, the colder climate caused an extension of open grassland, which led to the evolution of large grazers, such as wild bovines. "Bos acutifrons" is an extinct species of cattle that has been suggested as an ancestor for the aurochs.
The oldest aurochs remains have been dated to about 2 million years ago, in India. The Indian subspecies was the first to appear. During the Pleistocene, the species migrated west into the Middle East (western Asia), as well as to the east. They reached Europe about 270,000 years ago. The South Asian domestic cattle, or zebu, descended from Indian aurochs at the edge of the Thar Desert; the zebu is resistant to drought. Domestic yak, gayal, and Bali cattle do not descend from aurochs.
The first complete mitochondrial genome (16,338 base pairs) DNA sequence analysis of "Bos primigenius" from an archaeologically verified and exceptionally well preserved aurochs bone sample was published in 2010, followed by the publication in 2015 of the complete genome sequence of "Bos primigenius" using DNA isolated from a 6,750-year-old British aurochs bone. Further studies using the "Bos primigenius" whole genome sequence have identified candidate microRNA-regulated domestication genes.
A DNA study has also suggested that the modern European bison originally developed as a prehistoric cross-breed between the aurochs and the steppe bison.
Three wild subspecies of aurochs are recognised. Only the Eurasian subspecies survived until recent times.
The appearance of the aurochs has been reconstructed from skeletal material, historical descriptions, and contemporaneous depictions, such as cave paintings, engravings, or Sigismund von Herberstein’s illustration. The work by Charles Hamilton Smith is a copy of a painting owned by a merchant in Augsburg, which may date to the 16th century. Scholars have proposed that Smith's illustration was based on a cattle/aurochs hybrid, or an aurochs-like breed. The aurochs was depicted in prehistoric cave paintings and described in Julius Caesar's "The Gallic War, Book 6, Ch. 28".
The aurochs were one of the largest herbivores in postglacial Europe, comparable to the European bison. The size of an aurochs appears to have varied by region; in Europe, northern populations were bigger on average than those from the south. For example, during the Holocene, aurochs from Denmark and Germany had an average height at the shoulders of in bulls and in cows, while aurochs populations in Hungary had bulls reaching . The body mass of aurochs appears to have shown some variability. Some individuals were comparable in weight to the wisent and the banteng, reaching around , whereas those from the Late Middle Pleistocene are estimated to have weighed up to , as much as the largest gaur (the largest extant bovid). The sexual dimorphism between bulls and cows was strongly expressed, with the cows being significantly shorter than bulls on average.
Because of the massive horns, the frontal bones of aurochs were elongated and broad. The horns of the aurochs were characteristic in size, curvature, and orientation. They were curved in three directions: upwards and outwards at the base, then swinging forwards and inwards, then inwards and upwards. Aurochs horns could reach in length and between in diameter. The horns of bulls were larger, with the curvature more strongly expressed than in cows. The horns grew from the skull at a 60° angle to the muzzle, facing forwards.
The proportions and body shape of the aurochs were strikingly different from those of many modern cattle breeds. For example, the legs were considerably longer and more slender, resulting in a shoulder height that nearly equalled the trunk length. The skull, carrying the large horns, was substantially larger and more elongated than in most cattle breeds. As in other wild bovines, the body shape of the aurochs was athletic and, especially in bulls, showed a strongly expressed neck and shoulder musculature. Therefore, the forequarters were larger than the hindquarters, similar to the wisent, but unlike many domesticated cattle. Even in pregnant cows, the udder was small and hardly visible from the side, as in other wild bovines.
The coat colour of the aurochs can be reconstructed by using historical and contemporary depictions. In his letter to Conrad Gesner (1602), Anton Schneeberger describes the aurochs, a description that agrees with cave paintings in Lascaux and Chauvet. Calves were born a chestnut colour. Young bulls changed their coat colour at a few months old to a very deep brown or black, with a white eel stripe running down the spine. Cows retained the reddish-brown colour. Both sexes had a light-coloured muzzle. Some North African engravings show aurochs with a light-coloured "saddle" on the back, but otherwise no evidence of variation in coat colour is seen throughout its range. A passage from Mucante (1596) describes the “wild ox” as gray, but is ambiguous and may refer to the wisent. Egyptian grave paintings show cattle with a reddish-brown coat colour in both sexes, with a light saddle, but the horn shape of these suggest that they may depict domesticated cattle. Remains of aurochs hair were not known until the early 1980s.
Some primitive cattle breeds display similar coat colours to the aurochs, including the black colour in bulls with a light eel stripe, a pale mouth, and similar sexual dimorphism in colour. A feature often attributed to the aurochs is blond forehead hairs. Historical descriptions tell that the aurochs had long and curly forehead hair, but none mentions a certain colour for it. Cis van Vuure (2005) says that, although the colour is present in a variety of primitive cattle breeds, it is probably a discolouration that appeared after domestication. The gene responsible for this feature has not yet been identified. Zebu breeds show lightly coloured inner sides of the legs and belly, caused by the so-called zebu-tipping gene. It has not been tested if this gene is present in remains of Indian aurochs.
Like many bovids, aurochs formed herds for at least part of the year. These probably did not number much more than 30. If aurochs had social behaviour similar to that of their descendants, social status was gained through displays and fights, in which cows engaged as well as bulls. Indeed, aurochs bulls were reported to often have had severe fights. As in other wild bovines that form unisexual herds, considerable sexual dimorphism was expressed. Ungulates that form herds containing animals of both sexes, such as horses, have more weakly developed sexual dimorphism.
During the mating season, which probably took place during the late summer or early autumn, the bulls had severe fights, and evidence from the forest of Jaktorów shows these could lead to death. In autumn, aurochs fed up for the winter and got fatter and shinier than during the rest of the year, according to Schneeberger. Calves were born in spring. According to Schneeberger, the calf stayed at the cow's side until it was strong enough to join and keep up with the herd on the feeding grounds.
Calves were vulnerable to wolves and, to an extent, bears, while healthy adult aurochs probably did not have to fear these predators. In prehistoric Europe, North Africa, and Asia, big cats, such as lions and tigers, and hyenas were additional predators that probably preyed on aurochs.
Historical descriptions, like Caesar's "Commentarii de Bello Gallico" or Schneeberger, tell that aurochs were swift and fast, and could be very aggressive. According to Schneeberger, aurochs were not concerned when a man approached, but when teased or hunted, an aurochs could get very aggressive and dangerous, and throw the teasing person into the air, as he described in a 1602 letter to Gesner.
No consensus exists concerning the habitat of the aurochs. Van Vuure points out that throughout much of the last few thousand years, European landscapes probably consisted of dense forests, and as such the aurochs was confined to open areas in marshlands along rivers. Comparisons of the ratios of certain mineral isotopes in recovered Mesolithic aurochs bones with those of domestic cattle have shown that aurochs lived in floodplain forests or marshes, areas much wetter than those in which modern domesticated cattle live. According to the author, such cattle were not able to create and maintain open landscapes without the help of man. While some authors propose that the habitat selection of the aurochs was comparable to that of the African forest buffalo, others describe the species as inhabiting open grassland and helping maintain open areas by grazing, together with other large herbivores. With its hypsodont jaw, the aurochs was probably a grazer and had a food selection very similar to that of domesticated cattle. It was not a browser like many deer species, nor a semi-intermediary feeder like the wisent. Schneeberger describes that during winter, the aurochs ate twigs and acorns in addition to grasses.
After the beginning of the Common Era, the habitat of aurochs became more fragmented because of the steadily growing human population. During the last centuries of its existence, the aurochs was limited to remote regions in northeastern Europe.
At one point, the range of the aurochs was from Europe (excluding Ireland and northern Scandinavia), to northern Africa, the Middle East, India, and Central Asia. Until at least 3,000 years ago, the aurochs was also found in eastern China, where it is recorded at the Dingjiabao Reservoir in Yangyuan County. Most remains in China are known from the area east of 105°E, but the species has also been reported from the eastern margin of the Tibetan plateau, close to the Heihe River. In Japan, excavations in various locations, such as in Iwate and Tochigi Prefectures, have found aurochs which may have herded with steppe bison.
The aurochs, which ranged throughout much of Eurasia and Northern Africa during the late Pleistocene and early Holocene, is the wild ancestor of modern cattle. Archaeological evidence shows that domestication occurred independently in the Near East and the Indian subcontinent between 10,000 and 8,000 years ago, giving rise to the two major domestic subspecies observed today: the humpless taurine cattle (European cattle, "Bos taurus taurus") and the humped indicine cattle (Zebu, "Bos taurus indicus"), respectively. This is confirmed by genetic analyses of matrilineal mitochondrial DNA sequences, which reveal a marked differentiation between modern "B. t. taurus" and "B. t. indicus" haplotypes, demonstrating their derivation from two genetically divergent wild populations.
Other possible domestication events may have occurred: the sanga cattle ("Bos taurus africanus"), a zebu-like cattle breed with no back hump, are commonly believed to originate from crosses between humped zebus and taurine cattle breeds. However, a 1991 study examining the remains of domestic taurine cattle from third-millennium Egypt theorised, based on the similarity of the bones, that sanga cattle were independently domesticated in Africa and that bloodlines of taurine and zebu cattle were introduced only within the last few hundred years. A 1996 study of cow genetics, however, indicates this is highly unlikely.
A number of mitochondrial DNA studies, most recently from the 2010s, suggest that all domesticated taurine cattle originated from about 80 wild female aurochs in the Near East. Domestication of the aurochs began in the southern Caucasus and northern Mesopotamia from about the sixth millennium BC. Domesticated cattle and aurochs are so different in size that they have been regarded as separate species; however, large ancient cattle and aurochs have more similar morphological characteristics, with significant differences only in the horns and some parts of the skull.
Aurochs were independently domesticated in India. Indian zebu, although domesticated 8,000 to 10,000 years ago, are related to the Indian aurochs ("B. p. namadicus"), which diverged from the Near Eastern aurochs some 200,000 years ago. The Near Eastern ("B. p. primigenius") and African aurochs ("B. p. africanus") groups are thought to have split some 25,000 years ago, probably 15,000 years before domestication.
Aurochs became extinct in Britain during the Bronze Age, and analysis of bones from aurochs that lived about the same time as domesticated cattle has suggested no genetic contribution to modern breeds, though some older studies dispute this. One study has pointed to possible introgression of local aurochs into the "Turano-Mongolian" type of cattle now found in northern China, Mongolia, Korea and Japan; another found small introgression into local Italian breeds, with a later study finding similar results in indigenous British and Irish cattle landraces. In this last study, researchers mapped the draft genome of a British aurochs dated to 6,750 years before present and compared it to the genomes of 73 modern cattle populations, finding that traditional cattle breeds of Scottish, Irish, Welsh, and English origin – such as Highland, Dexter, Kerry, Welsh Black, and White Park – had more genetic similarity to the aurochs in question than other populations. Another study concluded that, because of this genomic introgression of the aurochs into cattle breeds, one might argue that in "the bigger picture across the aurochs/cattle range, perhaps several subpopulations of aurochs are not extinct at all" but partially survive in such breeds.
By the time of Herodotus (5th century BC), aurochs had disappeared from southern Greece, but remained common in the area north and east of the Echedorus River close to modern Thessaloniki. The last reports of the species in the southern tip of the Balkans date to the 1st century BC, when Varro reported that fierce wild oxen lived in Dardania (southern Serbia) and Thrace. By the 13th century AD, the aurochs' range was restricted to Poland, Lithuania, Moldavia, Transylvania, and East Prussia. Archaeological data indicate that they survived in Bulgaria, in the northeastern part of the country and around Sofia, until the 16th–17th century; in northwestern Transylvania until the 14th–16th century AD; and in Romanian Moldavia until probably the beginning of the 17th century AD, almost at the same time as in Poland. In Poland, the right to hunt large animals on any land was restricted first to nobles, and then gradually, to only the royal households. As the population of aurochs declined, hunting ceased altogether. The Polish Royal Family used gamekeepers to provide open fields for grazing for the aurochs, exempting them from local taxes in exchange for their service. Poaching aurochs was made a crime punishable by death.
According to a Polish royal survey in 1564, the gamekeepers knew of 38 animals. The last recorded live aurochs, a female, died in 1627 in the Jaktorów Forest, Poland, from natural causes. The causes of extinction were unrestricted hunting, a narrowing of habitat due to the development of farming, and diseases transmitted by domesticated cattle.
While all the wild subspecies are extinct, "B. primigenius" lives on in domesticated cattle, and attempts are being made to breed similar types suitable for filling the extinct subspecies' role in the former ecosystem.
The idea of breeding back the aurochs was first proposed in the 19th century by Feliks Paweł Jarocki. In the 1920s, a first attempt was undertaken by the Heck brothers in Germany with the aim of breeding an effigy (a look-alike) of the aurochs. Starting in the 1990s, grazing and rewilding projects brought new impetus to the idea, and new breeding-back efforts got underway, this time aiming to recreate an animal with not only the looks, but also the behaviour and ecological impact of the aurochs, so that it could fill the aurochs' former ecological role.
Reintroduction efforts for the aurochs are largely motivated by a belief that an aesthetically pleasing open park-like landscape is "natural". The former natural European landscapes probably consisted of dense forests, with the aurochs being confined to open areas in marshlands along rivers. Research into the impact of large herbivores on forest growth has concluded that large herbivores are only able to create and maintain an open park-like landscape with the help of man. Grazing behaviour by livestock alters the landscape, which one organisation promotes as "natural grazing" (also called conservation grazing). The Rewilding Europe foundation advocates for "returning" lands to their "natural state" and believes that, without grazing, everything becomes forest. According to one theory, "mosaic landscapes" and gradients between different environments, from open soil to grassland, are important for biodiversity.
Approaches that aim to breed an aurochs-like phenotype do not equate to an aurochs-like genotype. One study proposed that using the mapped out genomes of prehistoric specimens it will be possible to breed back cattle "that are genetically akin to specific original aurochs populations, through selective cross-breeding of local cattle breeds bearing local aurochs-genome ancestry."
In the early 1920s, two German zoo directors (in Berlin and Munich), the brothers Heinz and Lutz Heck, began a selective breeding program to breed back the aurochs into existence from the descendant domesticated cattle. Their plan was based on the concept that a species is not extinct as long as all its genes are still present in a living population. The result is the breed called Heck cattle. According to van Vuure, it bears little resemblance to what is known about the appearance of the aurochs.
The "Arbeitsgemeinschaft Biologischer Umweltschutz", a conservation group in Germany, started to crossbreed Heck cattle with southern-European primitive breeds in 1996, with the goal of increasing the aurochs-likeness of certain Heck cattle herds. These crossbreeds are called Taurus cattle. It is intended to bring in aurochs-like features that are supposedly missing in Heck cattle using Sayaguesa Cattle and Chianina, and to a lesser extent Spanish Fighting Cattle (Lidia). The same breeding program is being carried out in Latvia, in Lille Vildmose National Park in Denmark, and in the Hungarian Hortobágy National Park. The program in Hungary also includes Hungarian Grey cattle and Watusi.
The Dutch-based Tauros Programme, (initially TaurOs Project) is trying to DNA-sequence breeds of primitive cattle to find gene sequences that match those found in "ancient DNA" from aurochs samples. The modern cattle would be selectively bred to try to produce the aurochs-type genes in a single animal. Starting around 2007, Tauros Programme selected a number of primitive breeds mainly from Iberia and Italy, such as Sayaguesa cattle, Maremmana primitivo, Pajuna cattle, Limia cattle, Maronesa cattle, Tudanca cattle, and others, which already bear considerable resemblance to the aurochs in certain features. Tauros Programme started collaborations with Rewilding Europe and European Wildlife, two European organizations for ecological restoration and rewilding, and now has breeding herds not only in the Netherlands but also in Portugal, Croatia, Romania, and the Czech Republic. Numerous crossbred calves of the first, second, and third offspring generations have already been born. An ecologist working on the Tauros programme has estimated it will take 7 generations for the project to achieve its aims, possibly by 2025.
Another back-breeding effort, the Uruz project, was started in 2013 by the True Nature Foundation, an organization for ecological restoration and rewilding. It differs from the other projects in that it is planning to make use of genome editing. In 2013 it planned to use either Sayaguesa, Maremmana primitive, Hungarian Grey (Steppe) cattle, Texas Longhorn with wild-type colour or Barrosã cattle.
Another back-breeding effort, the "Auerrindprojekt", was started in 2015 as a conjoined effort of the Experimentalarchäologisches Freilichtlabor Lauresham (run by Lorsch Abbey), the Förderkreis Große Pflanzenfresser im Kreis Bergstraße e.V. and the Landschaftspflegebetrieb Hohmeyer. The five breeds used include Watusi, Chianina, Sayaguesa, Maremmana and Hungarian Grey cattle. The project will not use Heck cattle as they have been deemed too genetically dissimilar to the extinct Aurochs, and it will not use any fighting breeds of cattle, because the breeders prefer to create a docile type of cattle.
Scientists of the Polish Foundation for Recreating the Aurochs (PFOT) in Poland hope to use DNA from bones in museums to recreate the aurochs. They plan to return this animal to the forests of Poland. The project has gained the support of the Polish Ministry of the Environment. They plan research on ancient preserved DNA. Polish scientists Ryszard Słomski and Jacek A. Modliński believe that modern genetics and biotechnology make it possible to recreate an animal similar to the aurochs.
The aurochs was an important game animal appearing in both Paleolithic European and Mesopotamian cave paintings, such as those found at Lascaux and Livernon in France. An archaeological excavation in Israel found traces of a feast held by the Natufian culture around 12,000 B.P., in which three aurochs (and numerous tortoises) were eaten; this appears to have been an uncommon occurrence in the culture, and the feast was held in conjunction with the burial of an older woman, presumably of some social status. A 2012 archaeological mission in Sidon, Lebanon, discovered the remains of numerous animal species, including an aurochs, together with a few human bones and plant foods, dating from around 3700 B.P., which appear to have been buried together in some sort of necropolis. A 1999 archaeological dig in Peterborough, England, uncovered the skull of an aurochs. The front part of the skull had been removed, but the horns remained attached. The supposition is that the killing of the aurochs in this instance was a sacrificial act.
Also during antiquity, the aurochs was regarded as an animal of cultural value. Aurochs are depicted on the Ishtar Gate. From the Peloponnese comes a 15th-century B.C. depiction on the so-called violent cup of Vaphio, showing hunters trying to capture three wild bulls, probably aurochs, with nets in what may be a Cretan date palm stand. One of the bulls throws a hunter to the ground while attacking a second with its horns. Despite the older perception that the cup is Minoan, it seems to be Mycenaean. Greeks and Paeonians hunted aurochs (wild oxen/bulls) and used their huge horns as trophies, cups for wine, and offerings to the gods and heroes. For example, according to Douglas (1927), the ox mentioned by Samus, Philippus of Thessalonica and Antipater, killed by Philip V of Macedon on the foothills of Mount Orvilos, was actually an aurochs; Philip offered the horns, which were 105 cm long, and the skin to a temple of Hercules.
They survived in the wild in Europe until late in the Roman Empire and, according to an 1847 account, were believed to have been occasionally captured and exhibited in shows ("venationes") in Roman amphitheatres such as the Colosseum. Aurochs horns were often used by Romans as hunting horns. Julius Caesar described aurochs in Gaul:
The Hebrew Bible contains numerous references to the untameable strength of the "re'em", translated as "bullock" or "wild-ox" in Jewish translations and translated rather poorly in the King James Version as "unicorn", but recognized from the last century by Hebrew scholars as the aurochs.
When the aurochs became rarer, hunting it became a privilege of the nobility and a sign of high social status. The "Nibelungenlied" describes Siegfried killing aurochs: ""Dar nâch sluoc er schiere einen wisent und einen elch / starker ûwer viere und einen grimmen schelch"" (Nibelungenlied 937.1-2), meaning "After that, he quickly defeated one wisent and one elk, four strong aurochs, and one terrible schelch." Aurochs horns were commonly used as drinking horns by the nobility, which is why many aurochs horn sheaths are preserved today (albeit often discoloured). The drinking horn at Corpus Christi College, Cambridge, given to the college on its foundation in 1352, probably by the college's founders, the Guilds of Corpus Christi and the Blessed Virgin Mary, is thought to come from an aurochs. A painting by Willem Kalf depicts an aurochs horn. The horns of the last aurochs bulls, which died in 1620, were ornamented with gold and are located at the Livrustkammaren in Stockholm today.
Schneeberger writes that aurochs were hunted with arrows, nets, and hunting dogs. With the aurochs immobilized, the curly hair on the forehead was cut from the living animal. Belts were made out of this hair and were believed to increase the fertility of women. When the aurochs was slaughtered, a cross-like bone ("os cardis") was extracted from the heart. This bone, which is also present in domesticated cattle, contributed to the mystique of the animal and magical powers have been attributed to it.
In eastern Europe, where it survived until nearly 400 years ago, the aurochs has left traces in fixed expressions. In Russia, a drunken person behaving badly was described as "behaving like an aurochs", whereas in Poland, big, strong people were characterized as being "a bloke like an aurochs".
In Central Europe, the aurochs features in toponyms and heraldic coats of arms. For example, the names Ursenbach and Aurach am Hongar are derived from the aurochs. An aurochs head, the traditional arms of the German region Mecklenburg, figures in the coat of arms of Mecklenburg-Vorpommern. The aurochs (Romanian "bour", from Latin "būbalus") was also the symbol of Moldavia; nowadays, it can be found in the coats of arms of both Romania and Moldova. An aurochs head is featured on an 1858 series of Moldavian stamps, the so-called Bull's Heads ("cap de bour" in Romanian), renowned for their rarity and price among collectors. In Romania, there are still villages named Boureni, after the Romanian word for the aurochs. The horn of the aurochs is a charge of the coat of arms of Tauragė, Lithuania (the name of Tauragė is a compound of "taũras" "aurochs" and "ragas" "horn"). It is also present in the emblem of Kaunas, Lithuania, and was part of the emblem of Bukovina during its time as an Austro-Hungarian "Kronland". The Swiss canton of Uri is named after the aurochs; its yellow flag shows a black aurochs head. The East Slavic surnames Turenin, Turishchev, Turov, and Turovsky originate from the Slavic name of the species, "tur". In Slovakia, toponyms such as Turany, Turíčky, Turie, Turie Pole, Turík, Turová (villages), Turiec (river and region), Turská dolina (valley) and others are used. Turopolje, a large lowland floodplain south of the Sava River in Croatia, also got its name from the aurochs.
Aurochs is a commonly used symbol in Estonia. The town of Tartu (and its ancient name "Tarvatu", "Tarvato" or "Tarbatu") is likely named after the Estonian word "tarvas" ("aurochs"). The ancient name of another Estonian town, Rakvere – "Tarvanpää", "Tarvanpea" or "Tarwanpe" – also derives from the same source, meaning "Aurochs' Head" in ancient Estonian. The aurochs is nowadays a symbol of Rakvere, with a well-known aurochs monument at the Rakvere Castle ruins and several "Rakvere Tarvas" sports clubs.
In 2002, a 3.5-m-high and 7.1-m-long statue of an aurochs was erected in Rakvere, Estonia, for the town's 700th birthday. The sculpture, by artist Tauno Kangro, has become a symbol of the town.
This article incorporates Creative Commons license CC BY-2.5 text from reference. | https://en.wikipedia.org/wiki?curid=2494 |
Asynchronous transfer mode
Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by ANSI and ITU (formerly CCITT) for digital transmission of multiple types of traffic, including telephony (voice), data, and video signals, in one network without the use of separate overlay networks. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network, as defined in the late 1980s, and designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as voice and video. ATM provides functionality that uses features of circuit switching and packet switching networks. It uses asynchronous time-division multiplexing, and encodes data into small, fixed-sized network packets.
In the ISO-OSI reference model data link layer (layer 2), the basic transfer units are generically called frames. In ATM these frames are of a fixed (53 octets or bytes) length and specifically called "cells". This differs from approaches such as IP or Ethernet that use variable sized packets or frames. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the data exchange begins. These virtual circuits may be either permanent, i.e. dedicated connections that are usually preconfigured by the service provider, or switched, i.e. set up on a per-call basis using signaling and disconnected when the call is terminated.
The ATM network reference model approximately maps to the three lowest layers of the OSI model: physical layer, data link layer, and network layer. ATM is a core protocol used in the SONET/SDH backbone of the public switched telephone network (PSTN) and in the Integrated Services Digital Network (ISDN), but has largely been superseded in favor of next-generation networks based in Internet Protocol (IP) technology, while wireless and mobile ATM never established a significant foothold.
If a speech signal is reduced to packets and forced to share a link with bursty data traffic (traffic with some large data packets), then no matter how small the speech packets are made, they would always encounter full-size data packets, and under normal queuing conditions might experience maximum queuing delays. To avoid this issue, all ATM packets, or "cells", are the same small size. In addition, the fixed cell structure means that ATM can be readily switched by hardware without the inherent delays introduced by software-switched and routed frames.
Thus, the designers of ATM utilized small data cells to reduce jitter (delay variance, in this case) in the multiplexing of data streams. Reduction of jitter (and also end-to-end round-trip delays) is particularly important when carrying voice traffic, because the conversion of digitized voice into an analogue audio signal is an inherently real-time process, and to do a good job, the decoder (codec) that does this needs an evenly spaced (in time) stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence or guess — and if the data is late, it is useless, because the time period when it should have been converted to a signal has already passed.
At the time of the design of ATM, 155 Mbit/s synchronous digital hierarchy (SDH) with 135 Mbit/s payload was considered a fast optical network link, and many plesiochronous digital hierarchy (PDH) links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the US, and 2 to 34 Mbit/s in Europe.
At 155 Mbit/s, a typical full-length 1,500 byte (12,000-bit) data packet, sufficient to contain a maximum-sized IP packet for Ethernet, would take 77.42 µs to transmit. In a lower-speed link, such as a 1.544 Mbit/s T1 line, the same packet would take up to 7.8 milliseconds.
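The delays quoted above follow directly from frame size and line rate; a minimal arithmetic sketch (link speeds and frame size as given in the text):

```python
def serialization_delay_us(frame_bits: int, link_bps: float) -> float:
    """Time needed to clock frame_bits onto a link running at link_bps, in microseconds."""
    return frame_bits / link_bps * 1e6

frame_bits = 1500 * 8  # a full-size 1,500-byte packet = 12,000 bits

delay_sdh = serialization_delay_us(frame_bits, 155e6)    # ~77.4 us on a 155 Mbit/s link
delay_t1 = serialization_delay_us(frame_bits, 1.544e6)   # ~7,772 us, i.e. ~7.8 ms on a T1
```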
A queuing delay induced by several such data packets might exceed the figure of 7.8 ms several times over, in addition to any packet generation delay in the shorter speech packet. This was considered unacceptable for speech traffic, which needs to have low jitter in the data stream being fed into the codec if it is to produce good-quality sound. A packet voice system can produce this low jitter in a number of ways:
The design of ATM aimed for a low-jitter network interface. However, "cells" were introduced into the design to provide short queuing delays while continuing to support datagram traffic. ATM broke up all packets, data, and voice streams into 48-byte chunks, adding a 5-byte routing header to each one so that they could be reassembled later. The choice of 48 bytes was political rather than technical. When the CCITT (now ITU-T) was standardizing ATM, parties from the United States wanted a 64-byte payload because this was felt to be a good compromise between larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice; parties from Europe wanted 32-byte payloads because the small size (and therefore short transmission times) simplifies voice applications with respect to echo cancellation. Most of the European parties eventually came around to the arguments made by the Americans, but France and a few others held out for a shorter cell length. With 32 bytes, France would have been able to implement an ATM-based voice network with calls from one end of France to the other requiring no echo cancellation. 48 bytes (plus 5 header bytes = 53) was chosen as a compromise between the two sides. 5-byte headers were chosen because it was thought that 10% of the payload was the maximum price to pay for routing information. ATM multiplexed these 53-byte cells instead of packets, which reduced worst-case cell contention jitter by a factor of almost 30, reducing the need for echo cancellers.
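The "factor of almost 30" and the 10% header budget can both be checked with back-of-the-envelope arithmetic:

```python
frame_bits = 1500 * 8  # a full-size data frame: 12,000 bits
cell_bits = 53 * 8     # one ATM cell (48-byte payload + 5-byte header): 424 bits

# Worst-case contention: a queued cell now waits at most one cell time
# instead of one full frame time.
jitter_reduction = frame_bits / cell_bits  # ~28.3, i.e. "almost 30"

# Header cost relative to the whole cell: just under the 10% target.
header_overhead = 5 / 53  # ~9.4%
```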
An ATM cell consists of a 5-byte header and a 48-byte payload. The payload size of 48 bytes was chosen as described above.
ATM defines two different cell formats: user–network interface (UNI) and network–network interface (NNI). Most ATM links use UNI cell format.
ATM uses the PT field to designate various special kinds of cells for operations, administration and management (OAM) purposes, and to delineate packet boundaries in some ATM adaptation layers (AAL). If the most significant bit (MSB) of the PT field is 0, this is a user data cell, and the other two bits are used to indicate network congestion and as a general purpose header bit available for ATM adaptation layers. If the MSB is 1, this is a management cell, and the other two bits indicate the type. (Network management segment, network management end-to-end, resource management, and reserved for future use.)
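A minimal decoder for the 3-bit PT field as described above (the value names here are paraphrased for illustration, not official ITU-T mnemonics):

```python
def classify_pt(pt: int) -> str:
    """Interpret a 3-bit ATM payload type (PT) value."""
    if pt & 0b100 == 0:  # MSB 0: user data cell
        congested = bool(pt & 0b010)  # network congestion indication
        aal_bit = bool(pt & 0b001)    # general-purpose bit available to the AAL
        return f"user data (congestion={congested}, aal_bit={aal_bit})"
    return {
        0b100: "management: network segment",
        0b101: "management: end-to-end",
        0b110: "resource management",
        0b111: "reserved for future use",
    }[pt]
```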
Several ATM link protocols use the HEC field to drive a CRC-based framing algorithm, which allows locating the ATM cells with no overhead beyond what is otherwise needed for header protection. The 8-bit CRC is used to correct single-bit header errors and detect multi-bit header errors. When multi-bit header errors are detected, the current and subsequent cells are dropped until a cell with no header errors is found.
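The HEC itself is a CRC-8 over the first four header bytes using the generator polynomial x^8 + x^2 + x + 1, XORed with the fixed coset value 0x55 (per ITU-T I.432); a bit-by-bit sketch, offered as an illustration rather than a validated implementation:

```python
def atm_hec(header: bytes) -> int:
    """CRC-8 (poly x^8 + x^2 + x + 1) over the 4 header bytes, plus coset 0x55."""
    crc = 0
    for byte in header:
        crc ^= byte
        for _ in range(8):
            # Shift out one bit; XOR in the polynomial (0x07) on overflow.
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

# A receiver hunting for cell alignment recomputes this over each candidate
# 4-byte window and checks the result against the following (fifth) byte.
```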
A UNI cell reserves the GFC field for a local flow control/submultiplexing system between users. This was intended to allow several terminals to share a single network connection, in the same way that two Integrated Services Digital Network (ISDN) phones can share a single basic rate ISDN connection. All four GFC bits must be zero by default.
The NNI cell format replicates the UNI format almost exactly, except that the 4-bit GFC field is re-allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable of addressing almost 2^12 VPs of up to almost 2^16 VCs each (in practice some of the VP and VC numbers are reserved).
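The two header layouts differ only in those first four bits; an illustrative unpacking of the 5-byte header (field order as described above):

```python
def parse_header(cell: bytes, nni: bool = False) -> dict:
    """Unpack GFC/VPI/VCI/PT/CLP/HEC from the first 5 bytes of an ATM cell."""
    b0, b1, b2, b3, hec = cell[:5]
    if nni:
        gfc = None
        vpi = (b0 << 4) | (b1 >> 4)            # 12-bit VPI (GFC bits reused)
    else:
        gfc = b0 >> 4                          # 4-bit generic flow control
        vpi = ((b0 & 0x0F) << 4) | (b1 >> 4)   # 8-bit VPI
    vci = ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4)  # 16-bit VCI
    pt = (b3 >> 1) & 0b111                     # 3-bit payload type
    clp = b3 & 0b1                             # cell loss priority bit
    return {"gfc": gfc, "vpi": vpi, "vci": vci, "pt": pt, "clp": clp, "hec": hec}
```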
ATM supports different types of services via AALs. Standardized AALs include AAL1, AAL2, and AAL5, and the rarely used AAL3 and AAL4. AAL1 is used for constant bit rate (CBR) services and circuit emulation. Synchronization is also maintained at AAL1. AAL2 through AAL4 are used for variable bitrate (VBR) services, and AAL5 for data. Which AAL is in use for a given cell is not encoded in the cell. Instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis.
Following the initial design of ATM, networks have become much faster. A 1500-byte (12,000-bit) full-size Ethernet frame takes only 1.2 µs to transmit on a 10 Gbit/s network, reducing the need for small cells to reduce jitter due to contention. Some consider that this makes a case for replacing ATM with Ethernet in the network backbone. However, the increased link speeds by themselves do not alleviate jitter due to queuing. Additionally, the hardware for implementing the service adaptation for IP packets is expensive at very high speeds. Specifically, at speeds of OC-3 and above, the cost of segmentation and reassembly (SAR) hardware makes ATM less competitive for IP than Packet over SONET (POS); because of its fixed 48-byte cell payload, ATM is not suitable as a data link layer "directly" underlying IP (without the need for SAR at the data link level), since the OSI layer on which IP operates must provide a maximum transmission unit (MTU) of at least 576 bytes. Limits based on SAR performance mean that the fastest IP router ATM interfaces run at STM16–STM64, while POS can already operate at OC-192 (STM64), with higher speeds expected in the future.
On slower or congested links (622 Mbit/s and below), ATM does make sense, and for this reason most asymmetric digital subscriber line (ADSL) systems use ATM as an intermediate layer between the physical link layer and a Layer 2 protocol like PPP or Ethernet.
At these lower speeds, ATM provides a useful ability to carry multiple logical circuits on a single physical or virtual medium, although other techniques exist, such as Multi-link PPP and Ethernet VLANs, which are optional in VDSL implementations. DSL can be used as an access method for an ATM network, allowing a DSL termination point in a telephone central office to connect to many internet service providers across a wide-area ATM network. In the United States, at least, this has allowed DSL providers to provide DSL access to the customers of many internet service providers. Since one DSL termination point can support multiple ISPs, the economic feasibility of DSL is substantially improved.
A network must establish a connection before two parties can send cells to each other. In ATM this is called a virtual circuit (VC). It can be a permanent virtual circuit (PVC), which is created administratively on the end points, or a switched virtual circuit (SVC), which is created as needed by the communicating parties. SVC creation is managed by signaling, in which the requesting party indicates the address of the receiving party, the type of service requested, and whatever traffic parameters may be applicable to the selected service. "Call admission" is then performed by the network to confirm that the requested resources are available and that a route exists for the connection.
ATM operates as a channel-based transport layer, using VCs. This is encompassed in the concept of the virtual paths (VP) and virtual channels. Every ATM cell has an 8- or 12-bit virtual path identifier (VPI) and 16-bit virtual channel identifier (VCI) pair defined in its header. The VCI, together with the VPI, is used to identify the next destination of a cell as it passes through a series of ATM switches on its way to its destination. The length of the VPI varies according to whether the cell is sent on the user-network interface (on the edge of the network), or if it is sent on the network-network interface (inside the network).
As these cells traverse an ATM network, switching takes place by changing the VPI/VCI values (label swapping). Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit "is" consistent (unlike IP, where any given packet could get to its destination by a different route than the others). ATM switches use the VPI/VCI fields to identify the virtual channel link (VCL) of the next network that a cell needs to transit on its way to its final destination. The function of the VCI is similar to that of the data link connection identifier (DLCI) in frame relay and the logical channel number and logical channel group number in X.25.
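The label-swapping step can be illustrated with a toy switching table; the `AtmSwitch` class and its method names are hypothetical, a sketch of the lookup-and-rewrite operation rather than a real switch implementation:

```python
class AtmSwitch:
    """Toy model of per-hop VPI/VCI label swapping.

    Each switch holds a table mapping (input port, VPI, VCI) to
    (output port, new VPI, new VCI); the labels are rewritten at
    every hop, so they need only be unique per link, not end to end.
    """

    def __init__(self):
        self.table = {}

    def add_vcl(self, in_port, in_vpi, in_vci, out_port, out_vpi, out_vci):
        # Install one virtual channel link segment through this switch.
        self.table[(in_port, in_vpi, in_vci)] = (out_port, out_vpi, out_vci)

    def forward(self, in_port, vpi, vci):
        """Return (out_port, vpi, vci) for the next hop, or None to drop."""
        return self.table.get((in_port, vpi, vci))
```

For example, after `add_vcl(1, 0, 100, 3, 2, 42)`, a cell arriving on port 1 with VPI 0 / VCI 100 leaves on port 3 relabelled VPI 2 / VCI 42, mirroring how the circuit stays consistent even though the labels do not.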
Another advantage of the use of virtual circuits is the ability to use them as a multiplexing layer, allowing different services (such as voice, frame relay, n×64 kbit/s channels and IP) to share a single ATM connection. The VPI is also useful for reducing the size of the switching table for groups of virtual circuits which share common paths.
ATM can build virtual circuits and virtual paths either statically or dynamically. Static circuits (permanent virtual circuits or PVCs) or paths (permanent virtual paths or PVPs) require that the circuit is composed of a series of segments, one for each pair of interfaces through which it passes.
PVPs and PVCs, though conceptually simple, require significant effort in large networks. They also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service "contract") and the two end points.
ATM networks create and remove switched virtual circuits (SVCs) on demand when requested by an end piece of equipment. One application for SVCs is to carry individual telephone calls when a network of telephone switches are inter-connected using ATM. SVCs were also used in attempts to replace local area networks with ATM.
Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network Node Interface or the Private Network-to-Network Interface (PNNI) protocol to share topology information between switches and select a route through a network. PNNI is a link-state routing protocol like OSPF and IS-IS. PNNI also includes a very powerful route summarization mechanism to allow construction of very large networks, as well as a call admission control (CAC) algorithm which determines the availability of sufficient bandwidth on a proposed route through a network in order to satisfy the service requirements of a VC or VP.
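The CAC step described above can be sketched as a simple peak-rate check along a proposed route. Real PNNI CAC uses much richer bandwidth models, so treat the `admit` helper and its link representation as illustrative assumptions only:

```python
def admit(path_links, requested_rate):
    """Simplified call admission control sketch.

    Admit the VC only if every link on the proposed route has enough
    unreserved bandwidth; on success, reserve the requested rate on
    each link. path_links is a list of dicts with 'capacity' and
    'reserved' in the same units (e.g. cells/s).
    """
    if any(l["reserved"] + requested_rate > l["capacity"] for l in path_links):
        return False  # at least one link cannot carry the new VC
    for l in path_links:
        l["reserved"] += requested_rate
    return True
```

The check-then-reserve order matters: no reservation is made unless the whole route can accept the connection, mirroring how a VC is either established end to end or refused.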
Another key ATM concept involves the traffic contract. When an ATM circuit is set up each switch on the circuit is informed of the traffic class of the connection.
ATM traffic contracts form part of the mechanism by which "quality of service" (QoS) is ensured. There are four basic types, constant bit rate (CBR), variable bit rate (VBR), available bit rate (ABR) and unspecified bit rate (UBR), plus several variants; each has a set of parameters describing the connection.
VBR has real-time and non-real-time variants, and serves for "bursty" traffic. Non-real-time is sometimes abbreviated to vbr-nrt.
Most traffic classes also introduce the concept of Cell-delay variation tolerance (CDVT), which defines the "clumping" of cells in time.
To maintain network performance, networks may apply traffic policing to virtual circuits to limit them to their traffic contracts at the entry points to the network, i.e. the user–network interfaces (UNIs) and network-to-network interfaces (NNIs), through usage parameter control (UPC) and network parameter control (NPC). The reference model given by the ITU-T and ATM Forum for UPC and NPC is the generic cell rate algorithm (GCRA), which is a version of the leaky bucket algorithm. CBR traffic will normally be policed to a PCR and CDVT alone, whereas VBR traffic will normally be policed using a dual leaky bucket controller to a PCR and CDVT as well as an SCR and maximum burst size (MBS). The MBS will normally be the packet (SAR-SDU) size for the VBR VC, in cells.
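The GCRA's virtual-scheduling form fits in a few lines. The sketch below uses arbitrary time units, with T the nominal inter-cell interval (1/PCR) and tau playing the role of CDVT; a dual leaky bucket for VBR would simply run two such instances, one for PCR/CDVT and one for SCR/MBS:

```python
class Gcra:
    """Generic cell rate algorithm, virtual-scheduling form.

    T:   nominal inter-cell interval (1/PCR), in arbitrary time units
    tau: tolerance (CDVT), how early a cell may arrive and still conform
    """

    def __init__(self, T: float, tau: float):
        self.T = T
        self.tau = tau
        self.tat = 0.0  # theoretical arrival time of the next cell

    def conforming(self, arrival: float) -> bool:
        if arrival < self.tat - self.tau:
            return False  # cell arrived too early: police it
        # Conforming: push the theoretical arrival time forward by T.
        self.tat = max(arrival, self.tat) + self.T
        return True
```

Policing a CBR stream at one cell per 10 units with tau = 2, arrivals at times 0, 10, 15 and 22 yield conforming, conforming, non-conforming (the cell at 15 is more than 2 units ahead of its theoretical time of 20) and conforming again.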
If the traffic on a virtual circuit is exceeding its traffic contract, as determined by the GCRA, the network can either drop the cells or mark the Cell Loss Priority (CLP) bit (to identify a cell as potentially redundant). Basic policing works on a cell by cell basis, but this is sub-optimal for encapsulated packet traffic (as discarding a single cell will invalidate the whole packet). As a result, schemes such as partial packet discard (PPD) and early packet discard (EPD) have been created that will discard a whole series of cells until the next packet starts. This reduces the number of useless cells in the network, saving bandwidth for full packets. EPD and PPD work with AAL5 connections as they use the end of packet marker: the ATM user-to-ATM user (AUU) indication bit in the payload-type field of the header, which is set in the last cell of a SAR-SDU.
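Partial packet discard can be sketched as a filter over the cell stream of one AAL5 VC. The `ppd` function and its (sequence number, end-of-packet) representation are my own simplification; the eop flag stands in for the AUU bit in the last cell of a SAR-SDU:

```python
def ppd(cells, lost):
    """Partial packet discard for one AAL5 virtual circuit (sketch).

    cells: sequence of (seq, eop) tuples, eop being the AUU
           end-of-packet indication carried in the last cell
    lost:  set of sequence numbers the policer decided to drop

    Once any cell of a packet is dropped, the rest of that packet is
    discarded too, except the end-of-packet cell, which must survive
    so the reassembler can find the next packet boundary.
    """
    out, discarding = [], False
    for seq, eop in cells:
        if seq in lost or (discarding and not eop):
            discarding = not eop  # keep discarding until end of packet
            continue
        discarding = False
        out.append((seq, eop))
    return out
```

Dropping one mid-packet cell thus costs the whole packet anyway, which is exactly why PPD/EPD save bandwidth: the now-useless sibling cells are removed before they consume downstream capacity.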
Traffic shaping usually takes place in the network interface card (NIC) in user equipment, and attempts to ensure that the cell flow on a VC will meet its traffic contract, i.e. cells will not be dropped or reduced in priority at the UNI. Since the reference model given for traffic policing in the network is the GCRA, this algorithm is normally used for shaping as well, and single and dual leaky bucket implementations may be used as appropriate.
The ATM network reference model approximately maps to the three lowest layers of the OSI reference model. It specifies three layers: the physical layer, the ATM layer, and the ATM adaptation layer (AAL).
ATM became popular with telephone companies and many computer makers in the 1990s. However, even by the end of the decade, the better price/performance of Internet Protocol-based products was competing with ATM technology for integrating real-time and bursty network traffic. Companies such as FORE Systems focused on ATM products, while other large vendors such as Cisco Systems provided ATM as an option. After the burst of the dot-com bubble, some still predicted that "ATM is going to dominate". However, in 2005 the ATM Forum, which had been the trade organization promoting the technology, merged with groups promoting other technologies, and eventually became the Broadband Forum.
Wireless ATM, or mobile ATM, consists of an ATM core network with a wireless access network. ATM cells are transmitted from base stations to mobile terminals. Mobility functions are performed at an ATM switch in the core network, known as "crossover switch", which is similar to the MSC (mobile switching center) of GSM networks. The advantage of wireless ATM is its high bandwidth and high speed handoffs done at layer 2. In the early 1990s, Bell Labs and NEC research labs worked actively in this field. Andy Hopper from Cambridge University Computer Laboratory also worked in this area. There was a wireless ATM forum formed to standardize the technology behind wireless ATM networks. The forum was supported by several telecommunication companies, including NEC, Fujitsu and AT&T. Mobile ATM aimed to provide high speed multimedia communications technology, capable of delivering broadband mobile communications beyond that of GSM and WLANs. | https://en.wikipedia.org/wiki?curid=2499 |
Anus
The anus (from Latin "anus" meaning "ring", "circle") is an opening at the opposite end of an animal's digestive tract from the mouth. Its function is to control the expulsion of feces, unwanted semi-solid matter produced during digestion, which, depending on the type of animal, may include: matter which the animal cannot digest, such as bones; food material after all the nutrients have been extracted, for example cellulose or lignin; ingested matter which would be toxic if it remained in the digestive tract; and dead or excess gut bacteria and other endosymbionts.
Amphibians, reptiles, and birds use the same orifice (known as the cloaca) for excreting liquid and solid wastes, for copulation and egg-laying. Monotreme mammals also have a cloaca, which is thought to be a feature inherited from the earliest amniotes via the therapsids. Marsupials have a single orifice for excreting both solids and liquids and, in females, a separate vagina for reproduction. Female placental mammals have completely separate orifices for defecation, urination, and reproduction; males have one opening for defecation and another for both urination and reproduction, although the channels flowing to that orifice are almost completely separate.
The development of the anus was an important stage in the evolution of multicellular animals. It appears to have happened at least twice, following different paths in protostomes and deuterostomes. This accompanied or facilitated other important evolutionary developments: the bilaterian body plan, the coelom, and metamerism, in which the body was built of repeated "modules" which could later specialize, such as the heads of most arthropods, which are composed of fused, specialized segments.
In animals at least as complex as an earthworm, the embryo forms a dent on one side, the blastopore, which deepens to become the archenteron, the first phase in the growth of the gut. In deuterostomes, the original dent becomes the anus while the gut eventually tunnels through to make another opening, which forms the mouth. The protostomes were so named because it was thought that in their embryos the dent formed the mouth first ("proto–" meaning "first") and the anus was formed later at the opening made by the other end of the gut. More recent research, however, shows that in protostomes the edges of the dent close up in the middle, leaving openings at the ends which become the mouth and anus. | https://en.wikipedia.org/wiki?curid=2500 |
Acantharea
The Acantharea (Acantharia) are a group of radiolarian protozoa, distinguished mainly by their strontium sulfate skeletons.
Acantharian skeletons are composed of strontium sulfate crystals secreted by vacuoles surrounding each spicule or spine. Acantharians are the only marine organisms known to biomineralize strontium sulfate as the main component of their skeletons. Unlike other radiolarians, whose skeletons are made of silica, acantharian skeletons do not fossilize, primarily because strontium sulfate is very scarce in seawater and the crystals dissolve after the acantharians die. The skeletons are made up of either ten diametric or twenty radial spicules. Diametric spicules cross the center of the cell, whereas radial spicules terminate at the center of the cell, where they form either a tight or a flexible junction depending on the species.
The cell is divided into two regions: the endoplasm and the ectoplasm. The endoplasm, at the core of the cell, contains the main organelles, including many nuclei, and is delineated from the ectoplasm by a capsular wall made of a microfibril mesh. In symbiotic species, the algal symbionts are maintained in the endoplasm. The ectoplasm consists of cytoplasmic extensions used for prey capture and also contains food vacuoles for prey digestion. The ectoplasm is surrounded by a periplasmic cortex, also made up of microfibrils, but arranged into twenty plates, each with a hole through which one spicule projects. The cortex is linked to the spines by contractile myonemes, which assist in buoyancy control by allowing the ectoplasm to expand and contract, increasing and decreasing the total volume of the cell.
The arrangement of the spines is very precise, and is described by what is called the Müllerian law, which can be described in terms of lines of latitude and longitude – the spines lie on the intersections between five of the former, symmetric about an equator, and eight of the latter, spaced uniformly. Each line of longitude carries either two "tropical" spines or one "equatorial" and two "polar" spines, in alternation. The way that the spines are joined together at the center of the cell varies and is one of the primary characteristics by which acantharians are classified. Acantharians with diametric spicules or loosely attached radial spicules are able to rearrange or shed spicules and form cysts.
The morphological classification system roughly agrees with phylogenetic trees based on the alignment of ribosomal RNA genes, although the groups are mostly polyphyletic. Holacanthida seems to have evolved first and includes molecular clades A, B, and D. Chaunacanthida evolved second and includes only one molecular clade, clade C. Arthracanthida and Symphacanthida, which have the most complex skeletons, evolved most recently and constitute molecular clades E and F.
Many acantharians, including some in clade B (Holacanthida) and all in clades E & F (Symphiacanthida and Arthracanthida), host single-celled algae within their inner cytoplasm (endoplasm). By participating in this photosymbiosis, acantharians are essentially mixotrophs: they acquire energy through both heterotrophy and autotrophy. The relationship may make it possible for acantharians to be abundant in low-nutrient regions of the oceans and may also provide extra energy necessary to maintain their elaborate strontium sulfate skeletons. It is hypothesized that the acantharians provide the algae with nutrients (N & P) that they acquire by capturing and digesting prey in return for sugar that the algae produces during photosynthesis. It is not known, however, whether the algal symbionts benefit from the relationship or if they are simply being exploited and then digested by the acantharians.
Symbiotic Holacanthida acantharians host diverse symbiont assemblages, including several genera of dinoflagellates ("Pelagodinium, Heterocapsa, Scrippsiella, Azadinium") and a haptophyte ("Chrysochromulina"). Clade E & F acantharians have a more specific symbiosis and primarily host symbionts from the haptophyte genus "Phaeocystis", although they sometimes also host "Chrysochromulina" symbionts. Clade F acantharians simultaneously host multiple species and strains of "Phaeocystis" and their internal symbiont community does not necessarily match the relative availability of potential symbionts in the surrounding environment. The mismatch between internal and external symbiont communities suggests that acantharians can be selective in choosing symbionts and probably do not continuously digest and recruit new symbionts, and maintain symbionts for extended periods of time instead.
Adults are usually multinucleated. Reproduction is thought to take place by the formation of swarmer cells (formerly referred to as "spores"), which may be flagellate. Not all life-cycle stages have been observed. Study of these organisms has been hampered mainly by the inability to maintain them in culture through successive generations. | https://en.wikipedia.org/wiki?curid=2502 |
African National Congress
The African National Congress (ANC) is the Republic of South Africa's governing political party. It has been the ruling party of post-apartheid South Africa since the election of Nelson Mandela in 1994, and has won every election since then. Cyril Ramaphosa, the incumbent President of South Africa, has served as leader of the ANC since 18 December 2017.
Founded on 8 January 1912 by John Langalibalele Dube in Bloemfontein as the South African Native National Congress (SANNC), its primary mission was to bring all Africans together as one people, to defend their rights and freedoms. This included giving full voting rights to black South Africans and mixed-race South Africans and, from 1948 onwards, to end the system of apartheid introduced by the Nationalist Party government after their election (by White voters only) in that year.
The ANC originally attempted to use non-violent protests to end apartheid; however, the Sharpeville massacre in March 1960, in which 69 black Africans were shot and killed by police and hundreds wounded during a peaceful protest, contributed to deteriorating relations with the South African government. On 8 April 1960, the administration of Charles Robberts Swart banned the ANC in South Africa. After the ban, the ANC formed the Umkhonto we Sizwe (Spear of the Nation) to fight against apartheid utilising guerrilla warfare and sabotage.
After 30 years of exiled struggle, during which many ANC members had been imprisoned or forced abroad, the country began its move towards full democracy. On 3 February 1990, State President F. W. de Klerk lifted the ban on the ANC and released Nelson Mandela from prison on 11 February 1990. On 17 March 1992, the apartheid referendum was passed by the whites-only electorate, opening the way for the end of apartheid and allowing the ANC to contest the 1994 election, which for the first time allowed all South Africans to vote for their national government. Since the 1994 election the ANC has won more than 55% of the vote in every general election, including the most recent one in 2019. However, the party has been embroiled in a number of controversies since 2011.
The founding of the SANNC was in direct response to injustice against black South Africans at the hands of the government then in power. It can be said that the SANNC had its origins in a pronouncement by Pixley ka Isaka Seme who said in 1911, "Forget all the past differences among Africans and unite in one national organisation." The SANNC was founded the following year on 8 January 1912.
The government of the newly formed Union of South Africa began a systematic oppression of black people in South Africa. The Land Act was promulgated in 1913 forcing many black South Africans from their farms into the cities and towns to work, and to restrict their movement within South Africa.
By 1919, the SANNC was leading a campaign against passes (an ID which black South Africans had to possess). However, it then became dormant in the mid-1920s. During that time, black people were also represented by the ICU and the previously white-only Communist party. In 1923, the organisation became the African National Congress, and in 1929 the ANC supported a militant mineworkers' strike.
By 1927, J.T. Gumede (president of the ANC) proposed co-operation with the Communists in a bid to revitalise the organisation, but he was voted out of power in the 1930s. This led to the ANC becoming largely ineffectual and inactive, until the mid-1940s when the ANC was remodelled as a mass movement.
The ANC responded to attacks on the rights of black South Africans, as well as calling for strikes, boycotts, and defiance. This led to a later Defiance Campaign in the 1950s, a mass movement of resistance to apartheid. The government tried to stop the ANC by banning party leaders and enacting new laws to stop the ANC, however these measures ultimately proved to be ineffective.
In 1955, the Congress of the People officially adopted the Freedom Charter, stating the core principles of the South African Congress Alliance, which consisted of the African National Congress and its allies the South African Communist Party (SACP), the South African Indian Congress, the South African Congress of Democrats (COD) and the Coloured People's Congress. The government claimed that this was a communist document, and consequently leaders of the ANC and Congress were arrested. 1960 saw the Sharpeville massacre, in which 69 people were killed when police opened fire on anti-apartheid protesters.
uMkhonto we Sizwe or MK, translated "The Spear of the Nation", was the military wing of the ANC. Partly in response to the Sharpeville massacre of 1960, individual members of the ANC found it necessary to consider violence to combat what passive protests had failed to quell.
In co-operation with the South African Communist Party, MK was founded in 1961. MK commenced the military struggle against apartheid with acts of sabotage aimed at installations of the state and, in its early stages, was reluctant to attack civilian targets; nonetheless, MK was responsible for the deaths of both civilians and members of the military. Acts committed by MK include the Church Street bombing, the Magoo's Bar bombing and the bombing of a branch of the Standard Bank in Roodepoort. It was integrated into the South African National Defence Force by 1994.
The ANC and its members were officially removed from the US terrorism watch list in 2008.
The ANC deems itself a force of national liberation in the post-apartheid era; it officially defines its agenda as the "National Democratic Revolution". The ANC is a member of the Socialist International. It also sets forth the redressing of socio-economic differences stemming from colonial- and apartheid-era policies as a central focus of ANC policy.
The National Democratic Revolution (NDR) is described as a process through which the National Democratic Society (NDS) is achieved; a society in which people are intellectually, socially, economically and politically empowered. The drivers of the NDR are also called the motive forces and are defined as the elements within society that gain from the success of the NDR. Pictured as a set of concentric circles, the centre represents the elements of society that gain the most from the success of the NDR; moving away from the centre, the gains those elements derive diminish. It is generally believed that the force occupying the centre of those concentric circles is the working class in countries with low unemployment, and the unemployed in countries with higher levels of unemployment. Theoreticians who have written about the NDR include Joe Slovo, Joel Netshitenzhe and Tshilidzi Marwala.
In 2004, the ANC declared itself to be a social democratic party.
The 53rd National Conference of the ANC, held in 2015, stated in its "Discussion Document" that "China economic development trajectory remains a leading example of the triumph of humanity over adversity. The exemplary role of the collective leadership of the Communist Party of China in this regard should be a guiding lodestar of our own struggle." It went on to state that "The collapse of the Berlin Wall and socialism in the Soviet Union and Eastern European States influenced our transition towards the negotiated political settlement in our country. The cause of events in the world changed tremendously in favour of the US led imperialism."
The ANC holds a historic alliance with the South African Communist Party (SACP) and Congress of South African Trade Unions (COSATU), known as the "Tripartite Alliance". The SACP and COSATU have not contested any election in South Africa, but field candidates through the ANC, hold senior positions in the ANC, and influence party policy and dialogue. During Mbeki's presidency, the government took a more pro-capitalist stance, often running counter to the demands of the SACP and COSATU.
Following Zuma's accession to the ANC leadership in 2007 and Mbeki's resignation as president in 2008, a number of former ANC leaders led by Mosiuoa Lekota split away from the ANC to form the Congress of the People.
On 20 December 2013, a special congress of the National Union of Metalworkers of South Africa (NUMSA), the country's biggest trade union with 338,000 members, voted to withdraw support from the ANC and SACP, and form a socialist party to protect the interests of the working class. NUMSA secretary general Irvin Jim condemned the ANC and SACP's support for big business and stated: "It is clear that the working class cannot any longer see the ANC or the SACP as its class allies in any meaningful sense."
The ANC flag comprises three equal horizontal stripes – black, green and gold. Black symbolises the native people of South Africa, green represents the land and gold represents the mineral and other natural wealth of South Africa.
This flag was also the battle flag of uMkhonto we Sizwe.
The Grand Duchy of Saxe-Weimar-Eisenach used an unrelated but identical flag from 1813 to 1897.
The black, green and gold tricolor was also used on the flag of the KwaZulu 'bantustan'.
Although the colours of the new national Flag of South Africa since the transition from apartheid in 1994 have no official meaning, the three colours of the ANC flag were included in it, together with red, white and blue.
Politicians in the party win a place in parliament by being on the "Party List", which is drawn up before the elections and enumerates, in order, the party's preferred MPs. The number of seats allocated is proportional to the popular national vote, and this determines the cut-off point.
The ANC has also gained members through the controversial floor crossing process.
Although most South African parties announced their candidate list for provincial premierships in the 2009 election, the ANC did not, as it is not required for parties to do so.
In 2001, the ANC launched an online weekly web-based newsletter, "ANC Today – Online Voice of the African National Congress" to offset the alleged bias of the press. It consists mainly of updates on current programmes and initiatives of the ANC.
The ANC represented the main opposition to the government during apartheid and therefore they played a major role in resolving the conflict through participating in the peacemaking and peace-building processes. Initially intelligence agents of the National Party met in secret with ANC leaders, including Nelson Mandela, to judge whether conflict resolution was possible. Discussions and negotiations took place leading to the eventual unbanning of the ANC and other opposing political parties by then President de Klerk on 2 February 1990.
The next official step towards rebuilding South Africa was the Groote Schuur Minute where the government and the ANC agreed on a common commitment towards the resolution of the existing climate of violence and intimidation, as well as a commitment to stability and to a peaceful process of negotiations. The ANC negotiated the release of political prisoners and the indemnity from prosecution for returning exiles and moreover channels of communication were established between the Government and the ANC.
Later the Pretoria Minute represented another step towards resolution where agreements at Groote Schuur were reconsolidated and steps towards setting up an interim government and drafting a new constitution were established as well as suspension of the military wing of the ANC – the Umkhonto we Sizwe. This step helped end much of the violence within South Africa. Another agreement that came out of the Pretoria Minute was that both parties would try and raise awareness that a new way of governance was being created for South Africa, and that further violence would only hinder this process. However, violence still continued in Kwazulu-Natal, which violated the trust between Mandela and de Klerk. Moreover, internal disputes in the ANC prolonged the war as consensus on peace was not reached.
The next significant steps towards resolution were the Repeal of the Population Registration Act, the repeal of the Group Areas and the Native Land Acts and a catch-all Abolition of Racially Based Land Measures Act was passed. These measures ensured no one could claim, or be deprived of, any land rights on the basis of race.
In December 1991 the Convention for a Democratic South Africa (CODESA) was held with the aim of establishing an interim government. However, a few months later in June 1992 the Boipatong massacre occurred and all negotiations crumbled as the ANC pulled out. After this negotiations proceeded between two agents, Cyril Ramaphosa of the ANC, and Roelf Meyer of the National Party. In over 40 sessions the two men discussed and negotiated over many issues including the nature of the future political system, the fate of over 40,000 government employees and if/how the country would be divided. The result of these negotiations was an interim constitution that meant the transition from apartheid to democracy was a constitutional continuation and that the rule of law and state sovereignty remained intact during the transition, which was vital for stability within the country. A date was set for the first democratic elections on 27 April 1994. The ANC won 62.5% of the votes and has been in power ever since.
The most prominent corruption case involving the ANC relates to a series of bribes paid to companies involved in the ongoing R55 billion Arms Deal saga, which resulted in a long-term jail sentence for Schabir Shaik, legal adviser to then Deputy President Jacob Zuma. Zuma, the former South African President, was charged with fraud, bribery and corruption in the Arms Deal, but the charges were subsequently withdrawn by the National Prosecuting Authority of South Africa due to the delay in prosecution. The ANC has also been criticised for its subsequent abolition of the Scorpions, the multidisciplinary agency that investigated and prosecuted organised crime and corruption, and was heavily involved in the investigation into Zuma and Shaik. Tony Yengeni, in his position as chief whip of the ANC and head of Parliament's defence committee, has been named as being involved in bribing the German company ThyssenKrupp over the purchase of four corvettes for the SANDF.
Other recent corruption issues include the sexual misconduct and criminal charges of Beaufort West municipal manager Truman Prince, and the Oilgate scandal, in which millions of Rand in funds from a state-owned company were funnelled into ANC coffers.
The ANC has also been accused of using government and civil society to fight its political battles against opposition parties such as the Democratic Alliance. The result has been a number of complaints and allegations that none of the political parties truly represent the interests of the poor. This has resulted in the "No Land! No House! No Vote!" Campaign which became very prominent during elections.
In 2018, the "New York Times" reported on the killings of ANC corruption whistleblowers.
In late 2011 the ANC was heavily criticised over the passage of the Protection of State Information Bill, which opponents claimed would improperly restrict the freedom of the press. Opposition to the bill included otherwise ANC-aligned groups such as COSATU. Notably, Nelson Mandela and other Nobel laureates Nadine Gordimer, Archbishop Desmond Tutu, and F. W. de Klerk have expressed disappointment with the bill for not meeting standards of constitutionality and aspirations for freedom of information and expression.
The ANC has been criticised for its role in failing to prevent the 16 August 2012 massacre of Lonmin miners at Marikana in the North West province. Some allege that Police Commissioner Riah Phiyega and Police Minister Nathi Mthethwa, a close confidant of Jacob Zuma, may have given the go-ahead for the police action against the miners on that day.
Commissioner Phiyega of the ANC came under further criticism for being insensitive and uncaring when she was caught smiling and laughing during the Farlam Commission's video playback of the massacre. Archbishop Desmond Tutu announced that he can no longer bring himself to vote for the ANC, as it is no longer the party that he and Nelson Mandela fought for; in his view, the party has lost its way and is in danger of becoming a corrupt entity in power.
The ANC has a growing list of constitutional failures. One of the most prominent relates to the president of the ANC and of the Republic, Jacob Zuma, and the security upgrades to his Nkandla homestead, valued at around R250 million. His swimming pool, for example, was termed a 'fire pool' and his amphitheatre an 'emergency meeting point', thus leaving the taxpayer to carry the costs. After the Public Protector released her report (Secure in Comfort), which found that Zuma must pay back the money spent on the non-security features, he refused to do so. In 2016 the Constitutional Court ruled that Zuma, as well as the National Assembly, had "breached the Constitution" and failed to uphold it. Zuma apologised to the nation as follows: "The matter has caused a lot of frustration and confusion for which I apologise on my behalf and on behalf of government." However, he claimed not to have asked for nor known about the non-security upgrades, despite the media reporting on them almost daily.
There is also a growing trend for ANC members, as well as individuals appointed by the ANC to public positions of power, to misrepresent their qualifications. The result of such misrepresentations is typically that those appointed are unable to fulfil their obligations while being paid very large salaries, and that they cost the taxpayer large amounts of money while attempting to defend themselves in court. A small selection follows:
Carl Niehaus, who served as an ANC spokesperson, claimed to hold a B.A., a Master's and a doctorate; in reality he never received the Master's or doctoral degree.
Pallo Jordan, who served as Minister of Arts and Culture, claimed to be in possession of a PhD, when in reality he has no tertiary education at all.
Daniel Mtimkulu, who was employed as chief engineer at Passenger Rail Agency of South Africa (Prasa) claimed to have a PhD in engineering, which was a lie; he was merely qualified as an engineering technician. Under Mtimkulu's leadership, Prasa ordered 70 new locomotives, valued at R3.5 billion. The first 13 Afro 4000 diesel locomotives to arrive, at a cost of R600 million, were too tall to be of use on their intended routes.
Ellen Tshabalala, former chairperson of the South African Broadcasting Corporation (SABC), claimed to have a BComm degree. In reality her marks were so poor that she was not allowed to rewrite some of her exams. She later claimed that her degree certificate was stolen. Defending Tshabalala in court cost the SABC more than R1 million.
Hlaudi Motsoeneng, former COO of the SABC, lied about being in possession of a Matric certificate. By his own admission, he simply invented marks for himself. He was appointed by Ellen Tshabalala, and his various court cases have cost the SABC more than R1.5 million. Further, under Motsoeneng's tenure, the broadcaster recorded a net loss of R411 million in the 2015/16 financial year.
Dudu Myeni, chairperson of South African Airways (SAA) and good friend of Jacob Zuma, claimed to have a Bachelor's degree in administration. This was proven false. Under her leadership "SAA's losses for the 2014/15 financial year were R5.6-billion – close to R1-billion more than the expected amount of R4.7-billion".
Sicelo Shiceka, Minister of Cooperative Governance and Traditional Affairs, lied about being in possession of a Master's degree. He used taxpayers' money to fund a party for his mother and secured a government car for his girlfriend, whereafter he was appointed as a member of the inter-ministerial task team on corruption.
A 2016 statement issued by Zizi Kodwa, the ANC National Spokesperson, states that "[t]he ANC rejects these [racist] comments with the contempt they deserve and calls on all South Africans to join in the rejection of all racists in our country, wherever they are. It is sad that well meaning South Africans have to contend with this backward attitude." In support of this statement, the ANC has publicly called for legal action to be taken against whites who have publicly made racist comments against blacks, usually through social media.
Penny Sparrow is one such high-profile case. She posted the following through her Facebook account: "These monkeys that are allowed to be released on New Year's eve and New Year's day on to public beaches towns etc obviously have no education what so ever so to allow them loose is inviting huge dirt and troubles and discomfort to others. I'm sorry to say that I was among the revellers and all I saw were black on black skins what a shame. I do know some wonderful and thoughtful black people. This lot of monkeys just don't want to even try. But think they can voice opinions about statute and get their way oh dear. From now I shall address the blacks of South Africa as monkeys as I see the cute little wild monkeys do the same, pick drop and litter."
Sparrow pleaded guilty to crimen injuria and was presented with a choice of either paying a R5,000 fine or serving 12 months in jail, in addition to paying the legal fees incurred by the ANC, which brought the matter to court. In a separate instance, she was also ordered to pay R150,000 to the Oliver and Adelaide Tambo Trust.
In contrast to the ANC's swift and decisive action towards Sparrow and other white racists, they have mostly ignored racist comments voiced by blacks, in particular ANC members. For example, Kenny Barrel Nkosi, an ANC ward councillor (Govan Mbeki Municipality, Mpumalanga) posted the following on his Facebook account: "The first people that need to fokkof [fuck off] are whites, cubans never oppressed us. these are our true friends they were there in the times on needs. welcom cdes welcome [sic]" The municipality issued the following statement: "The matter has been investigated and at the time of the comment, the ward councillor was not representing the views of either the ANC or the Govan Mbeki Municipality, but merely as a personal opinion." No further action was taken.
At a Gupta family wedding held at Sun City in 2013, various incidents of racism occurred. The family made clear that they wanted only white workers, including waiters, security, bar staff and cleaning staff. Black workers were told to wash before they interacted with guests. These allegations were denied by the Gupta family. Nonetheless, the Gupta e-mail leak of 2017 showed these allegations to be correct. Moreover, the e-mails also make clear that a black worker was called a monkey by a member of the Gupta family. That the Gupta family was a large, vocal and powerful supporter of the ANC and a personal friend of Jacob Zuma may explain why no action was taken against them with regard to racism.
Lindiwe Sisulu, ANC member and Minister of Defence and Military Veterans (who demanded that the Estate Agency Affairs Board report to her regarding action taken against Sparrow), called the Democratic Alliance leader, Mmusi Maimane, a "hired native". Ironically, given that Chris Hart, a prominent economist and investment strategist at Standard Bank, was forced to resign for his racist tweet stating that "[m]ore than 25 years after Apartheid ended, the victims are increasing along with a sense of entitlement and hatred towards minorities…", Sisulu said the following while discussing the 2.3 million housing backlog: "What makes an 18-year-old think the state owes them a house? It's a culture of entitlement … we can't continue with a dependency culture." No action has been taken against Sisulu.
Lulu Xingwana, former ANC Minister of Women, Children and People with Disabilities, stated that "[y]oung Afrikaner men are brought up in the Calvinist religion believing that they own a woman, they own a child, they own everything and therefore they can take that life because they own it". The minister apologised, and no further action was taken against her.
Jimmy Manyi, ANC director general of labour and later ANC spokesperson, is quoted as saying the following in a TV interview: "I think its very important for coloured people in this country to understand that South Africa belongs to them in totality, not just the Western Cape. So this over-concentration of coloureds in the Western Cape is not working for them. They should spread in the rest of the country ... so they must stop this over-concentration situation because they are in over-supply where they are so you must look into the country and see where you can meet the supply." No action has been taken against Manyi.
Julius Malema, former ANCYL leader and current EFF leader, stated at a political rally in 2016 that "We [the EFF] are not calling for the slaughter of white people‚ at least for now". When asked for comment by a news agency, the ANC spokesperson, Zizi Kodwa, stated that there would be no comment from the ANC, as "[h]e [Malema] was addressing his own party supporters." While still the ANCYL leader, Malema was taken to the Equality Court by AfriForum for repeatedly singing "dubul' ibhunu", which translates as "shoot the boer [white farmer]". The ANC supported Malema, though AfriForum and the ANC reached a settlement before the appeal case was due to be argued in the Supreme Court of Appeal.
In partial response to the Penny Sparrow case, Velaphi Khumalo, while working for the Department of Sport, Arts, Culture and Recreation, posted the following on his Facebook account: "I want to cleans this country of all white people. we must act as Hitler did to the Jews. I don't believe any more that the is a large number of not so racist whit people. I'm starting to be sceptical even of those within our Movement the ANC. I will from today unfriend all white people I have as friends from today u must be put under the same blanket as any other racist white because secretly u all are a bunch of racist fuck heads. as we have already seen [all sic]." He also posted: "Noo seriously though u oppressed us when u were a minority and then manje u call us monkeys and we supposed to let it slide . white people in south Africa deserve to be hacked and killed like Jews. U have the same venom moss . look at Palestine . noo u must be bushed alive and skinned and your off springs used as garden fertiliser [all sic]".
The Department of Sport, Arts, Culture and Recreation responded with a statement wherein it "views the hateful post by Velaphi Khumalo in a serious light. Our key mandate is nation-building and social cohesion. His sentiments take our country backwards and do not reflect what the Gauteng provincial government stands for." Khumalo was suspended on full pay while an investigation was undertaken, was found guilty by an internal disciplinary procedure, and was issued with a warning, whereafter he resumed his work at the department.
Esethu Hasane, Media and Communication Manager for the Department of Sport and Recreation, tweeted the following during the severe drought in the Western Cape in 2017: "Only Western Cape still has dry dams. Please God, we have black people there, choose another way of punishing white people." Despite calls for his dismissal, no action was taken.
Amphetamine
Amphetamine (contracted from alpha-methylphenethylamine) is a central nervous system (CNS) stimulant marketed under the brand name Evekeo, among others. It is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity. Amphetamine was discovered in 1887 and exists as two enantiomers: levoamphetamine and dextroamphetamine. "Amphetamine" properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers, levoamphetamine and dextroamphetamine, in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use.
The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Currently, pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems.
At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses. Very high doses can result in psychosis (e.g., delusions and paranoia) which rarely occurs at therapeutic doses even during long-term use. Recreational doses are generally much larger than prescribed therapeutic doses and carry a far greater risk of serious side effects.
Amphetamine belongs to the phenethylamine class. It is also the parent compound of its own structural class, the substituted amphetamines, which includes prominent substances such as bupropion, cathinone, MDMA, and methamphetamine. As a member of the phenethylamine class, amphetamine is also chemically related to the naturally occurring trace amine neuromodulators, specifically phenethylamine and N-methylphenethylamine, both of which are produced within the human body. Phenethylamine is the parent compound of amphetamine, while N-methylphenethylamine is a positional isomer of amphetamine that differs only in the placement of the methyl group.
Long-term amphetamine exposure at sufficiently high doses in some animal species is known to produce abnormal dopamine system development or nerve damage, but, in humans with ADHD, pharmaceutical amphetamines, at therapeutic dosages, appear to improve brain development and nerve growth. Reviews of magnetic resonance imaging (MRI) studies suggest that long-term treatment with amphetamine decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia.
Reviews of clinical stimulant research have established the safety and effectiveness of long-term continuous amphetamine use for the treatment of ADHD. Randomized controlled trials of continuous stimulant therapy for the treatment of ADHD spanning 2 years have demonstrated treatment effectiveness and safety. Two reviews have indicated that long-term continuous stimulant therapy for ADHD is effective for reducing the core symptoms of ADHD (i.e., hyperactivity, inattention, and impulsivity), enhancing quality of life and academic achievement, and producing improvements in a large number of functional outcomes across 9 categories of outcomes related to academics, antisocial behavior, driving, non-medicinal drug use, obesity, occupation, self-esteem, service use (i.e., academic, occupational, health, financial, and legal services), and social function. One review highlighted a nine-month randomized controlled trial of amphetamine treatment for ADHD in children that found an average increase of 4.5 IQ points, continued increases in attention, and continued decreases in disruptive behaviors and hyperactivity. Another review indicated that, based upon the longest follow-up studies conducted to date, lifetime stimulant therapy that begins during childhood is continuously effective for controlling ADHD symptoms and reduces the risk of developing a substance use disorder as an adult.
Current models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems; these functional impairments involve impaired dopamine neurotransmission in the mesocorticolimbic projection and norepinephrine neurotransmission in the noradrenergic projections from the locus coeruleus to the prefrontal cortex. Psychostimulants like methylphenidate and amphetamine are effective in treating ADHD because they increase neurotransmitter activity in these systems. Approximately 80% of those who use these stimulants see improvements in ADHD symptoms. Children with ADHD who use stimulant medications generally have better relationships with peers and family members, perform better in school, are less distractible and impulsive, and have longer attention spans. The Cochrane reviews on the treatment of ADHD in children, adolescents, and adults with pharmaceutical amphetamines stated that short-term studies have demonstrated that these drugs decrease the severity of symptoms, but they have higher discontinuation rates than non-stimulant medications due to their adverse side effects. A Cochrane review on the treatment of ADHD in children with tic disorders such as Tourette syndrome indicated that stimulants in general do not make tics worse, but high doses of dextroamphetamine could exacerbate tics in some individuals.
In 2015, a systematic review and a meta-analysis of high quality clinical trials found that, when used at low (therapeutic) doses, amphetamine produces modest yet unambiguous improvements in cognition, including working memory, long-term episodic memory, inhibitory control, and some aspects of attention, in normal healthy adults; these cognition-enhancing effects of amphetamine are known to be partially mediated through the indirect activation of both dopamine receptor D1 and adrenoceptor α2 in the prefrontal cortex. A systematic review from 2014 found that low doses of amphetamine also improve memory consolidation, in turn leading to improved recall of information. Therapeutic doses of amphetamine also enhance cortical network efficiency, an effect which mediates improvements in working memory in all individuals. Amphetamine and other ADHD stimulants also improve task saliency (motivation to perform a task) and increase arousal (wakefulness), in turn promoting goal-directed behavior. Stimulants such as amphetamine can improve performance on difficult and boring tasks and are used by some students as a study and test-taking aid. Based upon studies of self-reported illicit stimulant use, a fraction of college students use diverted ADHD stimulants, which are primarily used for enhancement of academic performance rather than as recreational drugs. However, high amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of cognitive control.
Amphetamine is used by some athletes for its psychological and athletic performance-enhancing effects, such as increased endurance and alertness; however, non-medical amphetamine use is prohibited at sporting events that are regulated by collegiate, national, and international anti-doping agencies. In healthy people at oral therapeutic doses, amphetamine has been shown to increase muscle strength, acceleration, athletic performance in anaerobic conditions, and endurance (i.e., it delays the onset of fatigue), while improving reaction time. Amphetamine improves endurance and reaction time primarily through reuptake inhibition and release of dopamine in the central nervous system. Amphetamine and other dopaminergic drugs also increase power output at fixed levels of perceived exertion by overriding a "safety switch", allowing the core temperature limit to increase in order to access a reserve capacity that is normally off-limits. At therapeutic doses, the adverse effects of amphetamine do not impede athletic performance; however, at much higher doses, amphetamine can induce effects that severely impair performance, such as rapid muscle breakdown and elevated body temperature.
According to the International Programme on Chemical Safety (IPCS) and the United States Food and Drug Administration (USFDA), amphetamine is contraindicated in people with a history of drug abuse, cardiovascular disease, severe agitation, or severe anxiety. It is also contraindicated in individuals with advanced arteriosclerosis (hardening of the arteries), glaucoma (increased eye pressure), hyperthyroidism (excessive production of thyroid hormone), or moderate to severe hypertension. These agencies indicate that people who have experienced allergic reactions to other stimulants or who are taking monoamine oxidase inhibitors (MAOIs) should not take amphetamine, although safe concurrent use of amphetamine and monoamine oxidase inhibitors has been documented. These agencies also state that anyone with anorexia nervosa, bipolar disorder, depression, hypertension, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome should monitor their symptoms while taking amphetamine. Evidence from human studies indicates that therapeutic amphetamine use does not cause developmental abnormalities in the fetus or newborns (i.e., it is not a human teratogen), but amphetamine abuse does pose risks to the fetus. Amphetamine has also been shown to pass into breast milk, so the IPCS and the USFDA advise mothers to avoid breastfeeding when using it. Due to the potential for reversible growth impairments, the USFDA advises monitoring the height and weight of children and adolescents prescribed an amphetamine pharmaceutical.
Cardiovascular side effects can include hypertension or hypotension from a vasovagal response, Raynaud's phenomenon (reduced blood flow to the hands and feet), and tachycardia (increased heart rate). Sexual side effects in males may include erectile dysfunction, frequent erections, or prolonged erections. Gastrointestinal side effects may include abdominal pain, constipation, diarrhea, and nausea. Other potential physical side effects include appetite loss, blurred vision, dry mouth, excessive grinding of the teeth, nosebleed, profuse sweating, rhinitis medicamentosa (drug-induced nasal congestion), reduced seizure threshold, tics (a type of movement disorder), and weight loss. Dangerous physical side effects are rare at typical pharmaceutical doses.
Amphetamine stimulates the medullary respiratory centers, producing faster and deeper breaths. In a normal person at therapeutic doses, this effect is usually not noticeable, but when respiration is already compromised, it may be evident. Amphetamine also induces contraction in the urinary bladder sphincter, the muscle which controls urination, which can result in difficulty urinating. This effect can be useful in treating bed wetting and loss of bladder control. The effects of amphetamine on the gastrointestinal tract are unpredictable. If intestinal activity is high, amphetamine may reduce gastrointestinal motility (the rate at which content moves through the digestive system); however, amphetamine may increase motility when the smooth muscle of the tract is relaxed. Amphetamine also has a slight analgesic effect and can enhance the pain relieving effects of opioids.
USFDA-commissioned studies from 2011 indicate that in children, young adults, and adults there is no association between serious adverse cardiovascular events (sudden death, heart attack, and stroke) and the medical use of amphetamine or other ADHD stimulants. However, amphetamine pharmaceuticals are contraindicated in individuals with cardiovascular disease.
At normal therapeutic doses, the most common psychological side effects of amphetamine include increased alertness, apprehension, concentration, initiative, self-confidence and sociability, mood swings (elated mood followed by mildly depressed mood), insomnia or wakefulness, and decreased sense of fatigue. Less common side effects include anxiety, change in libido, grandiosity, irritability, repetitive or obsessive behaviors, and restlessness; these effects depend on the user's personality and current mental state. Amphetamine psychosis (e.g., delusions and paranoia) can occur in heavy users. Although very rare, this psychosis can also occur at therapeutic doses during long-term therapy. According to the USFDA, "there is no systematic evidence" that stimulants produce aggressive behavior or hostility.
Amphetamine has also been shown to produce a conditioned place preference in humans taking therapeutic doses, meaning that individuals acquire a preference for spending time in places where they have previously used amphetamine.
Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses; in fact, lifetime stimulant therapy for ADHD that begins during childhood reduces the risk of developing substance use disorders as an adult. Pathological overactivation of the mesolimbic pathway, a dopamine pathway that connects the ventral tegmental area to the nucleus accumbens, plays a central role in amphetamine addiction. Individuals who frequently self-administer high doses of amphetamine have a high risk of developing an amphetamine addiction, since chronic use at high doses gradually increases the level of accumbal ΔFosB, a "molecular switch" and "master control protein" for addiction. Once nucleus accumbens ΔFosB is sufficiently overexpressed, it begins to increase the severity of addictive behavior (i.e., compulsive drug-seeking) with further increases in its expression. While there are currently no effective drugs for treating amphetamine addiction, regularly engaging in sustained aerobic exercise appears to reduce the risk of developing such an addiction. Sustained aerobic exercise on a regular basis also appears to be an effective treatment for amphetamine addiction; exercise therapy improves clinical treatment outcomes and may be used as an adjunct therapy with behavioral therapies for addiction.
Chronic use of amphetamine at excessive doses causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. The most important transcription factors that produce these alterations are "Delta FBJ murine osteosarcoma viral oncogene homolog B" (ΔFosB), "cAMP response element binding protein" (CREB), and "nuclear factor-kappa B" (NF-κB). ΔFosB is the most significant biomolecular mechanism in addiction because ΔFosB overexpression (i.e., an abnormally high level of gene expression which produces a pronounced gene-related phenotype) in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and regulates multiple behavioral effects (e.g., reward sensitization and escalating drug self-administration) involved in addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others.
ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both oppose the function of ΔFosB and inhibit increases in its expression. Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). Similarly, accumbal G9a hyperexpression results in markedly increased histone 3 lysine residue 9 dimethylation (H3K9me2) and blocks the induction of ΔFosB-mediated neural and behavioral plasticity by chronic drug use, which occurs via H3K9me2-mediated repression of transcription factors for ΔFosB and H3K9me2-mediated repression of various ΔFosB transcriptional targets (e.g., CDK5). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. Since both natural rewards and addictive drugs induce the expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. Consequently, ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sexual addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sexual addictions are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs.
The effects of amphetamine on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. This suggests that medical use of amphetamine does not significantly affect gene regulation.
There is currently no effective pharmacotherapy for amphetamine addiction. Reviews from 2015 and 2016 indicated that TAAR1-selective agonists have significant therapeutic potential as a treatment for psychostimulant addictions; however, the only compounds which are known to function as TAAR1-selective agonists are experimental drugs. Amphetamine addiction is largely mediated through increased activation of dopamine receptors and NMDA receptors in the nucleus accumbens; magnesium ions inhibit NMDA receptors by blocking the receptor calcium channel. One review suggested that, based upon animal testing, pathological (addiction-inducing) psychostimulant use significantly reduces the level of intracellular magnesium throughout the brain. Supplemental magnesium treatment has been shown to reduce amphetamine self-administration (i.e., doses given to oneself) in humans, but it is not an effective monotherapy for amphetamine addiction.
A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in RCTs for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil.
A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these.
Additionally, research on the neurobiological effects of physical exercise suggests that daily aerobic exercise, especially endurance exercise (e.g., marathon running), prevents the development of drug addiction and is an effective adjunct therapy (i.e., a supplemental treatment) for amphetamine addiction. Exercise leads to better treatment outcomes when used as an adjunct treatment, particularly for psychostimulant addictions. In particular, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces increased dopamine receptor D2 (DRD2) density in the striatum. This is the opposite of pathological stimulant use, which induces decreased striatal DRD2 density. One review noted that exercise may also prevent the development of a drug addiction by altering ΔFosB immunoreactivity in the striatum or other parts of the reward system.
Drug tolerance develops rapidly in amphetamine abuse (i.e., recreational amphetamine use), so periods of extended abuse require increasingly larger doses of the drug in order to achieve the same effect.
According to a Cochrane review on withdrawal in individuals who compulsively use amphetamine and methamphetamine, "when chronic heavy users abruptly discontinue amphetamine use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose." This review noted that withdrawal symptoms in chronic, high-dose users are frequent, occurring in roughly 88% of cases, and persist for weeks with a marked "crash" phase occurring during the first week. Amphetamine withdrawal symptoms can include anxiety, drug craving, depressed mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and lucid dreams. The review indicated that the severity of withdrawal symptoms is positively correlated with the age of the individual and the extent of their dependence. Mild withdrawal symptoms from the discontinuation of amphetamine treatment at therapeutic doses can be avoided by tapering the dose.
An amphetamine overdose can lead to many different symptoms, but is rarely fatal with appropriate care. The severity of overdose symptoms increases with dosage and decreases with drug tolerance to amphetamine. Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day, which is roughly 100 times the maximum daily therapeutic dose. Symptoms of a moderate and an extremely large overdose are listed below; fatal amphetamine poisoning usually also involves convulsions and coma. In 2013, overdose on amphetamine, methamphetamine, and related compounds resulted in an estimated 3,788 deaths worldwide.
In rodents and primates, sufficiently high doses of amphetamine cause dopaminergic neurotoxicity, or damage to dopamine neurons, which is characterized by dopamine terminal degeneration and reduced transporter and receptor function. There is no evidence that amphetamine is directly neurotoxic in humans. However, large doses of amphetamine may indirectly cause dopaminergic neurotoxicity as a result of hyperpyrexia, the excessive formation of reactive oxygen species, and increased autoxidation of dopamine. Animal models of neurotoxicity from high-dose amphetamine exposure indicate that the occurrence of hyperpyrexia (i.e., core body temperature ≥ 40 °C) is necessary for the development of amphetamine-induced neurotoxicity. Prolonged elevations of brain temperature above 40 °C likely promote the development of amphetamine-induced neurotoxicity in laboratory animals by facilitating the production of reactive oxygen species, disrupting cellular protein function, and transiently increasing blood–brain barrier permeability.
An amphetamine overdose can result in a stimulant psychosis that may involve a variety of symptoms, such as delusions and paranoia. A Cochrane review on treatment for amphetamine, dextroamphetamine, and methamphetamine psychosis states that some users fail to recover completely. According to the same review, there is at least one trial showing that antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Psychosis rarely arises from therapeutic use.
Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of the enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with monoamine oxidase inhibitors (MAOIs), particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine, respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD.
In general, there is no significant interaction when consuming amphetamine with food, but the pH of gastrointestinal content and urine affects the absorption and excretion of amphetamine, respectively. Acidic substances reduce the absorption of amphetamine and increase urinary excretion, and alkaline substances do the opposite. Due to the effect pH has on absorption, amphetamine also interacts with gastric acid reducers such as proton pump inhibitors and H2 antihistamines, which increase gastrointestinal pH (i.e., make it less acidic).
Amphetamine exerts its behavioral effects by altering the use of monoamines as neuronal signals in the brain, primarily in catecholamine neurons in the reward and executive function pathways of the brain. The concentrations of the main neurotransmitters involved in reward circuitry and executive functioning, dopamine and norepinephrine, are increased dramatically in a dose-dependent manner by amphetamine because of its effects on monoamine transporters. The reinforcing and motivational salience-promoting effects of amphetamine are due mostly to enhanced dopaminergic activity in the mesolimbic pathway. The euphoric and locomotor-stimulating effects of amphetamine depend upon the magnitude and speed by which it increases synaptic dopamine and norepinephrine concentrations in the striatum.
Amphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a G protein-coupled receptor (GPCR) discovered in 2001, which is important for regulation of brain monoamines. Activation of TAAR1 increases cAMP production via adenylyl cyclase activation and inhibits monoamine transporter function. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is site specific and depends upon the presence of TAAR1 in the associated monoamine neurons.
In addition to the neuronal monoamine transporters, amphetamine also inhibits both vesicular monoamine transporters, VMAT1 and VMAT2, as well as SLC1A1, SLC22A3, and SLC22A5. SLC1A1 is excitatory amino acid transporter 3 (EAAT3), a glutamate transporter located in neurons; SLC22A3 is an extraneuronal monoamine transporter present in astrocytes; and SLC22A5 is a high-affinity carnitine transporter. Amphetamine is known to strongly induce cocaine- and amphetamine-regulated transcript (CART) gene expression, a neuropeptide involved in feeding behavior, stress, and reward, which induces observable increases in neuronal development and survival "in vitro". The CART receptor has yet to be identified, but there is significant evidence that CART binds to a unique receptor. Amphetamine also inhibits monoamine oxidases at very high doses, resulting in less monoamine and trace amine metabolism and consequently higher concentrations of synaptic monoamines. In humans, the only post-synaptic receptor at which amphetamine is known to bind is the 5-HT1A receptor, where it acts as an agonist with low micromolar affinity.
The full profile of amphetamine's short-term drug effects in humans is mostly derived through increased cellular communication or neurotransmission of dopamine, serotonin, norepinephrine, epinephrine, histamine, CART peptides, endogenous opioids, adrenocorticotropic hormone, corticosteroids, and glutamate, which it effects through interactions with TAAR1, the monoamine transporters, VMAT2, EAAT3, and possibly other biological targets. Amphetamine also activates seven human carbonic anhydrase enzymes, several of which are expressed in the human brain.
Dextroamphetamine is a more potent agonist of TAAR1 than levoamphetamine. Consequently, dextroamphetamine produces roughly three to four times more stimulation than levoamphetamine; however, levoamphetamine has slightly stronger cardiovascular and peripheral effects.
In certain brain regions, amphetamine increases the concentration of dopamine in the synaptic cleft. Amphetamine can enter the presynaptic neuron either through DAT or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine produces competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, amphetamine activates TAAR1 which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation. Phosphorylation by either protein kinase can result in DAT internalization (non-competitive reuptake inhibition), but PKC-mediated phosphorylation alone induces the reversal of dopamine transport through DAT (i.e., dopamine efflux). Amphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through an unidentified Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent pathway, in turn producing dopamine efflux. Through direct activation of G protein-coupled inwardly-rectifying potassium channels, TAAR1 reduces the firing rate of dopamine neurons, preventing a hyper-dopaminergic state.
Amphetamine is also a substrate for the presynaptic vesicular monoamine transporter, VMAT2. Following amphetamine uptake at VMAT2, amphetamine induces the collapse of the vesicular pH gradient, which results in the release of dopamine molecules from synaptic vesicles into the cytosol via dopamine efflux through VMAT2. Subsequently, the cytosolic dopamine molecules are released from the presynaptic neuron into the synaptic cleft via reverse transport at DAT.
Similar to dopamine, amphetamine dose-dependently increases the level of synaptic norepinephrine, the direct precursor of epinephrine. Based upon neuronal TAAR1 and norepinephrine transporter (NET) expression, amphetamine is thought to affect norepinephrine analogously to dopamine. In other words, amphetamine induces TAAR1-mediated efflux and reuptake inhibition at phosphorylated NET, competitive NET reuptake inhibition, and norepinephrine release from VMAT2.
Amphetamine exerts analogous, yet less pronounced, effects on serotonin as on dopamine and norepinephrine. Amphetamine affects serotonin via the serotonin transporter (SERT) and, like norepinephrine, is thought to phosphorylate SERT via TAAR1. Like dopamine, amphetamine has low, micromolar affinity at the human 5-HT1A receptor.
Acute amphetamine administration in humans increases endogenous opioid release in several brain structures in the reward system. Extracellular levels of glutamate, the primary excitatory neurotransmitter in the brain, have been shown to increase in the striatum following exposure to amphetamine. This increase in extracellular glutamate presumably occurs via the amphetamine-induced internalization of EAAT3, a glutamate reuptake transporter, in dopamine neurons. Amphetamine also induces the selective release of histamine from mast cells and efflux from histaminergic neurons through TAAR1. Acute amphetamine administration can also increase adrenocorticotropic hormone and corticosteroid levels in blood plasma by stimulating the hypothalamic–pituitary–adrenal axis.
In December 2017, the first study assessing the interaction between amphetamine and human carbonic anhydrase enzymes was published; of the eleven carbonic anhydrase enzymes it examined, it found that amphetamine potently activates seven, four of which are highly expressed in the human brain, with low nanomolar through low micromolar activating effects. Based upon preclinical research, cerebral carbonic anhydrase activation has cognition-enhancing effects, but based upon the clinical use of carbonic anhydrase inhibitors, carbonic anhydrase activation in other tissues may be associated with adverse effects, such as ocular activation exacerbating glaucoma.
The oral bioavailability of amphetamine varies with gastrointestinal pH; it is well absorbed from the gut, and bioavailability is typically over 75% for dextroamphetamine. Amphetamine is a weak base with a p"K"a of 9.9; consequently, when the pH is basic, more of the drug is in its lipid-soluble free base form, and more is absorbed through the lipid-rich cell membranes of the gut epithelium. Conversely, an acidic pH means the drug is predominantly in a water-soluble cationic (salt) form, and less is absorbed. A portion of the amphetamine circulating in the bloodstream is bound to plasma proteins. Following absorption, amphetamine readily distributes into most tissues in the body, with high concentrations occurring in cerebrospinal fluid and brain tissue.
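The pH dependence described above follows the Henderson–Hasselbalch relationship. As a minimal illustrative sketch (not a dosing tool; the function name is my own), the fraction of a weak base present as the absorbable free base can be computed from its pKa:

```python
def free_base_fraction(ph: float, pka: float = 9.9) -> float:
    """Fraction of a weak base in its un-ionized (free base) form at
    a given pH, from the Henderson-Hasselbalch equation:
    fraction = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# At physiological pH (7.4) almost all of a base with pKa 9.9 is
# ionized; raising the pH shifts it toward the lipid-soluble free base.
print(free_base_fraction(7.4))  # ~0.003
print(free_base_fraction(9.9))  # exactly 0.5 at pH == pKa
```

This is why alkaline gut contents increase absorption: even a one-unit rise in pH increases the un-ionized fraction roughly tenfold in this regime.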
The half-lives of amphetamine enantiomers differ and vary with urine pH. Highly acidic urine will reduce the enantiomer half-lives to 7 hours; highly alkaline urine will increase the half-lives up to 34 hours. The immediate-release and extended-release variants of salts of both isomers reach peak plasma concentrations at 3 hours and 7 hours post-dose, respectively. Amphetamine is eliminated via the kidneys, with a substantial fraction of the drug being excreted unchanged at normal urinary pH. When the urinary pH is basic, amphetamine is in its free base form, so less is excreted. When urine pH is abnormal, the urinary recovery of amphetamine may range from a low of 1% to a high of 75%, depending mostly upon whether urine is too basic or acidic, respectively. Following oral administration, amphetamine appears in urine within 3 hours. Roughly 90% of ingested amphetamine is eliminated 3 days after the last oral dose.
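A half-life translates into a fraction of drug remaining via simple first-order (exponential) decay. A sketch comparing the acidic-urine (7 h) and alkaline-urine (34 h) half-lives quoted above, as illustrative arithmetic only:

```python
def fraction_remaining(hours: float, half_life: float) -> float:
    """Fraction of a dose still in the body after `hours`, assuming
    simple one-compartment first-order elimination."""
    return 0.5 ** (hours / half_life)

# After 24 hours, a 7 h half-life (acidic urine) leaves far less drug
# in the body than a 34 h half-life (alkaline urine).
print(fraction_remaining(24, 7))   # roughly 0.09
print(fraction_remaining(24, 34))  # roughly 0.61
```

The same arithmetic underlies the statement that amphetamine is usually only detectable for about a day: several half-lives reduce the remaining fraction below typical assay cutoffs.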
CYP2D6, dopamine β-hydroxylase (DBH), flavin-containing monooxygenase 3 (FMO3), butyrate-CoA ligase (XM-ligase), and glycine "N"-acyltransferase (GLYAT) are the enzymes known to metabolize amphetamine or its metabolites in humans. Amphetamine has a variety of excreted metabolic products, including 4-hydroxyamphetamine, 4-hydroxynorephedrine, 4-hydroxyphenylacetone, benzoic acid, hippuric acid, norephedrine, and phenylacetone. Among these metabolites, the active sympathomimetics are 4-hydroxyamphetamine, 4-hydroxynorephedrine, and norephedrine. The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, "N"-oxidation, "N"-dealkylation, and deamination. The known metabolic pathways, detectable metabolites, and metabolizing enzymes in humans include the following:
The human metagenome (i.e., the genetic composition of an individual and all microorganisms that reside on or within the individual's body) varies considerably between individuals. Since the total number of microbial and viral cells in the human body (over 100 trillion) greatly outnumbers human cells (tens of trillions), there is considerable potential for interactions between drugs and an individual's microbiome, including: drugs altering the composition of the human microbiome, drug metabolism by microbial enzymes modifying the drug's pharmacokinetic profile, and microbial drug metabolism affecting a drug's clinical efficacy and toxicity profile. The field that studies these interactions is known as pharmacomicrobiomics.
Similar to most biomolecules and other orally administered xenobiotics (i.e., drugs), amphetamine is predicted to undergo promiscuous metabolism by human gastrointestinal microbiota (primarily bacteria) prior to absorption into the blood stream. The first amphetamine-metabolizing microbial enzyme, tyramine oxidase from a strain of "E. coli" commonly found in the human gut, was identified in 2019. This enzyme was found to metabolize amphetamine, tyramine, and phenethylamine with roughly the same binding affinity for all three compounds.
Amphetamine has a very similar structure and function to the endogenous trace amines, which are naturally occurring neuromodulator molecules produced in the human body and brain. Among this group, the most closely related compounds are phenethylamine, the parent compound of amphetamine, and "N"-methylphenethylamine, an isomer of amphetamine (i.e., it has an identical molecular formula). In humans, phenethylamine is produced directly from L-phenylalanine by the aromatic amino acid decarboxylase (AADC) enzyme, which converts L-DOPA into dopamine as well. In turn, "N"-methylphenethylamine is metabolized from phenethylamine by phenylethanolamine "N"-methyltransferase, the same enzyme that metabolizes norepinephrine into epinephrine. Like amphetamine, both phenethylamine and "N"-methylphenethylamine regulate monoamine neurotransmission via TAAR1; unlike amphetamine, both of these substances are broken down by monoamine oxidase B, and therefore have a shorter half-life than amphetamine.
Amphetamine is a methyl homolog of the mammalian neurotransmitter phenethylamine with the chemical formula C9H13N. The carbon atom adjacent to the primary amine is a stereogenic center, and amphetamine is a racemic 1:1 mixture of two enantiomers. This racemic mixture can be separated into its optical isomers: levoamphetamine and dextroamphetamine. At room temperature, the pure free base of amphetamine is a mobile, colorless, and volatile liquid with a characteristically strong amine odor and an acrid, burning taste. Frequently prepared solid salts of amphetamine include amphetamine adipate, aspartate, hydrochloride, phosphate, saccharate, sulfate, and tannate. Dextroamphetamine sulfate is the most common enantiopure salt. Amphetamine is also the parent compound of its own structural class, which includes a number of psychoactive derivatives. In organic chemistry, amphetamine is an excellent chiral ligand for stereoselective synthesis.
The substituted derivatives of amphetamine, or "substituted amphetamines", are a broad range of chemicals that contain amphetamine as a "backbone"; specifically, this chemical class includes derivative compounds that are formed by replacing one or more hydrogen atoms in the amphetamine core structure with substituents. The class includes amphetamine itself, stimulants like methamphetamine, serotonergic empathogens like MDMA, and decongestants like ephedrine, among other subgroups.
Since the first preparation was reported in 1887, numerous synthetic routes to amphetamine have been developed. The most common route of both legal and illicit amphetamine synthesis employs a non-metal reduction known as the Leuckart reaction (method 1). In the first step, a reaction between phenylacetone and formamide, either using additional formic acid or formamide itself as a reducing agent, yields "N"-formylamphetamine. This intermediate is then hydrolyzed using hydrochloric acid, and subsequently basified, extracted with organic solvent, concentrated, and distilled to yield the free base. The free base is then dissolved in an organic solvent, sulfuric acid added, and amphetamine precipitates out as the sulfate salt.
A number of chiral resolutions have been developed to separate the two enantiomers of amphetamine. For example, racemic amphetamine can be treated with an enantiopure chiral acid, such as tartaric acid, to form a diastereoisomeric salt which is fractionally crystallized to yield dextroamphetamine. Chiral resolution remains the most economical method for obtaining optically pure amphetamine on a large scale. In addition, several enantioselective syntheses of amphetamine have been developed. In one example, optically pure α-methylbenzylamine is condensed with phenylacetone to yield a chiral Schiff base. In the key step, this intermediate is reduced by catalytic hydrogenation with a transfer of chirality to the carbon atom alpha to the amino group. Cleavage of the benzylic amine bond by hydrogenation yields optically pure dextroamphetamine.
A large number of alternative synthetic routes to amphetamine have been developed based on classic organic reactions. One example is the Friedel–Crafts alkylation of benzene by allyl chloride to yield beta-chloropropylbenzene, which is then reacted with ammonia to produce racemic amphetamine (method 2). Another example employs the Ritter reaction (method 3). In this route, allylbenzene is reacted with acetonitrile in sulfuric acid to yield an organosulfate which in turn is treated with sodium hydroxide to give amphetamine via an acetamide intermediate. A third route uses a double alkylation with methyl iodide followed by benzyl chloride to convert a suitable starting material into a substituted carboxylic acid. This synthetic intermediate can be transformed into amphetamine using either a Hofmann or Curtius rearrangement (method 4).
A significant number of amphetamine syntheses feature a reduction of a nitro, imine, oxime, or other nitrogen-containing functional group. In one such example, a Knoevenagel condensation of benzaldehyde with nitroethane yields phenyl-2-nitropropene. The double bond and nitro group of this intermediate are reduced using either catalytic hydrogenation or treatment with lithium aluminium hydride (method 5). Another method is the reaction of phenylacetone with ammonia, producing an imine intermediate that is reduced to the primary amine using hydrogen over a palladium catalyst or lithium aluminium hydride (method 6).
Amphetamine is frequently measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Techniques such as immunoassay, which is the most common form of amphetamine test, may cross-react with a number of sympathomimetic drugs. Chromatographic methods specific for amphetamine are employed to prevent false positive results. Chiral separation techniques may be employed to help distinguish the source of the drug, whether prescription amphetamine, prescription drugs that are metabolized to amphetamine (e.g., selegiline), over-the-counter drug products that contain levomethamphetamine, or illicitly obtained substituted amphetamines. Several prescription drugs produce amphetamine as a metabolite, including benzphetamine, clobenzorex, famprofazone, fenproporex, lisdexamfetamine, mesocarb, methamphetamine, prenylamine, and selegiline, among others. These compounds may produce positive results for amphetamine on drug tests. Amphetamine is generally only detectable by a standard drug test for approximately 24 hours, although a high dose may be detectable for days.
For the assays, a study noted that an enzyme multiplied immunoassay technique (EMIT) assay for amphetamine and methamphetamine may produce more false positives than liquid chromatography–tandem mass spectrometry. Gas chromatography–mass spectrometry (GC–MS) of amphetamine and methamphetamine with a suitable derivatizing agent allows for the detection of methamphetamine in urine. GC–MS of amphetamine and methamphetamine with the chiral derivatizing agent Mosher's acid chloride allows for the detection of both dextroamphetamine and dextromethamphetamine in urine. Hence, the latter method may be used on samples that test positive using other methods to help distinguish between the various sources of the drug.
Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it "phenylisopropylamine"; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Amphetamine had no medical use until late 1933, when Smith, Kline and French began selling it as an inhaler under the brand name Benzedrine as a decongestant. Benzedrine sulfate was introduced 3 years later and was used to treat a wide variety of medical conditions, including narcolepsy, obesity, low blood pressure, low libido, and chronic pain, among others. During World War II, amphetamine and methamphetamine were used extensively by both the Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drug became known, governments began to place strict controls on the sale of amphetamine. For example, during the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. In spite of strict government controls, amphetamine has been used legally or illicitly by people from a variety of backgrounds, including authors, musicians, mathematicians, and athletes.
Amphetamine is still illegally synthesized today in clandestine labs and sold on the black market, primarily in European countries. Among European Union (EU) member states, 11.9 million adults have used amphetamine or methamphetamine at least once in their lives and 1.7 million have used either in the last year. During 2012, approximately 5.9 metric tons of illicit amphetamine were seized within EU member states; the "street price" of illicit amphetamine within the EU varied considerably per gram during the same period. Outside Europe, the illicit market for amphetamine is much smaller than the market for methamphetamine and MDMA.
As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II controlled substance, as defined in the treaty, in all 183 state parties. Consequently, it is heavily regulated in most countries. Some countries, such as South Korea and Japan, have banned substituted amphetamines even for medical use. In other nations, such as Canada (schedule I drug), the Netherlands (List I drug), the United States (schedule II drug), Australia (schedule 8), Thailand (category 1 narcotic), and United Kingdom (class B drug), amphetamine is in a restrictive national drug schedule that allows for its use as a medical treatment.
Several currently marketed amphetamine formulations contain both enantiomers, including those marketed under the brand names Adderall, Adderall XR, Mydayis, Adzenys ER, Dyanavel XR, Evekeo, and Evekeo ODT. Of those, Evekeo (including Evekeo ODT) is the only product containing only racemic amphetamine (as amphetamine sulfate), and is therefore the only one whose active moiety can be accurately referred to simply as "amphetamine". Dextroamphetamine, marketed under the brand names Dexedrine and Zenzedi, is the only enantiopure amphetamine product currently available. A prodrug form of dextroamphetamine, lisdexamfetamine, is also available and is marketed under the brand name Vyvanse. As it is a prodrug, lisdexamfetamine is structurally different from dextroamphetamine, and is inactive until it metabolizes into dextroamphetamine. The free base of racemic amphetamine was previously available as Benzedrine, Psychedrine, and Sympatedrine. Levoamphetamine was previously available as Cydril. Many current amphetamine pharmaceuticals are salts due to the comparatively high volatility of the free base. However, oral suspension and orally disintegrating tablet (ODT) dosage forms composed of the free base were introduced in 2015 and 2016, respectively. Some of the current brands and their generic equivalents are listed below.
Asynchronous communication
In telecommunications, asynchronous communication is transmission of data, generally without the use of an external clock signal, where data can be transmitted intermittently rather than in a steady stream. Any timing required to recover data from the communication symbols is encoded within the symbols.
The most significant aspect of asynchronous communication is that data is not transmitted at regular intervals, thus making variable bit rate possible, and that the transmitter and receiver clock generators do not have to be exactly synchronized all the time. In asynchronous transmission, data is sent one byte at a time, and each byte is preceded by a start bit and followed by a stop bit.
In asynchronous serial communication at the physical protocol layer, the data blocks are code words of a certain word length, for example octets (bytes) or ASCII characters, delimited by start bits and stop bits. A variable-length space can be inserted between the code words. No bit synchronization signal is required. This is sometimes called character-oriented communication. Examples include MNP2 and V.2 modems and older standards.
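The start/stop-bit delimiting described above can be sketched as a toy framer for the common "8N1" convention (one start bit, eight data bits sent least-significant-bit first, one stop bit); the bit order and function names here are assumptions based on typical UART practice, not taken from this article:

```python
def frame_byte(value: int) -> list[int]:
    """Frame one byte as 8N1: start bit (0), eight data bits
    least-significant-bit first, then a stop bit (1)."""
    data = [(value >> i) & 1 for i in range(8)]
    return [0] + data + [1]

def deframe(bits: list[int]) -> int:
    """Recover the byte from a 10-bit frame, checking the
    start and stop bits for a framing error."""
    if bits[0] != 0 or bits[9] != 1:
        raise ValueError("framing error")
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

frame = frame_byte(0x41)       # ASCII 'A'
assert deframe(frame) == 0x41  # round-trips
```

Because every byte carries its own start and stop bits, the receiver resynchronizes on each code word, which is exactly why no shared bit-synchronization signal is required.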
Asynchronous communication at the data link layer or higher protocol layers is known as statistical multiplexing, for example asynchronous transfer mode (ATM). In this case the asynchronously transferred blocks are called data packets, for example ATM cells. The opposite is circuit switched communication, which provides constant bit rate, for example ISDN and SONET/SDH.
The packets may be encapsulated in a data frame, with a frame synchronization bit sequence indicating the start of the frame, and sometimes also a bit synchronization bit sequence, typically 01010101, for identification of the bit transition times. Note that at the physical layer, this is considered as synchronous serial communication. Examples of packet mode data link protocols that can be/are transferred using synchronous serial communication are the HDLC, Ethernet, PPP and USB protocols.
An asynchronous communication service or application does not require a constant bit rate. Examples are file transfer, email and the World Wide Web. An example of the opposite, a synchronous communication service, is realtime streaming media, for example IP telephony, IP-TV and video conferencing.
Electronically mediated communication often happens asynchronously in that the participants do not communicate concurrently. Examples include email
and bulletin-board systems, where participants send or post messages at different times. The term "asynchronous communication" acquired currency in the field of online learning, where teachers and students often exchange information asynchronously instead of synchronously (that is, simultaneously), as they would in face-to-face or in telephone conversations.
Artillery
Artillery is a class of heavy military ranged weapons built to launch munitions far beyond the range and power of infantry firearms. Early artillery development focused on the ability to breach defensive walls and fortifications during sieges, and led to heavy, fairly immobile siege engines. As technology improved, lighter, more mobile field artillery cannons developed for battlefield use. This development continues today; modern self-propelled artillery vehicles are highly mobile weapons of great versatility generally providing the largest share of an army's total firepower.
Originally, the word "artillery" referred to any group of soldiers primarily armed with some form of manufactured weapon or armor. Since the introduction of gunpowder and cannon, "artillery" has largely meant cannons, and in contemporary usage, usually refers to shell-firing guns, howitzers, mortars, and rocket artillery. In common speech, the word "artillery" is often used to refer to individual devices, along with their accessories and fittings, although these assemblages are more properly called "equipment". However, there is no generally recognized generic term for a gun, howitzer, mortar, and so forth: the United States uses "artillery piece", but most English-speaking armies use "gun" and "mortar". The projectiles fired are typically either "shot" (if solid) or "shell" (if not solid). Historically, variants of solid shot including canister, chain shot and grapeshot were also used. "Shell" is a widely used generic term for a projectile, which is a component of munitions.
By association, artillery may also refer to the arm of service that customarily operates such engines. In some armies, the artillery arm has operated field, coastal, anti-aircraft, and anti-tank artillery; in others these have been separate arms, and in some nations coastal artillery has been a naval or marine responsibility.
In the 20th century, technology-based target acquisition devices (such as radar) and systems (such as sound ranging and flash spotting) emerged to acquire targets, primarily for artillery. These are usually operated by one or more of the artillery arms. The widespread adoption of indirect fire in the early 20th century introduced the need for specialist data for field artillery, notably survey and meteorological data; in some armies, provision of these is the responsibility of the artillery arm.
Artillery is arguably the most lethal form of land-based armament currently employed, and has been since at least the early Industrial Revolution. The majority of combat deaths in the Napoleonic Wars, World War I, and World War II were caused by artillery. In 1944, Joseph Stalin said in a speech that artillery was "the God of War".
Although not described as such at the time, siege engines performing the role recognizable as artillery have been employed in warfare since antiquity. The first known catapult was developed in Syracuse in 399 BC. Until the introduction of gunpowder into western warfare, artillery was dependent upon mechanical energy, which not only severely limited the kinetic energy of the projectiles but also required the construction of very large engines to store sufficient energy. A 1st-century BC Roman catapult launching stones achieved a kinetic energy of 16,000 joules, compared to a mid-19th-century 12-pounder gun, which fired a round with a kinetic energy of 240,000 joules, or a 20th-century US battleship that fired a projectile from its main battery with an energy level surpassing 350,000,000 joules.
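The scale of this progression follows from the kinetic-energy formula KE = ½mv². The masses and muzzle velocities in the sketch below are assumed round figures, chosen only to reproduce the orders of magnitude quoted above; they are not sourced specifications.

```python
# Illustrative kinetic-energy comparison (KE = 1/2 * m * v^2).
# All masses (kg) and velocities (m/s) are assumptions for illustration.

def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    """Kinetic energy in joules."""
    return 0.5 * mass_kg * velocity_ms ** 2

examples = {
    "Roman catapult stone":    (26.0, 35.0),    # on the order of 16 kJ
    "12-pounder round shot":   (5.44, 297.0),   # on the order of 240 kJ
    "battleship main battery": (1225.0, 760.0), # on the order of 350 MJ
}

for name, (m, v) in examples.items():
    print(f"{name:>24}: {kinetic_energy(m, v):,.0f} J")
```

Even with rough inputs, the comparison shows roughly four orders of magnitude separating a torsion engine from a 20th-century naval gun.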
From the Middle Ages through most of the modern era, artillery pieces on land were moved by horse-drawn gun carriages. In the contemporary era, artillery pieces and their crew relied on wheeled or tracked vehicles as transportation. These land versions of artillery were dwarfed by railway guns; the largest of these large-calibre guns ever conceived – Project Babylon of the Supergun affair – was theoretically capable of putting a satellite into orbit. Artillery used by naval forces has also changed significantly, with missiles generally replacing guns in surface warfare.
Over the course of military history, projectiles have been manufactured from a wide variety of materials, in a wide variety of shapes, using many different methods, to attack structural and defensive works and to inflict enemy casualties. The engineering applications for ordnance delivery have likewise changed significantly over time, encompassing some of the most complex and advanced technologies in use today.
In some armies, the weapon of artillery is the projectile, not the equipment that fires it. The process of delivering fire onto the target is called gunnery. The actions involved in operating an artillery piece are collectively called "serving the gun" by the "detachment" or gun crew, constituting either direct or indirect artillery fire. The manner in which gunnery crews (or formations) are employed is called artillery support. At different periods in history, this may refer to weapons designed to be fired from ground-, sea-, and even air-based weapons platforms.
The term "gunner" is used in some armed forces for the soldiers and sailors with the primary function of using artillery.
The gunners and their guns are usually grouped in teams called either "crews" or "detachments". Several such crews and teams with other functions are combined into a unit of artillery, usually called a battery, although sometimes called a company. In gun detachments, each role is numbered, starting with "1" the Detachment Commander, and the highest number being the Coverer, the second-in-command. "Gunner" is also the lowest rank and junior non-commissioned officers are "Bombardiers" in some artillery arms.
Batteries are roughly equivalent to a company in the infantry and are combined into larger military organizations for administrative and operational purposes, either battalions or regiments, depending on the army. These may be grouped into brigades; the Russian army also groups some brigades into artillery divisions, and the People's Liberation Army has artillery corps.
The term "artillery" is also applied to a combat arm of most military services when used organizationally to describe units and formations of the national armed forces that operate the weapons.
During military operations, the role of field artillery is to provide support to other arms in combat or to attack targets, particularly in-depth. Broadly, these effects fall into two categories, either to suppress or neutralize the enemy, or to cause casualties, damage, and destruction. This is mostly achieved by delivering high-explosive munitions to suppress, or inflict casualties on the enemy from casing fragments and other debris and blast, or by destroying enemy positions, equipment, and vehicles. Non-lethal munitions, notably smoke, can also be used to suppress or neutralize the enemy by obscuring their view.
Fire may be directed by an artillery observer or another observer, including manned and unmanned aircraft pilots, or called onto map coordinates.
Military doctrine has had a significant influence on the core engineering design considerations of artillery ordnance throughout its history, in seeking to achieve a balance between the delivered volume of fire and ordnance mobility. However, during the modern period, the need to protect the gunners also arose, due to the late-19th-century introduction of a new generation of infantry weapons using the conoidal bullet, better known as the Minié ball, with a range almost as long as that of field artillery.
The gunners' increasing proximity to and participation in direct combat against other combat arms and attacks by aircraft made the introduction of a gun shield necessary. The problem of how to employ a fixed or horse-towed gun in mobile warfare necessitated the development of new methods of transporting artillery into combat. Two distinct forms of artillery were developed: the towed gun, used primarily to attack or defend a fixed line, and the self-propelled gun, designed to accompany a mobile force and provide continuous fire support and/or suppression. These influences have guided the development of artillery ordnance, systems, organizations, and operations to the present day, with artillery systems capable of providing support at ranges from as little as 100 m to the intercontinental ranges of ballistic missiles. The only combat in which artillery is unable to take part is close-quarters combat, with the possible exception of artillery reconnaissance teams.
The word as used in the current context originated in the Middle Ages. One suggestion is that it comes from the Old French "atelier", meaning "to arrange", and "attillement", meaning "equipment".
From the 13th century, an "artillier" referred to a builder of any war equipment; and, for the next 250 years, the sense of the word "artillery" covered all forms of military weapons. Hence, the naming of the Honourable Artillery Company, which was essentially an infantry unit until the 19th century. Another suggestion is that it comes from the Italian "arte de tirare" (art of shooting), coined by one of the first theorists on the use of artillery, Niccolò Tartaglia.
Mechanical systems used for throwing ammunition in ancient warfare, also known as "engines of war", like the catapult, onager, trebuchet, and ballista, are also referred to by military historians as artillery.
Early Chinese artillery had vase-like shapes. This includes the "long range awe inspiring" cannon dated from 1350 and found in the 14th century Ming Dynasty treatise "Huolongjing". With the development of better metallurgy techniques, later cannons abandoned the vase shape of early Chinese artillery. This change can be seen in the bronze "thousand ball thunder cannon", an early example of field artillery. These small, crude weapons diffused into the Middle East (the "madfaa") and reached Europe in the 13th century, in a very limited manner.
In Asia, the Mongols adopted Chinese artillery and used it effectively in their great conquests. By the late 14th century, Chinese rebels used organized artillery and cavalry to push the Mongols out. The use of cannon in the Mongol invasion of Java led to the deployment of cetbang cannons by the Majapahit fleet in the 1300s and to the subsequent near-universal use of the swivel gun and cannon in the Nusantara archipelago.
As small smooth-bore tubes, these were initially cast in iron or bronze around a core, with the first drilled bore ordnance recorded in operation near Seville in 1247. They fired lead, iron, or stone balls, sometimes large arrows and on occasions simply handfuls of whatever scrap came to hand. During the Hundred Years' War, these weapons became more common, initially as the bombard and later the cannon. Cannon were always muzzle-loaders. While there were many early attempts at breech-loading designs, a lack of engineering knowledge rendered these even more dangerous to use than muzzle-loaders.
In 1415, the Portuguese invaded the Mediterranean port town of Ceuta. While it is difficult to confirm the use of firearms in the siege of the city, it is known that the Portuguese defended it thereafter with firearms, namely "bombardas", "colebratas", and "falconetes". In 1419, Sultan Abu Sa'id led an army to reconquer the fallen city, and the Marinids brought cannons and used them in the assault on Ceuta. Hand-held firearms and riflemen appeared in Morocco in 1437, in an expedition against the people of Tangiers. It is clear these weapons had developed into several different forms, from small guns to large artillery pieces.
The artillery revolution in Europe caught on during the Hundred Years' War and changed the way that battles were fought. In the preceding decades, the English had even used a gunpowder-like weapon in military campaigns against the Scottish. However, at this time, the cannons used in battle were very small and not particularly powerful. Cannons were only useful for the defense of a castle, as demonstrated at Breteuil in 1356, when the besieged English used a cannon to destroy an attacking French assault tower. By the end of the 14th century, cannon were only powerful enough to knock in roofs, and could not penetrate castle walls.
However, a major change occurred between 1420 and 1430, when artillery became much more powerful and could now batter strongholds and fortresses quite efficiently. The English, French, and Burgundians all advanced in military technology, and as a result the traditional advantage that went to the defense in a siege was lost. The cannon during this period were elongated, and the recipe for gunpowder was improved to make it three times as powerful as before. These changes led to the increased power in the artillery weapons of the time.
Joan of Arc encountered gunpowder weaponry several times. When she led the French against the English at the Battle of Tourelles, in 1429, she faced heavy gunpowder fortifications, and yet her troops prevailed in that battle. In addition, she led assaults against the English-held towns of Jargeau, Meung, and Beaugency, all with the support of large artillery units. When she led the assault on Paris, Joan faced stiff artillery fire, especially from the suburb of St. Denis, which ultimately led to her defeat in this battle. In April 1430, she went to battle against the Burgundians, whose support had been purchased by the English. At this time, the Burgundians had the strongest and largest gunpowder arsenal among the European powers, and yet the French, under Joan of Arc's leadership, were able to beat back the Burgundians and defend themselves. As a result, most of the battles of the Hundred Years' War in which Joan of Arc participated were fought with gunpowder artillery.
The army of Mehmet the Conqueror, which conquered Constantinople in 1453, included both artillery and foot soldiers armed with gunpowder weapons. The Ottomans brought to the siege sixty-nine guns in fifteen separate batteries and trained them at the walls of the city. The barrage of Ottoman cannon fire lasted forty days, and they are estimated to have fired 19,320 times. Artillery also played a decisive role in the Battle of St. Jakob an der Birs of 1444.
The new Ming Dynasty established the "Divine Engine Battalion" (神机营), which specialized in various types of artillery. Light cannons and cannons with multiple volleys were developed. In a campaign to suppress a local minority rebellion near today's Burmese border, "the Ming army used a 3-line method of arquebuses/muskets to destroy an elephant formation."
When the Portuguese and Spanish arrived in Southeast Asia, they found that the local kingdoms were already using cannon. One of the earliest references to cannon and artillerymen in Java is from the year 1346. Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Around 1510, Duarte Barbosa said that the inhabitants of Java were great masters in casting artillery and very good artillerymen, making many one-pounder cannons (cetbang or rentaka), long muskets, "spingarde" (arquebuses), "schioppi" (hand cannon), Greek fire, guns (cannon), and other fireworks, and being regarded everywhere as excellent in casting artillery and in the knowledge of using it. In 1513, the Javanese fleet led by Patih Yunus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese were already producing large guns locally; some of these survive to the present day and are dubbed "sacred cannon" or "holy cannon". These cannons varied between 180 and 260 pounders, weighed between 3 and 8 tons, and measured between 3 and 6 m in length.
Between 1593 and 1597, about 200,000 Korean and Chinese troops who fought against Japan in Korea actively used heavy artillery in both siege and field combat. Korean forces mounted artillery in ships as naval guns, providing an advantage against the Japanese navy, whose largest firearms were the "Kunikuzushi" (国崩し – Japanese breech-loading swivel gun) and "Ōzutsu" (大筒 – large size Tanegashima).
Bombards were of value mainly in sieges. A famous Turkish example used at the siege of Constantinople in 1453 weighed 19 tons, took 200 men and sixty oxen to emplace, and could fire just seven times a day. The Fall of Constantinople was perhaps "the first event of supreme importance whose result was determined by the use of artillery" when the huge bronze cannons of Mehmed II breached the city's walls, ending the Byzantine Empire, according to Sir Charles Oman.
Bombards developed in Europe were massive smoothbore weapons distinguished by their lack of a field carriage, immobility once emplaced, highly individual design, and noted unreliability (in 1460 James II, King of Scots, was killed when one exploded at the siege of Roxburgh). Their large size precluded the barrels being cast and they were constructed out of metal staves or rods bound together with hoops like a barrel, giving their name to the gun barrel.
The use of the word "cannon" marks the introduction in the 15th century of a dedicated field carriage with axle, trail and animal-drawn limber—this produced mobile field pieces that could move and support an army in action, rather than being found only in the siege and static defenses. The reduction in the size of the barrel was due to improvements in both iron technology and gunpowder manufacture, while the development of trunnions—projections at the side of the cannon as an integral part of the cast—allowed the barrel to be fixed to a more movable base, and also made raising or lowering the barrel much easier.
The first land-based mobile artillery is usually credited to Jan Žižka, who deployed his oxen-hauled cannon during the Hussite Wars of Bohemia (1418–1424). However, cannons were still large and cumbersome. With the rise of musketry in the 16th century, cannon were largely (though not entirely) displaced from the battlefield: the cannon were too slow and cumbersome to be used and too easily lost to a rapid enemy advance.
The combining of shot and powder into a single unit, a cartridge, occurred in the 1620s with a simple fabric bag, and was quickly adopted by all nations. It speeded loading and made it safer, but unexpelled bag fragments were an additional fouling in the gun barrel and a new tool—a worm—was introduced to remove them. Gustavus Adolphus is identified as the general who made cannon an effective force on the battlefield—pushing the development of much lighter and smaller weapons and deploying them in far greater numbers than previously. The outcome of battles was still determined by the clash of infantry.
Shells, explosive-filled fused projectiles, were also developed in the 17th century. The development of specialized pieces—shipboard artillery, howitzers and mortars—was also begun in this period. More esoteric designs, like the multi-barrel "ribauldequin" (known as "organ guns"), were also produced.
The 1650 book by Kazimierz Siemienowicz "Artis Magnae Artilleriae pars prima" was one of the most important contemporary publications on the subject of artillery. For over two centuries this work was used in Europe as a basic artillery manual.
One of the most significant effects of artillery during this period was however somewhat more indirect—by easily reducing to rubble any medieval-type fortification or city wall (some which had stood since Roman times), it abolished millennia of siege-warfare strategies and styles of fortification building. This led, among other things, to a frenzy of new bastion-style fortifications to be built all over Europe and in its colonies, but also had a strong integrating effect on emerging nation-states, as kings were able to use their newfound artillery superiority to force any local dukes or lords to submit to their will, setting the stage for the absolutist kingdoms to come.
Modern rocket artillery can trace its heritage back to the Mysorean rockets of India. Their first recorded use was in 1780 during the battles of the Second, Third and Fourth Mysore Wars. The wars fought between the British East India Company and the Kingdom of Mysore in India made use of the rockets as a weapon. In the Battle of Pollilur, the Siege of Seringapatam (1792) and the Battle of Seringapatam in 1799, these rockets were used with considerable effect against the British. After the wars, several Mysore rockets were sent to England, but experiments with heavier payloads were unsuccessful. In 1804, William Congreve, considering the Mysorean rockets to have too short a range (less than 1,000 yards), developed rockets in numerous sizes with ranges up to 3,000 yards, eventually utilizing iron casing as the Congreve rocket, which was used effectively during the Napoleonic Wars and the War of 1812.
With the Napoleonic Wars, artillery experienced changes in both physical design and operation. Rather than being overseen by "mechanics", artillery was viewed as its own service branch with the capability of dominating the battlefield. The success of the French artillery companies was at least in part due to the presence of dedicated artillery officers leading and coordinating during the chaos of battle. Napoleon, himself a former artillery officer, perfected the tactic of massed artillery batteries unleashed upon a critical point in his enemies' line as a prelude to a decisive infantry and cavalry assault.
Physically, cannons continued to become smaller and lighter—Frederick II of Prussia deployed the first genuine light artillery during the Seven Years' War.
Jean-Baptiste de Gribeauval, a French artillery engineer, introduced the standardization of cannon design in the mid-18th century. He developed a 6-inch (150 mm) field howitzer whose gun barrel, carriage assembly and ammunition specifications were made uniform for all French cannons. The standardized, interchangeable parts of these cannons, down to the nuts, bolts and screws, made their mass production and repair much easier. While the Gribeauval system made for more efficient production and assembly, the carriages used were heavy and the gunners were forced to march on foot (instead of riding on the limber and gun as in the British system). Each cannon was named for the weight of its projectile, giving variants such as the 4-, 8-, and 12-pounder. The projectiles themselves included solid balls, and canister containing lead bullets or other material. Canister shot acted as a massive shotgun, peppering the target with hundreds of projectiles at close range. The solid balls, known as round shot, were most effective when fired at shoulder height across a flat, open area; the ball would tear through the ranks of the enemy or bounce along the ground, breaking legs and ankles.
The development of modern artillery occurred in the mid to late 19th century as a result of the convergence of various improvements in the underlying technology. Advances in metallurgy allowed for the construction of breech-loading rifled guns that could fire at a much greater muzzle velocity.
After the British artillery was shown up in the Crimean War as having barely changed since the Napoleonic Wars, the industrialist William Armstrong was awarded a contract by the government to design a new piece of artillery. Production started in 1855 at the Elswick Ordnance Company and the Royal Arsenal at Woolwich, and the outcome was the revolutionary Armstrong Gun, which marked the birth of modern artillery. Three of its features particularly stand out.
First, the piece was rifled, which allowed for a much more accurate and powerful action. Although rifling had been tried on small arms since the 15th century, the necessary machinery to accurately rifle artillery was not available until the mid-19th century. Martin von Wahrendorff, and Joseph Whitworth independently produced rifled cannon in the 1840s, but it was Armstrong's gun that was first to see widespread use during the Crimean War. The cast iron shell of the Armstrong gun was similar in shape to a Minié ball and had a thin lead coating which made it fractionally larger than the gun's bore and which engaged with the gun's rifling grooves to impart spin to the shell. This spin, together with the elimination of windage as a result of the tight fit, enabled the gun to achieve greater range and accuracy than existing smooth-bore muzzle-loaders with a smaller powder charge.
His gun was also a breech-loader. Although attempts at breech-loading mechanisms had been made since medieval times, the essential engineering problem was that the mechanism could not withstand the explosive charge. It was only with the advances in metallurgy and precision engineering capabilities during the Industrial Revolution that Armstrong was able to construct a viable solution. The gun combined all the properties that make up an effective artillery piece. The gun was mounted on a carriage in such a way as to return the gun to firing position after the recoil.
What made the gun truly revolutionary was the technique of constructing the gun barrel, which allowed it to withstand much more powerful explosive forces. The "built-up" method involved assembling the barrel from wrought-iron (later mild-steel) tubes of successively smaller diameter. Each tube would be heated to allow it to expand and fit over the previous tube. When it cooled, the tube would contract, although not back to its original size; this exerted an even inward pressure along the walls of the gun, counteracting the outward forces that firing exerted on the barrel.
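The shrink-fit step of the built-up method can be sketched with simple thermal-expansion arithmetic: a hoop of bore diameter d heated by ΔT grows by roughly α·d·ΔT, so the temperature rise needed to clear an interference δ is about ΔT = δ/(α·d). The expansion coefficient and dimensions below are illustrative assumptions, not Armstrong's actual figures.

```python
# Rough shrink-fit arithmetic for a "built-up" barrel: heat the outer
# tube until its bore grows by the interference, slide it over the
# inner tube, then let it cool so it grips. Figures are illustrative.

ALPHA_STEEL = 12e-6  # linear thermal expansion of steel, per kelvin (approx.)

def required_temp_rise(bore_m: float, interference_m: float) -> float:
    """Temperature rise (K) needed for the bore to grow by the interference."""
    return interference_m / (ALPHA_STEEL * bore_m)

# e.g. a hoop with a 300 mm bore needing 0.5 mm of expansion:
dt = required_temp_rise(0.300, 0.0005)
print(f"heat the outer tube roughly {dt:.0f} K above ambient")
```

On these assumed numbers the outer tube needs a rise of roughly 140 K, well within the reach of a 19th-century furnace, which is why the technique was practical.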
Another innovative feature, more usually associated with 20th-century guns, was what Armstrong called its "grip", which was essentially a squeeze bore; the 6 inches of the bore at the muzzle end was of slightly smaller diameter, which centered the shell before it left the barrel and at the same time slightly swaged down its lead coating, reducing its diameter and slightly improving its ballistic qualities.
Armstrong's system was adopted in 1858, initially for "special service in the field" and initially he produced only smaller artillery pieces, 6-pounder (2.5 in/64 mm) mountain or light field guns, 9-pounder (3 in/76 mm) guns for horse artillery, and 12-pounder (3 inches /76 mm) field guns.
The first cannon to contain all 'modern' features is generally considered to be the French 75 of 1897. It was the first field gun to include a hydro-pneumatic recoil mechanism, which kept the gun's trail and wheels perfectly still during the firing sequence. Since it did not need to be re-aimed after each shot, the crew could fire as soon as the barrel returned to its resting position. In typical use, the French 75 could deliver fifteen rounds per minute on its target, either shrapnel or melinite high-explosive, up to about 5 miles (8,500 m) away. Its firing rate could even reach close to 30 rounds per minute, albeit only for a very short time and with a highly experienced crew. These were rates that contemporary bolt action rifles could not match. The gun used cased ammunition, was breech-loading, and had modern sights, a self-contained firing mechanism and hydro-pneumatic recoil dampening.
Indirect fire, the firing of a projectile without relying on a direct line of sight between the gun and the target, possibly dates back to the 16th century. Early battlefield use of indirect fire may have occurred at Paltzig in July 1759, when the Russian artillery fired over the tops of trees, and at the Battle of Waterloo, where a battery of the Royal Horse Artillery fired shrapnel shells indirectly against advancing French troops.
In 1882, Russian Lieutenant Colonel KG Guk published "Indirect Fire for Field Artillery", which provided a practical method of using aiming points for indirect fire by describing, "all the essentials of aiming points, crest clearance, and corrections to fire by an observer".
A few years later, the Richtfläche (lining-plane) sight was invented in Germany and provided a means of indirect laying in azimuth, complementing the clinometers for indirect laying in elevation which already existed. Despite conservative opposition within the German army, indirect fire was adopted as doctrine by the 1890s. In the early 1900s, Goertz in Germany developed an optical sight for azimuth laying. It quickly replaced the lining-plane; in English, it became the 'Dial Sight' (UK) or 'Panoramic Telescope' (US).
The British had halfheartedly experimented with indirect fire techniques since the 1890s, but with the onset of the Boer War in 1899, they were the first to apply the theory in practice, although they had to improvise without a lining-plane sight.
In the next 15 years leading up to World War I, the techniques of indirect fire became available for all types of artillery. Indirect fire was the defining characteristic of 20th-century artillery and led to undreamt of changes in the amount of artillery, its tactics, organisation, and techniques, most of which occurred during World War I.
An implication of indirect fire and improving guns was increasing range between gun and target; this increased the time of flight and the vertex of the trajectory. The result was decreasing accuracy (an increasing distance between the target and the mean point of impact of the shells aimed at it), caused by the increasing effects of non-standard conditions. Indirect firing data were based on standard conditions, including a specific muzzle velocity, zero wind, standard air temperature and density, and standard propellant temperature. In practice, this combination of conditions almost never existed: they varied throughout the day and from day to day, and the greater the time of flight, the greater the inaccuracy. An added complication was the need for survey to accurately fix the coordinates of the gun position and provide accurate orientation for the guns. Targets, too, had to be accurately located, but by 1916 air-photo interpretation techniques enabled this, and ground survey techniques could sometimes be used.
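The survey problem described above reduces, at its simplest, to converting surveyed grid coordinates of gun and target into a range and bearing on a flat map. Real gunnery then applies corrections for meteorology, muzzle velocity, drift and other non-standard conditions; the coordinates in this sketch are hypothetical.

```python
# Minimal sketch: range and grid bearing from gun to target, given
# surveyed easting/northing coordinates in metres on a flat map.
import math

def range_and_bearing(gun_e, gun_n, tgt_e, tgt_n):
    """Return (distance in metres, grid bearing in degrees clockwise from north)."""
    de, dn = tgt_e - gun_e, tgt_n - gun_n
    rng = math.hypot(de, dn)
    brg = math.degrees(math.atan2(de, dn)) % 360  # atan2(east, north) -> bearing
    return rng, brg

# Hypothetical gun at (30000, 50000), target at (33000, 54000):
rng, brg = range_and_bearing(30000, 50000, 33000, 54000)
print(f"range {rng:.0f} m, bearing {brg:.1f} deg")
```

This is only the geometric core; the point of the passage is that every quantity feeding this calculation, gun position, target position and orientation, had to come from survey before any correction for conditions could even begin.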
In 1914, the methods of correcting firing data for the actual conditions were often convoluted, and the availability of data about actual conditions was rudimentary or non-existent; the assumption was that fire would always be ranged (adjusted). British heavy artillery worked energetically to progressively solve all these problems from late 1914 onwards, and by early 1918 had effective processes in place for both field and heavy artillery. These processes enabled 'map-shooting', later called 'predicted fire'; it meant that effective fire could be delivered against an accurately located target without ranging. Nevertheless, the mean point of impact was still some tens of yards from the target-centre aiming point. It was not precision fire, but it was good enough for concentrations and barrages. These processes remain in use into the 21st century, with refinements to calculations enabled by computers and improved data capture about non-standard conditions.
The British major-general Henry Hugh Tudor pioneered armour and artillery cooperation at the breakthrough Battle of Cambrai. The improvements in providing and using data for non-standard conditions (propellant temperature, muzzle velocity, wind, air temperature, and barometric pressure) were developed by the major combatants throughout the war and enabled effective predicted fire. The effectiveness of this was demonstrated by the British in 1917 (at Cambrai) and by Germany the following year (Operation Michael).
Major General J.B.A. Bailey, British Army (retired) wrote:
An estimated 75,000 French soldiers were casualties of friendly artillery fire in the four years of World War I.
Modern artillery is most obviously distinguished by its long range, its firing of explosive shells or rockets, and its mobile carriage for firing and transport. However, its most important characteristic is the use of indirect fire, whereby the firing equipment is aimed without seeing the target through its sights. Indirect fire emerged at the beginning of the 20th century and was greatly enhanced by the development of predicted-fire methods in World War I. However, indirect fire was area fire; it was, and is, not suitable for destroying point targets, and its primary purpose is area suppression. Nevertheless, by the late 1970s precision-guided munitions started to appear, notably the US 155 mm Copperhead and its Soviet 152 mm Krasnopol equivalent, which had success in Indian service. These relied on laser designation to 'illuminate' the target onto which the shell homed. However, in the early 21st century, the Global Positioning System (GPS) enabled relatively cheap and accurate guidance for shells and missiles, notably the US 155 mm Excalibur and the 227 mm GMLRS rocket. The introduction of these led to a new issue: the need for very accurate three-dimensional target coordinates, the mensuration process.
Weapons covered by the term 'modern artillery' include "cannon" artillery (such as howitzer, mortar, and field gun) and rocket artillery. Certain smaller-caliber mortars are more properly designated small arms rather than artillery, albeit indirect-fire small arms. This term also came to include coastal artillery which traditionally defended coastal areas against seaborne attack and controlled the passage of ships. With the advent of powered flight at the start of the 20th century, artillery also included ground-based anti-aircraft batteries.
The term "artillery" has traditionally not been used for projectiles with internal guidance systems, the term "missilery" being preferred, though some modern artillery units employ surface-to-surface missiles. Advances in terminal guidance systems for small munitions have allowed large-caliber guided projectiles to be developed, blurring this distinction.
One of the most important roles of logistics is the supply of munitions as a primary type of artillery consumable, their storage (ammunition dump, arsenal, magazine) and the provision of fuzes, detonators and warheads at the point where artillery troops will assemble the charge, projectile, bomb or shell.
A round of artillery ammunition comprises four components: the fuze, the projectile, the propellant and the primer.
Fuzes are the devices that initiate an artillery projectile, either to detonate its high explosive (HE) filling or to eject its cargo (illuminating flares or smoke canisters, for example). The official military spelling is "fuze". Broadly there are four main types: impact, time (including airburst), proximity and electronic multi-function.
Most artillery fuzes are nose fuzes. However, base fuzes have been used with armour piercing shells and for squash head (HESH or HEP) anti-tank shells. At least one nuclear shell and its non-nuclear spotting version also used a multi-deck mechanical time fuze fitted into its base.
Impact fuzes were, and in some armies remain, the standard fuze for HE projectiles. Their default action is normally 'superquick', some have had a 'graze' action which allows them to penetrate light cover and others have 'delay'. Delay fuzes allow the shell to penetrate the ground before exploding. Armor- or concrete-piercing fuzes are specially hardened. During World War I and later, ricochet fire with delay or graze fuzed HE shells, fired with a flat angle of descent, was used to achieve airburst.
HE shells can be fitted with other fuzes. Airburst fuzes usually have a combined airburst and impact function. However, until the introduction of proximity fuzes, the airburst function was mostly used with cargo munitions—for example, shrapnel, illumination, and smoke. The larger calibers of anti-aircraft artillery almost always use airburst. Airburst fuzes have to have the fuze length (running time) set on them. This is done just before firing, using either a wrench or a fuze setter pre-set to the required fuze length.
Early airburst fuzes used igniferous timers which lasted into the second half of the 20th century. Mechanical time fuzes appeared in the early part of the century. These required a means of powering them. The Thiel mechanism used a spring and escapement (i.e. 'clockwork'), Junghans used centrifugal force and gears, and Dixi used centrifugal force and balls. From about 1980, electronic time fuzes started replacing mechanical ones for use with cargo munitions.
Proximity fuzes have been of two types: photo-electric or radar. The former was not very successful and seems only to have been used with British anti-aircraft artillery 'unrotated projectiles' (rockets) in World War II. Radar proximity fuzes were a big improvement over the mechanical (time) fuzes which they replaced. Mechanical time fuzes required an accurate calculation of their running time, which was affected by non-standard conditions. With HE (requiring a burst a short distance above the ground), if this was even slightly wrong the rounds would either hit the ground or burst too high. Accurate running time was less important with cargo munitions that burst much higher.
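The sensitivity of time fuzes to running-time error can be put in rough numbers. A minimal sketch (all figures are assumed for illustration, not taken from any firing table): the height-of-burst error is simply the shell's vertical descent speed near the target multiplied by the timing error.

```python
def burst_height_error(descent_speed_ms: float, timing_error_s: float) -> float:
    """Height-of-burst error produced by a fuze running-time error.

    A shell whose trajectory descends at `descent_speed_ms` (the vertical
    component of its velocity near the target) bursts early or late by
    `timing_error_s` seconds, shifting the burst point vertically by
    speed * time.
    """
    return descent_speed_ms * timing_error_s

# Assumed illustrative figures: a 150 m/s vertical descent and a 0.2 s
# running-time error shift the burst point by 30 m -- easily the
# difference between an effective airburst and a ground burst or a
# uselessly high one.
error_m = burst_height_error(150.0, 0.2)
```

This is why proximity fuzes, which sense the ground directly rather than counting down a precomputed time, were such an improvement for low airbursts.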
The first radar proximity fuzes (codenamed 'VT') were invented by the British and developed by the US, and were initially used against aircraft in World War II. Their ground use was delayed for fear of the enemy recovering 'blinds' (artillery shells which failed to detonate) and copying the fuze. The first proximity fuzes were designed to detonate a set distance above the ground. These airbursts are much more lethal against personnel than ground bursts because they deliver a greater proportion of useful fragments and deliver them into terrain where a prone soldier would be protected from ground bursts.
However, proximity fuzes can suffer premature detonation because of the moisture in heavy rain clouds. This led to the 'controlled variable time' (CVT) fuze after World War II. These fuzes have a mechanical timer that switches on the radar about five seconds before expected impact; they also detonate on impact.
The proximity fuze emerged on the battlefields of Europe in late December 1944. It became known as the U.S. Artillery's "Christmas present", and was much appreciated when it arrived during the Battle of the Bulge. Proximity fuzes were also used to great effect in anti-aircraft projectiles in the Pacific against kamikaze attacks, as well as in Britain against V-1 flying bombs.
Electronic multi-function fuzes started to appear around 1980. Using solid-state electronics they were relatively cheap and reliable, and became the standard fitted fuze in operational ammunition stocks in some western armies. The early versions were often limited to proximity airburst, albeit with height of burst options, and impact. Some offered a go/no-go functional test through the fuze setter.
Later versions introduced induction fuze setting and testing instead of physically placing a fuze setter on the fuze. The latest, such as the Junghans DM84U, provide options including superquick, delay, a choice of proximity heights of burst, time, and a choice of foliage penetration depths.
A new type of artillery fuze is appearing. In addition to other functions, these offer some course-correction capability: not full precision, but sufficient to significantly reduce the dispersion of shells on the ground.
The projectile is the munition or "bullet" fired downrange. This may or may not be an explosive device.
Traditionally, projectiles have been classified as "shot" or "shell", the former being solid and the latter having some form of "payload".
Shells can also be divided into three configurations: bursting, base ejection or nose ejection. The latter is sometimes called the shrapnel configuration. The most modern is base ejection, which was introduced in World War I. Both base and nose ejection are almost always used with airburst fuzes. Bursting shells use various types of fuze depending on the nature of the payload and the tactical need at the time.
Payloads have included shrapnel, illumination, smoke and modern exotics such as electronic payloads and sensor-fuzed munitions.
Most forms of artillery require a propellant to propel the projectile at the target. Propellant is always a low explosive; this means it deflagrates instead of detonating, as high explosives do. The shell is accelerated to a high velocity in a very short time by the rapid generation of gas from the burning propellant. This high pressure is achieved by burning the propellant in a contained area, either the chamber of a gun barrel or the combustion chamber of a rocket motor.
Until the late 19th century, the only available propellant was black powder. Black powder had many disadvantages as a propellant: it had relatively low power, requiring large amounts to fire projectiles, and created thick clouds of white smoke that obscured targets, betrayed the positions of the guns, and made aiming impossible. In 1846, nitrocellulose (also known as guncotton) was discovered, and the high explosive nitroglycerin was discovered at nearly the same time. Nitrocellulose was significantly more powerful than black powder, and was smokeless. Early guncotton was unstable, however, and burned very fast and hot, leading to greatly increased barrel wear. Widespread introduction of smokeless powder would wait until the advent of the double-base powders, which combine nitrocellulose and nitroglycerin to produce a powerful, smokeless, stable propellant.
Many other formulations were developed in the following decades, generally seeking the optimum characteristics of a good artillery propellant: low temperature, high energy, non-corrosive, highly stable, cheap, and easy to manufacture in large quantities. Broadly, modern gun propellants are divided into three classes: single-base propellants, which are mainly or entirely nitrocellulose based; double-base propellants, composed of a combination of nitrocellulose and nitroglycerin; and triple-base propellants, composed of a combination of nitrocellulose, nitroglycerin and nitroguanidine.
Artillery shells fired from a barrel can be assisted to greater range in three ways: rocket assistance, base bleed and ramjet assistance.
Propelling charges for tube artillery can be provided in one of two ways: either as cartridge bags or in metal cartridge cases. Generally, anti-aircraft artillery and smaller-caliber (up to 3" or 76.2 mm) guns use metal cartridge cases that combine projectile and propellant in a single round, similar to a modern rifle cartridge. This simplifies loading and is necessary for very high rates of fire. Bagged propellant allows the amount of powder to be raised or lowered depending on the range to the target, and makes the handling of larger shells easier. Each requires a totally different type of breech from the other. A metal case holds an integral primer to initiate the propellant and provides the gas seal that prevents the gases leaking out of the breech; this is called obturation. With bagged charges, the breech itself provides obturation and holds the primer. In either case, the primer is usually percussion, but electrical primers are also used, and laser ignition is emerging. Modern 155 mm guns have a primer magazine fitted to their breech.
Artillery ammunition has four classifications according to use: service, practice, dummy and blank.
Because field artillery mostly uses indirect fire the guns have to be part of a system that enables them to attack targets invisible to them in accordance with the combined arms plan.
The main functions in the field artillery system are:
All these calculations to produce a quadrant elevation (or range) and azimuth were done manually using instruments, tabulated data, data of the moment, and approximations until battlefield computers started appearing in the 1960s and 1970s. While some early calculators copied the manual method (typically substituting polynomials for tabulated data), computers use a different approach. They simulate a shell's trajectory by 'flying' it in short steps and applying data about the conditions affecting the trajectory at each step. This simulation is repeated until it produces a quadrant elevation and azimuth that lands the shell within the required 'closing' distance of the target coordinates.
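The 'fly it in short steps' approach can be sketched in a few lines. This is a deliberately minimal point-mass model with a simple quadratic drag term; the muzzle velocity, drag coefficient, step size and closing distance are all assumed for illustration and bear no relation to any real firing table or to NATO's ballistic kernel.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def simulate_range(qe_rad, muzzle_velocity, drag_coeff=2e-5, dt=0.01):
    """'Fly' a shell in short time steps (point-mass model with a
    speed-squared drag term); returns the ground range in metres."""
    x, y = 0.0, 0.0
    vx = muzzle_velocity * math.cos(qe_rad)
    vy = muzzle_velocity * math.sin(qe_rad)
    while True:
        x_prev, y_prev = x, y
        v = math.hypot(vx, vy)
        # drag decelerates the shell along its velocity vector
        vx -= drag_coeff * v * vx * dt
        vy -= (G + drag_coeff * v * vy) * dt
        x += vx * dt
        y += vy * dt
        if y <= 0.0 and vy < 0.0:
            # interpolate the exact ground-crossing point
            frac = y_prev / (y_prev - y) if y_prev > 0.0 else 0.0
            return x_prev + (x - x_prev) * frac

def solve_qe(target_range, muzzle_velocity, closing_distance=1.0):
    """Repeat the simulation, bisecting on quadrant elevation (low-angle
    branch), until the shell lands within the 'closing' distance."""
    lo, hi = 0.0, math.radians(45.0)
    for _ in range(60):
        mid = (lo + hi) / 2.0
        r = simulate_range(mid, muzzle_velocity)
        if abs(r - target_range) <= closing_distance:
            return mid
        if r < target_range:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

A real fire-control computer additionally applies the data of the moment (meteorology, propellant temperature, muzzle-velocity variation) at each step, but the iterate-until-closed structure is the same.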
NATO has a standard ballistic model for computer calculations and has expanded the scope of this into the NATO Armaments Ballistic Kernel (NABK) within the SG2 Shareable (Fire Control) Software Suite (S4).
Supply of artillery ammunition has always been a major component of military logistics. Up until World War I some armies made artillery responsible for all forward ammunition supply because the load of small arms ammunition was trivial compared to artillery. Different armies use different approaches to ammunition supply, which can vary with the nature of operations. Differences include where the logistic service transfers artillery ammunition to artillery, the amount of ammunition carried in units and extent to which stocks are held at unit or battery level. A key difference is whether supply is 'push' or 'pull'. In the former the 'pipeline' keeps pushing ammunition into formations or units at a defined rate. In the latter units fire as tactically necessary and replenish to maintain or reach their authorised holding (which can vary), so the logistic system has to be able to cope with surge and slack.
Artillery types can be categorised in several ways, for example by type or size of weapon or ordnance, by role or by organizational arrangements.
The types of cannon artillery are generally distinguished by the velocity at which they fire projectiles.
Types of artillery:
Modern field artillery can also be split into two further subcategories: towed and self-propelled. As the name suggests, towed artillery has a prime mover, usually an artillery tractor or truck, to move the piece, crew, and ammunition around. Towed artillery is in some cases equipped with an auxiliary power unit (APU) for small displacements. Self-propelled artillery is permanently mounted on a carriage or vehicle with room for the crew and ammunition, and is thus capable of moving quickly from one firing position to another, both to support the fluid nature of modern combat and to avoid counter-battery fire. It includes mortar carrier vehicles, many of which allow the mortar to be removed from the vehicle and used dismounted, potentially in terrain the vehicle cannot navigate, or to avoid detection.
At the beginning of the modern artillery period, the late 19th century, many armies had three main types of artillery; in some cases they were sub-branches within the artillery branch, in others separate branches or corps. There were also other types, excluding the armament fitted to warships:
After World War I many nations merged these different artillery branches, in some cases keeping some as sub-branches. Naval artillery disappeared apart from that belonging to marines. However, two new branches of artillery emerged during that war and its aftermath: anti-aircraft and anti-tank artillery. Both used specialised guns (and a few rockets) and direct rather than indirect fire; in the 1950s and 1960s both started to make extensive use of missiles.
However, the general switch by artillery to indirect fire before and during World War I led to a reaction in some armies. The result was accompanying or infantry guns: usually small, short-range guns that could be easily man-handled and were used mostly for direct fire, though some could use indirect fire. Some were operated by the artillery branch but under command of the supported unit. In World War II they were joined by self-propelled assault guns, although other armies adopted infantry or close-support tanks in armoured-branch units for the same purpose; subsequently tanks generally took on the accompanying role.
The three main types of artillery "gun" are guns, howitzers, and mortars. During the 20th century, guns and howitzers have steadily merged in artillery use, making a distinction between the terms somewhat meaningless. By the end of the 20th century, true guns with calibers larger than about 60 mm have become very rare in artillery use, the main users being tanks, ships, and a few residual anti-aircraft and coastal guns. The term "cannon" is a United States generic term that includes guns, howitzers, and mortars; it is not used in other English speaking armies.
The traditional definitions differentiated between guns and howitzers in terms of maximum elevation (well less than 45° as opposed to close to or greater than 45°), number of charges (one or more than one charge), and having higher or lower muzzle velocity, sometimes indicated by barrel length. These three criteria give eight possible combinations, of which guns and howitzers are but two. However, modern "howitzers" have higher velocities and longer barrels than the equivalent "guns" of the first half of the 20th century.
True guns are characterized by long range, having a maximum elevation significantly less than 45°, a high muzzle velocity and hence a relatively long barrel, smooth bore (no rifling) and a single charge. The latter often led to fixed ammunition where the projectile is locked to the cartridge case. There is no generally accepted minimum muzzle velocity or barrel length associated with a gun.
Howitzers can fire at maximum elevations at least close to 45°; elevations up to about 70° are normal for modern howitzers. Howitzers also have a choice of charges, meaning that the same elevation angle of fire will achieve a different range depending on the charge used. They have rifled bores, lower muzzle velocities and shorter barrels than equivalent guns. All this means they can deliver fire with a steep angle of descent. Because of their multi-charge capability, their ammunition is mostly separate loading (the projectile and propellant are loaded separately).
That leaves six combinations of the three criteria, some of which have been termed gun howitzers. The term was first used in the 1930s, when howitzers with relatively high maximum muzzle velocities were introduced; it never became widely accepted, most armies electing instead to widen the definition of 'gun' or 'howitzer'. By the 1960s, most equipments had maximum elevations up to about 70°, were multi-charge, and had quite high maximum muzzle velocities and relatively long barrels.
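The eight-combination arithmetic above can be made concrete with a toy enumeration of the three binary criteria. The labels and the mapping are purely illustrative: only the two 'pure' combinations carry the traditional names, and the remaining six are the intermediates that the term 'gun howitzer' tried, unsuccessfully, to cover.

```python
from itertools import product

# The three traditional criteria, each with two values.
ELEVATION = ("well below 45 deg", "close to or above 45 deg")
CHARGES = ("single charge", "multiple charges")
VELOCITY = ("high muzzle velocity", "low muzzle velocity")

# Three binary criteria give 2 * 2 * 2 = 8 possible combinations.
combinations = list(product(ELEVATION, CHARGES, VELOCITY))

def classify(elevation, charges, velocity):
    """Map one combination onto the traditional labels; six of the
    eight combinations fall between 'gun' and 'howitzer'."""
    if (elevation, charges, velocity) == (ELEVATION[0], CHARGES[0], VELOCITY[0]):
        return "gun"
    if (elevation, charges, velocity) == (ELEVATION[1], CHARGES[1], VELOCITY[1]):
        return "howitzer"
    return "intermediate (e.g. gun howitzer)"
```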
Mortars are simpler. The modern mortar originated in World War I and there were several patterns. After that war, most mortars settled on the Stokes pattern, characterized by a short barrel, smooth bore, low muzzle velocity, elevation angle of firing generally greater than 45°, and a very simple and light mounting using a "baseplate" on the ground. The projectile with its integral propelling charge was dropped down the barrel from the muzzle to hit a fixed firing pin. Since that time, a few mortars have become rifled and adopted breech loading.
There are other recognized typifying characteristics for artillery. One such characteristic is the type of obturation used to seal the chamber and prevent gases escaping through the breech. This may use a metal cartridge case that also holds the propelling charge, a configuration called "QF" or "quickfiring" by some nations. The alternative does not use a metal cartridge case, the propellant being merely bagged or in combustible cases with the breech itself providing all the sealing. This is called "BL" or "breech loading" by some nations.
A second characteristic is the form of propulsion. Modern equipment can either be towed or self-propelled (SP). A towed gun fires from the ground and any inherent protection is limited to a gun shield. Towing by horse teams lasted throughout World War II in some armies, but others were fully mechanized with wheeled or tracked gun towing vehicles by the outbreak of that war. The size of a towing vehicle depends on the weight of the equipment and the amount of ammunition it has to carry.
A variation of towed is portee, where the vehicle carries the gun which is dismounted for firing. Mortars are often carried this way. A mortar is sometimes carried in an armored vehicle and can either fire from it or be dismounted to fire from the ground. Since the early 1960s it has been possible to carry lighter towed guns and most mortars by helicopter. Even before that, they were parachuted or landed by glider from the time of the first airborne trials in the USSR in the 1930s.
In an SP equipment, the gun is an integral part of the vehicle that carries it. SPs first appeared during World War I, but did not really develop until World War II. They are mostly tracked vehicles, but wheeled SPs started to appear in the 1970s. Some SPs have no armor and carry few or no other weapons and ammunition. Armoured SPs usually carry a useful ammunition load. Early armoured SPs were mostly a "casemate" configuration, in essence an open-topped armored box offering only limited traverse. However, most modern armored SPs have a fully enclosed armored turret, usually giving full traverse for the gun. Many SPs cannot fire without deploying stabilizers or spades, sometimes hydraulic. A few SPs are designed so that the recoil forces of the gun are transferred directly onto the ground through a baseplate. A few towed guns have been given limited self-propulsion by means of an auxiliary engine.
Two other forms of tactical propulsion were used in the first half of the 20th century: railways, and transporting the equipment by road as two or three separate loads with disassembly and re-assembly at the beginning and end of the journey. Railway artillery took two forms: railway mountings for heavy and super-heavy guns and howitzers, and armored trains as "fighting vehicles" armed with light artillery in a direct fire role. Disassembled transport was also used with heavy and super-heavy weapons and lasted into the 1950s.
A third form of artillery typing is to classify it as "light", "medium", "heavy" and various other terms. It appears to have been introduced in World War I, which spawned a very wide array of artillery in all sorts of sizes so a simple categorical system was needed. Some armies defined these categories by bands of calibers. Different bands were used for different types of weapons—field guns, mortars, anti-aircraft guns and coastal guns.
Artillery is used in a variety of roles depending on its type and caliber. The general role of artillery is to provide "fire support"—"the application of fire, coordinated with the manoeuvre of forces to destroy, "neutralize" or "suppress" the enemy". This NATO definition makes artillery a supporting arm, although not all NATO armies agree with this logic. The quoted terms are NATO's.
Unlike rockets, guns (or howitzers as some armies still call them) and mortars are suitable for delivering "close supporting fire". However, they are all suitable for providing "deep supporting fire" although the limited range of many mortars tends to exclude them from the role. Their control arrangements and limited range also mean that mortars are most suited to "direct supporting fire". Guns are used either for this or "general supporting fire" while rockets are mostly used for the latter. However, lighter rockets may be used for direct fire support. These rules of thumb apply to NATO armies.
Modern mortars, because of their lighter weight and simpler, more transportable design, are usually an integral part of infantry and, in some armies, armor units. This means they generally do not have to "concentrate" their fire so their shorter range is not a disadvantage. Some armies also consider infantry operated mortars to be more responsive than artillery, but this is a function of the control arrangements and not the case in all armies. However, mortars have always been used by artillery units and remain with them in many armies, including a few in NATO.
In NATO armies artillery is usually assigned a tactical mission that establishes its relationship and responsibilities to the formation or units it is assigned to. It seems that not all NATO nations use the terms and outside NATO others are probably used. The standard terms are: "direct support", "general support", "general support reinforcing" and "reinforcing". These tactical missions are in the context of the command authority: "operational command", "operational control", "tactical command" or "tactical control".
In NATO direct support generally means that the directly supporting artillery unit provides observers and liaison to the manoeuvre troops being supported, typically an artillery battalion or equivalent is assigned to a brigade and its batteries to the brigade's battalions. However, some armies achieve this by placing the assigned artillery units under command of the directly supported formation. Nevertheless, the batteries' fire can be "concentrated" onto a single target, as can the fire of units in range and with the other tactical missions.
There are several dimensions to this subject. The first is the notion that fire may be against an "opportunity" target or may be "prearranged"; if the latter, it may be either "on-call" or "scheduled". Prearranged targets may be part of a "fire plan". Fire may be either "observed" or "unobserved"; if the former, it may be "adjusted"; if the latter, it has to be "predicted". Observation of adjusted fire may be carried out directly by a forward observer or indirectly via some other "target acquisition" system.
NATO also recognises several different types of fire support for tactical purposes:
These purposes have existed for most of the 20th century, although their definitions have evolved and will continue to do so; the absence of "suppression" from the "counterbattery" definition is an omission. Broadly they can be defined as either:
Two other NATO terms also need definition:
The tactical purposes also include various "mission verbs", a rapidly expanding subject with the modern concept of "effects based operations".
"Targeting" is the process of selecting targets and matching the appropriate response to them, taking account of operational requirements and capabilities. It requires consideration of the type of fire support required and the extent of coordination with the supported arm. It involves decisions about:
The "targeting" process is the key aspect of tactical fire control. Depending on the circumstances and national procedures it may all be undertaken in one place or may be distributed. In armies practicing control from the front, most of the process may be undertaken by a forward observer or other target acquirer. This is particularly the case for a smaller target requiring only a few fire units. The extent to which the process is formal or informal and makes use of computer-based systems, documented norms or experience and judgement also varies widely between armies and other circumstances.
Surprise may be essential or irrelevant. It depends on what effects are required and whether or not the target is likely to move or quickly improve its protective posture. During World War II, UK researchers concluded that for impact-fuzed munitions the relative risks were as follows:
Airburst munitions significantly increase the relative risk for lying men, etc. Historically, most casualties occur in the first 10–15 seconds of fire, i.e. the time needed to react and improve protective posture; however, this is less relevant if airburst is used.
There are several ways of making best use of this brief window of maximum vulnerability:
Modern counter-battery fire developed in World War I, with the objective of defeating the enemy's artillery. Typically such fire was used to suppress enemy batteries when they were or were about to interfere with the activities of friendly forces (such as to prevent enemy defensive artillery fire against an impending attack) or to systematically destroy enemy guns. In World War I the latter required air observation. The first indirect counter-battery fire was in May 1900 by an observer in a balloon.
Enemy artillery can be detected in two ways: by direct observation of the guns from the air or by ground observers (including specialist reconnaissance), or from their firing signatures. The latter includes radars tracking the shells in flight to determine their point of origin, sound ranging detecting guns firing and resecting their position from pairs of microphones, and cross-observation of gun flashes by human observers or opto-electronic devices, although the widespread adoption of 'flashless' propellant limited the effectiveness of the latter.
Once hostile batteries have been detected they may be engaged immediately by friendly artillery or later at an optimum time, depending on the tactical situation and the counter-battery policy. Air strike is another option. In some situations the task is to locate all active enemy batteries for attack using a counter-battery fire at the appropriate moment in accordance with a plan developed by artillery intelligence staff. In other situations counter-battery fire may occur whenever a battery is located with sufficient accuracy.
Modern counter-battery target acquisition uses unmanned aircraft, counter-battery radar, ground reconnaissance and sound-ranging. Counter-battery fire may be adjusted by some of the systems, for example the operator of an unmanned aircraft can 'follow' a battery if it moves. Defensive measures by batteries include frequently changing position or constructing defensive earthworks, the tunnels used by North Korea being an extreme example. Counter-measures include air defence against aircraft and attacking counter-battery radars physically and electronically.
'Field artillery team' is a US term, and the following description and terminology applies to the US; other armies are broadly similar but differ in significant details. Modern field artillery (post–World War I) has three distinct parts: the forward observer (FO), the fire direction center (FDC) and the guns themselves. The forward observer observes the target using tools such as binoculars, laser rangefinders and designators, and calls back fire missions on his radio, or relays the data through a portable computer via an encrypted digital radio connection protected from jamming by computerized frequency hopping. A lesser-known part of the team is the field artillery survey (FAS) team, which sets up the "gun line" for the cannons. Today most artillery battalions use an "aiming circle", which allows for faster setup and more mobility. FAS teams are still used for checks and balances, and if a gun battery has issues with the aiming circle, a FAS team will set up the gun line for them.
The FO can communicate directly with the battery FDC, of which there is one for each battery of 4–8 guns. Otherwise the several FOs communicate with a higher FDC, such as at battalion level, and the higher FDC prioritizes the targets and allocates fire to individual batteries as needed to engage the targets spotted by the FOs or to perform preplanned fires.
The battery FDC computes firing data—ammunition to be used, powder charge, fuze settings, the direction to the target, the quadrant elevation to be fired to reach the target, which gun will fire any rounds needed for adjusting on the target, and the number of rounds to be fired on the target by each gun once the target has been accurately located—and sends it to the guns. Traditionally this data is relayed via radio or wire communications as a warning order to the guns, followed by orders specifying the type of ammunition and fuze setting, direction, and the elevation needed to reach the target, and the method of adjustment or orders for fire for effect (FFE). However, in more advanced artillery units, this data is relayed through a digital radio link.
Other parts of the field artillery team include meteorological analysis to determine the temperature, humidity and pressure of the air and the wind direction and speed at different altitudes. Radar is used both for determining the location of enemy artillery and mortar batteries and for determining the precise actual strike points of rounds fired by a battery; comparing those locations with the expected ones allows a registration to be computed, so that future rounds can be fired with much greater accuracy.
A technique called time on target was developed by the British Army in North Africa at the end of 1941 and early 1942, particularly for counter-battery fire and other concentrations; it proved very popular. It relied on BBC time signals, which let officers synchronize their watches to the second, avoiding both the use of military radio networks (with the attendant possibility of losing surprise) and the need for field telephone networks in the desert. With this technique, the time of flight from each fire unit (battery or troop) to the target is taken from the range or firing tables or the computer, and each engaging fire unit subtracts its time of flight from the TOT to determine its time to fire. An executive order to fire is given to all guns in the fire unit at the correct moment, so that the opening rounds from every fire unit reach the target area almost simultaneously. This is especially effective when combined with techniques that allow fires for effect to be made without preliminary adjusting fires.
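The arithmetic of the technique is simple: each fire unit's firing time is the common TOT minus its own time of flight. A minimal sketch, with the unit names and times of flight assumed for illustration:

```python
def firing_times(time_on_target_s, times_of_flight_s):
    """Each fire unit subtracts its own time of flight (from firing
    tables or a computer) from the common time on target (TOT) to find
    the moment at which it must fire."""
    return {unit: time_on_target_s - tof
            for unit, tof in times_of_flight_s.items()}

# Assumed illustrative times of flight for three fire units:
tof = {"A battery": 24.0, "B battery": 31.5, "C troop": 19.0}
fire_at = firing_times(120.0, tof)  # TOT at H + 120 seconds
```

Every unit's rounds then arrive at H + 120 seconds: the unit with the longest time of flight simply fires first.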
A modern version of the earlier "time on target" is a concept in which fire from different weapons is timed to arrive on target at the same time. It is possible for artillery to fire several shells per gun at a target and have all of them arrive simultaneously, which is called MRSI (Multiple Rounds Simultaneous Impact). This is because there is more than one trajectory for the rounds to fly to any given target: typically one is below 45 degrees from horizontal and the other is above it, and by using different sizes of propelling charge with each shell, it is possible to create multiple trajectories. Because the higher trajectories cause the shells to arc higher into the air, they take longer to reach the target. If the shells are fired on these trajectories for the first volleys (starting with the shell with the most propellant and working down) and then, after the correct pause, more volleys are fired on the lower trajectories, the shells will all arrive at the same time. This is useful because many more shells can land on the target with no warning. With traditional volleys along the same trajectory, anybody in the target area may have time (however long it takes to reload and re-fire the guns) to take cover between volleys. However, guns capable of burst fire can deliver several rounds in 10 seconds if they use the same firing data for each, and if guns in more than one location are firing on one target they can use time on target procedures so that all their shells arrive at the same time and target.
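The existence of the two trajectories is easiest to see in the drag-free (vacuum) model, a simplification of real ballistics: the range equation R = v² sin(2·QE)/g has a low-angle and a high-angle solution with different times of flight, so firing the slow high-angle round first and the low-angle round after exactly the difference makes both land together. A sketch under those simplifying assumptions:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def trajectories(target_range_m, muzzle_velocity):
    """High- and low-angle solutions for the same range in a vacuum:
    R = v^2 * sin(2*qe) / g has two roots, qe and (90 deg - qe)."""
    s = G * target_range_m / muzzle_velocity ** 2
    if s > 1.0:
        raise ValueError("target out of range for this muzzle velocity")
    low = 0.5 * math.asin(s)
    high = math.pi / 2 - low
    return low, high

def time_of_flight(qe_rad, muzzle_velocity):
    """Vacuum time of flight for a given quadrant elevation."""
    return 2.0 * muzzle_velocity * math.sin(qe_rad) / G

def mrsi_delay(target_range_m, muzzle_velocity):
    """Fire the high-angle round first; the low-angle round follows
    after this delay so that both impact simultaneously."""
    low, high = trajectories(target_range_m, muzzle_velocity)
    return (time_of_flight(high, muzzle_velocity)
            - time_of_flight(low, muzzle_velocity))
```

A real MRSI computation also varies the propelling charge (and hence muzzle velocity) between rounds and accounts for drag, but the principle of pairing a slower, higher trajectory with a faster, lower one is the same.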
Engaging targets with MRSI requires two things: first, guns with the requisite rate of fire and sufficiently different sizes of propelling charge; second, a fire-control computer designed to compute such missions, with the data-handling capability to produce all the firing data, send it to each gun and present it to the gun commander in the correct order. The number of rounds that can be delivered in an MRSI mission depends primarily on the range to the target and the rate of fire; for the maximum number of rounds, the range is limited to that of the lowest propelling charge that will reach the target.
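The existence of paired trajectories can be illustrated with a drag-free ballistic model. The sketch below is a deliberate simplification (it ignores air resistance, so the numbers are illustrative, not firing-table data): from the vacuum range equation R = v² sin(2θ)/g there are a low and a high elevation solution for the same charge, and the difference in their flight times gives the pause between the two rounds.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def trajectories(v: float, target_range: float):
    """Return the (low, high) elevation angles in radians that reach
    `target_range` with muzzle speed `v`, ignoring air resistance.
    R = v^2 * sin(2*theta) / g has two solutions for theta."""
    s = G * target_range / v**2
    if s > 1.0:
        raise ValueError("target out of range for this charge")
    low = 0.5 * math.asin(s)
    return low, math.pi / 2 - low

def time_of_flight(v: float, theta: float) -> float:
    """Flight time of a vacuum trajectory fired at elevation theta."""
    return 2 * v * math.sin(theta) / G

# One gun, one charge (assumed muzzle speed 450 m/s), target at 10 km:
# fire the high-angle round first, then the low-angle round after the
# difference in flight times, and both land together.
v, rng = 450.0, 10_000.0
low, high = trajectories(v, rng)
delay = time_of_flight(v, high) - time_of_flight(v, low)
print(f"fire the second round {delay:.1f} s after the first")
```

A real fire-control computer does the same bookkeeping with empirical ballistic data and several charges at once, which is exactly the data-handling burden described above.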
Examples of guns with a rate of fire suitable for MRSI include the UK's AS-90, South Africa's Denel G6-52 (which can land six rounds simultaneously on sufficiently distant targets), Germany's Panzerhaubitze 2000 (which can land five rounds simultaneously on sufficiently distant targets), Slovakia's 155 mm SpGH ZUZANA model 2000, and South Korea's K9 Thunder.
The Archer project (developed by BAE Systems Bofors in Sweden) is a 155 mm howitzer on a wheeled chassis which is claimed to be able to deliver up to six shells on target simultaneously from the same gun. The 120 mm twin-barrel AMOS mortar system, jointly developed by Hägglunds (Sweden) and Patria (Finland), is capable of 7 + 7 shell MRSI. The United States' Crusader program (now cancelled) was slated to have MRSI capability. It is unclear how many fire-control computers have the necessary capabilities.
Two-round MRSI firings were a popular artillery demonstration in the 1960s, where well trained detachments could show off their skills for spectators.
The destructiveness of artillery bombardments can be enhanced when some or all of the shells are set for airburst, meaning that they explode in the air above the target instead of upon impact. This can be accomplished either with time fuzes or proximity fuzes. Time fuzes use a precise timer to detonate the shell after a preset delay. This technique is tricky: slight variations in the functioning of the fuze can cause it to explode too high and be ineffective, or to strike the ground instead of exploding above it. Since December 1944 (the Battle of the Bulge), proximity-fuzed artillery shells have been available that take the guesswork out of this process. These employ a miniature, low-powered radar transmitter in the fuze to detect the ground and detonate the shell at a predetermined height above it: the return of the weak radar signal completes an electrical circuit in the fuze which explodes the shell. The proximity fuze itself was developed by the British to increase the effectiveness of anti-aircraft warfare.
This is a very effective tactic against infantry and light vehicles, because it scatters the fragmentation of the shell over a larger area and prevents it from being blocked by terrain or entrenchments that do not include some form of robust overhead cover. Combined with TOT or MRSI tactics that give no warning of the incoming rounds, these rounds are especially devastating because many enemy soldiers are likely to be caught in the open. This is even more so if the attack is launched against an assembly area or troops moving in the open rather than a unit in an entrenched tactical position.
Numerous war memorials around the world incorporate an artillery piece which had been used in the specific war or battle commemorated.
Alexanderplatz
Alexanderplatz () is a large public square and transport hub in the central Mitte district of Berlin. The square is named after the Russian Tsar Alexander I and is often referred to simply as Alex, which also denotes the larger neighbourhood stretching from "Mollstraße" in the northeast to "Spandauer Straße" and the Rotes Rathaus in the southwest.
With more than 360,000 visitors daily, Alexanderplatz is, according to one study, the most visited area of Berlin, beating Friedrichstrasse and City West. It is a popular starting point for tourists, with many attractions including the Fernsehturm (TV tower), the Nikolai Quarter and the Rotes Rathaus (Red city hall) situated nearby. Alexanderplatz is still one of Berlin's major commercial areas, housing various shopping malls, department stores and other large retail locations.
During the post-war reconstruction of the 1960s, Alexanderplatz was completely pedestrianized. Trams were reintroduced to the area in 1998.
Alexanderplatz station provides S-Bahn connections, access to the U2, U5 and U8 subway lines, regional train lines for DB Regio and ODEG services and, on weekends, the Harz-Berlin-Express (HBX). Several tram and bus lines also service the area.
The following main roads connect to Alexanderplatz:
Several arterial roads lead radially from Alexanderplatz to the outskirts of Berlin. These include (clockwise from north to southeast):
Karl-Marx-Allee (B 1 and B 5) - Strausberger Platz - Karl-Marx-Allee / Frankfurter Tor - Frankfurter Allee (B 1 and B 5 to Berlin-Hellersdorf junction at Berliner Ring)
From the 13th century, a hospital stood at the location of present-day Alexanderplatz. Named "Heiliger Georg" (St. George), the hospital gave its name to the nearby Georgentor (George Gate) of the Berlin city wall. Outside the city walls, this area was largely undeveloped until around 1400, when the first settlers began building thatched cottages. As a gallows was located close by, the area earned the nickname "Teufels Lustgarten" (the Devil's Pleasure Garden).
The George Gate became the most important of Berlin's city gates during the 16th century, being the main entrance point for goods arriving along the roads to the north and north-east of the city, for example from Oderberg, Prenzlau and Bernau, and the big Hanseatic cities on the Baltic Sea.
After the Thirty Years' War, the city wall was strengthened. From 1658 to 1683, a citywide fortress was constructed to plans by the Linz master builder, Johann Gregor Memhardt. The new fortress contained 13 bastions connected by ramparts and was preceded by a moat measuring up to 50 meters wide. Within the new fortress, many of the historic city wall gates were closed. For example, the southeastern Stralauer Gate was closed but the Georgian Gate remained open, making the Georgian Gate an even more important entrance to the city.
In 1681, the trade of cattle and pig fattening was banned within the city. Frederick William, the Great Elector, granted cheaper plots of land, waiving the basic interest rate, in the area in front of the Georgian Gate. Settlements grew rapidly and a weekly cattle market was established on the square in front of the Gate.
The area developed into a suburb - the Georgenvorstadt - which continued to flourish into the late 17th century. Unlike the southwestern suburbs (Friedrichstadt, Dorotheenstadt) which were strictly and geometrically planned, the suburbs in the northeast (Georgenvorstadt, Spandauervorstadt and the Stralauer Vorstadt) proliferated without plan. Despite a building ban imposed in 1691, more than 600 houses existed in the area by 1700.
At that time, the George Gate was a rectangular gatehouse with a tower. Next to the tower stood a remaining tower from the original medieval city walls. The upper floors of the gatehouse served as the city jail. A drawbridge spanned the moat and the gate was locked at nightfall by the garrison using heavy oak planks.
A highway ran through the cattle market to the northeast towards Bernau. To the right stood the George chapel, an orphanage and a hospital that was donated by the Elector Sophie Dorothea in 1672. Next to the chapel stood a dilapidated medieval plague house which was demolished in 1716. Behind it was a rifleman's field and an inn, later named the "Stelzenkrug".
By the end of the 17th century, 600 to 700 families lived in this area. They included butchers, cattle herders, shepherds and dairy farmers. The George chapel was upgraded to the George church and received its own preacher.
After his coronation in Königsberg on May 6, 1701, the Prussian King Frederick I entered Berlin through the George Gate. This led to the gate being renamed the King's Gate, and the surrounding area became known in official documents as "Königs Thor Platz" (King's Gate Square). The Georgenvorstadt suburb was renamed Königsvorstadt (or "royal city" for short).
In 1734, the Berlin Customs Wall, which initially consisted of a ring of palisade fences, was reinforced and grew to encompass the old city and its suburbs, including Königsvorstadt. This resulted in the King's Gate losing importance as an entry-point for goods into the city. The gate was finally demolished in 1746.
By the end of the 18th century, the basic structure of the royal suburbs of the Königsvorstadt had been developed. It consisted of irregular-shaped blocks of buildings running along the historic highways which once carried goods in various directions out of the gate. At this time, the area contained large factories (silk and wool), such as the "Kurprinz" (one of Berlin's first cloth factories, located in a former barn) and a workhouse established in 1758 for beggars and homeless people, where the inmates worked a man-powered treadmill to turn a mill.
Soon, military facilities came to dominate the area, such as the 1799-1800 military parade grounds designed by David Gilly. At this time, the residents of the platz were mostly craftsmen, petty bourgeois, retired soldiers and manufacturing workers. The southern part of the later Alexanderplatz was separated from traffic by trees and served as a parade ground, whereas the northern half remained a market. Beginning in the mid-18th century, the most important wool market in Germany was held in Alexanderplatz.
Between 1752 and 1755, the writer Gotthold Ephraim Lessing lived in a house on Alexanderplatz. In 1771, a new stone bridge (the "Königsbrücke") was built over the moat and in 1777 a colonnade-lined row of shops (Königskolonnaden) was constructed by architect Carl von Gontard. Between 1783 and 1784, seven three-storey buildings were erected around the square by Georg Christian Unger, including the famous "Gasthof zum Hirschen", where Karl Friedrich Schinkel lived as a permanent tenant and Heinrich von Kleist stayed in the days before his suicide.
On October 25, 1805, the Russian Tsar Alexander I was welcomed to the city on the parade grounds in front of the old King's Gate. To mark the occasion, on 2 November, King Frederick William III ordered the square to be renamed "Alexanderplatz".
In the southeast of the square, the cloth factory buildings were converted into the Königstädter Theater by Carl Theodor Ottmer at a cost of 120,000 Taler. The foundation stone was laid on August 31, 1823 and the opening ceremony occurred on August 4, 1824. Sales were poor, forcing the theatre to close on June 3, 1851. Thereafter, the building was used for wool storage, then as a tenement building, and finally as an inn called "Aschinger" until the building's demolition in 1932.
During these years, Alexanderplatz was populated by fish wives, water carriers, sand sellers, rag-and-bone men, knife sharpeners and day laborers.
Because of its importance as a transport hub, horse-drawn buses ran every 15 minutes between Alexanderplatz and Potsdamer Platz in 1847.
During the March Revolution of 1848, large-scale street fighting occurred on the streets of Alexanderplatz, where revolutionaries used barricades to block the route from Alexanderplatz into the city. The novelist and poet Theodor Fontane, who worked in a nearby pharmacy at the time, participated in the construction of the barricades and later described how he used materials from the Königstädter Theater to barricade Neue Königstraße.
The Königsstadt continued to grow throughout the 19th century, with three-storey developments already existing at the beginning of the century and fourth storeys being constructed from the middle of the century. By the end of the century, most of the buildings were already five storeys high. The large factories and military facilities gave way to housing developments (mainly rental housing for the factory workers who had just moved into the city) and trading houses.
At the beginning of the 1870s, the Berlin administration had the former moat filled in order to build the Berlin city railway, which was opened in 1882 along with Bahnhof Alexanderplatz (Alexanderplatz Railway Station).
In 1883–1884, the Grand Hotel, a neo-Renaissance building with 185 rooms and shops beneath was constructed. From 1886 to 1890, Hermann Blankenstein built the Police headquarters, a huge brick building whose tower on the northern corner dominated the building. In 1890, a district court at Alexanderplatz was also established.
In 1886, the local authorities built a central market hall west of the rail tracks, which replaced the weekly market on the Alexanderplatz in 1896. During the end of the 19th century, the emerging private traffic and the first horse bus lines dominated the northern part of the square, the southern part (the former parade ground) remained quiet, having green space elements added by garden director Hermann Mächtig in 1889. The northwest of the square contained a second, smaller green space where, in 1895, the 7.5-meter copper Berolina statue by sculptor Emil Hundrieser was erected.
At the beginning of the 20th century, Alexanderplatz experienced its heyday. In 1901, Ernst von Wolzogen founded the first German cabaret, the Überbrettl, in the former Sezessionsbühne (Secession stage) at Alexanderstraße 40, initially under the name Bunte Brettl. It was announced as "Kabarett as upscale entertainment with artistic ambitions. Emperor-loyal and market-oriented stands the uncritical amusement in the foreground."
The merchants Hermann Tietz, Georg Wertheim and Hahn opened large department stores on Alexanderplatz: Tietz (1904–1911), Wertheim (1910–1911) and Hahn (1911). Tietz marketed itself as a department store for the Berlin people, whereas Wertheim modelled itself as a department store for the world.
In October 1905, the first section of the Tietz department store opened to the public. It was designed by architects Wilhelm Albert Cremer and Richard Wolffenstein, who had already won second prize in the competition for the construction of the Reichstag building. The Tietz department store underwent further construction phases and, in 1911, had a commercial space of 7,300 square meters and the longest department store facade in the world at 250 meters in length.
For the construction of the Wertheim department store, by architects Heinrich Joseph Kayser and Karl von Großheim, the Königskolonnaden were removed in 1910 and now stand in the Heinrich von Kleist Park in Schöneberg.
In October 1908, the Haus des Lehrers (House of Teachers) was opened next to the Bunte Brettl at Alexanderstraße 41. It was designed by Hans Toebelmann and Henry Gross. The building belonged to the Berliner Lehrerverein (teachers' association), which rented out space on the ground floor to a pastry shop and restaurant in order to raise funds for the association. The building housed the teachers' library, which survived two world wars and is today integrated into the library for educational historical research. The rear of the property contained the association's administrative building, a hotel for members and an exhibition hall. Notable events that took place in the hall include the funeral services for Karl Liebknecht and Rosa Luxemburg on February 2, 1919, and, on December 4, 1920, the Vereinigungsparteitag (Unification Party Congress) of the Communist Party and the USPD.
The First Ordinary Congress of the Communist Workers Party of Germany was held in the nearby "Zum Prälaten" restaurant, 1-4 August 1920.
Alexanderplatz's position as a main transport and traffic hub continued to fuel its development. In addition to the three U-Bahn underground lines, long-distance trains and S-Bahn trains ran along the Platz's viaduct arches. Omnibuses, horse-drawn from 1877 and, after 1898, also electric-powered trams, ran out of Alexanderplatz in all directions in a star shape. The subway station was designed by Alfred Grenander and followed the color-coded order of subway stations, which began with green at Leipziger Platz and ran through to dark red.
In the Golden Twenties, Alexanderplatz was the epitome of the lively, pulsating cosmopolitan city of Berlin, rivaled in the city only by Potsdamer Platz. Many of the buildings and rail bridges surrounding the platz bore large billboards that illuminated the night. The Berlin cigarette company Manoli had a famous billboard at the time which contained a ring of neon tubes that constantly circled a black ball. The proverbial "Berliner Tempo" of those years was characterized as "total manoli". Writer Kurt Tucholsky wrote a poem referencing the advert, and the composer Rudolf Nelson made the legendary Revue Total manoli with the dancer Lucie Berber. The writer Alfred Döblin named his novel, Berlin Alexanderplatz, after the square and Walter Ruttmann filmed his 1927 film Berlin: Die Sinfonie der Großstadt (Berlin: The symphony of the big city) at Alexanderplatz.
One of Berlin's largest air-raid shelters during the Second World War was situated under Alexanderplatz. It was built between 1941 and 1943 for the Deutsche Reichsbahn by Philipp Holzmann.
The war reached Alexanderplatz in early April 1945. The Berolina statue had already been removed in 1944 and probably melted down for use in arms production. During the Battle of Berlin, Red Army artillery bombarded the area around Alexanderplatz. The battles of the last days of the war destroyed considerable parts of the historic Königsstadt, as well as many of the buildings around Alexanderplatz.
The Wehrmacht had entrenched itself within the tunnels of the underground system. Hours before fighting ended in Berlin on May 2, 1945, troops of the SS detonated explosives inside the north-south S-Bahn tunnel under the Landwehr Canal to slow the advance of the Red Army towards Berlin's city center. The entire tunnel flooded, as well as large sections of the U-Bahn network via connecting passages at the Friedrichstraße underground station. Many of those seeking shelter in the tunnels were killed. Of the then 63.3 kilometers of subway tunnel, around 19.8 kilometers were flooded with more than one million cubic meters of water.
Before a planned reconstruction of the entire Alexanderplatz could take place, all of the war ruins needed to be demolished and cleared away. A popular black market emerged within the ruined area, which the police raided several times a day.
Reconstruction planning for post-war Berlin gave priority to dedicating space to the rapidly growing motor traffic on inner-city thoroughfares. This idea of a traffic-oriented city drew on considerations and plans by Hilberseimer and Le Corbusier from the 1930s.
Alexanderplatz has been subject to redevelopment several times in its history, most recently during the 1960s, when it was turned into a pedestrian zone and enlarged as part of the German Democratic Republic's redevelopment of the city centre. It is surrounded by several notable structures including the Fernsehturm (TV Tower).
Ever since German reunification, Alexanderplatz has undergone a gradual process of change with many of the surrounding buildings being renovated. Despite the reconstruction of the tram line crossing, it has retained its socialist character, including the much-graffitied "Fountain of Friendship between Peoples" ("Brunnen der Völkerfreundschaft"), a popular venue.
In 1993, architect Hans Kollhoff's master plan for a major redevelopment, including the construction of several skyscrapers, was published. Due to a lack of demand, it is unlikely these will be constructed. However, beginning with the reconstruction of the "Kaufhof" department store in 2004, and the biggest underground railway station of Berlin, some buildings were redesigned and new structures built on the square's south-eastern side. Sidewalks were expanded to shrink one of the avenues, a new underground garage was built, and commuter tunnels meant to keep pedestrians off the streets were removed. The surrounding buildings now house chain stores, fast-food restaurants, and fashion discounters. The "Alexa" shopping mall, with approximately 180 stores, opened nearby in 2007, and a large "Saturn" electronics store was built and opened in 2008. The CUBIX multiplex cinema, which opened in November 2000, joined the roster of Berlin International Film Festival cinemas in 2007, and the festival shows films on three of its screens. In January 2014, a 39-story residential tower designed by Frank Gehry was announced, but the project was put on hold in 2018.
Many historic buildings are located in the vicinity of Alexanderplatz. The traditional seat of city government, the Rotes Rathaus, or "Red City Hall", is located nearby, as was the former East German parliament building, the Palast der Republik. The Palast was demolished between 2006 and 2008 to make room for a full reconstruction of the Baroque Berlin Palace, or "Stadtschloss", which is set to open in 2019.
Alexanderplatz is also the name of the S-Bahn and U-Bahn stations there. It is one of Berlin's largest and most important transportation hubs, being a meeting place of three subway (U-Bahn) lines, three S-Bahn lines, and many tram and bus lines, as well as regional trains.
It also accommodates the Park Inn Berlin and the World Time Clock, a continually rotating installation that shows the time throughout the globe, and Hermann Henselmann's "Haus des Lehrers". During the Peaceful Revolution of 1989, the Alexanderplatz demonstration on 4 November was the largest demonstration in the history of East Germany.
Apart from Hackescher Markt, Alexanderplatz is the only existing square in front of one of the medieval gates of Berlin's city wall.
Asian Development Bank
The Asian Development Bank (ADB) is a regional development bank established on 19 December 1966 and headquartered in the Ortigas Center in Mandaluyong, Metro Manila, Philippines. The bank also maintains 31 field offices around the world to promote social and economic development in Asia. The bank admits members of the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP, formerly the Economic Commission for Asia and the Far East or ECAFE) and non-regional developed countries. From 31 members at its establishment, ADB now has 68 members.
The ADB was modeled closely on the World Bank, and has a similar weighted voting system where votes are distributed in proportion with members' capital subscriptions. ADB releases an annual report that summarizes its operations, budget and other materials for review by the public. The ADB-Japan Scholarship Program (ADB-JSP) enrolls about 300 students annually in academic institutions located in 10 countries within the Region. Upon completion of their study programs, scholars are expected to contribute to the economic and social development of their home countries. ADB is an official United Nations Observer.
As of 31 December 2016, Japan holds the largest share of capital at 15.677%, closely followed by the United States with 15.567%. China holds 6.473%, India 6.359%, and Australia 5.812%.
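As a rough illustration of capital-weighted voting, the sketch below distributes a hypothetical vote pool in proportion to the capital shares quoted above. It is a simplification: the Bank's actual formula also allocates a block of equal "basic votes" to every member, which this sketch omits, and the pool size here is an arbitrary assumption.

```python
capital_share = {  # % of subscribed capital, end of 2016 (figures from the text)
    "Japan": 15.677, "United States": 15.567,
    "China": 6.473, "India": 6.359, "Australia": 5.812,
}

def proportional_votes(shares: dict, total_votes: int = 1_000_000) -> dict:
    """Distribute a vote pool in proportion to capital subscriptions.
    Simplified: real ADB voting also includes equal basic votes per member."""
    total_share = sum(shares.values())
    return {m: round(total_votes * s / total_share) for m, s in shares.items()}

votes = proportional_votes(capital_share)
for member, n in votes.items():
    print(f"{member}: {n} votes")
```

The point of the basic-votes component in the real formula is precisely to soften the pure proportionality this sketch shows, giving small members a floor of influence.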
The highest policy-making body of the bank is the Board of Governors, composed of one representative from each member state. The Board of Governors, in turn, elect among themselves the twelve members of the Board of Directors and their deputies. Eight of the twelve members come from regional (Asia-Pacific) members while the others come from non-regional members.
The Board of Governors also elect the bank's president, who is the chairperson of the Board of Directors and manages ADB. The president has a term of office lasting five years, and may be reelected. Traditionally, and because Japan is one of the largest shareholders of the bank, the president has always been Japanese.
The current president is Masatsugu Asakawa. He succeeded Takehiko Nakao on 17 January 2020, who succeeded Haruhiko Kuroda in 2013.
The headquarters of the bank is at 6 ADB Avenue, Mandaluyong, Metro Manila, Philippines, and it has 31 field offices in Asia and the Pacific and representative offices in Washington, Frankfurt, Tokyo and Sydney. The bank employs about 3,000 people, representing 60 of its 67 members.
As early as 1956, Japan Finance Minister Hisato Ichimada had suggested to United States Secretary of State John Foster Dulles that development projects in Southeast Asia could be supported by a new financial institution for the region. A year later, Japanese Prime Minister Nobusuke Kishi announced that Japan intended to sponsor the establishment of a regional development fund with resources largely from Japan and other industrial countries. But the US did not warm to the plan and the concept was shelved. See full account in "Banking on the Future of Asia and the Pacific: 50 Years of the Asian Development Bank," July 2017.
The idea came up again late in 1962 when Kaoru Ohashi, an economist from a research institute in Tokyo, visited Takeshi Watanabe, then a private financial consultant in Tokyo, and proposed a study group to form a development bank for the Asian region. The group met regularly in 1963, examining various scenarios for setting up a new institution and drew on Watanabe's experiences with the World Bank. However, the idea received a cool reception from the World Bank itself and the study group became discouraged.
In parallel, the concept was formally proposed at a trade conference organized by the Economic Commission for Asia and the Far East (ECAFE) in 1963 by a young Thai economist, Paul Sithi-Amnuai. (ESCAP, United Nations Publication March 2007, "The first parliament of Asia" pp. 65). Despite an initial mixed reaction, support for the establishment of a new bank soon grew.
An expert group was convened to study the idea, with Japan invited to contribute to the group. When Watanabe was recommended, the two streams proposing a new bank—from ECAFE and Japan—came together. Initially, the US was on the fence, not opposing the idea but not ready to commit financial support. But a new bank for Asia was soon seen to fit in with a broader program of assistance to Asia planned by U.S. President Lyndon B. Johnson in the wake of the escalating US military support for the government of South Vietnam.
As a key player in the concept, Japan hoped that the ADB offices would be in Tokyo. However, eight other cities had also expressed an interest—Bangkok, Colombo, Kabul, Kuala Lumpur, Manila, Phnom Penh, Singapore, and Tehran. To decide, the 18 prospective regional members of the new bank held three rounds of votes at a ministerial conference in Manila in November/December 1965. In the first round on 30 November, Tokyo failed to win a majority, so a second ballot was held the next day at noon. Although Japan was in the lead, it was still inconclusive, so a final vote was held after lunch. In the third poll, Tokyo gained eight votes to Manila's nine, with one abstention. Therefore, Manila was declared the host of the new development bank. The Japanese were mystified and deeply disappointed. Watanabe later wrote in his personal history of ADB: "I felt as if the child I had so carefully reared had been taken away to a distant country." (Asian Development Bank publication, "Towards a New Asia", 1977, p. 16)
As intensive work took place during 1966 to prepare for the opening of the new bank in Manila, high on the agenda was choice of president. Japanese Prime Minister Eisaku Satō asked Watanabe to be a candidate. Although he initially declined, pressure came from other countries and Watanabe agreed. In the absence of any other candidates, Watanabe was elected first President of the Asian Development Bank at its Inaugural Meeting on 24 November 1966.
By the end of 1972, Japan had contributed $173.7 million (22.6% of the total) to the ordinary capital resources and $122.6 million (59.6% of the total) to the special funds. In contrast, the United States contributed only $1.25 million to the special fund.
After its creation in the 1960s, ADB focused much of its assistance on food production and rural development. At the time, Asia was one of the poorest regions in the world.
Early loans went largely to Indonesia, Thailand, Malaysia, South Korea and the Philippines; these nations accounted for 78.48% of the total ADB loans between 1967 and 1972. Moreover, Japan received tangible benefits, 41.67% of the total procurements between 1967 and 1976. Japan tied its special funds contributions to its preferred sectors and regions and procurements of its goods and services, as reflected in its $100 million donation for the Agricultural Special Fund in April 1968.
Watanabe served as the first ADB president to 1972.
In the 1970s, ADB's assistance to developing countries in Asia expanded into education and health, and then to infrastructure and industry. The gradual emergence of Asian economies in the latter part of the decade spurred demand for better infrastructure to support economic growth. ADB focused on improving roads and providing electricity. When the world suffered its first oil price shock, ADB shifted more of its assistance to support energy projects, especially those promoting the development of domestic energy sources in member countries.
Following considerable pressure from the Reagan Administration in the 1980s, ADB reluctantly began working with the private sector in an attempt to increase the impact of its development assistance to poor countries in Asia and the Pacific. In the wake of the second oil crisis, ADB expanded its assistance to energy projects. In 1982, ADB opened its first field office, in Bangladesh, and later in the decade it expanded its work with non-government organizations (NGOs).
Japanese presidents Inoue Shiro (1972–76) and Yoshida Taroichi (1976–81) took the spotlight in the 1970s. Fujioka Masao, the fourth president (1981–90), adopted an assertive leadership style, launching an ambitious plan to expand the ADB into a high-impact development agency.
In the 1990s, ADB began promoting regional cooperation by helping the countries on the Mekong River to trade and work together. The decade also saw an expansion of ADB's membership with the addition of several Central Asian countries following the end of the Cold War.
In mid-1997, ADB responded to the financial crisis that hit the region with projects designed to strengthen financial sectors and create social safety nets for the poor. During the crisis, ADB approved its largest single loan – a $4 billion emergency loan to South Korea. In 1999, ADB adopted poverty reduction as its overarching goal.
The early 2000s saw a dramatic expansion of private sector finance. While the institution had run such operations since the 1980s (under pressure from the Reagan Administration), the early attempts were largely unsuccessful, with low lending volumes, considerable losses and financial scandals associated with an entity named AFIC. Beginning in 2002, however, the ADB undertook a dramatic expansion of private sector lending under a new team. Over the next six years, the Private Sector Operations Department (PSOD) grew to 41 times the 2001 levels of new financings and earnings for the ADB. This culminated in the Board's formal recognition of these achievements in March 2008, when the Board of Directors adopted the Long-Term Strategic Framework (LTSF). That document formally stated that assistance to private sector development was the lead priority of the ADB and that it should constitute 50% of the bank's lending by 2020.
In 2003, the severe acute respiratory syndrome (SARS) epidemic hit the region and ADB responded with programs to help the countries in the region work together to address infectious diseases, including avian influenza and HIV/AIDS. ADB also responded to a multitude of natural disasters in the region, committing more than $850 million for recovery in areas of India, Indonesia, Maldives, and Sri Lanka which were impacted by the December 2004 Asian tsunami. In addition, $1 billion in loans and grants was provided to the victims of the October 2005 earthquake in Pakistan.
In 2009, ADB's Board of Governors agreed to triple ADB's capital base from $55 billion to $165 billion, giving it much-needed resources to respond to the global economic crisis. The 200% increase was the largest in ADB's history and the first since 1994.
Asia moved beyond the economic crisis and by 2010 had emerged as a new engine of global economic growth though it remained home to two-thirds of the world's poor. In addition, the increasing prosperity of many people in the region created a widening income gap that left many people behind. ADB responded to this with loans and grants that encouraged economic growth.
In early 2012, the ADB began to re-engage with Myanmar in response to reforms initiated by the government. In April 2014, ADB opened an office in Myanmar and resumed making loans and grants to the country.
In 2017, ADB combined the lending operations of its Asian Development Fund (ADF) with its ordinary capital resources (OCR). The result was to expand the OCR balance sheet to permit increasing annual lending and grants to $20 billion by 2020 — 50% more than the previous level.
The ADB defines itself as a social development organization that is dedicated to reducing poverty in Asia and the Pacific through inclusive economic growth, environmentally sustainable growth, and regional integration. This is carried out through investments – in the form of loans, grants and information sharing – in infrastructure, health care services, financial and public administration systems, helping nations prepare for the impact of climate change or better manage their natural resources, as well as other areas.
Eighty percent of ADB's lending is concentrated in public sector lending in five operational areas.
The ADB offers "hard" loans on commercial terms primarily to middle income countries in Asia and "soft" loans with lower interest rates to poorer countries in the region. Based on a new policy, both types of loans will be sourced starting January 2017 from the bank's ordinary capital resources (OCR), which functions as its general operational fund.
The ADB's Private Sector Operations Department (PSOD) can and does offer a broader range of financings beyond commercial loans. It also has the capability to provide guarantees, equity and mezzanine finance (a combination of debt and equity).
In 2017, ADB lent $19.1 billion of which $3.2 billion went to private enterprises, as part of its "nonsovereign" operations. ADB's operations in 2017, including grants and cofinancing, totaled $28.9 billion.
ADB obtains its funding by issuing bonds on the world's capital markets. It also relies on the contributions of member countries, retained earnings from lending operations, and the repayment of loans.
ADB provides direct financial assistance, in the form of debt, equity and mezzanine finance to private sector companies, for projects that have clear social benefits beyond the financial rate of return. ADB's participation is usually limited but it leverages a large amount of funds from commercial sources to finance these projects by holding no more than 25% of any given transaction.
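The leverage effect of that 25% cap can be illustrated with a short sketch (the cap figure comes from the text above; the project size and function name are hypothetical, invented purely for illustration, not ADB's actual formula):

```python
def adb_max_participation(total_project_cost: float, cap: float = 0.25) -> float:
    """Largest stake ADB would hold if it finances at most `cap` of a deal."""
    return total_project_cost * cap

# Hypothetical $400 million project: ADB's share is capped at $100 million,
# leaving at least $300 million to be mobilised from commercial co-financiers.
project = 400e6
adb_share = adb_max_participation(project)
leveraged = project - adb_share
print(f"ADB share: ${adb_share / 1e6:.0f}m, "
      f"commercial co-financing: ${leveraged / 1e6:.0f}m")
```

In other words, every dollar ADB commits under the cap draws in at least three dollars from commercial sources.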
ADB partners with other development organizations on some projects to increase the amount of funding available. In 2014, $9.2 billion—or nearly half—of ADB's $22.9 billion in operations were financed by other organizations. According to Jason Rush, Principal Communication Specialist, the Bank communicates with many other multilateral organizations.
More than 50 financing partnership facilities, trust funds, and other funds – totalling several billion each year – are administered by ADB and put toward projects that promote social and economic development in Asia and the Pacific. ADB has raised Rs 5 billion or around Rs 500 crores from its issuance of 5-year offshore Indian rupee (INR) linked bonds.
On 26 February 2020, ADB raised $118 million from rupee-linked bonds, supporting the development of the India International Exchange in India; the issuance also contributes to an established yield curve stretching from 2021 through 2030, with $1 billion of outstanding bonds.
ADB has an information disclosure policy that presumes all information that is produced by the institution should be disclosed to the public unless there is a specific reason to keep it confidential. The policy calls for accountability and transparency in operations and the timely response to requests for information and documents. ADB does not disclose information that jeopardizes personal privacy, safety and security, certain financial and commercial information, as well as other exceptions.
Since the ADB's early days, critics have charged that the two major donors, Japan and the United States, have had extensive influence over lending, policy and staffing decisions.
Oxfam Australia has criticized the Asian Development Bank for insensitivity to local communities. "Operating at a global and international level, these banks can undermine people's human rights through projects that have detrimental outcomes for poor and marginalized communities." The bank also received criticism from the United Nations Environmental Program, stating in a report that "much of the growth has bypassed more than 70 percent of its rural population, many of whom are directly dependent on natural resources for livelihoods and incomes."
There has been criticism that ADB's large-scale projects cause social and environmental damage due to a lack of oversight. One of the most controversial ADB-related projects is Thailand's Mae Moh coal-fired power station. Environmental and human rights activists say ADB's environmental safeguards policy, as well as its policies for indigenous peoples and involuntary resettlement, while usually up to international standards on paper, are often ignored in practice, are too vague or weak to be effective, or are simply not enforced by bank officials.
The bank has been criticized over its role and relevance in the food crisis. The ADB has been accused by civil society of ignoring warnings leading up to the crisis and of contributing to it by pushing loan conditions that many say unfairly pressure governments to deregulate and privatize agriculture, leading to problems such as the rice supply shortage in Southeast Asia.
Indeed, whereas the Private Sector Operations Department (PSOD) closed out that year with financings of $2.4 billion, the ADB has since fallen significantly below that level and is clearly not on track to achieve its stated goal of directing 50% of financings to the private sector by 2020. Critics also point out that the PSOD is the only department that actually makes money for the ADB. Hence, with the vast majority of loans going as concessionary (sub-market) loans to the public sector, the ADB faces considerable financial difficulty and continuous operating losses.
The following table lists the 20 largest countries by subscribed capital and voting power at the Asian Development Bank as of December 2014.
ADB has 68 members (as of 23 March 2019): 49 members from the Asian and Pacific region and 19 members from other regions. The year after a member's name indicates the year of membership. When a country ceases to be a member, the Bank shall arrange for the repurchase of such country's shares by the Bank as a part of the settlement of accounts with such country in accordance with the provisions of paragraphs 3 and 4 of Article 43.
Aswan
Aswan (, also ; ; ) is a city in the south of Egypt, and is the capital of the Aswan Governorate.
Aswan is a busy market and tourist centre located just north of the Aswan Dam on the east bank of the Nile at the first cataract. The modern city has expanded and includes the formerly separate community on the island of Elephantine.
The city is part of the UNESCO Creative Cities Network in the category of craft and folk art.
Aswan is a large tourist city with a current population of 1,568,000.
Aswan was formerly spelled Assuan or Assouan. Names in other languages include (; Ancient Egyptian: ; ; ). The Nubians also call the city "Dib", which means "fortress, palace" and is derived from the Old Nubian name ⲇⲡ̅ⲡⲓ.
Aswan is the ancient city of Swenett, later known as Syene, which in antiquity was the frontier town of Ancient Egypt facing the south. Swenett is supposed to have derived its name from an Egyptian goddess with the same name. This goddess later was identified as Eileithyia by the Greeks and Lucina by the Romans during their occupation of Ancient Egypt because of the similar association of their goddesses with childbirth, and of which the import is "the opener". The ancient name of the city also is said to be derived from the Egyptian symbol for "trade", or "market".
Because the Ancient Egyptians oriented themselves toward the origin of the life-giving waters of the Nile in the south, and as Swenett was the southernmost town in the country, Egypt always was conceived to "open" or begin at Swenett. The city stood upon a peninsula on the right (east) bank of the Nile, immediately below (and north of) the first cataract of the flowing waters, which extend to it from Philae. Navigation to the delta was possible from this location without encountering a barrier.
The stone quarries of ancient Egypt located here were celebrated, especially for the granitic rock called syenite. They furnished the colossal statues, obelisks, and monolithal shrines found throughout Egypt, including the pyramids; and the traces of the quarrymen who worked them 3,000 years ago are still visible in the native rock. The quarries lie on either bank of the Nile, and a road was cut beside them from Syene to Philae.
Swenett was equally important as a military station as a place of traffic. Under every dynasty it was a garrison town; and here tolls and customs were levied on all boats passing southwards and northwards. Around 330, the legion stationed here received a bishop from Alexandria; this later became the Coptic Diocese of Syene. The city is mentioned by numerous ancient writers, including Herodotus, Strabo, Stephanus of Byzantium, Ptolemy, Pliny the Elder, Vitruvius, and it appears on the Antonine Itinerary. It may also be mentioned in the Book of Ezekiel and the Book of Isaiah.
The latitude of the city that would become Aswan – located at 24° 5′ 23″ – was an object of great interest to the ancient geographers. They believed that it was seated immediately under the tropic, and that on the day of the summer solstice, a vertical staff cast no shadow. They noted that the sun's disc was reflected in a well at noon. This statement is only approximately correct; at the summer solstice, the shadow was only of the staff, and so could scarcely be discerned, and the northern limb of the Sun's disc would be nearly vertical.
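The geometry behind the ancient geographers' observation can be checked with a short calculation (a rough sketch: the value used here for the obliquity of the ecliptic in antiquity, about 23.7°, is an assumption):

```python
import math

# Latitude of ancient Syene (24° 5' 23" N), converted to decimal degrees.
latitude = 24 + 5 / 60 + 23 / 3600

# Obliquity of the ecliptic, i.e. the latitude of the Tropic of Cancer:
# roughly 23.7 degrees in antiquity, about 23.44 degrees today.
obliquity_ancient = 23.7

# At local noon on the summer solstice, the Sun is at the zenith for an
# observer exactly on the tropic.  At Syene it misses the zenith by the
# small difference between the two angles, so a vertical staff casts a
# shadow of length tan(difference) times its height.
offset_deg = latitude - obliquity_ancient
shadow_per_unit_height = math.tan(math.radians(offset_deg))

print(f"Angular offset from the tropic: {offset_deg:.2f} degrees")
print(f"Shadow length per unit of staff height: {shadow_per_unit_height:.4f}")
```

The shadow works out to well under one percent of the staff's height, consistent with the report that it "could scarcely be discerned".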
The Nile is nearly wide above Aswan. From this frontier town to the northern extremity of Egypt, the river flows for more than without bar or cataract. The voyage from Aswan to Alexandria usually took 21 to 28 days in favourable weather.
In 2019, archaeologists discovered 35 mummified remains of Egyptians in a tomb in Aswan. Italian archaeologist Patrizia Piacentini, professor of Egyptology at the University of Milan, and Khaled El-Enany, the Egyptian minister of antiquities, reported that the tomb where the remains of ancient men, women and children were found dates back to the Greco-Roman period between 332 BC and 395 AD. While the remains assumed to belong to a mother and a child were well preserved, others had suffered major damage. Besides the mummies, artefacts including painted funerary masks, vases of bitumen used in mummification, pottery and wooden figurines were uncovered. Thanks to the hieroglyphics on the tomb, it was determined that it belonged to a tradesman named Tjit.
“It’s a very important discovery because we added something to the history of Aswan that was missing. We knew about tombs and necropoli dating back to the second and third millennium, but we didn’t know where the people who lived in the last part of the Pharaoh era were. Aswan, on the southern border of Egypt, was also a very important trading city” Piacentini said.
Aswan has a hot desert climate (Köppen climate classification "BWh"), like the rest of Egypt. Aswan and Luxor have the hottest summer days of any city in Egypt, and Aswan is one of the hottest, sunniest and driest cities in the world. Average high temperatures are consistently above during summer (June, July, August and September), while average low temperatures remain above . Summers are long and extremely hot. Average high temperatures remain above during the coldest month of the year, while average low temperatures remain above . Winters are short and very warm. Wintertime is very pleasant and enjoyable, while summertime is unbearably hot with blazing sunshine, although the desert heat is dry.
The climate of Aswan is extremely dry year-round, with less than of average annual precipitation. The desert city is one of the driest in the world, and rainfall does not occur every year; as of early 2001, the last rain there had fallen seven years earlier. Aswan is one of the least humid cities on the planet, with an average relative humidity of only 26%, a maximum mean of 42% during winter and a minimum mean of 16% during summer.
The weather of Aswan is extremely clear, bright and sunny year-round, in all seasons, with a low seasonal variation, with almost 4,000 hours of annual sunshine, very close to the maximum theoretical sunshine duration. Aswan is one of the sunniest places on Earth.
The highest record temperature was on July 4, 1918, and the lowest record temperature was on January 6, 1989.
In 1999, South Valley University was inaugurated with three branches: Aswan, Qena and Hurghada. The university grew steadily and is now firmly established as a major institution of higher education in Upper Egypt. The Aswan branch of Assiut University began in 1973 with the Faculty of Education, and in 1975 the Faculty of Science was opened. The Aswan branch comprises five faculties, namely Science, Education, Engineering, Arts and Social Work, as well as the Institute of Energy. The Faculty of Science in Aswan has six departments. Four of them (Chemistry, Geology, Physics and Zoology) each offer a single educational programme; the Botany Department has three educational programmes (Botany, Environmental Sciences and Microbiology), and the Mathematics Department has two (Mathematics and Computer Science). The Faculty of Science awards the following degrees: Bachelor of Science in nine educational programmes, Higher Diploma, Master of Science and Doctor of Philosophy.
Aswan also has the Aswan Higher Institute of Social Work, established in 1975, making it the oldest private higher institute of social work in Upper Egypt.
Aswan is served by the Aswan International Airport. Train and bus service is also available, and taxis and rickshaws are used for local transport.
Aswan is twinned with:
Adelaide of Italy
Adelaide of Italy (; 931 – 16 December 999 AD), also called Adelaide of Burgundy, was Holy Roman Empress by marriage to Emperor Otto the Great; she was crowned with him by Pope John XII in Rome on 2 February 962. She was regent of the Holy Roman Empire as the guardian of her grandson in 991–995.
Born in Orbe Castle, Orbe, Kingdom of Upper Burgundy (now in modern-day Switzerland), she was the daughter of Rudolf II of Burgundy, a member of the Elder House of Welf, and Bertha of Swabia.
She became involved from the beginning in the complicated fight to control not only Burgundy but also Lombardy. The battle between her father Rudolf II and Berengar I to control northern Italy ended with Berengar's death, after which Rudolf could claim the throne.
However, the inhabitants of Lombardy were unhappy with this outcome and called for the help of another ally, Hugh of Provence, who had long considered Rudolf an enemy. Although Hugh challenged Rudolf for the Burgundian throne, he only succeeded when Adelaide's father died in 937; in order to control Upper Burgundy, he decided to marry his son Lothair II, the nominal King of Italy, to Adelaide (in 947, before 27 June), who was fifteen years old.
The marriage produced a daughter, Emma of Italy, born about 948. She became Queen of Western France by marrying King Lothair of France.
The Calendar of Saints states that her first husband was poisoned on 22 November 950 in Turin by the holder of real power, his successor Berengar II of Italy.
Not only did some people of Lombardy suggest that Adelaide wanted to rule the kingdom by herself, but Berengar attempted to cement his political power by forcing her to marry his son, Adalbert. The young widow refused and fled, taking refuge in the castle of Como. Nevertheless, she was quickly tracked down and imprisoned for four months at Garda.
According to Adelaide's contemporary biographer, Odilo of Cluny, she managed to escape from captivity. After a time spent in the marshes nearby, she was rescued by a priest and taken to a "certain impregnable fortress," likely the fortified town of Canossa Castle near Reggio. She managed to send an emissary to Otto I, and asked the East Frankish king for his protection. The widow met Otto at the old Lombard capital of Pavia and they married on 23 September 951.
A few years later, in 953, Liudolf, Duke of Swabia, Otto's son by his first marriage, launched a major revolt against his father, which the latter quelled. On account of this episode, Otto decided to strip Liudolf of his ducal title. This decision favoured the position of Adelaide and her descendants at court. Adelaide also managed to retain her entire territorial dowry.
After returning to Germany with his new wife, Otto cemented the existence of the Holy Roman Empire by defeating the Hungarian invaders at the battle of Lechfeld (August 10, 955). In addition, he extended the boundaries of East Francia beyond the Elbe River, defeating the Obrodites and other Slavs of the Elbe at the battle of Recknitz (October 16, 955).
Adelaide accompanied her husband on his second expedition to Italy, destined to subdue the revolt of Berengar II and to protect Pope John XII. In Rome, Otto the Great was crowned Holy Roman Emperor on 2 February 962 by Pope John XII, who, breaking with tradition, also crowned Adelaide as Holy Roman Empress. Four years later, Adelaide and their eleven-year-old son, Otto II, traveled again with Otto in 966 on his third expedition to Italy, where the Emperor restored the newly elected Pope John XIII to his throne (and executed some of the Roman rioters who had deposed him).
Adelaide remained in Rome for six years while Otto ruled his kingdom from Italy. Their son Otto II was crowned co-emperor in 967, then married the Byzantine princess Theophanu in April 972, resolving the conflict between the two empires in southern Italy, as well as ensuring the imperial succession. Adelaide and her husband then returned to Germany, where Otto died in May 973, at the same Memleben palace where his father had died 37 years earlier.
In the years following Otto's death, Adelaide exerted a powerful influence at court. However, her daughter-in-law, the Byzantine princess Theophanu, turned her husband against her, and Adelaide was expelled from court in 978. During her exile, she divided her time between Italy and Arles, where she stayed with her brother Conrad, King of Burgundy, through whom she was finally reconciled with her son; in 983 (shortly before his death) Otto II appointed her his viceroy in Italy.
In 983, her son Otto II died and was succeeded by her grandson Otto III under the regency of his mother, Adelaide's daughter-in-law, the Dowager Empress Theophanu, while Adelaide remained in Italy. When Theophanu died in 990, Adelaide assumed the regency on behalf of her grandson the Emperor until he reached legal majority four years later. Adelaide resigned as regent when Otto III was declared of legal majority in 995 and was then free to devote herself exclusively to her works of charity, in particular the foundation and restoration of religious houses: monasteries, churches and abbeys.
Adelaide had long entertained close relations with Cluny, then the center of the movement for ecclesiastical reform, and in particular with its abbots Majolus and Odilo. She retired to a nunnery she had founded in c. 991 at Selz in Alsace.
On her way to Burgundy to support her nephew Rudolf III against a rebellion, she died at Selz Abbey on 16 December 999, days short of the millennium she thought would bring the Second Coming of Christ. She was buried in the Abbey, and Pope Urban II canonized her in 1097. After serious flooding almost completely destroyed the Abbey in 1307, the saint's relics, miraculously saved, were moved to the parish church in the town of Seltz, dedicated to Saint Stephen, where they now rest.
Adelaide had constantly devoted herself to the service of the church and peace, and to the empire as guardian of both; she also interested herself in the conversion of the Slavs. She was thus a principal agent—almost an embodiment—of the work of the pre-schism Orthodox Catholic Church at the end of the Early Middle Ages in the construction of the religious culture of Central Europe.
Some of her relics are preserved in a shrine in Hanover. Her feast day, 16 December, is still kept in many German dioceses.
In 947, Adelaide was married to King Lothair II of Italy. The union produced one child:
In 951, Adelaide was married to King Otto I, the future Holy Roman Emperor. The union produced four children:
Adelaide is usually represented in the garb of an empress, with sceptre and crown. Since the 14th century, she has also been given as an attribute a model church or a ship (with which she is said to have escaped from captivity).
The most famous representation in German art belongs to a group of sandstone figures in the choir of Meissen Cathedral, created around 1260. She is shown there with her husband, who was not canonized, since he founded the diocese of Meissen with her.
Operas:
Books and Novels:
Artwork:
Others:
Airbus A300
The Airbus A300 is a wide-body airliner developed and manufactured by Airbus.
In September 1967, aircraft manufacturers in the UK, France, and West Germany signed a Memorandum of Understanding to develop a large airliner.
Germany and France reached an agreement on 29 May 1969 after the British withdrew from the project on 10 April 1969.
European collaborative aerospace manufacturer Airbus Industrie was formally created on 18 December 1970 to develop and produce it.
The prototype first flew on 28 October 1972.
The first twin-engine widebody airliner, the A300 typically seats 247 passengers in two classes over a range of 5,375 to 7,500 km (2,900 to 4,050 nmi).
Initial variants are powered by CF6-50 or JT9D turbofans and have a three-crew flight deck.
The improved A300-600 has a two-crew cockpit and updated GE CF6-80 or PW4000 engines; it made its first flight on 8 July 1983 and entered service later that year.
The A300 is the basis of the smaller A310 (first flight: 1982) and was adapted in a freighter version.
Its cross section was retained for the larger A340 (1991) and A330 (1992).
It is also the basis for the oversize Beluga transport (1994).
Launch customer Air France introduced the type on 23 May 1974.
After limited demand initially, sales took off as the type was proven in early service, beginning three decades of steady orders.
It has a similar capacity to the Boeing 767-300, introduced in 1986, but lacked the 767-300ER's range.
During the 1990s, the A300 became popular with cargo aircraft operators, as passenger airliner conversions or as original builds.
Production ceased in July 2007 after 561 deliveries.
During the 1960s, European aircraft manufacturers such as Hawker Siddeley and the British Aircraft Corporation, based in the UK, and Sud Aviation of France, had ambitions to build a new 200-seat airliner for the growing civil aviation market. While studies were performed and considered, such as a stretched twin-engine variant of the Hawker Siddeley Trident and an expanded development of the British Aircraft Corporation BAC One-Eleven, designated the BAC Two-Eleven, it was recognized that if each of the European manufacturers were to launch similar aircraft into the market at the same time, neither would achieve sales volume needed to make them viable. In 1965, a British government study, known as the Plowden Report, had found British aircraft production costs to be between 10% and 20% higher than American counterparts due to shorter production runs, which was in part due to the fractured European market. To overcome this factor, the report recommended the pursuit of multinational collaborative projects between the region's leading aircraft manufacturers.
European manufacturers were keen to explore prospective programs; the proposed 260-seat wide-body "HBN 100" between Hawker Siddeley, Nord Aviation, and Breguet Aviation being one such example. National governments were also keen to support such efforts amid a belief that American manufacturers could dominate the European Economic Community; in particular, Germany had ambitions for a multinational airliner project to invigorate its aircraft industry, which had declined considerably following the Second World War. During the mid-1960s, both Air France and American Airlines had expressed interest in a short-haul twin-engine wide-body aircraft, indicating a market demand for such an aircraft to be produced. In July 1967, during a high-profile meeting between French, German, and British ministers, an agreement was made for greater cooperation between European nations in the field of aviation technology, and "for the joint development and production of an airbus". The word "airbus" at this point was a generic aviation term for a larger commercial aircraft, and was considered acceptable in multiple languages, including French.
Shortly after the July 1967 meeting, French engineer Roger Béteille was appointed as the technical director of what would become the A300 program, while Henri Ziegler, chief operating officer of Sud Aviation, was appointed as the general manager of the organization and German politician Franz Josef Strauss became the chairman of the supervisory board. Béteille drew up an initial work share plan for the project, under which French firms would produce the aircraft's cockpit, the control systems, and lower-center portion of the fuselage, Hawker Siddeley would manufacture the wings, while German companies would produce the forward, rear and upper part of the center fuselage sections. Additionally, the moving elements of the wings would be produced in the Netherlands, and Spain would produce the horizontal tailplane.
An early design goal for the A300 that Béteille had stressed the importance of was the incorporation of a high level of technology, which would serve as a decisive advantage over prospective competitors. As such, the A300 would feature the first use of composite materials on any passenger aircraft, the leading and trailing edges of the tail fin being composed of glass-fibre reinforced plastic. Béteille opted for English as the working language for the developing aircraft, and against using metric instrumentation and measurements, as most airlines already had US-built aircraft. These decisions were partially influenced by feedback from various airlines, such as Air France and Lufthansa, as an emphasis had been placed on determining the specifics of what kind of aircraft potential operators were seeking. According to Airbus, this cultural approach to market research had been crucial to the company's long-term success.
On 26 September 1967, the British, French, and West German governments signed a Memorandum of Understanding to start development of the 300-seat Airbus A300. At this point, the A300 was only the second major joint aircraft programme in Europe, the first being the Anglo-French Concorde. Under the terms of the memorandum, Britain and France were each to receive a 37.5 per cent work share on the project, while Germany received a 25 per cent share. Sud Aviation was recognized as the lead company for the A300, with Hawker Siddeley selected as the British partner company. At the time, the news of the announcement had been clouded by the British Government's support for the Airbus, which coincided with its refusal to back BAC's proposed competitor, the BAC 2-11, despite a preference for the latter expressed by British European Airways (BEA). Another parameter was the requirement for a new engine to be developed by Rolls-Royce to power the proposed airliner: a derivative of the in-development Rolls-Royce RB211, the triple-spool RB207, capable of producing 47,500 lbf of thrust.
In December 1968, the French and British partner companies (Sud Aviation and Hawker Siddeley) proposed a revised configuration, the 250-seat Airbus A250. It had been feared that the original 300-seat proposal was too large for the market, thus it had been scaled down to produce the A250. The dimensional changes involved in the shrink reduced the length of the fuselage by 5.62 meters and the diameter by 0.8 meters, reducing the overall weight by 25 tonnes. For increased flexibility, the cabin floor was raised so that standard LD3 freight containers could be accommodated side-by-side, allowing more cargo to be carried. Refinements made by Hawker Siddeley to the wing's design provided for greater lift and overall performance; this gave the aircraft the ability to climb faster and attain a level cruising altitude sooner than any other passenger aircraft. It was later renamed the A300B.
Perhaps the most significant change of the A300B was that it would not require new engines to be developed, being of a suitable size to be powered by Rolls-Royce's RB211, or alternatively the American Pratt & Whitney JT9D and General Electric CF6 powerplants; this switch was recognized as considerably reducing the project's development costs. To attract potential customers in the US market, it was decided that General Electric CF6-50 engines would power the A300 in place of the British RB207; these engines would be produced in co-operation with French firm Snecma. By this time, Rolls-Royce had been concentrating their efforts upon developing their RB211 turbofan engine instead and progress on the RB207's development had been slow for some time, the firm having suffered due to funding limitations, both of which had been factors in the engine switch decision.
On 10 April 1969, a few months after the decision to drop the RB207 had been announced, the British government announced that they would withdraw from the Airbus venture. In response, West Germany proposed to France that they would be willing to contribute up to 50% of the project's costs if France was prepared to do the same. Additionally, the managing director of Hawker Siddeley, Sir Arnold Alexander Hall, decided that his company would remain in the project as a favoured sub-contractor, developing and manufacturing the wings for the A300, which would later become pivotal in later versions' impressive performance from short domestic to long intercontinental flights. Hawker Siddeley spent £35 million of its own funds, along with a further £35 million loan from the West German government, on the machine tooling to design and produce the wings.
On 29 May 1969, during the Paris Air Show, French transport minister Jean Chamant and German economics minister Karl Schiller signed an agreement officially launching the Airbus A300, the world's first twin-engine widebody airliner. The intention of the project was to produce an aircraft that was smaller, lighter, and more economical than its three-engine American rivals, the McDonnell Douglas DC-10 and the Lockheed L-1011 TriStar. In order to meet Air France's demands for an aircraft larger than 250-seat A300B, it was decided to stretch the fuselage to create a new variant, designated as the A300B2, which would be offered alongside the original 250-seat A300B, henceforth referred to as the A300B1. On 3 September 1970, Air France signed a letter of intent for six A300s, marking the first order to be won for the new airliner.
In the aftermath of the Paris Air Show agreement, it was decided that, in order to provide effective management of responsibilities, a Groupement d'intérêt économique would be established, allowing the various partners to work together on the project while remaining separate business entities. On 18 December 1970, Airbus Industrie was formally established following an agreement between Aérospatiale (the newly merged Sud Aviation and Nord Aviation) of France and the antecedents to Deutsche Aerospace of Germany, each receiving a 50 per cent stake in the newly formed company. In 1971, the consortium was joined by a third full partner, the Spanish firm CASA, who received a 4.2 per cent stake, the other two members reducing their stakes to 47.9 per cent each. In 1979, Britain joined the Airbus consortium via British Aerospace, which Hawker Siddeley had merged into, which acquired a 20 per cent stake in Airbus Industrie with France and Germany each reducing their stakes to 37.9 per cent.
Airbus Industrie was initially headquartered in Paris, where design, development, flight testing, sales, marketing, and customer support activities were centered; the headquarters was relocated to Toulouse in January 1974. The final assembly line for the A300 was located adjacent to Toulouse Blagnac International Airport. The manufacturing process necessitated transporting each aircraft section, produced by partner companies scattered across Europe, to this one location. A combination of ferries and roads was used for the assembly of the first A300; however, this was time-consuming and not viewed as ideal by Felix Kracht, Airbus Industrie's production director. Kracht's solution was to have the various A300 sections brought to Toulouse by a fleet of Boeing 377-derived Aero Spacelines Super Guppy aircraft, by which means none of the manufacturing sites was more than two hours away. Having the sections airlifted in this manner made the A300 the first airliner to use just-in-time manufacturing techniques, and allowed each company to manufacture its sections as fully equipped, ready-to-fly assemblies.
In September 1969, construction of the first prototype A300 began. This first prototype was unveiled to the public on 28 September 1972, and it conducted its maiden flight from Toulouse–Blagnac International Airport on 28 October that year. The maiden flight, performed a month ahead of schedule, lasted one hour and 25 minutes; the captain was Max Fischl and the first officer was Bernard Ziegler, son of Henri Ziegler. On 5 February 1973, the second prototype performed its maiden flight. The flight test program, which involved a total of four aircraft, was relatively problem-free, accumulating 1,580 flight hours throughout. In September 1973, as part of promotional efforts for the A300, the new aircraft was taken on a six-week tour of North America and South America to demonstrate it to airline executives, pilots, and would-be customers. This expedition allegedly brought the A300 to the attention of Frank Borman of Eastern Air Lines, one of the "big four" U.S. airlines.
On 15 March 1974, type certificates were granted for the A300 from both German and French authorities, clearing the way for its entry into revenue service. On 23 May 1974, Federal Aviation Administration (FAA) certification was received. The first production model, the A300B2, entered service in 1974, followed by the A300B4 one year later. Initially, sales were poor, in part due to the economic consequences of the 1973 oil crisis, but by 1979 there were 81 A300 passenger liners in service with 14 airlines, alongside 133 firm orders and 88 options. Ten years after the official launch of the A300, the company had achieved a 26 per cent market share in terms of dollar value, enabling Airbus Industrie to proceed with the development of its second aircraft, the Airbus A310. It was the launch of the Airbus A320 in 1987 that firmly established Airbus as a major player in the aircraft market – over 400 orders were placed before the narrow-body airliner had made its first flight, compared to 15 for the A300 in 1972.
The Airbus A300 is a wide-body medium-to-long range airliner; it has the distinction of being the first twin-engine wide-body aircraft in the world. In 1977, the A300 became the first ETOPS-compliant aircraft, due to its high performance and safety standards. Another world-first of the A300 is the use of composite materials on a commercial aircraft, which was used on both secondary and later primary airframe structures, decreasing overall weight and improving cost-effectiveness. Other firsts included the pioneering use of center-of-gravity control, achieved by transferring fuel between various locations across the aircraft, and electrically signaled secondary flight controls.
The A300 is powered by a pair of underwing turbofan engines, either General Electric CF6 or Pratt & Whitney JT9D engines; the sole use of underwing engine pods allowed for any suitable turbofan engine to be more readily used. The lack of a third tail-mounted engine, as per the trijet configuration used by some competing airliners, allowed for the wings to be located further forwards and to reduce the size of the vertical stabilizer and elevator, which had the effect of increasing the aircraft's flight performance and fuel efficiency.
Airbus partners had employed the latest technology, some of it derived from Concorde, on the A300. According to Airbus, new technologies adopted for the airliner were selected principally for increased safety, operational capability, and profitability. Upon entry into service in 1974, the A300 was a very advanced aircraft, and it went on to influence later airliner designs. Its technological highlights included advanced wings by de Havilland (later BAE Systems), with supercritical airfoil sections for economical performance and aerodynamically efficient flight control surfaces. The circular fuselage section allows eight-abreast passenger seating and is wide enough for two LD3 cargo containers side by side. Structures are made from metal billets, reducing weight. The A300 was the first airliner to be fitted with wind shear protection. Its advanced autopilots are capable of flying the aircraft from climb-out to landing, and it has an electrically controlled braking system.
Later A300s incorporated other advanced features such as the Forward-Facing Crew Cockpit, which enabled a two-pilot flight crew to fly the aircraft without a flight engineer, whose functions were automated; this two-man cockpit concept was a world first for a wide-body aircraft. Glass cockpit flight instrumentation, which used cathode ray tube (CRT) monitors to display flight, navigation, and warning information, along with fully digital dual autopilots and digital flight control computers for controlling the spoilers, flaps, and leading-edge slats, was also adopted on later-built models. Later aircraft also made greater use of composites, such as carbon-fiber-reinforced polymer (CFRP), in an increasing proportion of components, including the spoilers, rudder, air brakes, and landing gear doors. Another feature of later aircraft was the addition of wingtip fences, first introduced on the A310-300, which improved aerodynamic performance.
In addition to passenger duties, the A300 became widely used by air freight operators; according to Airbus, it is the best selling freight aircraft of all time. Various variants of the A300 were built to meet customer demands, often for diverse roles such as aerial refueling tankers, freighter models (new-build and conversions), combi aircraft, military airlifter, and VIP transport. Perhaps the most visually unique of the variants is the A300-600ST Beluga, an oversize cargo-carrying model operated by Airbus to carry aircraft sections between their manufacturing facilities. The A300 was the basis for, and retained a high level of commonality with, the second airliner produced by Airbus, the smaller Airbus A310.
On 23 May 1974, the first A300 to enter service performed the first commercial flight of the type, flying from Paris to London, for Air France.
After launch, sales of the A300 were weak for some years, with most orders going to airlines that had an obligation to favor the domestically made product – notably Air France and Lufthansa, the first two airlines to place orders for the type. Following the appointment of Bernard Lathière as Henri Ziegler's replacement, an aggressive sales approach was adopted. Indian Airlines was the world's first domestic airline to purchase the A300, ordering three aircraft with three options. However, between December 1975 and May 1977, there were no sales for the type. During this period a number of "whitetail" A300s – completed but unsold aircraft – sat in storage at Toulouse, and production fell to half an aircraft per month amid calls to pause production completely.
During the flight testing of the A300B2, Airbus held a series of talks with Korean Air on the topic of developing a longer-range version of the A300, which would become the A300B4. In September 1974, Korean Air placed an order for 4 A300B4s with options for 2 further aircraft; this sale was viewed as significant as it was the first non-European international airline to order Airbus aircraft. Airbus had viewed South-East Asia as a vital market that was ready to be opened up and believed Korean Air to be the 'key'.
Airlines operating the A300 on short-haul routes were forced to reduce frequencies to try to fill the aircraft. As a result, they lost passengers to airlines operating more frequent narrow-body flights. Eventually, Airbus had to build its own narrow-body aircraft (the A320) to compete with the Boeing 737 and McDonnell Douglas DC-9/MD-80. The savior of the A300 was the advent of Extended Range Twin Operations (ETOPS), a revised FAA rule which allowed twin-engine jets to fly long-distance routes that were previously off-limits to them. This enabled Airbus to develop the aircraft as a medium/long-range airliner.
In 1977, US carrier Eastern Air Lines leased four A300s as an in-service trial. Frank Borman, ex-astronaut and then CEO of the airline, was impressed that the A300 consumed 30% less fuel than his fleet of Lockheed L-1011 TriStars – even better economy than expected – and proceeded to order 23 A300s, becoming the first U.S. customer for the type. This order is often cited as the point at which Airbus came to be seen as a serious competitor to the large American aircraft manufacturers Boeing and McDonnell Douglas. Aviation author John Bowen alleged that various concessions, such as loan guarantees from European governments and compensation payments, were a factor in the decision as well. The Eastern Air Lines breakthrough was shortly followed by an order from Pan Am. From then on, the A300 family sold well, eventually reaching a total of 816 delivered aircraft.
In December 1977, Aerocondor Colombia became the first Airbus operator in Latin America, leasing one Airbus A300B4-2C, named "Ciudad de Barranquilla".
During the late 1970s, Airbus adopted a so-called 'Silk Road' strategy, targeting airlines in the Far East. As a result, the aircraft found particular favor with Asian airlines, being bought by Japan Air System, Korean Air, China Eastern Airlines, Thai Airways International, Singapore Airlines, Malaysia Airlines, Philippine Airlines, Garuda Indonesia, China Airlines, Pakistan International Airlines, Indian Airlines, Trans Australia Airlines and many others. As Asia did not have restrictions similar to the FAA's 60-minute rule for twin-engine airliners which existed at the time, Asian airlines used A300s for routes across the Bay of Bengal and South China Sea.
In 1977, the A300B4 became the first ETOPS-compliant aircraft, qualifying for extended twin-engine operations over water and providing operators with more versatility in routing. In 1982 Garuda Indonesia became the first airline to fly the A300B4-200FF. By 1981, Airbus was growing rapidly, having sold over 300 aircraft, with options for 200 more, to over forty airlines.
In 1989, Chinese operator China Eastern Airlines received its first A300; by 2006, the airline operated around 18 A300s, making it the largest operator of both the A300 and the A310 at that time. On 31 May 2014, China Eastern officially retired the last A300-600 in its fleet, having begun drawing down the type in 2010.
From 1997 to 2014, a single A300, designated A300 Zero-G, was operated by the European Space Agency (ESA), the Centre national d'études spatiales (CNES) and the German Aerospace Center (DLR) as a reduced-gravity aircraft for conducting research into microgravity; the A300 is the largest aircraft ever to have been used in this capacity. A typical flight would last two and a half hours, enabling up to 30 parabolas to be performed per flight.
By the 1990s, the A300 was being heavily promoted as a cargo freighter. The largest freight operator of the A300 is FedEx Express, which has 68 A300 aircraft in service. UPS Airlines also operates 52 freighter versions of the A300. The final version, the A300-600R, is rated for 180-minute ETOPS. The A300 has enjoyed renewed interest in the secondhand market for conversion to freighters; large numbers were converted during the late 1990s. The freighter versions – either new-build A300-600s or converted ex-passenger A300-600s, A300B2s and B4s – account for most of the world freighter fleet after the Boeing 747 freighter.
The A300 provided Airbus the experience of manufacturing and selling airliners competitively. The basic fuselage of the A300 was later stretched (A330 and A340), shortened (A310), or modified into derivatives (A300-600ST "Beluga" Super Transporter). In March 2006, Airbus announced the impending closure of the A300/A310 final assembly line, making them the first Airbus aircraft to be discontinued. The final production A300, an A300F freighter, performed its initial flight on 18 April 2007, and was delivered to FedEx Express on 12 July 2007. Airbus has announced a support package to keep A300s flying commercially.
Airbus offers the A330-200F freighter as a replacement for the A300 cargo variants.
The useful life of the UPS fleet of 52 A300s delivered from 2000 to 2006 will be extended to 2035 by a flight deck upgrade based around Honeywell Primus Epic avionics: new displays and flight management system (FMS), improved 3-D weather radar, a central maintenance system, and a new version of the current enhanced ground proximity warning system.
With a light usage of only two to three cycles per day, it will not reach the maximum number of cycles by then.
The first modification will be made at Airbus Toulouse in 2019 and certified in 2020.
As of July 2017, there are 211 A300s in service with 22 operators, with the largest operator being FedEx Express with 68 A300-600F aircraft.
Over 200 A300s still operate today.
Only two were built: the first prototype, registered F-WUAB, then F-OCAZ, and a second aircraft, F-WUAC, which was leased in November 1974 to Trans European Airways (TEA) and re-registered OO-TEF. TEA instantly subleased the aircraft for six weeks to Air Algérie, but continued to operate the aircraft until 1990. It had accommodation for 300 passengers (TEA) or 323 passengers (Air Algérie) with a maximum weight of 132 t and two General Electric CF6-50A engines of 220 kN thrust. The A300B1 was five frames shorter than the later production versions, being only in length.
The first production version. Powered by General Electric CF6 or Pratt & Whitney JT9D engines (the same engines that powered the 747 or the DC-10) of between 227 and 236 kN thrust, it entered service with Air France in May 1974. The prototype A300B2 made its first flight on 28 June 1973 and was certificated by the French and German authorities on 15 March 1974 and FAA approval followed on 30 May 1974. The first production A300B2 (A300 number 5) made its maiden flight on 15 April 1974 and was handed over to Air France a few weeks later on 10 May 1974. The A300B2 entered revenue service on 23 May 1974 between Paris and London.
The major production version features a centre fuel tank for increased fuel capacity (47,500 kg) and new wing-root Krüger flaps which were later made available as an option for the B2. Production of the B2 and B4 totalled 248. The first A300B4 (the 9th A300) flew on 25 December 1974 and was certified on 26 March 1975. The first delivery was made to Bavaria Germanair (which later merged into Hapag Lloyd) on 23 May 1975.
Officially designated A300B4-600, this version is nearly the same length as the B2 and B4 but has increased space because it uses the A310 rear fuselage and horizontal tail. It has higher-power CF6-80 or Pratt & Whitney PW4000 engines and uses the Honeywell 331-250 auxiliary power unit (APU). Other changes include an improved wing featuring a recambered trailing edge, the incorporation of simpler single-slotted Fowler flaps, the deletion of slat fences, and the removal of the outboard ailerons after they were deemed unnecessary on the A310. The A300-600 made its first flight on 8 July 1983 and entered service later that year with Saudi Arabian Airlines. A total of 313 A300-600s (all versions) have been sold. The A300-600 also has a similar cockpit to the A310, eliminating the need for a flight engineer. The FAA issues a single type rating which allows operation of both the A310 and A300-600.
Airbus saw demand for an aircraft smaller than the A300.
On 7 July 1978, the A310 (initially the A300B10) was launched with orders from Swissair and Lufthansa.
On 3 April 1982, the first prototype conducted its maiden flight and it received its type certification on 11 March 1983.
Keeping the same eight-abreast cross-section, the A310 is shorter than the initial A300 variants, and has a smaller wing, down from .
The A310 introduced a two-crew glass cockpit, later adopted for the A300-600 with a common type rating.
It was powered by the same General Electric CF6-80 or Pratt & Whitney JT9D then PW4000 turbofans.
It can seat 220 passengers in two classes, or 240 in all-economy, and can fly up to .
It has overwing exits between the two main front and rear door pairs.
In April 1983, the aircraft entered revenue service with Swissair and competed with the Boeing 767-200, introduced six months before.
Its longer range and ETOPS regulations allowed it to be operated on transatlantic flights.
Until the last delivery in June 1998, 255 aircraft were produced, as it was succeeded by the larger Airbus A330-200.
It has cargo aircraft versions, and was derived into the Airbus A310 MRTT military tanker/transport.
Commonly referred to as the Airbus Beluga or "Airbus Super Transporter," these five airframes are used by Airbus to ferry parts between the company's disparate manufacturing facilities, thus enabling workshare distribution. They replaced the four Aero Spacelines Super Guppys previously used by Airbus.
ICAO code: A3ST
As of October 2016, the A300 has been involved in 75 accidents and incidents, including 35 hull-losses and 1,435 fatalities.
Four A300s are currently preserved:
"Data through end of December 2007." | https://en.wikipedia.org/wiki?curid=2524 |
Agostino Carracci
Agostino Carracci (or Caracci) (16 August 1557 – 22 March 1602) was an Italian painter, printmaker, tapestry designer, and art teacher. He was, together with his brother, Annibale Carracci, and cousin, Ludovico Carracci, one of the founders of the Accademia degli Incamminati (Academy of the Progressives) in Bologna. This teaching academy promoted drawing from life, which the Carracci emphasized. It promoted progressive tendencies in art and was a reaction to the Mannerist distortion of anatomy and space. The academy helped propel painters of the School of Bologna to prominence.
Agostino Carracci was born in Bologna as the son of a tailor. He was the elder brother of Annibale Carracci and the cousin of Ludovico Carracci. He initially trained as a goldsmith. He later studied painting, first with Prospero Fontana, who had been Lodovico's master, and later with Bartolomeo Passarotti. He traveled to Parma to study the works of Correggio. Accompanied by his brother Annibale, he spent a long time in Venice, where he trained as an engraver under the renowned Cornelis Cort. Starting from 1574 he worked as a reproductive engraver, copying works of 16th century masters such as Federico Barocci, Tintoretto, Antonio Campi, Veronese and Correggio. He also produced some original prints, including two etchings.
He traveled to Venice (1582, 1587–1589) and Parma (1586–1587). Together with Annibale and Ludovico he worked in Bologna on the fresco cycles in Palazzo Fava ("Histories of Jason and Medea", 1584) and Palazzo Magnani ("Histories of Romulus", 1590–1592). In 1592 he also painted the "Communion of St. Jerome", now in the Pinacoteca di Bologna and considered his masterwork. In 1620, Giovanni Lanfranco, a pupil of the Carracci, famously accused another Carracci student, Domenichino, of plagiarizing this painting. From 1586 is his altarpiece of the "Madonna with Child and Saints", in the National Gallery of Parma. In 1598 Carracci joined his brother Annibale in Rome, to collaborate on the decoration of the Gallery in Palazzo Farnese. From 1598–1600 is a "triple Portrait", now in Naples, an example of genre painting. In 1600 he was called to Parma by Duke Ranuccio I Farnese to begin the decoration of the Palazzo del Giardino, but he died before it was finished.
Agostino's son Antonio Carracci was also a painter, and attempted to compete with his father's Academy.
An engraving by Agostino Carracci after the painting "Love in the Golden Age" by the 16th-century Flemish painter Paolo Fiammingo was the inspiration for Matisse's "Le bonheur de vivre" (Joy of Life).
"Oil on canvas unless otherwise noted"
The Carracci
Adenylyl cyclase
Adenylyl cyclase (, also commonly known as adenyl cyclase and adenylate cyclase, abbreviated AC) is an enzyme with key regulatory roles in essentially all cells. It is the most polyphyletic known enzyme: six distinct classes have been described, all catalyzing the same reaction but representing unrelated gene families with no known sequence or structural homology. The best known class of adenylyl cyclases is class III or AC-III (Roman numerals are used for classes). AC-III occurs widely in eukaryotes and has important roles in many human tissues.
All classes of adenylyl cyclase catalyse the conversion of adenosine triphosphate (ATP) to 3',5'-cyclic AMP (cAMP) and pyrophosphate. Magnesium ions are generally required and appear to be closely involved in the enzymatic mechanism. The cAMP produced by AC then serves as a regulatory signal via specific cAMP-binding proteins, either transcription factors, enzymes (e.g., cAMP-dependent kinases), or ion transporters.
The first class of adenylyl cyclases occurs in many bacteria including "E. coli" (as CyaA [unrelated to the Class II enzyme]). This was the first class of AC to be characterized. It was observed that "E. coli" deprived of glucose produces cAMP that serves as an internal signal to activate expression of genes for importing and metabolizing other sugars. cAMP exerts this effect by binding the transcription factor CRP, also known as CAP. Class I ACs are large cytosolic enzymes (~100 kDa) with a large regulatory domain (~50 kDa) that indirectly senses glucose levels. No crystal structure is available for class I AC.
Some indirect structural information is available for this class. It is known that the N-terminal half is the catalytic portion and that it requires two Mg2+ ions. S103, S113, D114, D116 and W118 are the five absolutely essential residues. The class I catalytic domain belongs to the same superfamily as the palm domain of DNA polymerase beta. Aligning its sequence onto the structure of a related archaeal CCA tRNA nucleotidyltransferase allows for assignment of the residues to specific functions: γ-phosphate binding, structural stabilization, a DxD motif for metal ion binding, and finally ribose binding.
These adenylyl cyclases are toxins secreted by pathogenic bacteria such as "Bacillus anthracis", "Bordetella pertussis", "Pseudomonas aeruginosa", and "Vibrio vulnificus" during infections. These bacteria also secrete proteins that enable the AC-II to enter host cells, where the exogenous AC activity undermines normal cellular processes. The genes for Class II ACs are known as cyaA; the edema factor of anthrax toxin is one such enzyme. Several crystal structures are known for AC-II enzymes.
These adenylyl cyclases are the most familiar based on extensive study due to their important roles in human health. They are also found in some bacteria, notably "Mycobacterium tuberculosis" where they appear to have a key role in pathogenesis. Most AC-III's are integral membrane proteins involved in transducing extracellular signals into intracellular responses. A Nobel Prize was awarded to Earl Sutherland in 1971 for discovering the key role of AC-III in human liver, where adrenaline indirectly stimulates AC to mobilize stored energy in the "fight or flight" response. The effect of adrenaline is via a G protein signaling cascade, which transmits chemical signals from outside the cell across the membrane to the inside of the cell (cytoplasm). The outside signal (in this case, adrenaline) binds to a receptor, which transmits a signal to the G protein, which transmits a signal to adenylyl cyclase, which transmits a signal by converting adenosine triphosphate to cyclic adenosine monophosphate (cAMP). cAMP is known as a second messenger.
Cyclic AMP is an important molecule in eukaryotic signal transduction, a so-called second messenger. Adenylyl cyclases are often activated or inhibited by G proteins, which are coupled to membrane receptors and thus can respond to hormonal or other stimuli. Following activation of adenylyl cyclase, the resulting cAMP acts as a second messenger by interacting with and regulating other proteins such as protein kinase A and cyclic nucleotide-gated ion channels.
Photoactivated adenylyl cyclase (PAC) was discovered in "Euglena gracilis" and can be expressed in other organisms through genetic manipulation. Shining blue light on a cell containing PAC activates it and abruptly increases the rate of conversion of ATP to cAMP. This is a useful technique for researchers in neuroscience because it allows them to quickly increase the intracellular cAMP levels in particular neurons, and to study the effect of that increase in neural activity on the behavior of the organism. A green-light activated rhodopsin adenylyl cyclase (CaRhAC) has recently been engineered by modifying the nucleotide binding pocket of rhodopsin guanylyl cyclase.
Most class III adenylyl cyclases are transmembrane proteins with 12 transmembrane segments. The protein is organized with 6 transmembrane segments, then the C1 cytoplasmic domain, then another 6 membrane segments, and then a second cytoplasmic domain called C2. The important parts for function are the N-terminus and the C1 and C2 regions. The C1a and C2a subdomains are homologous and form an intramolecular 'dimer' that forms the active site. In "Mycobacterium tuberculosis" and many other bacterial cases, the AC-III polypeptide is only half as long, comprising one 6-transmembrane domain followed by a cytoplasmic domain, but two of these form a functional homodimer that resembles the mammalian architecture with two active sites. In non-animal class III ACs, the catalytic cytoplasmic domain is seen associated with other (not necessarily transmembrane) domains.
Class III adenylyl cyclase domains can be further divided into four subfamilies, termed class IIIa through IIId. Animal membrane-bound ACs belong to class IIIa.
The reaction occurs with two metal cofactors (Mg2+ or Mn2+) coordinated to the two aspartate residues on C1. They facilitate the nucleophilic attack of the ribose 3'-OH group on the α-phosphoryl group of ATP. Two lysine and aspartate residues on C2 select ATP over GTP as the substrate, so that the enzyme is not a guanylyl cyclase. A pair of arginine and asparagine residues on C2 stabilizes the transition state. In many proteins these residues are nevertheless mutated while the adenylyl cyclase activity is retained.
There are ten known isoforms of adenylyl cyclases in mammals:
These are also sometimes called simply AC1, AC2, etc., and, somewhat confusingly, sometimes Roman numerals are used for these isoforms that all belong to the overall AC class III. They differ mainly in how they are regulated, and are differentially expressed in various tissues throughout mammalian development.
Adenylyl cyclase is regulated by G proteins, which can be found in the monomeric form or the heterotrimeric form, consisting of three subunits. Adenylyl cyclase activity is controlled by heterotrimeric G proteins. The inactive or inhibitory form exists when the complex consists of alpha, beta, and gamma subunits, with GDP bound to the alpha subunit. In order to become active, a ligand must bind to the receptor and cause a conformational change. This conformational change causes the alpha subunit to dissociate from the complex and become bound to GTP. This G-alpha-GTP complex then binds to adenylyl cyclase and causes activation and the release of cAMP. Since a good signal requires the help of enzymes, which turn on and off signals quickly, there must also be a mechanism in which adenylyl cyclase deactivates and inhibits cAMP. The deactivation of the active G-alpha-GTP complex is accomplished rapidly by GTP hydrolysis due to the reaction being catalyzed by the intrinsic enzymatic activity of GTPase located in the alpha subunit. It is also regulated by forskolin, as well as other isoform-specific effectors:
In neurons, calcium-sensitive adenylyl cyclases are located next to calcium ion channels for faster reaction to Ca2+ influx; they are suspected of playing an important role in learning processes. This is supported by the fact that adenylyl cyclases are "coincidence detectors", meaning that they are activated only by several different signals occurring together. In peripheral cells and tissues adenylyl cyclases appear to form molecular complexes with specific receptors and other signaling proteins in an isoform-specific manner.
Adenylyl cyclase has been implicated in memory formation, functioning as a coincidence detector.
AC-IV was first reported in the bacterium "Aeromonas hydrophila", and the structure of the AC-IV from "Yersinia pestis" has been reported. These are the smallest of the AC enzyme classes; the AC-IV (CyaB) from "Yersinia" is a dimer of 19 kDa subunits with no known regulatory components. AC-IV forms a superfamily, called CYTH (CyaB, thiamine triphosphatase), with the mammalian thiamine-triphosphatase.
These forms of AC have been reported in specific bacteria ("Prevotella ruminicola" and "Rhizobium etli", respectively) and have not been extensively characterized. A few hundred additional members (~400 in Pfam) are known to belong to class VI. Class VI enzymes possess a catalytic core similar to the one in Class III.
Articolo 31
Articolo 31 is a band from Milan, Italy, formed in 1990 by J-Ax and DJ Jad, combining hip hop, funk, pop and traditional Italian musical forms. They are one of the most popular Italian hip hop groups.
Articolo 31 were formed by rapper J-Ax (real name Alessandro Aleotti) and DJ Jad (Vito Luca Perrini).
In the spoken intro of the album "Strade di Città" ("City Streets"), it is stated that the band is named after the article of the Irish constitution guaranteeing freedom of the press, although article 31 of the Irish constitution is not about freedom of the press. They probably meant Section 31 of the Broadcasting Authority Act.
Articolo 31 released one of the first Italian hip hop records, "Strade di città", in 1993. Soon after, they signed with BMG Ricordi and began mixing rap with pop music, a move that earned them great commercial success but alienated the underground hip hop scene, which perceived them as traitors.
In 1997, DJ Gruff dissed Articolo 31 in a track titled "1 vs 2" on the first album of the beatmaker Fritz da Cat, starting a feud that would go on for years.
In 2001, Articolo 31 collaborated with the American old school rapper Kurtis Blow on the album "XChé SI!". In the same year, they made the film "Senza filtro" (in English, "Without Filter"). Their producer was Franco Godi, who also produced the music for the "Signor Rossi" animated series.
Their 2002 album "Domani smetto" represented a further departure from hip hop, relying increasingly on the formula of rapping over pop music samples. Several of their songs revolve around the legalization of soft drugs in Italy, arguing strongly in favour.
Following their 2003 album "Italiano medio", the band took a break, and both J-Ax and DJ Jad became involved with solo projects. In 2006, the group declared an indefinite hiatus.
Their posse, "Spaghetti Funk", includes other popular performers like Space One and pop rappers Gemelli DiVersi. | https://en.wikipedia.org/wiki?curid=2536 |
Alexander Kerensky
Alexander Fyodorovich Kerensky (4 May 1881 – 11 June 1970) was a Russian lawyer and revolutionary who was a key political figure in the Russian Revolution of 1917. After the February Revolution of 1917, he joined the newly formed Russian Provisional Government, first as Minister of Justice, then as Minister of War, and after July as the government's second Minister-Chairman. A leader of the moderate-socialist Trudovik faction of the Socialist Revolutionary Party, he was also vice-chairman of the powerful Petrograd Soviet. On 7 November, his government was overthrown by the Lenin-led Bolsheviks in the October Revolution. He spent the remainder of his life in exile, in Paris and New York City, and worked for the Hoover Institution.
Alexander Kerensky was born in Simbirsk (now Ulyanovsk) on the Volga River on 4 May 1881 and was the eldest son in the family. His father, Fyodor Mikhailovich Kerensky, was a teacher and director of the local gymnasium and was later promoted to be an inspector of public schools. His maternal grandfather was head of the Topographical Bureau of the Kazan Military District. His mother, Nadezhda Aleksandrovna (née Adler), was the granddaughter of a former serf who had managed to purchase his freedom before serfdom was abolished in 1861. The former serf subsequently embarked upon a mercantile career in which he prospered, eventually moving his business to Moscow, where he became a wealthy merchant.
Kerensky's father was the teacher of Vladimir Ulyanov (Lenin), and members of the Kerensky and Ulyanov families were friends. In 1889, when Kerensky was eight, the family moved to Tashkent, where his father had been appointed the main inspector of public schools (superintendent). Alexander graduated with honours in 1899. The same year he entered St. Petersburg University, where he studied history and philology. The next year he switched to law. He earned his law degree in 1904 and married Olga Lvovna Baranovskaya, the daughter of a Russian general, the same year. Kerensky joined the Narodnik movement and worked as a legal counsel to victims of the Revolution of 1905. At the end of 1904, he was jailed on suspicion of belonging to a militant group. Afterwards he gained a reputation for his work as a defence lawyer in a number of political trials of revolutionaries.
In 1912, Kerensky became widely known when he visited the goldfields at the Lena River and published material about the Lena Minefields incident. In the same year, Kerensky was elected to the Fourth Duma as a member of the Trudoviks, a moderate, non-Marxist labour party founded by Alexis Aladin that was associated with the Socialist-Revolutionary Party, and joined a Freemason society uniting the anti-monarchy forces that strove for the democratic renewal of Russia. In fact, the Socialist Revolutionary Party bought Kerensky a house, as he would otherwise not have been eligible for election to the Duma under Russian property laws. He soon became a significant Duma member of the "Progressive Block", which included several socialist parties, Mensheviks, and Liberals, but not the Bolsheviks. He was a brilliant orator and skilled parliamentary leader of the socialist opposition to the government of Tsar Nicholas II.
During the 4th Session of the Fourth Duma in spring 1915, Kerensky appealed to Rodzianko with a request from the Council of elders to inform the Tsar that to succeed in war he must:
1) change his domestic policy,
2) proclaim a General Amnesty for political prisoners,
3) restore the Constitution of Finland,
4) declare the autonomy of Poland,
5) provide national minorities autonomy in the field of culture,
6) abolish restrictions against Jews,
7) end religious intolerance,
8) stop the harassment of legal trade union organizations.
Kerensky was an active member of the irregular Freemasonic lodge, the Grand Orient of Russia's Peoples, which derived from the Grand Orient of France. Kerensky was Secretary General of the Grand Orient of Russia's Peoples and stood down following his ascent to government in July 1917. He was succeeded by the Menshevik, Alexander Halpern.
In response to bitter resentments held against the imperial favourite Grigori Rasputin in the midst of Russia's failing effort in World War I, Kerensky, at the opening of the Duma on 2 November 1916, called the imperial ministers "hired assassins" and "cowards", and alleged that they were "guided by the contemptible Grishka Rasputin!" Grand Duke Nikolai Mikhailovich, Prince Lvov, and general Mikhail Alekseyev attempted to persuade the emperor Nicholas II to send away the Empress Alexandra Feodorovna, Rasputin's steadfast patron, either to the Livadia Palace in Yalta or to England. Mikhail Rodzianko, Zinaida Yusupova (the mother of Felix Yusupov), Alexandra's sister Elisabeth, Grand Duchess Victoria and the empress's mother-in-law Maria Feodorovna also tried to influence and pressure the imperial couple to remove Rasputin from his position of influence within the imperial household, but without success. According to Kerensky, Rasputin had terrorised the empress by threatening to return to his native village.
Monarchists murdered Rasputin in December 1916, burying him near the imperial residence in Tsarskoye Selo. Shortly after the February Revolution of 1917, Kerensky ordered soldiers to re-bury the corpse at an unmarked spot in the countryside. However, the truck broke down or was forced to stop because of the snow on Lesnoe Road outside of St. Petersburg. It is likely the corpse was incinerated (between 3 and 7 in the morning) in the cauldrons of the nearby boiler shop of the Saint Petersburg State Polytechnical University, including the coffin, without leaving a single trace.
When the February Revolution broke out in 1917, Kerensky, together with Pavel Milyukov, was one of its most prominent leaders. As one of the Duma's best-known speakers against the monarchy and as a lawyer and defender of many revolutionaries, Kerensky became a member of the Provisional Committee of the State Duma and was elected vice-chairman of the newly formed Petrograd Soviet. These two bodies, the Duma and the Petrograd Soviet, or rather their respective executive committees, soon became each other's antagonists on most matters except the end of the Tsar's autocracy.
The Petrograd Soviet grew to include 3,000 to 4,000 members, and its meetings could drown in a blur of everlasting orations. The Executive Committee of the Petrograd Soviet, or Ispolkom, was then formed: a self-appointed committee with (eventually) three members from each of the parties represented in the Soviet. Kerensky became one of the members representing the Socialist Revolutionary Party (the SRs).
Without any consultation with the government, the Ispolkom of the Soviet issued the infamous Order No. 1, intended only for the 160,000-strong Petrograd garrison but soon interpreted as applicable to all soldiers at the front. The order stipulated that all military units should form committees like the Petrograd Soviet. This led to confusion and the "stripping of officers' authority"; "Order No. 3" further stipulated that the military was subordinate to Ispolkom in the political hierarchy. The ideas came from a group of socialists and aimed to limit the officers' power to military affairs, since the socialist intellectuals believed the officers to be the most likely counterrevolutionary elements. Kerensky's role in these orders is unclear, but he participated in the decisions. Just as he had defended many opponents of the Tsar before the revolution, he now saved the lives of many of the Tsar's civil servants who were about to be lynched by mobs.
Additionally, the Duma formed an executive committee which eventually became the Russian Provisional Government. As there was little trust between the Ispolkom and this government (and as he was about to accept the office of Minister of Justice in the Provisional Government), Kerensky gave a most passionate speech, not just to the Ispolkom but to the entire Petrograd Soviet. He swore, as minister, never to violate democratic values, ending his speech with the words: "I cannot live without the people. In the moment you begin to doubt me, then kill me." The huge majority of workers and soldiers gave him great applause, and Kerensky became the first, and the only, person to participate in both the Provisional Government and the Ispolkom. As a link between the two bodies, the quite ambitious Kerensky stood to benefit from this position.
After the first government crisis over Pavel Milyukov's secret note re-committing Russia to its original war aims on 2–4 May, Kerensky became the Minister of War and the dominant figure in the newly formed socialist-liberal coalition government. On 10 May (Julian calendar), Kerensky set out for the front and visited one division after another, urging the men to do their duty. His speeches were impressive and convincing for the moment, but had little lasting effect. Under Allied pressure to continue the war, he launched what became known as the Kerensky Offensive against the Austro-Hungarian/German South Army. At first successful, the offensive soon met strong resistance, and the Central Powers riposted with a strong counter-attack. The Russian army retreated and suffered heavy losses, and it became clear from the many incidents of desertion, sabotage, and mutiny that the army was no longer willing to attack.
The military heavily criticised Kerensky for his liberal policies, which included stripping officers of their mandates and handing control to revolutionary-inclined "soldier committees" instead, abolishing the death penalty, and allowing revolutionary agitators to be present at the front. Many officers jokingly referred to commander-in-chief Kerensky as the "persuader-in-chief".
On 2 July 1917 the Provisional Government's first coalition collapsed over the question of Ukraine's autonomy. Following the July Days unrest in Petrograd (3–7 July [16–20 July, N.S.] 1917) and the official suppression of the Bolsheviks, Kerensky succeeded Prince Lvov as Russia's Prime Minister. Following the Kornilov Affair, an attempted military coup d'état at the end of August, and the resignation of the other ministers, he appointed himself Supreme Commander-in-Chief as well.
On 15 September Kerensky proclaimed Russia a republic, which was contrary to the non-socialists' understanding that the Provisional Government should hold power only until a Constituent Assembly met to decide Russia's form of government, but which was in line with the long-proclaimed aim of the Socialist Revolutionary Party. He formed a five-member Directory consisting of himself, Minister of Foreign Affairs Mikhail Tereshchenko, the Minister of War, Minister of the Navy Admiral Dmitry Verderevsky and the Minister of Posts and Telegraphs. He retained his post in the final coalition government of October 1917 until the Bolsheviks overthrew it.
Kerensky faced a major challenge: three years of participation in the World War had exhausted Russia, while the Provisional Government offered little motivation for victory beyond continuing Russia's obligations towards its allies. Russia's continued involvement in the war was unpopular among the lower and middle classes, and especially among the soldiers, who had believed that Russia would stop fighting when the Provisional Government took power and subsequently felt deceived. Furthermore, Vladimir Lenin and his Bolshevik party were promising "peace, land, and bread" under a communist system. The Russian army, war-weary, ill-equipped, dispirited and ill-disciplined, was disintegrating, with soldiers deserting in large numbers; by autumn 1917, an estimated two million men had unofficially left the army.
Kerensky and the other political leaders continued Russia's involvement in World War I, believing that only a glorious victory offered a way forward and fearing that the economy, already under huge stress from the war effort, might become increasingly unstable if vital supplies from France and the United Kingdom ceased flowing. The dilemma of whether to withdraw was a great one, and Kerensky's inconsistent and impractical policies further destabilised the army and the country at large.
Furthermore, Kerensky adopted a policy that isolated the right-wing conservatives, both democratic and monarchist-oriented. His philosophy of "no enemies to the left" greatly empowered the Bolsheviks and gave them a free hand, allowing them to take over the military arm, or "voyenka", of the Petrograd and Moscow Soviets. His arrest of Lavr Kornilov and other officers left him without strong allies against the Bolsheviks, who ended up being Kerensky's strongest and most determined adversaries, as opposed to the right wing, which evolved into the White movement.
During the Kornilov Affair, Kerensky had distributed arms to the Petrograd workers, and by November most of these armed workers had gone over to the Bolsheviks. On 7 November 1917, the Bolsheviks launched the second Russian revolution of the year. Kerensky's government in Petrograd had almost no support in the city. Only one small force, a subdivision of the 2nd company of the First Petrograd Women's Battalion, also known as The Women's Death Battalion, was willing to fight for the government against the Bolsheviks, but it was overwhelmed by the numerically superior pro-Bolshevik forces, defeated, and captured. The Bolsheviks overthrew the government rapidly by seizing governmental buildings and the Winter Palace.
Kerensky escaped the Bolsheviks and fled to Pskov, where he rallied some loyal troops for an attempt to re-take the city. His troops managed to capture Tsarskoe Selo but were beaten the next day at Pulkovo. Kerensky narrowly escaped, and he spent the next few weeks in hiding before fleeing the country, eventually arriving in France. During the Russian Civil War, he supported neither side, as he opposed both the Bolshevik regime and the White Movement.
Kerensky was married to Olga Lvovna Baranovskaya, and they had two sons, Oleg and Gleb, who both became engineers. According to IMDb, Kerensky's grandson, also named Oleg, played his grandfather in the 1981 film "Reds". Kerensky and Olga divorced in 1939, and soon afterwards he settled in Paris. While visiting the United States, he met the Australian former journalist Lydia Ellen "Nell" Tritton (1899–1946), whom he married in 1939. The marriage took place in Martins Creek, Pennsylvania.
When Germany invaded France in 1940, they emigrated to the United States. After the Axis invasion of the Soviet Union in 1941, Kerensky offered his support to Joseph Stalin. When his wife Nell became terminally ill in 1945, Kerensky travelled with her to Brisbane, Australia, and lived there with her family. She suffered a stroke in February 1946, and he remained there until her death on 10 April 1946. Kerensky then returned to the United States, where he spent the rest of his life.
Kerensky eventually settled in New York City, living on the Upper East Side on 91st Street near Central Park but spent much of his time at the Hoover Institution at Stanford University in California, where he both used and contributed to the Institution's huge archive on Russian history, and where he taught graduate courses. He wrote and broadcast extensively on Russian politics and history.
Kerensky died of arteriosclerotic heart disease at St. Luke's Hospital in New York City in 1970, one of the last surviving major participants in the turbulent events of 1917. The local Russian Orthodox Churches in New York City refused to grant Kerensky burial because of his association with Freemasonry, and because they saw him as largely responsible for the Bolsheviks seizing power. A Serbian Orthodox Church also refused burial. Kerensky's body was flown to London, where he was buried at the non-denominational Putney Vale Cemetery. | https://en.wikipedia.org/wiki?curid=2543 |
Ansgar
Ansgar (8 September 801 – 3 February 865), also known as Anskar, Saint Ansgar, Saint Anschar or Oscar, was Archbishop of Hamburg-Bremen in the northern part of the Kingdom of the East Franks. Ansgar became known as the "Apostle of the North" because of his travels, and the See of Hamburg received the missionary mandate to bring Christianity to Northern Europe.
Ansgar was the son of a noble Frankish family, born near Amiens (in present-day France). After his mother's early death, Ansgar was brought up and educated at Corbie Abbey, the Benedictine monastery in Picardy. According to the "Vita Ansgarii" ("Life of Ansgar"), when the young boy learned in a vision that his mother was in the company of Mary, mother of Jesus, his careless attitude toward spiritual matters changed to seriousness. His pupil, successor, and eventual biographer Rimbert considered the visions (of which this was the first) to have been Ansgar's main motivator in life.
Ansgar was a product of the phase of Christianization of Saxony (present day Northern Germany) begun by Charlemagne and continued by his son and successor, Louis the Pious. In 822 Ansgar became one of many missionaries sent to found the abbey of Corvey (New Corbie) in Westphalia, where he became a teacher and preacher. A group of monks including Ansgar were sent further north to Jutland with the king Harald Klak, who had received baptism during his exile. With Harald's downfall in 827 and Ansgar's companion Autbert having died, their school for the sons of courtiers closed and Ansgar returned to Germany. Then in 829, after the Swedish king Björn at Hauge requested missionaries for his Swedes, King Louis sent Ansgar, now accompanied by friar Witmar from New Corbie as his assistant. Ansgar preached and made converts, particularly during six months at Birka, on Lake Mälaren, where the wealthy widow Mor Frideborg extended hospitality. Ansgar organized a small congregation with her and the king's steward, Hergeir, as its most prominent members.
In 831 Ansgar returned to Louis' court at Worms and was appointed to the Archbishopric of Hamburg-Bremen. This was a new archbishopric, incorporating the bishoprics of Bremen and Verden and with the right to send missions into all the northern lands, as well as to consecrate bishops for them. Ansgar received the mission of evangelizing pagan Denmark, Norway and Sweden. The King of Sweden decided to cast lots as to whether to admit the Christian missionaries into his kingdom. Ansgar recommended the issue to the care of God, and the lot was favorable. Ansgar was consecrated as a bishop in November 831, with the approval of Gregory IV. Before traveling north once again, Ansgar traveled to Rome to receive the pallium directly from the pope's hands, and was formally named legate for the northern lands. Ebbo, Archbishop of Reims had previously received a similar commission, but would be deposed twice before his death in 851, and never actually traveled so far north, so the jurisdiction was divided by agreement, with Ebbo retaining Sweden for himself. For a time Ansgar devoted himself to the needs of his own diocese, which was still missionary territory and had few churches. He founded a monastery and a school in Hamburg. Although intended to serve the Danish mission further north, it accomplished little.
After Louis the Pious died in 840, his empire was divided and Ansgar lost the abbey of Turholt, which Louis had given to endow Ansgar's work. Then in 845, the Danes unexpectedly raided Hamburg, destroying all the church's treasures and books. Ansgar now had neither see nor revenue, and many helpers deserted him. The new king, Louis' third son, Louis the German, did not re-endow Turholt to Ansgar, but in 847 he named the missionary to the vacant diocese of Bremen, where Ansgar moved in 848. However, since Bremen had been suffragan to the Bishop of Cologne, combining the sees of Bremen and Hamburg presented canonical difficulties. After prolonged negotiations, Pope Nicholas I would approve the union of the two dioceses in 864.
Through this political turmoil, Ansgar continued his northern mission. The Danish civil war compelled him to establish good relations with two kings, Horik the Elder and his son, Horik II. Both assisted him until his death; Ansgar was able to secure permission to build a church in Sleswick, north of Hamburg, and recognition of Christianity as a tolerated religion. Ansgar did not forget the Swedish mission, and spent two years there in person (848–850), averting a threatened pagan reaction. In 854, Ansgar returned to Sweden when king Olof ruled in Birka. According to Rimbert, Olof was well disposed to Christianity. On a Viking raid to Apuole (a current village in Lithuania) in Courland, the Swedes plundered the Curonians.
Ansgar was buried in Bremen in 865. His successor as archbishop, Rimbert, wrote the "Vita Ansgarii". He noted that Ansgar wore a rough hair shirt, lived on bread and water, and showed great charity to the poor. Adam of Bremen attributed the "Vita et miracula of Willehad" (first bishop of Bremen) to Ansgar in "Gesta Hammenburgensis ecclesiæ"; Ansgar is also the reputed author of a collection of brief prayers "Pigmenta" (ed. J. M. Lappenberg, Hamburg, 1844). Pope Nicholas I declared Ansgar a saint shortly after the missionary's death. The first actual missionary in Sweden and the Nordic countries (and organizer of the Catholic church therein), Ansgar was later declared "Patron of Scandinavia".
Relics are located in Hamburg in two places: St. Mary's Cathedral (Ger.: Domkirche St. Marien) and St. Ansgar's and St. Bernard's Church (Ger.: St. Ansgar und St. Bernhard Kirche).
Statues of Bishop Ansgar stand in Hamburg, Copenhagen and Ribe, as well as a stone cross at Birka. A crater on the Moon, Ansgarius, has been named for him. His feast day is 3 February.
Although it is a historical document and a primary source written by a historically attested author, the "Vita Ansgarii" ("The Life of Ansgar") aims above all to demonstrate Ansgar's sanctity. It is partly concerned with Ansgar's visions, which, according to the author Rimbert, encouraged and assisted Ansgar's remarkable missionary feats.
Through the course of this work, Ansgar repeatedly embarks on a new stage in his career following a vision. According to Rimbert, his early studies and ensuing devotion to the ascetic life of a monk were inspired by a vision of his mother in the presence of Mary, mother of Jesus. Again, when the Swedish people were left without a priest for some time, he begged King Horik to help him with this problem; then after receiving his consent, consulted with Bishop Gautbert to find a suitable man. The two together sought the approval of King Louis, which he granted when he learned that they were in agreement on the issue. Ansgar was convinced he was commanded by heaven to undertake this mission and was influenced by a vision he received when he was concerned about the journey, in which he met a man who reassured him of his purpose and informed him of a prophet that he would meet, the abbot Adalhard, who would instruct him in what was to happen. In the vision, he searched for and found Adalhard, who commanded, "Islands, listen to me, pay attention, remotest peoples", which Ansgar interpreted as God's will that he go to the Scandinavian countries as "most of that country consisted of islands, and also, when 'I will make you the light of the nations so that my salvation may reach to the ends of the earth' was added, since the end of the world in the north was in Swedish territory". | https://en.wikipedia.org/wiki?curid=2544 |
Automated theorem proving
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science.
While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalised mathematics. Frege's "Begriffsschrift" (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic. His "Foundations of Arithmetic", published in 1884, expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential "Principia Mathematica", first published in 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems.
In 1929, Mojżesz Presburger showed that the theory of natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine if a given sentence in the language was true or false.
However, shortly after this positive result, Kurt Gödel published "On Formally Undecidable Propositions of Principia Mathematica and Related Systems" (1931), showing that in any sufficiently strong axiomatic system there are true statements which cannot be proved in the system. This topic was further developed in the 1930s by Alonzo Church and Alan Turing, who on the one hand gave two independent but equivalent definitions of computability, and on the other gave concrete examples for undecidable questions.
Shortly after World War II, the first general purpose computers became available. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum tube computer at the Princeton Institute for Advanced Study. According to Davis, "Its great triumph was to prove that the sum of two even numbers is even". More ambitious was the Logic Theory Machine in 1956, a deduction system for the propositional logic of the "Principia Mathematica", developed by Allen Newell, Herbert A. Simon and J. C. Shaw. Also running on a JOHNNIAC, the Logic Theory Machine constructed proofs from a small set of propositional axioms and three deduction rules: modus ponens, (propositional) variable substitution, and the replacement of formulas by their definition. The system used heuristic guidance, and managed to prove 38 of the first 52 theorems of the "Principia".
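The Logic Theory Machine itself searched heuristically in IPL on the JOHNNIAC, but its modus ponens rule is easy to illustrate. Below is a minimal sketch, not the historical program: formulas are encoded as nested tuples of my own choosing, with implication written `("imp", A, B)`, and derivation proceeds by naive forward chaining rather than the Machine's guided backward search.

```python
def prove_by_modus_ponens(axioms, goal, max_rounds=100):
    """Naive forward chaining with modus ponens: from A and ('imp', A, B),
    derive B; repeat until the goal appears or nothing new is derivable."""
    known = set(axioms)
    for _ in range(max_rounds):
        if goal in known:
            return True
        new = {
            f[2]                          # consequent B of ('imp', A, B)
            for f in known
            if isinstance(f, tuple) and f[0] == "imp" and f[1] in known
        } - known
        if not new:                       # fixed point reached
            break
        known |= new
    return goal in known

# From p and the chain p -> q, q -> r, forward chaining derives r but not s.
axioms = {"p", ("imp", "p", "q"), ("imp", "q", "r")}
assert prove_by_modus_ponens(axioms, "r")
assert not prove_by_modus_ponens(axioms, "s")
```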
The "heuristic" approach of the Logic Theory Machine tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic. Initial approaches relied on the results of Herbrand and Skolem to convert a first-order formula into successively larger sets of propositional formulae by instantiating variables with terms from the Herbrand universe. The propositional formulas could then be checked for unsatisfiability using a number of methods. Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious.
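The propositional checks these early systems bottomed out in can be sketched by brute force. This toy (the nested-tuple formula encoding is an assumption of the sketch, not Gilmore's DNF procedure or any historical system) decides unsatisfiability by enumerating all 2^n truth assignments:

```python
from itertools import product

def atoms(f):
    """Collect atom names from a nested-tuple formula."""
    if isinstance(f, str):
        return {f}
    _op, *args = f
    return set().union(*(atoms(a) for a in args))

def evaluate(f, env):
    """Evaluate a formula under a truth assignment env: name -> bool."""
    if isinstance(f, str):
        return env[f]
    op, *args = f
    if op == "not": return not evaluate(args[0], env)
    if op == "and": return all(evaluate(a, env) for a in args)
    if op == "or":  return any(evaluate(a, env) for a in args)
    if op == "imp": return (not evaluate(args[0], env)) or evaluate(args[1], env)
    raise ValueError(f"unknown connective: {op}")

def unsatisfiable(f):
    """True iff no assignment satisfies f (exponential in the number of atoms)."""
    names = sorted(atoms(f))
    return not any(
        evaluate(f, dict(zip(names, vals)))
        for vals in product([False, True], repeat=len(names))
    )

# p AND NOT p is unsatisfiable; p OR NOT p is satisfiable (indeed valid).
assert unsatisfiable(("and", "p", ("not", "p")))
assert not unsatisfiable(("or", "p", ("not", "p")))
```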
Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the frequent case of propositional logic, the problem is decidable but co-NP-complete, and hence only exponential-time algorithms are believed to exist for general proof tasks. For a first order predicate calculus, Gödel's completeness theorem states that the theorems (provable statements) are exactly the logically valid well-formed formulas, so identifying valid formulas is recursively enumerable: given unbounded resources, any valid formula can eventually be proven. However, "invalid" formulas (those that are "not" entailed by a given theory), cannot always be recognized.

The above applies to first order theories, such as Peano arithmetic. However, for a specific model that may be described by a first order theory, some statements may be true but undecidable in the theory used to describe the model. For example, by Gödel's incompleteness theorem, we know that any theory whose proper axioms are true for the natural numbers cannot prove all first order statements true for the natural numbers, even if the list of proper axioms is allowed to be infinite but enumerable. It follows that an automated theorem prover will fail to terminate while searching for a proof precisely when the statement being investigated is undecidable in the theory being used, even if it is true in the model of interest. Despite this theoretical limit, in practice, theorem provers can solve many hard problems, even in models that are not fully described by any first order theory (such as the integers).
A simpler, but related, problem is "proof verification", where an existing proof for a theorem is certified valid. For this, it is generally required that each individual proof step can be verified by a primitive recursive function or program, and hence the problem is always decidable.
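Such a step-by-step checker is straightforward to sketch. Assuming a toy Hilbert-style setting (an illustrative assumption, reusing a nested-tuple encoding with implication as `("imp", A, B)`), every proof line must be a given axiom or hypothesis, or follow by modus ponens from two earlier lines; checking each line is then a simple, always-terminating scan:

```python
def check_proof(lines, axioms):
    """Verify a proof: each line must be an axiom/hypothesis, or follow by
    modus ponens (from B and ('imp', B, F), conclude F) from earlier lines."""
    proved = []
    for f in lines:
        ok = f in axioms or any(
            a == ("imp", b, f)            # some earlier a is "b implies f"
            for a in proved for b in proved
        )
        if not ok:
            return False                  # line f is not justified
        proved.append(f)
    return True

axioms = {"p", ("imp", "p", "q"), ("imp", "q", "r")}
# p, p->q, q, q->r, r is a valid modus-ponens chain; r alone is unjustified.
assert check_proof(["p", ("imp", "p", "q"), "q", ("imp", "q", "r"), "r"], axioms)
assert not check_proof(["r"], axioms)
```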
Since the proofs generated by automated theorem provers are typically very large, the problem of proof compression is crucial and various techniques aiming at making the prover's output smaller, and consequently more easily understandable and checkable, have been developed.
Proof assistants require a human user to give hints to the system. Depending on the degree of automation, the prover can essentially be reduced to a proof checker, with the user providing the proof in a formal way, or significant proof tasks can be performed automatically. Interactive provers are used for a variety of tasks, but even fully automatic systems have proved a number of interesting and hard theorems, including at least one that has eluded human mathematicians for a long time, namely the Robbins conjecture. However, these successes are sporadic, and work on hard problems usually requires a proficient user.
Another distinction is sometimes drawn between theorem proving and other techniques, where a process is considered to be theorem proving if it consists of a traditional proof, starting with axioms and producing new inference steps using rules of inference. Other techniques would include model checking, which, in the simplest case, involves brute-force enumeration of many possible states (although the actual implementation of model checkers requires much cleverness, and does not simply reduce to brute force).
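Brute-force state enumeration can be sketched as a small explicit-state safety checker: explore every reachable state of a transition system and test an invariant on each. This is a toy illustration under assumed inputs (an initial state, a successor function, and an invariant predicate); real model checkers add symbolic representations and far more cleverness.

```python
from collections import deque

def check_invariant(initial, successors, invariant):
    """Explicit-state model checking of a safety property: BFS over all
    reachable states, testing the invariant on each.
    Returns (True, None) if the invariant holds, else (False, bad_state)."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return False, s               # counterexample state found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return True, None

# Toy system: a counter that increments mod 4; the invariant "never 5" holds.
ok, bad = check_invariant(0, lambda s: [(s + 1) % 4], lambda s: s != 5)
assert ok and bad is None
# Without the wrap-around, the counter reaches 5 and violates the invariant.
ok, bad = check_invariant(0, lambda s: [s + 1] if s < 10 else [], lambda s: s != 5)
assert not ok and bad == 5
```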
There are hybrid theorem proving systems which use model checking as an inference rule. There are also programs which were written to prove a particular theorem, with a (usually informal) proof that if the program finishes with a certain result, then the theorem is true. A good example of this was the machine-aided proof of the four color theorem, which was very controversial as the first claimed mathematical proof that was essentially impossible to verify by humans due to the enormous size of the program's calculation (such proofs are called non-surveyable proofs). Another example of a program-assisted proof is the one that shows that the game of Connect Four can always be won by the first player.
Commercial use of automated theorem proving is mostly concentrated in integrated circuit design and verification. Since the Pentium FDIV bug, the complicated floating point units of modern microprocessors have been designed with extra scrutiny. AMD, Intel and others use automated theorem proving to verify that division and other operations are correctly implemented in their processors.
In the late 1960s, agencies funding research in automated deduction began to emphasize the need for practical applications. One of the first fruitful areas was that of program verification, whereby first-order theorem provers were applied to the problem of verifying the correctness of computer programs in languages such as Pascal, Ada, etc. Notable among early program verification systems was the Stanford Pascal Verifier, developed by David Luckham at Stanford University. It was based on the Stanford Resolution Prover, also developed at Stanford using John Alan Robinson's resolution principle, which was the first automated deduction system to demonstrate an ability to solve mathematical problems that were announced in the Notices of the American Mathematical Society before solutions were formally published.
First-order theorem proving is one of the most mature subfields of automated theorem proving. The logic is expressive enough to allow the specification of arbitrary problems, often in a reasonably natural and intuitive way. On the other hand, it is still only semi-decidable, and a number of sound and complete calculi have been developed, enabling "fully" automated systems. More expressive logics, such as higher-order logics, allow the convenient expression of a wider range of problems than first-order logic, but theorem proving for these logics is less well developed.
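One of those sound and complete calculi is resolution. Its propositional core can be sketched as follows (first-order resolution adds unification of terms on top of this; the encoding below is an illustration, not any real prover's format): to prove a goal, negate it, add it to the clause set, and saturate; deriving the empty clause signals a contradiction, so the original goal follows.

```python
# Sketch of the propositional core of resolution.
# A clause is a frozenset of literals; a negative literal is ('-', p).

def negate(lit):
    """Complement a literal: p <-> ('-', p)."""
    return lit[1] if isinstance(lit, tuple) else ('-', lit)

def resolvents(c1, c2):
    """All clauses obtained by resolving c1 and c2 on a complementary pair."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def unsatisfiable(clauses):
    """Saturate the clause set; True iff the empty clause is derivable."""
    clauses = set(map(frozenset, clauses))
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                for r in resolvents(a, b):
                    if not r:
                        return True      # empty clause: contradiction
                    new.add(frozenset(r))
        if new <= clauses:
            return False                 # saturated, no contradiction
        clauses |= new

# Prove q from p and (p -> q): clauses {p}, {-p, q}, plus the negated goal {-q}.
print(unsatisfiable([{'p'}, {('-', 'p'), 'q'}, {('-', 'q')}]))  # True
```

Saturation terminates here because only finitely many clauses exist over a finite set of propositional atoms; in the first-order case the clause space is infinite, which is where semi-decidability enters.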
The quality of implemented systems has benefited from the existence of a large library of standard benchmark examples — the Thousands of Problems for Theorem Provers (TPTP) Problem Library — as well as from the CADE ATP System Competition (CASC), a yearly competition of first-order systems for many important classes of first-order problems.
Some important systems (all have won at least one CASC competition division) are listed below.
The Theorem Prover Museum is an initiative to conserve the sources of theorem prover systems for future analysis, since they are important cultural/scientific artefacts. It has the sources of many of the systems mentioned above.
Agent Orange
Agent Orange is a herbicide and defoliant chemical, one of the "tactical use" Rainbow Herbicides. It is widely known for its use by the U.S. military as part of its chemical warfare program, Operation Ranch Hand, during the Vietnam War from 1961 to 1971. It is a mixture of equal parts of two herbicides, 2,4,5-T and 2,4-D. In addition to its damaging environmental effects, traces of dioxin (mainly TCDD, the most toxic of its type) found in the mixture have caused major health problems for many individuals who were exposed.
Up to four million people in Vietnam were exposed to the defoliant. The government of Vietnam says as many as three million people have suffered illness because of Agent Orange, and the Red Cross of Vietnam estimates that up to one million people are disabled or have health problems as a result of Agent Orange contamination. The United States government has described these figures as unreliable, while documenting higher rates of leukemia, Hodgkin's lymphoma, and various kinds of cancer in exposed U.S. military veterans. An epidemiological study done by the Centers for Disease Control and Prevention showed that there was an increase in the rate of birth defects among the children of military personnel as a result of Agent Orange. Agent Orange has also caused enormous environmental damage in Vietnam. Over 3,100,000 hectares (31,000 km² or 11,969 sq mi) of forest were defoliated. Defoliants eroded tree cover and seedling forest stock, making reforestation difficult in numerous areas. Animal species diversity was sharply reduced in comparison with unsprayed areas.
The use of Agent Orange in Vietnam resulted in numerous legal actions. The United Nations passed General Assembly Resolution 31/72 and adopted the Environmental Modification Convention. Lawsuits filed on behalf of both U.S. and Vietnamese veterans sought compensation for damages.
Agent Orange was first used by the British Armed Forces in Malaya during the Malayan Emergency. It was also used by the US military in Laos and Cambodia during the Vietnam War because forests near the border with Vietnam were used by the Viet Cong. The herbicide was also used in Brazil to clear out sections of land for agriculture.
The active ingredient of Agent Orange was an equal mixture of two phenoxy herbicides – 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) – in iso-octyl ester form, which contained traces of the dioxin 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). TCDD was a trace (typically 2–3 ppm, ranging from 50 ppb to 50 ppm) but significant contaminant of Agent Orange.
TCDD is the most toxic of the dioxins and is classified as a human carcinogen by the U.S. Environmental Protection Agency (EPA). The fat-soluble nature of TCDD causes it to readily enter the body through physical contact or ingestion. Dioxin easily accumulates in the food chain. Dioxin enters the body by attaching to a protein called the aryl hydrocarbon receptor (AhR), a transcription factor. When TCDD binds to AhR, the protein moves to the nucleus, where it influences gene expression.
According to U.S. government reports, if not bound chemically to a biological surface such as soil, leaves or grass, Agent Orange dries quickly after spraying and breaks down within hours to days when exposed to sunlight and is no longer harmful.
Several herbicides were developed as part of efforts by the United States and Great Britain to create herbicidal weapons for use during World War II. These included 2,4-D, 2,4,5-T, MCPA (2-methyl-4-chlorophenoxyacetic acid, 1414B and 1414A, recoded LN-8 and LN-32), and isopropyl phenylcarbamate (1313, recoded LN-33).
In 1943, the United States Department of the Army contracted botanist and bioethicist Arthur Galston, who discovered the defoliants later used in Agent Orange, and his employer University of Illinois at Urbana–Champaign to study the effects of 2,4-D and 2,4,5-T on cereal grains (including rice) and broadleaf crops. While a graduate and post-graduate student at the University of Illinois, Galston's research and dissertation focused on finding a chemical means to make soybeans flower and fruit earlier. He discovered both that 2,3,5-triiodobenzoic acid (TIBA) would speed up the flowering of soybeans and that in higher concentrations it would defoliate the soybeans. From these studies arose the concept of using aerial applications of herbicides to destroy enemy crops to disrupt their food supply. In early 1945, the U.S. Army ran tests of various 2,4-D and 2,4,5-T mixtures at the Bushnell Army Airfield in Florida. As a result, the U.S. began a full-scale production of 2,4-D and 2,4,5-T and would have used it against Japan in 1946 during Operation Downfall if the war had continued.
In the years after the war, the U.S. tested 1,100 compounds, and field trials of the more promising ones were done at British stations in India and Australia, in order to establish their effects in tropical conditions, as well as at the U.S.'s testing ground in Florida. Between 1950 and 1952, trials were conducted in Tanganyika, at Kikore and Stunyansa, to test arboricides and defoliants under tropical conditions. The chemicals involved were 2,4-D, 2,4,5-T, and endothall (3,6-endoxohexahydrophthalic acid). During 1952–53, the unit supervised the aerial spraying of 2,4,5-T in Kenya to assess the value of defoliants in the eradication of tsetse fly.
During the Malayan Emergency (1948–1960), Britain was the first nation to employ herbicides and defoliants to destroy bushes, trees, and vegetation to deprive insurgents of concealment and to target food crops as part of a starvation campaign in the early 1950s. A detailed account of how the British experimented with the spraying of herbicides was written by two scientists, E.K. Woodford of the Agricultural Research Council's Unit of Experimental Agronomy and H.G.H. Kearns of the University of Bristol.
After the Malayan conflict ended in 1960, the U.S. considered the British precedent in deciding that the use of defoliants was a legal tactic of warfare. Secretary of State Dean Rusk advised President John F. Kennedy that the British had established a precedent for warfare with herbicides in Malaya.
In mid-1961, President Ngo Dinh Diem of South Vietnam asked the United States to conduct aerial herbicide spraying in his country. In August of that year, the Republic of Vietnam Air Force conducted herbicide operations with American help. Diem's request launched a policy debate in the White House and the State and Defense Departments. However, U.S. officials considered using it, pointing out that the British had already used herbicides and defoliants during the Malayan Emergency in the 1950s. In November 1961, Kennedy authorized the start of Operation Ranch Hand, the codename for the United States Air Force's herbicide program in Vietnam.
During the Vietnam War, between 1962 and 1971, the United States military sprayed nearly of various chemicals – the "rainbow herbicides" and defoliants – in Vietnam, eastern Laos, and parts of Cambodia as part of Operation Ranch Hand, reaching its peak from 1967 to 1969. For comparison purposes, an Olympic-size pool holds approximately . As the British did in Malaya, the goal of the U.S. was to defoliate rural/forested land, depriving guerrillas of food and concealment and clearing sensitive areas such as around base perimeters. The program was also a part of a general policy of forced draft urbanization, which aimed to destroy the ability of peasants to support themselves in the countryside, forcing them to flee to the U.S.-dominated cities, depriving the guerrillas of their rural support base. Agent Orange was usually sprayed from helicopters or from low-flying C-123 Provider aircraft fitted with sprayers and "MC-1 Hourglass" pump systems and chemical tanks. Spray runs were also conducted from trucks, boats, and backpack sprayers.
The first batch of herbicides was unloaded at Tan Son Nhut Air Base in South Vietnam, on January 9, 1962. U.S. Air Force records show at least 6,542 spraying missions took place over the course of Operation Ranch Hand. By 1971, 12 percent of the total area of South Vietnam had been sprayed with defoliating chemicals, at an average concentration of 13 times the recommended U.S. Department of Agriculture application rate for domestic use. In South Vietnam alone, an estimated of agricultural land was ultimately destroyed. In some areas, TCDD concentrations in soil and water were hundreds of times greater than the levels considered safe by the EPA.
The campaign destroyed of upland and mangrove forests and thousands of square kilometres of crops. Overall, more than 20% of South Vietnam's forests were sprayed at least once over the nine-year period.
In 1965, members of the U.S. Congress were told "crop destruction is understood to be the more important purpose ... but the emphasis is usually given to the jungle defoliation in public mention of the program." Military personnel were told they were destroying crops because they were going to be used to feed guerrillas. They later discovered nearly all of the food they had been destroying was not being produced for guerrillas; it was, in reality, only being grown to support the local civilian population. For example, in Quang Ngai province, 85% of the crop lands were scheduled to be destroyed in 1970 alone. This contributed to widespread famine, leaving hundreds of thousands of people malnourished or starving.
The U.S. military began targeting food crops in October 1962, primarily using Agent Blue; the American public was not made aware of the crop destruction programs until 1965 (and it was then believed that crop spraying had begun that spring). In 1965, 42% of all herbicide spraying was dedicated to food crops. The first official acknowledgement of the programs came from the State Department in March 1966.
Many experts at the time, including Arthur Galston, opposed herbicidal warfare because of concerns about the side effects on humans and the environment of indiscriminately spraying the chemical over a wide area. As early as 1966, resolutions were introduced to the United Nations charging that the U.S. was violating the 1925 Geneva Protocol, which regulated the use of chemical and biological weapons. The U.S. defeated most of the resolutions, arguing that Agent Orange was not a chemical or a biological weapon, as it was considered a herbicide and a defoliant, used in an effort to destroy plant crops and to deprive the enemy of concealment, and not meant to target human beings. The U.S. delegation argued that a weapon, by definition, is any device used to injure, defeat, or destroy living beings, structures, or systems, and that Agent Orange did not qualify under that definition. It also argued that if the U.S. were to be charged for using Agent Orange, then Britain and its Commonwealth nations should be charged as well, since they also used it widely during the Malayan Emergency in the 1950s. In 1969, Britain commented on the draft Resolution 2603 (XXIV): "The evidence seems to us to be notably inadequate for the assertion that the use in war of chemical substances specifically toxic to plants is prohibited by international law."
The government of Vietnam says that 4 million of its citizens were exposed to Agent Orange, and as many as 3 million have suffered illnesses because of it; these figures include their children who were exposed. The Red Cross of Vietnam estimates that up to 1 million people are disabled or have health problems due to Agent Orange contamination. The United States government has challenged these figures as being unreliable.
According to a study by Dr. Nguyen Viet Nhan, children in the areas where Agent Orange was used have been affected and have multiple health problems, including cleft palate, mental disabilities, hernias, and extra fingers and toes. In the 1970s, high levels of dioxin were found in the breast milk of South Vietnamese women, and in the blood of U.S. military personnel who had served in Vietnam. The most affected zones are the mountainous area along Truong Son (Long Mountains) and the border between Vietnam and Cambodia. The affected residents are living in substandard conditions with many genetic diseases.
In 2006, Anh Duc Ngo and colleagues of the University of Texas Health Science Center published a meta-analysis that exposed a large amount of heterogeneity (different findings) between studies, a finding consistent with a lack of consensus on the issue. Despite this, statistical analysis of the studies they examined indicated that the increase in birth defects/relative risk (RR) from exposure to Agent Orange/dioxin "appears" to be on the order of 3 in Vietnamese-funded studies, but 1.29 in the rest of the world. There is data near the threshold of statistical significance suggesting Agent Orange contributes to stillbirths, cleft palate, and neural tube defects, with spina bifida being the most statistically significant defect. The large discrepancy in RR between Vietnamese studies and those in the rest of the world has been ascribed to bias in the Vietnamese studies.
Twenty-eight of the former U.S. military bases in Vietnam where the herbicides were stored and loaded onto airplanes may still have high levels of dioxins in the soil, posing a health threat to the surrounding communities. Extensive testing for dioxin contamination has been conducted at the former U.S. airbases in Da Nang, Phù Cát District and Biên Hòa. Some of the soil and sediment on the bases have extremely high levels of dioxin requiring remediation. The Da Nang Air Base has dioxin contamination up to 350 times higher than international recommendations for action. The contaminated soil and sediment continue to affect the citizens of Vietnam, poisoning their food chain and causing illnesses, serious skin diseases and a variety of cancers in the lungs, larynx, and prostate.
While in Vietnam, the veterans were told not to worry and were persuaded the chemical was harmless. After returning home, Vietnam veterans began to suspect their ill health or the instances of their wives having miscarriages or children born with birth defects might be related to Agent Orange and the other toxic herbicides to which they had been exposed in Vietnam. Veterans began to file claims in 1977 to the Department of Veterans Affairs for disability payments for health care for conditions they believed were associated with exposure to Agent Orange, or more specifically, dioxin, but their claims were denied unless they could prove the condition began when they were in the service or within one year of their discharge.
In order to qualify for compensation, veterans must have served on or near the perimeters of military bases in Thailand during the Vietnam Era, where herbicides were tested and stored outside of Vietnam; have been crew members on C-123 planes flown after the Vietnam War; or have been associated with Department of Defense (DoD) projects to test, dispose of, or store herbicides in the U.S.
By April 1993, the Department of Veterans Affairs had compensated only 486 victims, although it had received disability claims from 39,419 soldiers who had been exposed to Agent Orange while serving in Vietnam.
In a November 2004 Zogby International poll of 987 people, 79% of respondents thought the U.S. chemical companies which produced Agent Orange defoliant should compensate U.S. soldiers who were affected by the toxic chemical used during the war in Vietnam. Also, 51% said they supported compensation for Vietnamese Agent Orange victims.
Starting in the early 1990s, the federal government directed the Institute of Medicine (IOM), now known as the National Academy of Medicine, to issue reports every two years on the health effects of Agent Orange and similar herbicides. First published in 1994 and titled "Veterans and Agent Orange", the IOM reports assess the risk of both cancer and non-cancer health effects. Each health effect is categorized by evidence of association based on available research data. The last update was published in 2016, entitled "Veterans and Agent Orange: Update 2014."
The report shows sufficient evidence of an association with soft tissue sarcoma; non-Hodgkin lymphoma (NHL); Hodgkin disease; and chronic lymphocytic leukemia (CLL), including hairy cell leukemia and other chronic B-cell leukemias. Limited or suggestive evidence of an association was found for respiratory cancers (lung, bronchus, trachea, larynx); prostate cancer; multiple myeloma; and bladder cancer. Numerous other cancers were determined to have inadequate or insufficient evidence of links to Agent Orange.
The National Academy of Medicine has repeatedly concluded that any evidence suggestive of an association between Agent Orange and prostate cancer is "limited because chance, bias, and confounding could not be ruled out with confidence."
At the request of the Veterans Administration, the Institute of Medicine evaluated whether service in these C-123 aircraft could have plausibly exposed soldiers and been detrimental to their health. Their report, "Post-Vietnam Dioxin Exposure in Agent Orange-Contaminated C-123 Aircraft", confirmed that it could.
Publications by the United States Public Health Service have shown that Vietnam veterans, overall, have increased rates of cancer, and nerve, digestive, skin, and respiratory disorders. The Centers for Disease Control and Prevention notes that in particular, there are higher rates of acute/chronic leukemia, Hodgkin's lymphoma and non-Hodgkin's lymphoma, throat cancer, prostate cancer, lung cancer, colon cancer, ischemic heart disease, soft tissue sarcoma, and liver cancer. With the exception of liver cancer, these are the same conditions the U.S. Veterans Administration has determined may be associated with exposure to Agent Orange/dioxin and are on the list of conditions eligible for compensation and treatment.
Military personnel who were involved in storage, mixture and transportation (including aircraft mechanics), and actual use of the chemicals were probably among those who received the heaviest exposures. Military members who served on Okinawa also claim to have been exposed to the chemical, but there is no verifiable evidence to corroborate these claims.
Some studies have suggested that veterans exposed to Agent Orange may be more at risk of developing prostate cancer and potentially more than twice as likely to develop higher-grade, more lethal prostate cancers. However, a critical analysis of these studies and 35 others consistently found that there was no significant increase in prostate cancer incidence or mortality in those exposed to Agent Orange or 2,3,7,8-tetrachlorodibenzo-p-dioxin.
The United States fought secret wars in Laos and Cambodia, dropping large quantities of Agent Orange in each of those countries. According to one estimate, the U.S. dropped 475,500 gallons of Agent Orange in Laos and 40,900 gallons in Cambodia. Because Laos and Cambodia were neutral during the Vietnam War, the U.S. attempted to keep its wars there, including its bombing campaigns against those countries, secret from the American population, and it has largely avoided compensating American veterans and CIA personnel stationed in Cambodia and Laos who suffered permanent injuries as a result of exposure to Agent Orange there.
About 17.8% of the total forested area of Vietnam was sprayed during the war, which disrupted the ecological equilibrium. The persistent nature of dioxins, erosion caused by loss of tree cover, and loss of seedling forest stock meant that reforestation was difficult (or impossible) in many areas. Many defoliated forest areas were quickly invaded by aggressive pioneer species (such as bamboo and cogon grass), making forest regeneration difficult and unlikely. Animal species diversity was also impacted; in one study a Harvard biologist found 24 species of birds and 5 species of mammals in a sprayed forest, while in two adjacent sections of unsprayed forest there were 145 and 170 species of birds and 30 and 55 species of mammals.
Dioxins from Agent Orange have persisted in the Vietnamese environment since the war, settling in the soil and sediment and entering the food chain through animals and fish which feed in the contaminated areas. The movement of dioxins through the food web has resulted in bioconcentration and biomagnification. The areas most heavily contaminated with dioxins are former U.S. air bases.
American policy during the Vietnam War was to destroy crops, accepting the sociopolitical impact this would have. The RAND Corporation's "Memorandum 5446-ISA/ARPA" states: "the fact that the VC [the Vietcong] obtain most of their food from the neutral rural population dictates the destruction of civilian crops ... if they are to be hampered by the crop destruction program, it will be necessary to destroy large portions of the rural economy – probably 50% or more". Crops were deliberately sprayed with Agent Orange, areas were bulldozed clear of vegetation, and the rural population was subjected to bombing and artillery fire. In consequence, the urban population in South Vietnam nearly tripled, growing from 2.8 million people in 1958 to 8 million by 1971. The rapid flow of people led to fast-paced and uncontrolled urbanization; an estimated 1.5 million people were living in Saigon slums because of the overcrowding.
The extensive environmental damage that resulted from usage of the herbicide prompted the United Nations to pass Resolution 31/72 and ratify the Environmental Modification Convention. Many states do not regard this as a complete ban on the use of herbicides and defoliants in warfare, but it does require case-by-case consideration. In the Conference on Disarmament, Article 2(4) Protocol III of the weaponry convention contains "The Jungle Exception", which prohibits states from attacking forests or jungles "except if such natural elements are used to cover, conceal or camouflage combatants or military objectives or are military objectives themselves". This exception voids any protection for military and civilian personnel from a napalm attack or defoliants such as Agent Orange, and it was clearly designed to cover situations like U.S. tactics in Vietnam.
Since at least 1978, several lawsuits have been filed against the companies which produced Agent Orange, among them Dow Chemical, Monsanto, and Diamond Shamrock. Attorney Hy Mayerson was an early pioneer in Agent Orange litigation, working with environmental attorney Victor Yannacone in 1980 on the first class-action suits against wartime manufacturers of Agent Orange. After meeting Dr. Ronald A. Codario, one of the first civilian doctors to see affected patients, Mayerson, impressed that a physician would show so much interest in a Vietnam veteran, forwarded more than a thousand pages of information on Agent Orange and the effects of dioxin on animals and humans to Codario's office the day after he was first contacted by the doctor. The corporate defendants sought to escape culpability by blaming everything on the U.S. government.
In 1980, Mayerson, with Sgt. Charles E. Hartz as their principal client, filed the first U.S. Agent Orange class-action lawsuit in Pennsylvania, for the injuries military personnel in Vietnam suffered through exposure to toxic dioxins in the defoliant. Attorney Mayerson co-wrote the brief that certified the Agent Orange Product Liability action as a class action, the largest ever filed as of that date. Hartz's deposition was one of the first ever taken in America for the purpose of preserving testimony for trial, and the first for an Agent Orange case, as it was understood that Hartz would not live to see the trial because of a brain tumor that began to develop while he was a member of Tiger Force, special forces, and LRRPs in Vietnam. The firm also located and supplied critical research to the veterans' lead expert, Dr. Codario, including about 100 articles from toxicology journals dating back more than a decade, as well as data about where herbicides had been sprayed, what the effects of dioxin had been on animals and humans, and every accident in factories where herbicides were produced or dioxin was a contaminant of some chemical reaction.
The chemical companies involved denied that there was a link between Agent Orange and the veterans' medical problems. However, on May 7, 1984, seven chemical companies settled the class-action suit out of court just hours before jury selection was to begin. The companies agreed to pay $180 million as compensation if the veterans dropped all claims against them. Slightly over 45% of the sum was ordered to be paid by Monsanto alone. Many veterans who were victims of Agent Orange exposure were outraged the case had been settled instead of going to court and felt they had been betrayed by the lawyers. "Fairness Hearings" were held in five major American cities, where veterans and their families discussed their reactions to the settlement and condemned the actions of the lawyers and courts, demanding the case be heard before a jury of their peers. Federal Judge Jack B. Weinstein refused the appeals, claiming the settlement was "fair and just". By 1989, the veterans' fears were confirmed when it was decided how the money from the settlement would be paid out. A totally disabled Vietnam veteran would receive a maximum of $12,000 spread out over the course of 10 years. Furthermore, by accepting the settlement payments, disabled veterans would become ineligible for many state benefits that provided far more monetary support than the settlement, such as food stamps, public assistance, and government pensions. A widow of a Vietnam veteran who died of Agent Orange exposure would receive $3,700.
In 2004, Monsanto spokesman Jill Montgomery said Monsanto should not be liable at all for injuries or deaths caused by Agent Orange, saying: "We are sympathetic with people who believe they have been injured and understand their concern to find the cause, but reliable scientific evidence indicates that Agent Orange is not the cause of serious long-term health effects."
In 1980, New Jersey created the New Jersey Agent Orange Commission, the first state commission created to study its effects. The commission's research project in association with Rutgers University was called "The Pointman Project". It was disbanded by Governor Christine Todd Whitman in 1996. During the first phase of the project, commission researchers devised ways to determine small dioxin levels in blood. Prior to this, such levels could only be found in the adipose (fat) tissue. The project studied dioxin (TCDD) levels in blood as well as in adipose tissue in a small group of Vietnam veterans who had been exposed to Agent Orange and compared them to those of a matched control group; the levels were found to be higher in the former group. The second phase of the project continued to examine and compare dioxin levels in various groups of Vietnam veterans, including Army, Marines and brown water riverboat Navy personnel.
In 1991, Congress enacted the Agent Orange Act, giving the Department of Veterans Affairs the authority to declare certain conditions "presumptive" to exposure to Agent Orange/dioxin, making veterans who served in Vietnam eligible to receive treatment and compensation for these conditions. The same law required the National Academy of Sciences to periodically review the science on dioxin and herbicides used in Vietnam to inform the Secretary of Veterans Affairs about the strength of the scientific evidence showing association between exposure to Agent Orange/dioxin and certain conditions. The authority for the National Academy of Sciences reviews and the addition of any new diseases to the presumptive list by the VA expired in 2015 under the sunset clause of the Agent Orange Act of 1991. Through this process, the list of 'presumptive' conditions has grown since 1991, and currently the U.S. Department of Veterans Affairs has listed prostate cancer, respiratory cancers, multiple myeloma, type II diabetes mellitus, Hodgkin's disease, non-Hodgkin's lymphoma, soft tissue sarcoma, chloracne, porphyria cutanea tarda, peripheral neuropathy, chronic lymphocytic leukemia, and spina bifida in children of veterans exposed to Agent Orange as conditions associated with exposure to the herbicide. The list now also includes B-cell leukemias, such as hairy cell leukemia, Parkinson's disease, and ischemic heart disease, these last three having been added on August 31, 2010. Several highly placed individuals in government have voiced concerns about whether some of the diseases on the list should, in fact, have been included.
In 2011, an appraisal of the 20-year-long "Air Force Health Study" that began in 1982 indicated that the results of the AFHS, as they pertain to Agent Orange, do not provide evidence of disease in the Operation Ranch Hand veterans caused by "their elevated levels of exposure to Agent Orange".
The VA initially denied the applications of post-Vietnam C-123 aircrew veterans because as veterans without "boots on the ground" service in Vietnam, they were not covered under VA's interpretation of "exposed". In June 2015, the Secretary of Veterans Affairs issued an Interim final rule providing presumptive service connection for post-Vietnam C-123 aircrews, maintenance staff and aeromedical evacuation crews. The VA now provides medical care and disability compensation for the recognized list of Agent Orange illnesses.
In 2002, Vietnam and the U.S. held a joint conference on Human Health and Environmental Impacts of Agent Orange. Following the conference, the U.S. National Institute of Environmental Health Sciences (NIEHS) began scientific exchanges between the U.S. and Vietnam, and began discussions for a joint research project on the human health impacts of Agent Orange. These negotiations broke down in 2005, when neither side could agree on the research protocol and the research project was canceled. More progress has been made on the environmental front. In 2005, the first U.S.-Vietnam workshop on remediation of dioxin was held.
Starting in 2005, the EPA began to work with the Vietnamese government to measure the level of dioxin at the Da Nang Air Base. Also in 2005, the Joint Advisory Committee on Agent Orange, made up of representatives of Vietnamese and U.S. government agencies, was established. The committee has been meeting yearly to explore areas of scientific cooperation, technical assistance and environmental remediation of dioxin.
A breakthrough in the diplomatic stalemate on this issue occurred as a result of United States President George W. Bush's state visit to Vietnam in November 2006. In their joint statement, President Bush and President Triet agreed that "further joint efforts to address the environmental contamination near former dioxin storage sites would make a valuable contribution to the continued development of their bilateral relationship." On May 25, 2007, President Bush signed into law the U.S. Troop Readiness, Veterans' Care, Katrina Recovery, and Iraq Accountability Appropriations Act, 2007, funding the wars in Iraq and Afghanistan, which included an earmark of $3 million specifically for programs to remediate dioxin 'hotspots' on former U.S. military bases and for public health programs in the surrounding communities; some authors consider this completely inadequate, pointing out that the Da Nang Airbase alone will cost $14 million to clean up and that three other sites are estimated to require $60 million. The appropriation was renewed in fiscal year 2009 and again in FY 2010. An additional $12 million was appropriated in fiscal year 2010 in the Supplemental Appropriations Act, and a total of $18.5 million was appropriated for fiscal year 2011.
Secretary of State Hillary Clinton stated during a visit to Hanoi in October 2010 that the U.S. government would begin work on the clean-up of dioxin contamination at the Da Nang Airbase. In June 2011, a ceremony was held at Da Nang airport to mark the start of U.S.-funded decontamination of dioxin hotspots in Vietnam. Thirty-two million dollars has so far been allocated by the U.S. Congress to fund the program. A $43 million project began in the summer of 2012, as Vietnam and the U.S. forge closer ties to boost trade and counter China's rising influence in the disputed South China Sea.
On January 31, 2004, a victims' rights group, the Vietnam Association for Victims of Agent Orange/dioxin (VAVA), filed a lawsuit in the United States District Court for the Eastern District of New York in Brooklyn against several U.S. companies for liability in causing personal injury by developing and producing the chemical, and claimed that the use of Agent Orange violated the 1907 Hague Convention on Land Warfare, the 1925 Geneva Protocol, and the 1949 Geneva Conventions. Dow Chemical and Monsanto were the two largest producers of Agent Orange for the U.S. military and were named in the suit, along with dozens of other companies (Diamond Shamrock, Uniroyal, Thompson Chemicals, Hercules, etc.). On March 10, 2005, Judge Jack B. Weinstein of the Eastern District – who had presided over the 1984 U.S. veterans' class-action lawsuit – dismissed the lawsuit, ruling there was no legal basis for the plaintiffs' claims. He concluded that Agent Orange was not considered a poison under international law at the time of its use by the U.S.; that the U.S. was not prohibited from using it as a herbicide; and that the companies which produced the substance were not liable for the method of its use by the government. Weinstein used the British example to help dismiss the claims of people exposed to Agent Orange in their suit against the chemical companies that had supplied it.
George Jackson stated that "if the Americans were guilty of war crimes for using Agent Orange in Vietnam, then the British would be also guilty of war crimes as well since they were the first nation to deploy the use of herbicides and defoliants in warfare and used them on a large scale throughout the Malayan Emergency. Not only was there no outcry by other states in response to Britain's use, but the U.S. viewed it as establishing a precedent for the use of herbicides and defoliants in jungle warfare." The U.S. government was also not a party in the lawsuit because of sovereign immunity, and the court ruled the chemical companies, as contractors of the U.S. government, shared the same immunity.
The case was appealed and heard by the Second Circuit Court of Appeals in Manhattan on June 18, 2007. Three judges on the court upheld Weinstein's ruling to dismiss the case. They ruled that, though the herbicides contained a dioxin (a known poison), they were not intended to be used as a poison on humans. Therefore, they were not considered a chemical weapon and thus not a violation of international law. A further review of the case by the entire panel of judges of the Court of Appeals also confirmed this decision. The lawyers for the Vietnamese filed a petition to the U.S. Supreme Court to hear the case. On March 2, 2009, the Supreme Court denied certiorari and declined to reconsider the ruling of the Court of Appeals.
To assist those who have been affected by Agent Orange/dioxin, the Vietnamese have established "peace villages", each of which hosts between 50 and 100 victims and gives them medical and psychological help. As of 2006, there were 11 such villages, granting some social protection to fewer than a thousand victims. U.S. veterans of the war in Vietnam, and individuals who are aware of and sympathetic to the impacts of Agent Orange, have supported these programs in Vietnam. An international group of veterans from the U.S. and its allies during the Vietnam War, working with their former enemy—veterans from the Vietnam Veterans Association—established the Vietnam Friendship Village outside of Hanoi.
The center provides medical care, rehabilitation and vocational training for children and veterans from Vietnam who have been affected by Agent Orange. In 1998, The Vietnam Red Cross established the Vietnam Agent Orange Victims Fund to provide direct assistance to families throughout Vietnam that have been affected. In 2003, the Vietnam Association of Victims of Agent Orange (VAVA) was formed. In addition to filing the lawsuit against the chemical companies, VAVA provides medical care, rehabilitation services and financial assistance to those injured by Agent Orange.
The Vietnamese government provides small monthly stipends to more than 200,000 Vietnamese believed affected by the herbicides; this totaled $40.8 million in 2008. The Vietnam Red Cross has raised more than $22 million to assist the ill or disabled, and several U.S. foundations, United Nations agencies, European governments and nongovernmental organizations have given a total of about $23 million for site cleanup, reforestation, health care and other services to those in need.
Vuong Mo of the Vietnam News Agency described one of the centers:
May is 13, but she knows nothing, is unable to talk fluently, nor walk with ease due to her bandy legs. Her father is dead and she has four elder brothers, all mentally retarded ... The students are all disabled, retarded and of different ages. Teaching them is a hard job. They are of the 3rd grade but many of them find it hard to do the reading. Only a few of them can. Their pronunciation is distorted due to their twisted lips and their memory is quite short. They easily forget what they've learned ... In the Village, it is quite hard to tell the kids' exact ages. Some in their twenties have physical statures as small as the 7- or 8-year-olds. They find it difficult to feed themselves, much less have mental ability or physical capacity for work. No one can hold back the tears when seeing the heads turning round unconsciously, the bandy arms managing to push the spoon of food into the mouths with awful difficulty ... Yet they still keep smiling, singing in their great innocence, at the presence of some visitors, craving for something beautiful.
On June 16, 2010, members of the U.S.-Vietnam Dialogue Group on Agent Orange/Dioxin unveiled a comprehensive 10-year Declaration and Plan of Action to address the toxic legacy of Agent Orange and other herbicides in Vietnam. The Plan of Action was released as an Aspen Institute publication and calls upon the U.S. and Vietnamese governments to join with other governments, foundations, businesses, and nonprofits in a partnership to clean up dioxin "hot spots" in Vietnam and to expand humanitarian services for people with disabilities there. On September 16, 2010, Senator Patrick Leahy acknowledged the work of the Dialogue Group by releasing a statement on the floor of the United States Senate. The statement urges the U.S. government to take the Plan of Action's recommendations into account in developing a multi-year plan of activities to address the Agent Orange/dioxin legacy.
In 2008, Australian researcher Jean Williams claimed that cancer rates in Innisfail, Queensland, were 10 times higher than the state average because of secret testing of Agent Orange by the Australian military scientists during the Vietnam War. Williams, who had won the Order of Australia medal for her research on the effects of chemicals on U.S. war veterans, based her allegations on Australian government reports found in the Australian War Memorial's archives. A former soldier, Ted Bosworth, backed up the claims, saying that he had been involved in the secret testing. Neither Williams nor Bosworth have produced verifiable evidence to support their claims. The Queensland health department determined that cancer rates in Innisfail were no higher than those in other parts of the state.
The U.S. military, with the permission of the Canadian government, tested herbicides, including Agent Orange, in the forests near Canadian Forces Base Gagetown in New Brunswick. In 2007, the government of Canada offered a one-time ex gratia payment of $20,000 as compensation for Agent Orange exposure at CFB Gagetown. On July 12, 2005, Merchant Law Group, on behalf of over 1,100 Canadian veterans and civilians who were living in and around CFB Gagetown, filed a lawsuit to pursue class action litigation concerning Agent Orange and Agent Purple with the Federal Court of Canada. On August 4, 2009, the case was rejected by the court, citing lack of evidence.
In 2007, the Canadian government announced that a research and fact-finding program initiated in 2005 had found the base was safe.
On February 17, 2011, the "Toronto Star" revealed that Agent Orange had been employed to clear extensive plots of Crown land in Northern Ontario. The "Toronto Star" reported that, "records from the 1950s, 1960s and 1970s show forestry workers, often students and junior rangers, spent weeks at a time as human markers holding red, helium-filled balloons on fishing lines while low-flying planes sprayed toxic herbicides including an infamous chemical mixture known as Agent Orange on the brush and the boys below." In response to the "Toronto Star" article, the Ontario provincial government launched a probe into the use of Agent Orange.
An analysis of chemicals present in the island's soil, together with resolutions passed by Guam's legislature, suggests that Agent Orange was among the herbicides routinely used on and around Anderson Air Force Base and Naval Air Station Agana. Despite the evidence, the Department of Defense continues to deny that Agent Orange was stored or used on Guam. Several Guam veterans have collected evidence to assist their disability claims for direct exposure to dioxin-containing herbicides such as 2,4,5-T, seeking the illness associations and disability coverage that have become standard for those harmed by the same chemical contaminant in the Agent Orange used in Vietnam.
Agent Orange was used in Korea in the late 1960s. In 1999, about 20,000 South Koreans filed two separate lawsuits against U.S. companies, seeking more than $5 billion in damages. After losing a decision in 2002, they filed an appeal. In January 2006, the South Korean Appeals Court ordered Dow Chemical and Monsanto to pay $62 million in compensation to about 6,800 people. The ruling acknowledged that "the defendants failed to ensure safety as the defoliants manufactured by the defendants had higher levels of dioxins than standard", and, quoting the U.S. National Academy of Science report, declared that there was a "causal relationship" between Agent Orange and a range of diseases, including several cancers. The judges failed to acknowledge "the relationship between the chemical and peripheral neuropathy, the disease most widespread among Agent Orange victims".
In 2011, the Phoenix, Arizona, television station KPHO-TV alleged that in 1978 the United States Army had buried 250 drums of Agent Orange at Camp Carroll, the U.S. Army base in Gyeongsangbuk-do, South Korea.
Currently, veterans who provide evidence meeting VA requirements for service in Vietnam, and who can medically establish that anytime after this 'presumptive exposure' they developed any medical problems on the list of presumptive diseases, may receive compensation from the VA. Certain veterans who served in Korea and can prove they were assigned to certain specified units around the DMZ during a specific time frame are afforded a similar presumption.
The use of Agent Orange has been controversial in New Zealand, because of the exposure of New Zealand troops in Vietnam and because of the production of Agent Orange for Vietnam and other users at an Ivon Watkins-Dow chemical plant in Paritutu, New Plymouth. There have been continuing claims, as yet unproven, that the suburb of Paritutu has also been polluted.
There are cases of New Zealand soldiers developing cancers such as bone cancer, but none has been scientifically connected to exposure to herbicides.
A controversial television documentary, "Let us Spray", was broadcast on TV3 in New Zealand.
Herbicide persistence studies of Agents Orange and White were conducted in the Philippines.
The U.S. Air Force operation to remove Herbicide Orange from Vietnam in 1972 was named Operation Pacer IVY, while the operation to destroy the Agent Orange stored at Johnston Atoll in 1977 was named Operation Pacer HO. Operation Pacer IVY collected Agent Orange in South Vietnam and removed it in 1972 by ship for storage on Johnston Atoll. The EPA reports that Herbicide Orange was stored at Johnston Island in the Pacific and at Gulfport, Mississippi.
Research and studies were initiated to find a safe method to destroy the materials, and it was discovered they could be incinerated safely under special conditions of temperature and dwell time. However, these herbicides were expensive, and the Air Force wanted to resell its surplus instead of dumping it at sea. Among the many methods tested was the possibility of salvaging the herbicides by reprocessing them and filtering out the TCDD contaminant with carbonized (charcoaled) coconut fibers. This concept was tested in 1976, and a pilot plant was constructed at Gulfport.
From July to September 1977, during Operation Pacer HO, the entire stock of Agent Orange from both storage sites, at Gulfport and on Johnston Atoll, was incinerated in four separate burns in the vicinity of Johnston Island aboard a Dutch-owned waste incineration ship.
As of 2004, some records of the storage and disposition of Agent Orange at Johnston Atoll have been associated with the historical records of Operation Red Hat.
There have been dozens of reports in the press about use and/or storage of military-formulated herbicides on Okinawa that are based upon statements by former U.S. service members who had been stationed on the island, photographs, government records, and unearthed storage barrels. The U.S. Department of Defense has denied these allegations with statements by military officials and spokespersons, as well as a January 2013 report authored by Dr. Alvin Young that was released in April 2013.
In particular, the 2013 report rebuts articles written by journalist Jon Mitchell as well as a statement from "An Ecological Assessment of Johnston Atoll" a 2003 publication produced by the United States Army Chemical Materials Agency that states, "in 1972, the U.S. Air Force also brought about 25,000 200L drums of the chemical, Herbicide Orange (HO) to Johnston Island that originated from Vietnam and was stored on Okinawa." The 2013 report states: "The authors of the [2003] report were not DoD employees, nor were they likely familiar with the issues surrounding Herbicide Orange or its actual history of transport to the Island." and detailed the transport phases and routes of Agent Orange from Vietnam to Johnston Atoll, none of which included Okinawa.
Further official confirmation of restricted (dioxin containing) herbicide storage on Okinawa appeared in a 1971 Fort Detrick report titled "Historical, Logistical, Political and Technical Aspects of the Herbicide/Defoliant Program", which mentions that the environmental statement should consider "Herbicide stockpiles elsewhere in PACOM (Pacific Command) U.S. Government restricted materials Thailand and Okinawa (Kadena AFB)." The 2013 DoD report says that the environmental statement urged by the 1971 report was published in 1974 as "The Department of Air Force Final Environmental Statement", and that the latter did not find Agent Orange was held in either Thailand or Okinawa.
Agent Orange was tested by the United States in Thailand during the Vietnam War. In 1999, buried drums were uncovered and confirmed to be Agent Orange. Workers who uncovered the drums fell ill while upgrading the airport near Hua Hin District, 100 km south of Bangkok. Vietnam-era veterans whose service involved duty on or near the perimeters of military bases in Thailand anytime between February 28, 1961, and May 7, 1975, may have been exposed to herbicides and may qualify for VA benefits.
A declassified Department of Defense report written in 1973, suggests that there was a significant use of herbicides on the fenced-in perimeters of military bases in Thailand to remove foliage that provided cover for enemy forces. In 2013, the VA determined that herbicides used on the Thailand base perimeters may have been tactical and procured from Vietnam, or a strong, commercial type resembling tactical herbicides.
The University of Hawaii has acknowledged extensive testing of Agent Orange on behalf of the United States Department of Defense in Hawaii along with mixtures of Agent Orange on Kaua'i Island in 1967–68 and on Hawaii Island in 1966; testing and storage in other U.S. locations has been documented by the United States Department of Veterans Affairs.
In 1971, the C-123 aircraft used for spraying Agent Orange were returned to the United States and assigned to various East Coast USAF Reserve squadrons, then employed in traditional airlift missions between 1972 and 1982. In 1994, testing by the Air Force identified some former spray aircraft as "heavily contaminated" with dioxin residue. Inquiries by aircrew veterans in 2011 brought a decision by the U.S. Department of Veterans Affairs opining that not enough dioxin residue remained to injure these post-Vietnam War veterans. On 26 January 2012, the U.S. Centers for Disease Control's Agency for Toxic Substances and Disease Registry challenged this with its finding that former spray aircraft were indeed contaminated and the aircrews had been exposed to harmful levels of dioxin. In response to veterans' concerns, the VA in February 2014 referred the C-123 issue to the Institute of Medicine for a special study, with results released on January 9, 2015.
In 1978, the EPA suspended spraying of Agent Orange in National Forests.
Agent Orange was sprayed on thousands of acres of brush in the Tennessee Valley for 15 years before scientists discovered the herbicide was dangerous. Monroe County, Tennessee, is one of the locations known to have been sprayed according to the Tennessee Valley Authority. Forty-four remote acres were doused with Agent Orange along power lines throughout the National Forest.
In 1983, New Jersey declared a state of emergency at a Passaic River production site. The dioxin pollution in the Passaic River dates back to the Vietnam era, when Diamond Alkali manufactured Agent Orange in a factory along the river. The tidal river carried dioxin upstream and down, tainting a 17-mile stretch of riverbed in one of New Jersey's most populous areas.
A December 2006 Department of Defense report listed Agent Orange testing, storage, and disposal sites at 32 locations throughout the United States, as well as in Canada, Thailand, Puerto Rico, Korea, and in the Pacific Ocean. The Veterans Administration has also acknowledged that Agent Orange was used domestically by U.S. forces at test sites throughout the United States. Eglin Air Force Base in Florida was one of the primary testing sites throughout the 1960s.
In February 2012, Monsanto agreed to settle a case covering dioxin contamination around a plant in Nitro, West Virginia, that had manufactured Agent Orange. Monsanto agreed to pay up to $9 million for cleanup of affected homes, $84 million for medical monitoring of people affected, and the community's legal fees.
On 9 August 2012, the United States and Vietnam began a cooperative cleanup of the toxic chemical on part of Danang International Airport, marking the first time the U.S. government has been involved in cleaning up Agent Orange in Vietnam. Danang was the primary storage site of the chemical. Two other cleanup sites the United States and Vietnam are considering are Biên Hòa, in the southern province of Đồng Nai—a "hotspot" for dioxin—and Phù Cát airport, in the central province of Bình Định, says U.S. Ambassador to Vietnam David Shear. According to the Vietnamese newspaper "Nhân Dân", the U.S. government provided $41 million to the project. As of 2017, some 110,000 cubic meters of soil have been "cleaned."
The Seabees' Naval Construction Battalion Center at Gulfport, Mississippi, was the largest storage site in the United States for Agent Orange. The site covered some 30 acres and was still being cleaned up in 2013.
In 2016, the EPA laid out its plan for cleaning up an 8-mile stretch of the Passaic River in New Jersey, with an estimated cost of $1.4 billion. The contaminants reached to Newark Bay and other waterways, according to the EPA, which has designated the area a Superfund site. Since destruction of the dioxin requires high temperatures over 1,000 °C, the destruction process is energy intensive.
Astronomical year numbering
Astronomical year numbering is based on AD/CE year numbering, but follows normal decimal integer numbering more strictly. Thus, it has a year 0; the years before that are designated with negative numbers and the years after that are designated with positive numbers. Astronomers use the Julian calendar for years before 1582, including the year 0, and the Gregorian calendar for years after 1582, as exemplified by Jacques Cassini (1740), Simon Newcomb (1898) and Fred Espenak (2007).
The prefix AD and the suffixes CE, BC and BCE (Common Era, Before Christ and Before Common Era) are dropped. The year 1 BC/BCE is numbered 0, the year 2 BC is numbered −1, and in general the year "n" BC/BCE is numbered "−("n" − 1)" (a negative number equal to 1 − "n"). The numbers of AD/CE years are not changed and are written with either no sign or a positive sign; thus in general "n" AD/CE is simply "n" or +"n". A year zero is often needed for ordinary calculation, most notably when computing the number of years in a period that spans the epoch: the end years need only be subtracted from each other.
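The mapping described above can be sketched in a few lines of Python; the function names are illustrative, not from any standard library:

```python
def bc_to_astronomical(n: int) -> int:
    """Historical year n BC/BCE -> astronomical year number -(n - 1)."""
    return -(n - 1)

def astronomical_to_label(y: int) -> str:
    """Astronomical year number -> conventional BC/AD label."""
    return f"{1 - y} BC" if y <= 0 else f"AD {y}"

# Because a year 0 exists, a span that crosses the epoch is plain
# subtraction: from 10 BC (astronomical -9) to AD 10 is 10 - (-9) = 19 years.
elapsed = 10 - bc_to_astronomical(10)
```

With conventional BC/AD numbering the same span would need an explicit correction for the missing year zero; in astronomical numbering the subtraction alone is correct.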
The system is so named from its use in astronomy. Few disciplines other than history deal with the time before year 1; exceptions include dendrochronology, archaeology and geology, the latter two of which use 'years before the present'. Although the absolute numerical values of astronomical and historical years differ by only one before year 1, this difference is critical when calculating astronomical events such as eclipses or planetary conjunctions in order to determine when historical events that mention them occurred.
In his Rudolphine Tables (1627), Johannes Kepler used a prototype of year zero which he labeled "Christi" (Christ's) between years labeled "Ante Christum" (Before Christ) and "Post Christum" (After Christ) on the mean motion tables for the Sun, Moon, Saturn, Jupiter, Mars, Venus and Mercury. In 1702, the French astronomer Philippe de la Hire used a year he labeled at the end of years labeled "ante Christum" (BC), and immediately before years labeled "post Christum" (AD) on the mean motion pages in his "Tabulæ Astronomicæ", thus adding the designation "0" to Kepler's "Christi". Finally, in 1740 the French astronomer Jacques Cassini, who is traditionally credited with the invention of year zero, completed the transition in his "Tables astronomiques", simply labeling this year "0", which he placed at the end of Julian years labeled "avant Jesus-Christ" (before Jesus Christ or BC), and immediately before Julian years labeled "après Jesus-Christ" (after Jesus Christ or AD).
Cassini gave the following reasons for using a year 0:
Fred Espenak of NASA lists 50 phases of the Moon within year 0, showing that it is a full year, not an instant in time. Jean Meeus gives the following explanation:
Although he used the usual French terms "avant J.-C." (before Jesus Christ) and "après J.-C." (after Jesus Christ) to label years elsewhere in his book, the Byzantine historian Venance Grumel used negative years (identified by a minus sign, −) to label BC years and unsigned positive years to label AD years in a table. He did so possibly to save space and put no year 0 between them.
Version 1.0 of the XML Schema language, often used to describe data interchanged between computers in XML, includes built-in primitive datatypes date and dateTime. Although these are defined in terms of ISO 8601 which uses the proleptic Gregorian calendar and therefore should include a year 0, the XML Schema specification states that there is no year zero. Version 1.1 of the defining recommendation realigned the specification with ISO 8601 by including a year zero, despite the problems arising from the lack of backward compatibility.
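The off-by-one between the two schema versions can be illustrated with a short, hypothetical Python helper (it is not part of any XML library): under the XSD 1.0 rule there is no year zero, so 1 BC is written "-0001", while under the XSD 1.1/ISO 8601 rule 1 BC is the year "0000" and "-0001" means 2 BC.

```python
def xsd_year_for_bc(n: int, version: str = "1.1") -> str:
    """Lexical XSD year string for the historical year n BC.

    XSD 1.0 skips year zero (1 BC -> -0001); XSD 1.1 follows the
    astronomical/ISO 8601 convention (1 BC -> 0000, 2 BC -> -0001).
    """
    y = -n if version == "1.0" else 1 - n
    # Four-digit year with an explicit sign only for negative years.
    return f"-{-y:04d}" if y < 0 else f"{y:04d}"
```

The same historical year therefore has different lexical forms under the two versions, which is the backward-compatibility problem the 1.1 recommendation accepted.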
Adam of Bremen
Adam of Bremen (before 1050 – 12 October 1081/1085) was a German medieval chronicler. He lived and worked in the second half of the eleventh century. Adam is most famous for his chronicle "Gesta Hammaburgensis ecclesiae pontificum" ("Deeds of Bishops of the Hamburg Church"). He was "one of the foremost historians and early ethnographers of the medieval period".
Little is known of his life other than hints from his own chronicles. He is believed to have come from Meissen (Latin "Misnia") in Saxony. The dates of his birth and death are uncertain, but he was probably born before 1050 and died on 12 October of an unknown year (possibly 1081, at the latest 1085). From his chronicles it is apparent that he was familiar with a number of authors. The honorary name of "Magister Adam" shows that he had passed through all the stages of a higher education. It is probable that he was taught at the "Magdeburger Domschule".
In 1066 or 1067 he was invited by archbishop Adalbert of Hamburg to join the Church of Bremen. Adam was accepted among the capitulars of Bremen, and by 1069 he appeared as director of the cathedral's school. Soon thereafter he began to write the history of Bremen/Hamburg and of the northern lands in his "Gesta".
His position and the missionary activity of the church of Bremen allowed him to gather information on the history and the geography of Northern Germany. A stay at the court of Svend Estridsen gave him the opportunity to find information about the history and geography of Denmark and the other Scandinavian countries. Among other things he wrote about in Scandinavia were the sailing passages across Øresund such as today's Elsinore to Helsingborg route.
Arapaoa Island
Arapaoa Island, formerly known as Arapawa Island, is a small island located in the Marlborough Sounds, at the north east tip of the South Island of New Zealand.
The island has a land area of . Queen Charlotte Sound defines its western side, while to the south lies Tory Channel, which is on the sea route from Wellington in the North Island to Picton. Cook Strait's narrowest point is between Arapaoa Island's Perano Head and Cape Terawhiti in the North Island.
According to Māori oral tradition, the island was where the great navigator Kupe killed the octopus Te Wheke-a-Muturangi.
It was from a hill on Arapaoa Island in 1770 that Captain James Cook first saw the sea passage from the Pacific Ocean to the Tasman Sea, which was named Cook Strait. This discovery banished the fond notion of geographers that there existed a great southern continent, Terra Australis. A monument at Cook's Lookout was erected in 1970.
From the late 1820s until the mid-1960s, Arapaoa Island was a base for whaling in the Sounds. John Guard established a shore station at Te Awaiti in 1827 for right whales. Later, the station at Perano Head on the east coast of the island was used to hunt humpback whales from 1911 to 1964 (see Whaling in New Zealand). The houses built by the Perano family are now operated as tourist accommodations.
In August 2014, the spelling of the island's name was officially altered from "Arapawa" to "Arapaoa".
The 11,000-volt power lines linking the mainland and Arapaoa Island over Tory Channel were struck by an Air Albatross Cessna 402 commuter aircraft in 1985. The crash was witnessed by many passengers on an inter-island Cook Strait ferry, which immediately stopped to dispatch a rescue lifeboat. Along with the two pilots, one entire family died, as did all but one young girl from the other. No bodies were ever found. The sole survivor, Cindy Mosey, was travelling with her family and the other family from Nelson to Wellington to attend a gymnastics competition. The crash caused public confidence in Air Albatross to falter, contributing to the company going into liquidation in December of that year.
Parts of the island were heavily cleared of native vegetation in the past through burning and logging, and a number of pine forests were planted on the island. Wilding pines, an invasive species in some parts of New Zealand, are being poisoned on the island to allow the regenerating native vegetation to grow. About at Ruaomoko Point on the south-eastern portion of the island will be killed by drilling holes into the trees and injecting poison.
Arapaoa Island is known for the breeds of pigs, sheep and goats found only on the island. These became established in the 19th century, but the origin of these breeds is uncertain, and a matter of some speculation. Common suggestions are that they are old English breeds introduced by the early whalers, or by Captain Cook or other early explorers. These breeds are now extinct in England, and the goats surviving in a sanctuary on the island are now also bred in other parts of New Zealand and in the northern hemisphere.
The small Brothers Islands, which lie off the northeast coast of Arapaoa Island, are a sanctuary for the rare Brothers Island tuatara.
Administrative law
Administrative law is the body of law that governs the activities of administrative agencies of government. Government agency action can include rule making, adjudication, or the enforcement of a specific regulatory agenda. Administrative law is considered a branch of public law.
Administrative law deals with the decision-making of such administrative units of government as tribunals, boards or commissions that are part of a national regulatory scheme in such areas as police law, international trade, manufacturing, the environment, taxation, broadcasting, immigration and transport.
Administrative law expanded greatly during the twentieth century, as legislative bodies worldwide created more government agencies to regulate the social, economic and political spheres of human interaction.
Civil law countries often have specialized administrative courts that review these decisions.
Unlike most common-law jurisdictions, the majority of civil law jurisdictions have specialized courts or sections to deal with administrative cases which, as a rule, will apply procedural rules specifically designed for such cases and distinct from those applied in private-law proceedings, such as contract or tort claims.
In Brazil, administrative cases are typically heard either by the Federal Courts (in matters concerning the Federal Union) or by the Public Treasury divisions of State Courts (in matters concerning the States). In 1998, a constitutional reform, led by the government of President Fernando Henrique Cardoso, introduced regulatory agencies as a part of the executive branch. Since 1988, Brazilian administrative law has been strongly influenced by the judicial interpretations of the constitutional principles of public administration (art. 37 of Federal Constitution): legality, impersonality, publicity of administrative acts, morality and efficiency.
In Chile, the President of the Republic exercises the administrative function, in collaboration with several ministries and other authorities with ministerial rank. Each ministry has one or more under-secretaries, who carry out the actual satisfaction of public needs through the public services. There is no single specialized court to deal with actions against administrative entities; instead, there are several specialized courts and procedures of review.
In France, most claims against the national or local governments as well as claims against private bodies providing public services are handled by administrative courts, which use the "Conseil d'État" (Council of State) as a court of last resort for both ordinary and special courts. The main administrative courts are the "tribunaux administratifs" and appeal courts are the "cours administratives d'appel". Special administrative courts include the National Court of Asylum Right as well as military, medical and judicial disciplinary bodies. The French body of administrative law is called "droit administratif".
Over the course of their history, France's administrative courts have developed an extensive and coherent case law ("jurisprudence constante") and legal doctrine ("principes généraux du droit" and "principes fondamentaux reconnus par les lois de la République"), often before similar concepts were enshrined in constitutional and legal texts. These principles include:
C.E, Sect., 6 mai 1944, "Dame Veuve Trompier-Gravier" and CE, Ass, 26 octobre 1945, Aramu including for internal disciplinary bodies
C.E, Ass., 17 février 1950, "Ministre de l'agriculture c/ Dame Lamotte"
C.E, Sect., 28 juin 1948, "Société du Journal l'Aurore"
C.E, Ass., 28 mai 1954, "Barrel"
French administrative law, the foundation of Continental administrative law, has strongly influenced administrative law in several other countries, such as Belgium, Greece, Turkey and Tunisia.
In Germany, administrative law is called "Verwaltungsrecht". It generally governs the relationship between the authorities and the citizens, establishing citizens' rights and obligations against the authorities. It is a part of public law, which deals with the organization, the tasks and the actions of the public administration. It also contains the rules, regulations, orders and decisions created by and related to administrative agencies, such as federal agencies, federal state authorities and urban administrations, as well as admission offices and fiscal authorities. Administrative law in Germany follows three basic principles.
Administrative law in Germany can be divided into general administrative law and special administrative law.
The general administrative law is essentially regulated in the Administrative Procedures Act ("Verwaltungsverfahrensgesetz" [VwVfG]). Other legal sources are the Code of Administrative Court Procedure (Verwaltungsgerichtsordnung [VwGO]), the Social Security Code (Sozialgesetzbuch [SGB]) and the General Fiscal Law (Abgabenordnung [AO]).
The "Verwaltungsverfahrensgesetz" (VwVfG), enacted in 1977, regulates the main administrative procedures of the federal government. It serves to ensure that the public authorities treat citizens in accordance with the rule of law. Furthermore, it contains regulations for mass processes and expands legal protection against the authorities. The VwVfG applies to the entire public administrative activity of federal agencies, as well as of federal state authorities where they execute federal law. One of the central clauses is § 35 VwVfG, which defines the administrative act, the most common form of action by which the public administration acts towards a citizen. According to the definition in § 35, an administrative act is characterized by the following features:
It is an official act of an authority in the field of public law, resolving an individual case, with legal effect directed outside the administration.
§§ 36–39, §§ 58–59 and § 80 VwVfG govern the structure and the necessary elements of the administrative act. § 48 and § 49 VwVfG are of high practical relevance as well: these paragraphs list the prerequisites for the withdrawal of an unlawful administrative act (§ 48 VwVfG) and the revocation of a lawful administrative act (§ 49 VwVfG).
The Code of Administrative Court Procedure (Verwaltungsgerichtsordnung [VwGO]), enacted in 1960, governs court procedures before the administrative courts. The VwGO is divided into five parts: the constitution of the courts; actions; remedies and retrial; costs and enforcement; and final clauses and temporary arrangements.
In the absence of a specific rule, the VwGO is supplemented by the Code of Civil Procedure (Zivilprozessordnung [ZPO]) and the Judicature Act (Gerichtsverfassungsgesetz [GVG]). In addition to regulating the administrative procedure, the VwVfG also constitutes legal protection in administrative law beyond the court procedure. § 68 VwGO governs the preliminary proceeding, called "Vorverfahren" or "Widerspruchsverfahren", which is a strict prerequisite for the court procedure where an action for rescission or a writ of mandamus against an authority is sought. The preliminary proceeding gives any citizen who feels unlawfully mistreated by an authority the possibility to object and to force a review of an administrative act without going to court. The prerequisites for opening the public-law remedy are listed in § 40 I VwGO: there must be a conflict in public law without any constitutional aspects and with no assignment to another jurisdiction.
The Social Security Code (Sozialgesetzbuch [SGB]) and the General Fiscal Law (Abgabenordnung [AO]) are less central to administrative law. They supplement the VwVfG and the VwGO in the fields of taxation and social legislation, such as social welfare or financial support for students (BAföG).
The special administrative law consists of various laws; each special sector has its own.
In Germany, the highest administrative court for most matters is the federal administrative court Bundesverwaltungsgericht. There are federal courts with special jurisdiction in the fields of social security law (Bundessozialgericht) and tax law (Bundesfinanzhof).
In Italy administrative law is known as "Diritto amministrativo", a branch of public law whose rules govern the organization of the public administration, its activities in pursuit of the public interest, and the relationship between the administration and the citizens.
Its genesis is related to the principle of the separation of the powers of the State. The administrative power, originally called the "executive" power, organizes resources and people in order to achieve the public-interest objectives defined by the law.
In the Netherlands, administrative law provisions are usually contained in the various laws governing public services and regulation. There is, however, also a single General Administrative Law Act ("Algemene wet bestuursrecht" or Awb), a comparatively comprehensive codification of administrative procedure in Europe. It applies both to the making of administrative decisions and to the judicial review of these decisions in the courts. Another act concerning judicial procedures in general is the "Algemene termijnenwet" (General Time Provisions Act), with general provisions about time limits in procedures.
On the basis of the Awb, citizens can oppose a decision ('besluit') made by an administrative agency ('bestuursorgaan') within the administration and apply for judicial review in courts if unsuccessful. Before going to court, citizens must usually first object to the decision with the administrative body who made it. This is called "bezwaar". This procedure allows for the administrative body to correct possible mistakes themselves and is used to filter cases before going to court. Sometimes, instead of bezwaar, a different system is used called "administratief beroep" (administrative appeal). The difference with bezwaar is that administratief beroep is filed with a different administrative body, usually a higher ranking one, than the administrative body that made the primary decision. Administratief beroep is available only if the law on which the primary decision is based specifically provides for it. An example involves objecting to a traffic ticket with the district attorney ("officier van justitie"), after which the decision can be appealed in court.
Unlike France or Germany, there are no special administrative courts of first instance in the Netherlands, but regular courts have an administrative "chamber" which specializes in administrative appeals. The courts of appeal in administrative cases however are specialized depending on the case, but most administrative appeals end up in the judicial section of the Council of State (Raad van State).
Administrative law in the People's Republic of China was virtually non-existent before the economic reform era initiated by Deng Xiaoping. Since the 1980s, the People's Republic of China has constructed a new legal framework for administrative law, establishing control mechanisms for overseeing the bureaucracy and disciplinary committees for the Communist Party of China. However, many have argued that these laws are vastly inadequate for controlling government action, largely because of institutional and systemic obstacles such as a weak judiciary, poorly trained judges and lawyers, and corruption.
In 1990, the Administrative Supervision Regulations (行政检查条例) and the Administrative Reconsideration Regulations (行政复议条例) were passed. The 1993 State Civil Servant Provisional Regulations (国家公务员暂行条例) changed the way government officials were selected and promoted, requiring that they pass exams and yearly appraisals, and introduced a rotation system. The three regulations have been amended and upgraded into laws. In 1994, the State Compensation Law (国家赔偿法) was passed, followed by the Administrative Penalties Law (行政处罚法) in 1996. Administrative Compulsory Law was enforced in 2012. Administrative Litigation Law was amended in 2014. The General Administrative Procedure Law is under way.
In the Republic of China (Taiwan), under the Constitutional Procedure Act (憲法訴訟法) enacted in 2019 (formerly the Constitutional Interpretation Procedure Act of 1993), the Justices of the Constitutional Court of the Judicial Yuan are in charge of judicial interpretation. The court has issued 757 interpretations to date.
In Sweden, there is a system of administrative courts that considers only administrative law cases, and is completely separate from the system of general courts. This system has three tiers, with 12 county administrative courts ("förvaltningsrätt") as the first tier, four administrative courts of appeal ("kammarrätt") as the second tier, and the Supreme Administrative Court of Sweden ("Högsta Förvaltningsdomstolen") as the third tier.
Migration cases are handled in a two-tier system, effectively within the system of general administrative courts. Three of the administrative courts serve as migration courts ("migrationsdomstol"), with the Administrative Court of Appeal in Stockholm serving as the Migration Court of Appeal ("Migrationsöverdomstolen").
In Turkey, lawsuits against the acts and actions of the national or local governments and public bodies are handled primarily by administrative courts. The decisions of the administrative courts are reviewed by the Regional Administrative Courts and the Council of State. The Council of State, as a court of last resort, closely resembles the Conseil d'État in France.
Administrative law in Ukraine is a homogeneous body of law within its system of jurisprudence, characterized as: (1) a branch of law; (2) a science; (3) a discipline.
Generally speaking, most countries that follow the principles of common law have developed procedures for judicial review that limit the reviewability of decisions made by administrative law bodies. Often these procedures are coupled with legislation or other common law doctrines that establish standards for proper rulemaking. Administrative law may also apply to review of decisions of so-called semi-public bodies, such as non-profit corporations, disciplinary boards, and other decision-making bodies that affect the legal rights of members of a particular group or entity.
While administrative decision-making bodies are often controlled by larger governmental units, their decisions could be reviewed by a court of general jurisdiction under some principle of judicial review based upon due process (United States) or fundamental justice (Canada). Judicial review of administrative decisions is different from an administrative appeal. When sitting in review of a decision, the Court will only look at the method in which the decision was arrived at, whereas in an administrative appeal the correctness of the decision itself will be examined, usually by a higher body in the agency. This difference is vital in appreciating administrative law in common law countries.
The scope of judicial review may be limited to certain questions of fairness, or whether the administrative action is "ultra vires". In terms of ultra vires actions in the broad sense, a reviewing court may set aside an administrative decision if it is unreasonable (under Canadian law, following the rejection of the "Patently Unreasonable" standard by the Supreme Court in Dunsmuir v New Brunswick), "Wednesbury" unreasonable (under British law), or arbitrary and capricious (under U.S. Administrative Procedure Act and New York State law). Administrative law, as laid down by the Supreme Court of India, has also recognized two more grounds of judicial review which were recognized but not applied by English Courts, namely legitimate expectation and proportionality.
The powers to review administrative decisions are usually established by statute, but were originally developed from the royal prerogative writs of English law, such as the writ of mandamus and the writ of certiorari. In certain common law jurisdictions, such as India or Pakistan, the power to pass such writs is a Constitutionally guaranteed power. This power is seen as fundamental to the power of judicial review and an aspect of the independent judiciary.
In the United States, many government agencies are organized under the executive branch of government, although a few are part of the judicial or legislative branches.
In the federal government, the executive branch, led by the president, controls the federal executive departments, which are led by secretaries who are members of the United States Cabinet. The many independent agencies of the United States government created by statutes enacted by Congress exist outside of the federal executive departments but are still part of the executive branch.
Congress has also created some special judicial bodies known as Article I tribunals to handle some areas of administrative law.
The actions of executive agencies and independent agencies are the main focus of American administrative law. In response to the rapid creation of new independent agencies in the early twentieth century (see discussion below), Congress enacted the Administrative Procedure Act (APA) in 1946. Many of the independent agencies operate as miniature versions of the tripartite federal government, with the authority to "legislate" (through rulemaking; see Federal Register and Code of Federal Regulations), "adjudicate" (through administrative hearings), and to "execute" administrative goals (through agency enforcement personnel). Because the United States Constitution sets no limits on this tripartite authority of administrative agencies, Congress enacted the APA to establish fair administrative law procedures to comply with the constitutional requirements of due process. Agency procedures are drawn from four sources of authority: the APA, organic statutes, agency rules, and informal agency practice. It is important to note, though, that agencies can only act within their congressionally delegated authority, and must comply with the requirements of the APA.
At the state level, the first version of the Model State Administrative Procedure Act was promulgated and published in 1946 by the Uniform Law Commission (ULC), the same year in which the federal Administrative Procedure Act was enacted. It incorporates basic principles with only enough elaboration of detail to support essential features; it is therefore a "model", not a "uniform", act. A model act is needed because administrative law is not uniform across the states, which take a variety of approaches. The act was revised in 1961 and 1981. The present version is the 2010 Model State Administrative Procedure Act (MSAPA), which maintains continuity with its predecessors. The revision responded to the fact that, over the preceding two decades, state legislatures, dissatisfied with agency rule-making and adjudication, had enacted statutes modifying administrative adjudication and rule-making procedure.
The American Bar Association's official journal concerning administrative law is the "Administrative Law Review", a quarterly publication that is managed and edited by students at the Washington College of Law.
Stephen Breyer, a U.S. Supreme Court Justice since 1994, divides the history of administrative law in the United States into six discrete periods, in his book, "Administrative Law & Regulatory Policy" (3d Ed., 1992):
The agricultural sector is one of the most heavily regulated sectors in the U.S. economy, as it is regulated in various ways at the international, federal, state, and local levels. Consequently, administrative law is a significant component of the discipline of agricultural law. The United States Department of Agriculture and its myriad agencies such as the Agricultural Marketing Service are the primary sources of regulatory activity, although other administrative bodies such as the Environmental Protection Agency play a significant regulatory role as well.
Arthur Phillip
Admiral Arthur Phillip (11 October 1738 – 31 August 1814) was a Royal Navy officer and the first Governor of New South Wales who led the British settlement and colonisation of Australia. He established a British penal colony that later became the city of Sydney, Australia.
After much experience at sea, Phillip led the First Fleet, as Governor-designate, to establish the Australian settlement of New South Wales. In January 1788, he selected Port Jackson (encompassing Sydney Harbour) as its location.
Phillip was a far-sighted governor who soon saw that New South Wales would need a civil administration and a system for emancipating the convicts. But his plan to bring skilled tradesmen on the voyage had been rejected, and he faced immense problems of labour, discipline and supply.
The arrival of the Second and Third Fleets placed new pressures on the scarce local resources, but by the time Phillip sailed home in December 1792, the colony was taking shape, with official land-grants and systematic farming and water-supply.
Phillip retired in 1805, but continued to correspond with his friends in New South Wales and to promote the colony's interests.
Captain Arthur Phillip was born on 11 October 1738, the younger of two children to Jacob Phillip and Elizabeth Breach. His father Jacob was born in Frankfurt, Germany. He was a languages teacher who may also have served in the Royal Navy as an able seaman and purser's steward. His mother Elizabeth was the widow of an ordinary seaman, John Herbert, who had served in Jamaica aboard HMS "Tartar" and died of disease on 13 August 1732. At the time of Arthur Phillip's birth, his family maintained a modest existence as tenants near Cheapside in the City of London.
There are no surviving records of Phillip's early childhood. His father Jacob died in 1739, after which the Phillip family may have fallen on hard times. On 22 June 1751 Arthur was accepted into the Greenwich Hospital School, a charity school for the sons of indigent seafarers. In keeping with the school's curriculum, his education was focused on literacy, arithmetic and navigational skills, including cartography. He was a competent student and something of a perfectionist. His headmaster, Reverend Francis Swinden observed that in personality Phillip was "unassuming, reasonable, business-like to the smallest degree in everything he undertakes".
Phillip remained at the Greenwich School for two and a half years, considerably longer than the average student stay of twelve months. At the end of 1753 he was granted a seven-year indenture as an apprentice aboard "Fortune", a 210-ton whaling vessel commanded by merchant mariner William Readhead. He left the Greenwich School on 1 December and spent the winter aboard the "Fortune" awaiting the commencement of the 1754 whaling season.
Phillip spent the summer of 1754 hunting whales near Svalbard in the Barents Sea. As an apprentice, his responsibilities included stripping blubber from whale carcasses and helping to pack it into barrels. Food was scarce and "Fortune"s thirty crew members supplemented their diet with birds' eggs, scurvy grass and, where possible, reindeer. The ship returned to England on 20 July 1754. The whaling crew were paid off and replaced with twelve sailors for a winter voyage to the Mediterranean. As an apprentice, Phillip remained aboard as "Fortune" undertook an outward trading voyage to Barcelona and Livorno carrying salt and raisins, returning via Rotterdam with a cargo of grains and citrus. The ship returned to England in April 1755 and sailed immediately for Svalbard for that year's whale hunt. Phillip was still a member of the crew, but abandoned his apprenticeship when the ship returned to England on 27 July. On 16 October he enlisted in the Royal Navy and was assigned the rank of ordinary seaman aboard the 68-gun HMS "Buckingham".
As a member of "Buckingham"s crew, Phillip saw action in the Seven Years' War, including the Battle of Minorca in 1756. By 1762 he had transferred to HMS "Stirling Castle", and was promoted to lieutenant in recognition of active service in the Battle of Havana. The war ended in 1763 and Phillip returned to England on half pay. In July 1763 he married Margaret Denison, a widow 16 years his senior, and moved to Glasshayes in Lyndhurst, Hampshire, establishing a farm there. The marriage was unhappy, and the couple separated in 1769 when Phillip returned to the Navy. The following year he was posted as second lieutenant aboard HMS "Egmont", a newly built 74-gun ship of the line.
In 1774 Phillip joined the Portuguese Navy as a captain, serving in the War against Spain. While with the Portuguese Navy, Phillip commanded a frigate, the "Nossa Senhora do Pilar." On this ship he took a detachment of troops from Rio de Janeiro to Colonia do Sacramento on the Río de la Plata (opposite Buenos Aires) to relieve the garrison there. This voyage also conveyed a consignment of convicts assigned to carry out work at Colonia. During a storm encountered in the course of the voyage, the convicts assisted in working the ship and, on arrival at Colonia, Phillip recommended that they be rewarded for saving the ship by remission of their sentences. A garbled version of this eventually found its way into the English press when Phillip was appointed in 1786 to lead the expedition to Sydney. Phillip played a leading part in the capture of the Spanish ship San Agustín, on 19 April 1777, off Santa Catarina. The "San Agustin" was commissioned into the Portuguese Navy as the "Santo Agostinho", and command of her was given to Phillip. The action was reported in the English press:
Madrid, Aug. 28. Letters from Lisbon bring the following Account from Rio Janeiro: That the St. Augustine, of 70 Guns, having been separated from the Squadron of M. Casa Tilly, was attacked by two Portugueze Ships, against which they defended themselves for a Day and a Night, but being next Day surrounded by the Portugueze Fleet, was obliged to surrender.
In 1778 Britain was again at war, and Phillip was recalled to active service. In 1779 he obtained his first command, HMS "Basilisk". He was promoted to post-captain on 30 November 1781 and given command of HMS "Ariadne".
In July 1782, in a change of government, Thomas Townshend became Secretary of State for Home and American Affairs, and assumed responsibility for organising an expedition against Spanish America. Like his predecessor, Lord Germain, he turned for advice to Arthur Phillip. A letter from Phillip to Sandwich of 17 January 1781 records Phillip's loan to Sandwich of his charts of the Plata and Brazilian coasts for use in organising the expedition. Phillip's plan was for a squadron of three ships of the line and a frigate to mount a raid on Buenos Aires and Montevideo, then to proceed to the coasts of Chile, Peru and Mexico to maraud, and ultimately to cross the Pacific to join the British Navy's East India squadron for an attack on Manila. The expedition, consisting of the "Grafton," 70 guns, "Elizabeth," 74 guns, "Europe," 64 guns, and the frigate "Iphigenia", sailed on 16 January 1783, under the command of Commodore Robert Kingsmill. Phillip was given command of the 64-gun "Europe". Shortly after sailing, an armistice was concluded between Great Britain and Spain. Phillip learnt of this in April when he put in for storm repairs at Rio de Janeiro. Phillip wrote to Townshend from Rio de Janeiro on 25 April 1783, expressing his disappointment that the ending of the American War had robbed him of the opportunity for naval glory in South America.
After his return to England from India in April 1784, Phillip remained in close contact with Townshend, now Lord Sydney, and the Home Office Under Secretary, Evan Nepean. From October 1784 to September 1786 he was employed by Nepean, who was in charge of the Secret Service relating to the Bourbon Powers, France and Spain, to spy on the French naval arsenals at Toulon and other ports. There was fear that Britain would soon be at war with these powers as a consequence of the Batavian Revolution in the Netherlands.
Portraits of the time depict Phillip as shorter than average, with an olive complexion, dark eyes and a "smooth pear of a skull". His features were dominated by a large and fleshy nose, and by a pronounced lower lip.
At this time, Lord Sandwich, together with the President of the Royal Society, Sir Joseph Banks, was advocating establishment of a British colony in New South Wales. A colony there would be of great assistance to the British Navy in facilitating attacks on the Spanish possessions in Chile and Peru, as Banks's collaborators, James Matra, Captain Sir George Young and Sir John Call pointed out in written proposals on the subject. The British Government took the decision to settle what is now Australia and found the Botany Bay colony in August 1786. Lord Sydney, as Secretary of State for the Home Office, was the minister in charge, and in September 1786 he appointed Phillip commodore of the fleet which was to transport the convicts and soldiers who were to be the new settlers to Botany Bay. Upon arrival there, Phillip was to assume the powers of Captain General and Governor in Chief of the new colony. A subsidiary colony was to be founded on Norfolk Island, as recommended by Sir John Call, to take advantage for naval purposes of that island's native flax (harakeke) and timber.
In October 1786, Phillip was appointed captain of HMS "Sirius" and named Governor-designate of New South Wales, the proposed British colony on the east coast of Australia, by Lord Sydney, the Home Secretary.
Phillip had a very difficult time assembling the fleet which was to make the eight-month sea voyage to Australia. Everything a new colony might need had to be taken, since Phillip had no real idea of what he might find when he got there. There were few funds available for equipping the expedition. His suggestion that people with experience in farming, building and crafts be included was rejected. Most of the 772 convicts (of whom 732 survived the voyage) were petty thieves from the London slums. Phillip was accompanied by a contingent of marines and a handful of other officers who were to administer the colony.
The 11 ships of the First Fleet set sail from Portsmouth on 13 May 1787. The fleet called at Rio de Janeiro for supplies from 6 August to 4 September. The leading ship, HMS "Supply", reached Botany Bay on 18 January 1788, setting up camp on the Kurnell Peninsula. Phillip soon decided that this site, chosen on the recommendation of Sir Joseph Banks, who had accompanied James Cook in 1770, was not suitable, since it had poor soil, no secure anchorage and no reliable water source. After some exploration Phillip decided to go on to Port Jackson, and on 26 January the marines and convicts landed at Sydney Cove, which Phillip named after Lord Sydney.
Shortly after landing and establishing the settlement at Port Jackson, on 15 February 1788, Phillip sent Lieutenant Philip Gidley King with eight free men and a number of convicts to establish the second British colony in the Pacific at Norfolk Island. This was partly in response to a perceived threat of losing Norfolk Island to the French and partly to establish an alternative food source for the mainland colony.
The early days of the settlement in New South Wales were chaotic and difficult. With limited supplies, the cultivation of food was imperative, but the soils around Sydney were poor, the climate was unfamiliar, and moreover very few of the convicts had any knowledge of agriculture. The colony was on the verge of outright starvation for an extended period. The marines, poorly disciplined themselves in many cases, were not interested in convict discipline. Almost at once, therefore, Phillip had to appoint overseers from among the ranks of the convicts to get the others working. This was the beginning of the process of convict emancipation which was to culminate in the reforms of Lachlan Macquarie after 1811.
Phillip showed in other ways that he recognised that New South Wales could not be run simply as a prison camp. Lord Sydney, often criticised as an ineffectual incompetent, had made one fundamental decision about the settlement that was to influence it beneficially from the start. Instead of just establishing it as a military prison, he provided for a civil administration, with courts of law. Two convicts, Henry and Susannah Kable, sought to sue Duncan Sinclair, the captain of "Alexander", for stealing their possessions during the voyage. Convicts in Britain had no right to sue, and Sinclair had boasted that he could not be sued by them. Despite this, the court found for the plaintiffs and ordered the captain to make restitution for the loss of their possessions.
Further, soon after Lord Sydney appointed him governor of New South Wales, Arthur Phillip drew up a detailed memorandum of his plans for the proposed new colony. In one paragraph he wrote: "The laws of this country [England] will of course, be introduced in [New] South Wales, and there is one that I would wish to take place from the moment his Majesty's forces take possession of the country: That there can be no slavery in a free land, and consequently no slaves." Nevertheless, Phillip believed in severe discipline; floggings and hangings were commonplace, although Phillip commuted many death sentences.
Phillip also had to adopt a policy towards the Eora Aboriginal people, who lived around the waters of Sydney Harbour. Phillip ordered that they must be well treated, and that anyone killing Aboriginal people would be hanged. Phillip befriended an Eora man called Bennelong, and later took him to England. On the beach at Manly a misunderstanding arose and Phillip was speared in the shoulder, but he ordered his men not to retaliate. Phillip went some way towards winning the trust of the Eora, although they remained wary of the settlers. Soon, smallpox, a virulent disease believed to have been brought by the white settlers, and other European-introduced epidemics ravaged the Eora population (see History of smallpox in Australia).
The Governor's main problem was with his own military officers, who wanted large grants of land, which Phillip had not been authorised to grant. Scurvy broke out, and in October 1788 Phillip had to send "Sirius" to Cape Town for supplies; strict rationing was introduced, with thefts of food punished by hanging.
Phillip insisted that no retaliation be taken to avenge his own non-fatal spearing. Convict John MacIntyre had been fatally speared during a hunting expedition by unknown Aboriginal people apparently without provocation. MacIntyre swore on his death bed that he had done them no harm, but marine officer Watkin Tench was suspicious of the claim. Tench was sent on a punitive expedition but finding no Aboriginal people other than Bennelong took no action.
By this time Phillip, worn down by the burdens of governing the colony and with his health suffering, had asked the government to be relieved of his post.
By 1790 the situation had stabilised. The population of about 2,000 was adequately housed and fresh food was being grown. Phillip assigned a convict, James Ruse, land at Rose Hill (now Parramatta) to establish proper farming, and when Ruse succeeded he received the first land grant in the colony. Other convicts followed his example. "Sirius" was wrecked in March 1790 at the satellite settlement of Norfolk Island, depriving Phillip of vital supplies. In June 1790 the Second Fleet arrived with hundreds more convicts, most of them too sick to work.
By December 1790 Phillip was ready to return to England, but the colony had largely been forgotten in London and no instructions reached him, so he carried on. In 1791 he was advised that the government would send out two convoys of convicts annually, plus adequate supplies. But in July, when the vessels of the Third Fleet began to arrive with 2,000 more convicts, food again ran short, and he had to send the ship "Atlantic" to Calcutta for supplies.
By 1792 the colony was well established, though Sydney remained an unplanned huddle of wooden huts and tents. The whaling industry was established, ships were visiting Sydney to trade, and convicts whose sentences had expired were taking up farming. John Macarthur and other officers were importing sheep and beginning to grow wool. The colony was still very short of skilled farmers, craftsmen and tradesmen, and the convicts continued to work as little as possible, even though they were working mainly to grow their own food.
In late 1792, Phillip, whose health was suffering, relinquished his governorship and sailed for England on the ship "Atlantic", taking with him many specimens of plants and animals. He also took Bennelong and his friend Yemmerrawanne, another young Indigenous Australian who, unlike Bennelong, would succumb to English weather and disease and not live to make the journey home. The European population of New South Wales at his departure was 4,221, of whom 3,099 were convicts. The early years of the colony had been years of struggle and hardship, but the worst was over, and there were no further famines in New South Wales. Phillip arrived in London in May 1793. He tendered his formal resignation and was granted a pension of £500 a year.
Phillip's estranged wife, Margaret, had died in 1792 and was buried in St Beuno's Churchyard, Llanycil, Bala, Merionethshire. In 1794 Phillip married Isabella Whitehead, and lived for a time at Bath. His health gradually recovered and in 1796 he went back to sea, holding a series of commands and responsible posts in the wars against the French. In January 1799 he became a Rear-Admiral. In 1805, aged 67, he retired from the Navy with the rank of Admiral of the Blue, and spent most of the rest of his life in Bath. He continued to correspond with friends in New South Wales and to promote the colony's interests with government officials. He died in Bath in 1814. His Last Will and Testament has been transcribed and is online.
Phillip was buried in St Nicholas's Church, Bathampton. Forgotten for many years, the grave was discovered in 1897 and the Premier of New South Wales, Sir Henry Parkes, had it restored. An annual service of remembrance is held here around Phillip's birthdate by the Britain–Australia Society to commemorate his life.
In 2007, Geoffrey Robertson QC alleged that Phillip's remains are no longer in St Nicholas Church, Bathampton and have been lost: "Captain Arthur Phillip is not where the ledger stone says he is: it may be that he is buried somewhere outside, it may simply be that he is lost. But he is not where Australians have been led to believe that he now lies."
A number of places in Australia bear Phillip's name, including Port Phillip, Phillip Island (Victoria), Phillip Island (Norfolk Island), Phillip Street in Sydney, the federal electorate of Phillip (1949–1993), the suburb of Phillip in Canberra, the Governor Phillip Tower building in Sydney, and many streets, parks and schools including a state high school in Parramatta. A monument to Phillip in Bath Abbey Church was unveiled in 1937. Another was unveiled at St Mildred's Church, Bread St, London, in 1932; that church was destroyed in the London Blitz in 1940, but the principal elements of the monument were re-erected at the west end of Watling Street, near Saint Paul's Cathedral, in 1968. A different bust and memorial is inside the nearby church of St Mary-le-Bow. There is a statue of him in the Botanic Gardens, Sydney. There is a portrait of him by Francis Wheatley in the National Portrait Gallery, London.
Percival Serle wrote of Phillip in his "Dictionary of Australian Biography":
Michael Pembroke's biography of Phillip adds that he was also a highly skilled international spy employed by the British government.
As part of a series of events on the bicentenary of his death, a memorial was dedicated in Westminster Abbey on 9 July 2014. In the service the Dean of Westminster, Very Reverend Dr John Hall, described Phillip as: "This modest, yet world-class seaman, linguist, and patriot, whose selfless service laid the secure foundations on which was developed the Commonwealth of Australia, will always be remembered and honoured alongside other pioneers and inventors here in the Nave: David Livingstone, Thomas Cochrane, and Isaac Newton."
A similar memorial was unveiled by the outgoing 37th Governor of New South Wales, Marie Bashir, in St James' Church, Sydney on 31 August 2014.
A bronze bust was installed at the Museum of Sydney, and a full-day symposium was planned on his contributions to the founding of modern Australia.
Phillip is a prominent character in Timberlake Wertenbaker's play "Our Country's Good", in which he commissions Lieutenant Ralph Clark to stage a production of "The Recruiting Officer". He is shown as compassionate and just, but receives little support from his fellow officers.
Phillip is referred to in the John Williamson song "Chains Around My Ankle".
Phillip is a prominent character in the 2005 film "The Incredible Journey of Mary Bryant" where he is portrayed by Sam Neill.
Kate Grenville's 2008 novel "The Lieutenant" portrays Phillip through the character Commodore James Gilbert.
Phillip is a prominent character in the 2015 mini-series "Banished", where he is portrayed by David Wenham.
Angus, Scotland
Angus is one of the 32 local government council areas of Scotland, a registration county and a lieutenancy area. The council area borders Aberdeenshire, Dundee City and Perth and Kinross. Main industries include agriculture and fishing. Global pharmaceuticals company GSK has a significant presence in Montrose in the north of the county.
Angus was historically a county (known officially as Forfarshire from the 18th century until 1928), bordering Kincardineshire to the north-east, Aberdeenshire to the north and Perthshire to the west; southwards it faced Fife across the Firth of Tay. These remain the borders of Angus, minus Dundee, which now forms its own small separate council area. Angus remains a registration county and a lieutenancy area. In 1975 some of its administrative functions were transferred to the council district of the Tayside Region, and in 1995 further reform resulted in the establishment of the unitary Angus Council.
The name "Angus" indicates the territory of the eighth-century Pictish king of that name.
The area that now comprises Angus has been occupied since at least the Neolithic period. Material taken from postholes from an enclosure at Douglasmuir, near Friockheim, about five miles north of Arbroath, has been radiocarbon dated to around 3500 BC. The function of the enclosure is unknown, but it may have been used for agriculture or for ceremonial purposes.
Bronze Age archaeology is to be found in abundance in the area. Examples include the short-cist burials found near West Newbigging, about a mile to the north of the town. These burials included pottery urns, a pair of silver discs and a gold armlet. Iron Age archaeology is also well represented, for example in the souterrain near Warddykes cemetery and at West Grange of Conan, as well as the better-known examples at Carlungie and Ardestie.
The county is traditionally associated with the Pictish kingdom of Circinn, which is thought to have encompassed Angus and the Mearns. Bordering it were the kingdoms of Ce (Mar and Buchan) to the North, Fotla (Atholl) to the West, and Fib (Fife) to the South.
The most visible remnants of the Pictish age are the numerous sculptured stones that can be found throughout Angus. Of particular note are the collections found at Aberlemno, St Vigeans, Kirriemuir and Monifieth.
Angus is marketed as the birthplace of Scotland. The signing of the Declaration of Arbroath at Arbroath Abbey in 1320 marked Scotland's establishment as an independent nation. It is an area of rich history from Pictish times onwards. Notable historic sites in addition to Arbroath Abbey include Glamis Castle, Arbroath Signal Tower museum and the Bell Rock Light House.
Angus can be split into three geographic areas. To the north and west, the topography is mountainous. This is the area of the Grampian Mountains, Mounth hills and Five Glens of Angus, which is sparsely populated and where the main industry is hill farming. Glas Maol - the highest point in Angus at 1,068 m (3,504 ft) - can be found here, on the tripoint boundary with Perthshire and Aberdeenshire. To the south and east the topography consists of rolling hills (such as the Sidlaws) bordering the sea; this area is well populated, with the larger towns. In between lies Strathmore ("the Great Valley"), which is a fertile agricultural area noted for the growing of potatoes, soft fruit and the raising of Angus cattle. Montrose in the north east of the county is notable for its tidal basin. Angus's coast is fairly regular, the most prominent features being the headlands of Scurdie Ness and Buddon Ness. The main bodies of water in the county are Loch Lee, Loch Brandy, Carlochy, Loch Wharral, Den of Ogil Reservoir, Loch of Forfar, Loch Fithie, Rescobie Loch, Balgavies Loch, Crombie Reservoir, Monikie Reservoirs, Long Loch, Lundie Loch, Loch of Kinnordy, Loch of Lintrathen, Backwater Reservoir, Auchintaple Loch, Loch Shandra, and Loch Esk.
In the 2001 census the population of Angus was recorded as 108,400. 20.14% were under the age of 16, 63.15% were between 16 and 65 and 18.05% were aged 65 or above.
Of the 16 to 74 age group, 32.84% had no formal qualifications, 27.08% were educated to 'O' Grade/Standard Grade level, 14.38% to Higher level, 7.64% to HND or equivalent level and 18.06% to degree level.
The 2001 census results show that Gaelic is spoken by 0.45% of the Angus population. This, similar to other lowland areas, is lower than the national average of 1.16%. These figures are self-reported and are not broken down into levels of fluency.
Meanwhile, the 2011 census found that 38.4% of the population in Angus can speak Scots, above the Scottish average of 30.1%. This puts Angus as the council area with the sixth highest proficiency in Scots, behind only Shetland, Orkney, Moray, Aberdeenshire, and East Ayrshire.
Historically, the dominant language in Angus was Pictish until the sixth to seventh centuries AD when the area became progressively gaelicised, with Pictish extinct by the mid-ninth century. Gaelic/Middle Irish began to retreat from lowland areas in the late-eleventh century and was absent from the Eastern lowlands by the fourteenth century. It was replaced there by Middle Scots, the contemporary local South Northern dialect of Modern Scots, while Gaelic persisted as a majority language in the highland Glens until the 19th century.
Angus Council are planning to raise the status of Gaelic in the county by adopting a series of measures, including bilingual road signage, communications, vehicle livery and staffing.
Angus Council is one of the 32 local government council areas of Scotland. In 1996, the two-tier structure of local government was abolished and Angus was established as one of the replacement single-tier council areas. From 1975 to 1996 the area had been served by Angus District Council and Tayside Regional Council. As of May 2017 there are 28 seats on the council. From the May 2017 elections the seats are held as follows: Independent 9, SNP 9, Conservative 8, Liberal Democrat 2.
The council's civic head is the Provost of Angus. There have been six Provosts since its establishment in 1996: Frances Duncan, Bill Middleton, Ruth Leslie-Melville, Helen Oswald, Alex King and Ronnie Proctor, who was appointed Provost on 16 May 2017 from the councillors elected in Angus at the 2017 elections. As Angus is a county area, the Lord Lieutenant of Angus is a separate role.
The council has had four Chief Executives since its formation — Sandy Watson 1996–2006, David Sawers 2006–2011, Richard Stiff 2011–2017 and Margo Williamson 2017 to date. Margo Williamson is the first female Chief Executive since the Council was formed. The council's main offices are located at Angus House at Orchardbank in Forfar and at Bruce House in Arbroath while council meetings are held in the Town and County Hall in central Forfar. The council's offices include the historic County Buildings in Forfar adjacent to the Sheriff Court.
The boundaries of the present council area are the same as those of the historic county minus the City of Dundee.
The council area borders Aberdeenshire, Dundee City and Perth and Kinross.
Angus is represented by three MPs for the UK Parliament.
Angus is represented by two constituency MSPs for the Scottish Parliament.
In addition to the two constituency MSPs, Angus is also represented by MSPs from the North East Scotland Region.
The Edinburgh-Aberdeen railway line runs along the coast, through Dundee and the towns of Monifieth, Carnoustie, Arbroath and Montrose.
There is a small airport at Dundee, which at present operates flights to London and Belfast.
Most common surnames in Angus (Forfarshire) at the time of the United Kingdom Census of 1881:
André the Giant
André René Roussimoff (May 19, 1946 – January 27, 1993), best known as André the Giant, was a French professional wrestler and actor.
Roussimoff stood at over seven feet tall, which was a result of gigantism caused by excess growth hormone, and later resulted in acromegaly. It also led to his being called "The Eighth Wonder of the World". He found success as a fan favorite throughout the 1970s and early 1980s, appearing as an attraction for various professional wrestling promotions. During the 1980s wrestling boom he was paired with the villainous manager Bobby Heenan and feuded with Hulk Hogan in the World Wrestling Federation (WWF, now WWE). The two headlined WrestleMania III in 1987, and in 1988 he defeated Hogan to win the WWF World Heavyweight Championship, his sole world heavyweight championship, on the first episode of "The Main Event". He also held the WWF Tag Team Championship before failing health forced him to retire in 1992.
Outside of wrestling, he was best known for appearing as Fezzik, the giant in "The Princess Bride". After his death in 1993, he became the inaugural inductee into the newly created WWF Hall of Fame. He was later a charter member of the "Wrestling Observer Newsletter" Hall of Fame and the Professional Wrestling Hall of Fame; the latter describes him as being "one of the most recognizable figures in the world both as a professional wrestler and as a pop culture icon."
André René Roussimoff was born in Coulommiers of Slavic heritage, the third of five children, to Boris and Marianne Roussimoff Stoeff. His parents were immigrants to France; his father was Bulgarian and his mother was Polish. His nickname growing up was Dédé. At birth André weighed 13 pounds, and as a child he displayed symptoms of gigantism very early, noted as being "a good head taller than the other kids" and having abnormally long hands. By the time he was 14, André was already exceptionally tall and heavy for his age, and he continued to grow through his teens.
Roussimoff was an average student, though good at mathematics, but after finishing school at 14, since he did not think higher education was necessary for a farm labourer, he instead joined the workforce (contrary to popular legend, André did not drop out, as compulsory education in France at the time ended at 14). He spent years working on his father's farm in Molien, where, according to his brother Jacques, he could perform the work of three men. He also completed an apprenticeship in woodworking, and next worked in a factory that manufactured engines for hay balers. None of these occupations, however, brought him any satisfaction.
While he was growing up in the 1950s, the Irish playwright and Nobel laureate in literature Samuel Beckett was one of several adults who sometimes drove local children, including André and his siblings, to school. The Irishman and the Bulgarian-descended French boy had surprising common ground in their love of cricket, with André recalling that the two rarely talked about anything else.
At the age of 18, Roussimoff moved to Paris and was taught professional wrestling by a local promoter, Robert Lageat, who recognized the earning potential of Roussimoff's size. He trained at night and worked as a mover during the day to pay living expenses. Roussimoff was billed as "Géant Ferré", a name based on a Picardian folk hero, and began wrestling in Paris and nearby areas. Canadian promoter and wrestler Frank Valois met Roussimoff in 1966 and years later became his business manager and adviser. Roussimoff began making a name for himself wrestling in the United Kingdom, Germany, Australia, New Zealand, and Africa.
He made his Japanese debut in 1970, billed as "Monster Roussimoff", wrestling for the International Wrestling Enterprise. Wrestling as both a singles and tag-team competitor, he quickly was made the company's tag-team champion alongside Michael Nador. During his time in Japan, doctors first informed Roussimoff that he suffered from acromegaly.
Roussimoff next moved to Montreal, Canada in 1971, where he became an immediate success, regularly selling out the Montreal Forum. However, promoters eventually ran out of plausible opponents for him and, as the novelty of his size wore off, the gate receipts dwindled. Roussimoff was defeated by Adnan Al-Kaissie in Baghdad in 1971, and wrestled numerous times in 1971 for Verne Gagne's American Wrestling Association (AWA) as a special attraction until Valois appealed to Vince McMahon Sr., founder of the World Wide Wrestling Federation (WWWF), for advice. McMahon suggested several changes. He felt Roussimoff should be portrayed as a large, immovable monster, and to enhance the perception of his size, McMahon discouraged Roussimoff from performing maneuvers such as dropkicks (although he was capable of performing such agile maneuvers before his health deteriorated in later life). He also began billing Roussimoff as "André the Giant" and set up a travel-intensive schedule, lending him to wrestling associations around the world, to keep him from becoming overexposed in any area. Promoters had to guarantee Roussimoff a certain amount of money as well as pay McMahon's WWF booking fee.
On March 24, 1973, Roussimoff debuted in the World Wide Wrestling Federation (later World Wrestling Federation) as a fan favorite, defeating Frank Valois and Bull Pometti in a handicap match in Philadelphia. Two days later he made his debut in New York's Madison Square Garden, defeating Buddy Wolfe.
Roussimoff was one of professional wrestling's most beloved "babyfaces" throughout the 1970s and early 1980s. As such, Gorilla Monsoon often stated that Roussimoff had not been defeated in 15 years by pinfall or submission prior to WrestleMania III; however, he had lost in matches outside of the WWF: pinfall losses to Don Leo Jonathan in Montreal in 1972, Ronnie Garvin in Knoxville in 1978, and Canek in Mexico in 1984 and submission losses in Japan to Strong Kobayashi in 1972 and Antonio Inoki in 1986. He also had sixty-minute time-limit draws with the two major world champions of the day, Harley Race and Nick Bockwinkel.
In 1976, Roussimoff fought professional boxer Chuck Wepner in an unscripted boxer-versus-wrestler fight. The wild fight was shown via telecast as part of the undercard of the Muhammad Ali versus Antonio Inoki fight and ended when he threw Wepner over the top rope and outside the ring and won via count-out.
In 1980 he feuded with Hulk Hogan; unlike in their more famous matches of the late 1980s, Hogan was the villain and Roussimoff the hero. They wrestled at Shea Stadium's Showdown at Shea and in Pennsylvania, where, after Roussimoff pinned Hogan to win the match, Hogan bodyslammed him, much as in their legendary WrestleMania III match in 1987. The feud continued in Japan in 1982 and 1983 with their roles reversed and with Antonio Inoki also involved.
In 1982, Vince McMahon, Sr. sold the World Wide Wrestling Federation to his son, Vince McMahon, Jr. As McMahon began to expand his newly acquired promotion to the national level, he required his wrestlers to appear exclusively for him. McMahon signed Roussimoff to these terms in 1984, although he still allowed him to work in Japan for New Japan Pro Wrestling (NJPW).
One of Roussimoff's feuds pitted him against the "Mongolian Giant" Killer Khan. According to the storyline, Khan had snapped Roussimoff's ankle during a match on May 2, 1981, in Rochester, New York, by leaping off the top rope and crashing down upon it with his knee-drop. In reality, he had broken his ankle getting out of bed the morning before the match. The injury and subsequent rehabilitation was worked into the existing Roussimoff/Khan storyline. After a stay at Beth Israel Hospital in Boston, Roussimoff returned with payback on his mind. The two battled on July 20, 1981, at Madison Square Garden in a match that resulted in a double disqualification. Their feud continued as fans filled arenas up and down the east coast to witness their matches. On November 14, 1981, at the Philadelphia Spectrum, he decisively defeated Khan in what was billed as a "Mongolian stretcher match", in which the loser must be taken to the dressing room on a stretcher. The same type of match was also held in Toronto. In early 1982 the two also fought in a series of matches in Japan with Arnold Skaaland in Roussimoff's corner.
Another feud involved a man who considered himself to be the "true giant" of wrestling: Big John Studd. Throughout the early to mid-1980s, Roussimoff and Studd fought all over the world, battling to try to determine who the real giant of wrestling was. In 1984, Studd took the feud to a new level when he and partner Ken Patera knocked out Roussimoff during a televised tag-team match and proceeded to cut off his hair. After gaining revenge on Patera, Roussimoff met Studd in a "body slam challenge" at the first WrestleMania, held March 31, 1985, at Madison Square Garden in New York City. Roussimoff slammed Studd to win the match and collect the $15,000 prize, then proceeded to throw cash to the fans before having the bag taken from him by Studd's manager, Bobby "The Brain" Heenan.
The following year, at WrestleMania 2, on April 7, 1986, Roussimoff continued to display his dominance by winning a twenty-man battle royal which featured top National Football League stars and wrestlers. He eliminated Bret Hart last to win the contest.
After WrestleMania 2, Roussimoff continued his feud with Studd and King Kong Bundy. Around this time, Roussimoff requested a leave of absence to tend to his health, as the effects of his acromegaly were beginning to take their toll, and to tour Japan. He had also been cast in the film "The Princess Bride". To explain his absence, a storyline was developed in which Heenan—suggesting that Roussimoff was secretly afraid of Studd and Bundy, whom Heenan bragged were unbeatable—challenged Roussimoff and a partner of his choosing to wrestle Studd and Bundy in a televised tag-team match. When Roussimoff failed to show, WWF president Jack Tunney indefinitely suspended him. Later in the summer of 1986, upon Roussimoff's return to the United States, he began wearing a mask and competing as the "Giant Machine" in a stable known as the Machines; Big Machine and Super Machine were the other members, and Hulk Hogan (as "Hulk Machine") and Roddy Piper (as "Piper Machine") were also one-time members. The WWF's television announcers sold the Machines, a gimmick copied from the New Japan Pro Wrestling character "Super Strong Machine" played by Japanese wrestler Junji Hirata, as "a new tag-team from Japan" and claimed not to know the identities of the wrestlers, even though it was obvious to fans that it was Roussimoff competing as the Giant Machine. Heenan, Studd, and Bundy complained to Tunney, who eventually told Heenan that if it could be proven that Roussimoff and the Giant Machine were the same person, Roussimoff would be fired. Roussimoff thwarted Heenan, Studd, and Bundy at every turn. Then, in late 1986, the Giant Machine "disappeared" and Roussimoff was reinstated. Foreshadowing Roussimoff's heel turn, Heenan expressed his approval of the reinstatement but did not explain why.
Roussimoff agreed to turn heel in early 1987 to be the counter to the biggest "babyface" in professional wrestling at that time, Hulk Hogan. On an edition of "Piper's Pit" in 1987, Hogan was presented a trophy for being the WWF World Heavyweight Champion for three years; Roussimoff came out to congratulate him, shaking Hogan's hand with a strong grip, which surprised the Hulkster. On the following week's "Piper's Pit", Roussimoff was presented a slightly smaller trophy for being "the only undefeated wrestler in wrestling history." Although he had suffered a handful of countout and disqualification losses in WWF, he had never been pinned or forced to submit in a WWF ring. Hogan came out to congratulate him and ended up being the focal point of the interview. Apparently annoyed, he walked out in the midst of Hogan's speech. A discussion between Roussimoff and Hogan was scheduled, and on a "Piper's Pit" that aired February 7, 1987, the two met. Hogan was introduced first, followed by Roussimoff, who was led by longtime rival Bobby Heenan.
Speaking on behalf of his new protégé, Heenan accused Hogan of being Roussimoff's friend only so he would not have to defend his title against him. Hogan tried to reason with Roussimoff, but his pleas were ignored as he challenged Hogan to a match for the WWF World Heavyweight Championship at WrestleMania III. Hogan was still seemingly in disbelief as to what Roussimoff was doing, prompting Heenan to say "You can't believe it, maybe you'll believe this, Hogan" before Roussimoff ripped off the T-shirt and crucifix from Hogan, with the crucifix scratching Hogan's chest, causing him to bleed.
Following Hogan's acceptance of his challenge on a later edition of "Piper's Pit", the two were part of a 20-man over-the-top-rope battle royal on the March 14 edition of "Saturday Night's Main Event X" at the Joe Louis Arena in Detroit. Although the battle royal was won by Hercules, Roussimoff claimed to have gained a psychological advantage over Hogan when he threw the WWF World Heavyweight Champion over the top rope. The match, which was actually taped on February 21, 1987, aired only two weeks before WrestleMania III to make it seem like Hogan had met his match in André the Giant.
At WrestleMania III, the stress of his immense billed weight on his bones and joints resulted in constant pain. After recent back surgery, he was also wearing a brace underneath his wrestling singlet. In front of a record crowd, Hogan won the match after body-slamming Roussimoff (later dubbed "the bodyslam heard around the world"), followed by Hogan's running leg drop finisher. Years later, Hogan claimed that Roussimoff felt even heavier than his billed weight, and that he tore his latissimus dorsi muscle when slamming him.
Another myth about the match is that no one, not even WWF owner Vince McMahon, knew until the day of the event whether Roussimoff would lose the match. In reality, he had agreed to lose the match sometime before, mostly for health reasons. Contrary to popular belief, it was not the first time that Hogan had successfully body-slammed him in a WWF match. A then-heel Hogan had slammed a then-face Roussimoff following their match at the Showdown at Shea on August 9, 1980, though Roussimoff was somewhat lighter and more athletic at the time (Hogan also slammed him in a match in Hamburg, Pennsylvania, a month later). This took place in the territorial days of American wrestling, three years before the WWF began national expansion, so many of those who watched WrestleMania III had never seen the Giant slammed (Roussimoff had also previously allowed Harley Race, El Canek and Stan Hansen, among others, to slam him).
By the time of WrestleMania III, the WWF had gone national, giving more meaning to the Roussimoff–Hogan match that took place then. The feud between Roussimoff and Hogan simmered during the summer of 1987, as Roussimoff's health declined. The feud began heating up again when the two wrestlers were named the captains of rival teams at the inaugural Survivor Series event. During their approximately one minute of battling each other during the match, Hogan dominated Roussimoff and was on the brink of knocking him from the ring, but was tripped up by Roussimoff's teammates Bundy and One Man Gang and was counted out. Roussimoff went on to be the sole survivor of the match, pinning Bam Bam Bigelow before Hogan returned to the ring to attack André and knock him out of the ring. Roussimoff later got revenge when, after Hogan won a match against Bundy on "Saturday Night's Main Event", he snuck up from behind and began choking Hogan to the brink of unconsciousness, not letting go even after an army of seven face-aligned wrestlers ran to the ring to try to pull him away; it took Hacksaw Jim Duggan breaking a piece of wood over his back (which he no-sold) for him to let go, after which Hogan was pulled to safety. As was the case with the "SNME" battle royal a year earlier, this series of events was one of the pieces that helped build interest in a possible one-on-one rematch between Hogan and Roussimoff, and made it seem that Roussimoff was certain to win easily when they did meet.
In the meantime, the "Million Dollar Man" Ted DiBiase failed to persuade Hogan to sell him the WWF World Heavyweight Championship. After failing to defeat Hogan in a subsequent series of matches, DiBiase turned to Roussimoff to win it for him. The two had teamed several times in the past, including in Japan and in the WWF in the late 1970s and early 1980s when both were faces, but this was not acknowledged during the new storyline. The earlier attack and DiBiase's insertion into the feud set up the Hogan–Roussimoff rematch on "The Main Event", broadcast live on NBC on February 5, 1988. Acting as DiBiase's hired gun, Roussimoff won the WWF World Heavyweight Championship from Hogan (his first singles title) in a match where it was later revealed that the appointed referee, Dave Hebner, had been "detained backstage"; a replacement referee (whom Hogan initially accused of having been paid by DiBiase to get plastic surgery to look like Dave, but who was revealed to be Dave's evil twin brother, Earl Hebner) made a three-count on Hogan while his shoulders were off the mat.
After winning, Roussimoff "sold" the title to DiBiase; the transaction was declared invalid by then-WWF president Jack Tunney, and the title was declared vacant. This was shown on WWF's NBC program "The Main Event". At WrestleMania IV, Roussimoff and Hulk Hogan fought to a double disqualification in a WWF title tournament match (the storyline being that Roussimoff was again working on DiBiase's behalf, giving DiBiase a clearer path in the tournament). Afterward, Roussimoff and Hogan's feud died down after a steel cage match held at "WrestleFest" on July 31, 1988, in Milwaukee.
At the inaugural SummerSlam pay-per-view held at Madison Square Garden, Roussimoff and DiBiase (billed as The Mega Bucks) faced Hogan and WWF World Heavyweight Champion "Macho Man" Randy Savage (known as The Mega Powers) in the main event, with Jesse "The Body" Ventura as the special guest referee. During the match, the Mega Powers' manager, Miss Elizabeth, distracted the Mega Bucks and Ventura when she climbed up on the ring apron, removed her yellow skirt and walked around in a pair of red panties. This allowed Hogan and Savage time to recover and eventually win the match with Hogan pinning DiBiase. Savage forced Ventura's hand down for the final three-count, due to Ventura's character historically being at odds with Hogan, and his unwillingness to count the fall.
Concurrent with the developing feud with the Mega Powers, Roussimoff was placed in a feud with Jim Duggan, which began after Duggan knocked out Roussimoff with a two-by-four board during a television taping. Despite Duggan's popularity with fans, Roussimoff regularly got the upper hand in the feud.
Roussimoff's next major feud was against Jake "The Snake" Roberts. In this storyline, it was said Roussimoff was afraid of snakes, something Roberts exposed on "Saturday Night's Main Event" when he threw his snake, Damien, on the frightened Roussimoff; as a result, he suffered a kayfabe mild heart attack and vowed revenge. During the next few weeks, Roberts frequently walked to ringside carrying his snake in its bag during Roussimoff's matches, causing the latter to run from the ring in fright. Throughout their feud (which culminated at WrestleMania V), Roberts constantly used Damien to gain a psychological edge over the much larger and stronger Roussimoff.
In 1989, Roussimoff and the returning Big John Studd briefly reprised their feud, beginning at WrestleMania V, when Studd was the referee in the match with Roberts, this time with Studd as a face and Roussimoff as the heel. During the late summer and autumn of 1989, he engaged in a brief feud, consisting almost entirely of house shows (non-televised events), with then-Intercontinental Champion The Ultimate Warrior. The younger Warrior, the WWF's rising star, regularly squashed the aging Roussimoff in matches designed to showcase the Warrior's star quality and promote him as the "next big thing".
In late 1989, Roussimoff was paired with fellow Heenan Family member Haku to form a new tag team, the Colossal Connection, in part to fill a void left by the departure of Tully Blanchard and Arn Anderson (the Brain Busters, who were also members of Heenan's stable) from the WWF, and also to keep the aging Roussimoff in the main-event spotlight. The Colossal Connection immediately targeted WWF Tag Team Champions Demolition (who had recently won the belts from the Brain Busters). At a television taping on December 13, 1989, the Colossal Connection defeated Demolition to win the titles. Roussimoff and Haku successfully defended their title, mostly against Demolition, until WrestleMania VI on April 1, 1990, when Demolition took advantage of a mistimed move by the champions to regain the belts. After the match, a furious Heenan blamed Roussimoff for the title loss and, after shouting at him, slapped him in the face; an angry Roussimoff responded with a slap of his own that sent Heenan staggering from the ring. Roussimoff also caught Haku's kick attempt, sending him reeling from the ring as well, prompting support from the crowd and turning Roussimoff face for the first time since 1987. Due to his ongoing health issues, Roussimoff was not able to wrestle at the time of WrestleMania VI, and Haku actually wrestled the entire match against Demolition without tagging him in.
On weekend television shows following WrestleMania VI, Bobby Heenan vowed to spit in Roussimoff's face when he came crawling back to the Heenan Family. Roussimoff nevertheless wrestled one more time with Haku, teaming up to face Demolition at a house show in Honolulu on April 10, where Roussimoff was knocked out of the ring and the Colossal Connection lost via count-out. After the match, Roussimoff and Haku fought each other, marking the end of the team. His final match of 1990 came at a combined WWF/All Japan/New Japan show on April 13 in Tokyo, when he teamed with Giant Baba to defeat Demolition in a non-title match, Roussimoff winning by gaining the pinfall on Smash.
Roussimoff returned in the winter of 1990, but not to the World Wrestling Federation. Instead, he made an interview appearance for Herb Abrams' fledgling Universal Wrestling Federation on October 11 in Reseda, California (the segment aired in 1991), appearing alongside Captain Lou Albano and putting over the UWF. The following month, on November 30 at a house show in Miami, Florida, the World Wrestling Federation announced his return as a participant in the 1991 Royal Rumble (to be held in Miami two months later). Roussimoff was also mentioned as a participant on television but ultimately backed out due to a leg injury.
His on-air return finally took place at the WWF's "Super-Stars & Stripes Forever" USA Network special on March 17, 1991, when he came out to shake the hand of The Big Boss Man after an altercation with Mr. Perfect. The following week, at WrestleMania VII, he came to the aid of the Boss Man in his match against Mr. Perfect. Roussimoff finally returned to action on April 26, 1991, in a six-man tag-team match, teaming with the Rockers in a winning effort against Mr. Fuji and the Orient Express at a house show in Belfast, Northern Ireland. On May 10 he participated in a 17-man battle royal at a house show in Detroit, which was won by Kerry Von Erich. This was Roussimoff's final WWF match, although he was involved in several subsequent storylines. His last major WWF storyline following WrestleMania VII had the major heel managers (Bobby Heenan, Sensational Sherri, Slick, and Mr. Fuji) trying to recruit Roussimoff one by one, only to be turned down in various humiliating ways (e.g. Heenan had his hand crushed, Sherri received a spanking, Slick got locked in the trunk of the car he was offering to Roussimoff, and Mr. Fuji got a pie in his face). Finally, Jimmy Hart appeared live on "WWF Superstars" to announce that he had successfully signed Roussimoff to tag-team with Earthquake. However, when asked to confirm this by Gene Okerlund, Roussimoff denied the claims. This led to Earthquake attacking Roussimoff from behind (injuring his knee). Jimmy Hart later got revenge for the humiliation by secretly signing Tugboat and forming the Natural Disasters. This led to Roussimoff's final major WWF appearance at SummerSlam '91, where he seconded the Bushwhackers in their match against the Disasters. Roussimoff was on crutches at ringside, and after the Disasters won the match, they set out to attack him, but the Legion of Doom made their way to ringside and got between them and the Giant, who was preparing to defend himself with one of his crutches.
The Disasters left the ringside area as they were outnumbered by the Legion of Doom, the Bushwhackers and Roussimoff, who struck both Earthquake and Typhoon (the former Tugboat) with the crutch as they left. His final WWF appearance came at a house show in Paris, France, on October 9. He was in Davey Boy Smith's corner as the Bulldog faced Earthquake. Davey Boy hit Earthquake with Roussimoff's crutch, allowing Smith to win.
His last U.S. television appearance was in a brief interview on World Championship Wrestling's (WCW) "Clash of the Champions XX" special that aired on TBS on September 2, 1992.
After WrestleMania VI, Roussimoff spent the rest of his in-ring career in All Japan Pro Wrestling (AJPW) and Mexico's Universal Wrestling Association (UWA), where he performed under the name "André el Gigante". He toured with AJPW three times per year from 1990 to 1992, usually teaming with Giant Baba in tag-team matches. He also made a couple of guest appearances for Herb Abrams' Universal Wrestling Federation in 1991, feuding with Big John Studd, though he never had a match in the promotion. He did his final tour of Mexico in 1992, in a series of six-man tag matches alongside Bam Bam Bigelow and a variety of lucha libre stars, facing, among others, Bad News Allen and future WWF Champions Mick Foley and Yokozuna. Roussimoff wrestled his final match for AJPW in 1992, after which he retired from professional wrestling.
Having appeared in a French boxing film in 1967, Roussimoff branched out into acting again in the 1970s and 1980s, making his U.S. acting debut playing a Sasquatch ("Bigfoot") in a two-part episode of the television series "The Six Million Dollar Man" that aired in 1976. He appeared in other television shows, including "The Greatest American Hero", "B. J. and the Bear", "The Fall Guy" and 1990's "Zorro".
Towards the end of his career, Roussimoff starred in several films. He had an uncredited appearance in the 1984 film "Conan the Destroyer" as Dagoth, the resurrected horned giant god who is killed by Conan (Arnold Schwarzenegger). That same year, he also made an appearance in "Micki & Maude" (billed as André Rousimmoff). He appeared most notably as Fezzik, his own favorite role, in the 1987 film "The Princess Bride". Both the film and his performance retain a devoted following. In shoot interviews, wrestlers have stated that he was so proud of being in the film that he carried a copy everywhere he went and insisted a VCR be available in his hotel rooms on the road so he could watch it repeatedly.
In his last film, he appeared in a cameo role as a circus giant in the comedy "Trading Mom", which was released in 1994, a year after his death.
Roussimoff was mentioned in the "1974 Guinness Book of World Records" as the highest-paid wrestler in history at that time. He had earned US$400,000 in one year during the early 1970s.
Robin Christensen is Roussimoff's only child. Her mother Jean (who died in 2008) became acquainted with her father through the wrestling business around 1972 or 1973. Christensen had almost no connection with her father and saw him only five times in her life, despite occasional televised and printed news pieces criticizing his absentee fatherhood. While she gave some interviews about the subject in her childhood, Christensen is reportedly reluctant to discuss her father publicly today.
Roussimoff has been unofficially crowned "the greatest drunk on Earth" for once consuming 119 beers (in total, over ) in six hours. On an episode of WWE's "Legends of Wrestling", Mike Graham said Roussimoff once drank 156 beers (over ) in one sitting, which was confirmed by Dusty Rhodes. The Fabulous Moolah wrote in her autobiography that Roussimoff drank 127 beers in a Reading, Pennsylvania, hotel bar and later passed out in the lobby. The staff could not move him and had to leave him there until he awoke. In a shoot interview, Ken Patera recalled an occasion where Roussimoff was challenged by Dick Murdoch to a beer drinking contest. After nine or so hours, Roussimoff had drunk 116 beers. A tale recounted by Cary Elwes in his book about the making of "The Princess Bride" has Roussimoff falling on top of somebody while drunk, after which the NYPD sent an undercover officer to follow Roussimoff around whenever he went out drinking in their city to make sure he did not fall on anyone again. Another story also says prior to his famous WrestleMania III match, Roussimoff drank 14 bottles of wine.
An urban legend exists surrounding Roussimoff's 1987 surgery in which his size made it impossible for the anesthesiologist to estimate a dosage via standard methods; consequently, his alcohol tolerance was used as a guideline instead.
Roussimoff was arrested in 1989 by the sheriff of Linn County, Iowa; and charged with assault after he allegedly attacked a local television cameraman.
William Goldman, the author of the novel and the screenplay of "The Princess Bride", wrote in his nonfiction work "Which Lie Did I Tell?" that Roussimoff was one of the gentlest and most generous people he ever knew. Whenever Roussimoff ate with someone in a restaurant, he would pay, but he would also insist on paying when he was a guest. On one occasion, after Roussimoff attended a dinner with Arnold Schwarzenegger and Wilt Chamberlain, Schwarzenegger had quietly moved to the cashier to pay before Roussimoff could, but then found himself being physically lifted, carried from his table and deposited on top of his car by Roussimoff and Chamberlain.
Roussimoff owned a ranch in Ellerbe, North Carolina, looked after by two of his close friends. When he was not on the road, he loved spending time at the ranch tending to his cattle, playing with his dogs and entertaining company. While there were custom-made chairs and a few other modifications in his home to account for his size, tales that everything in his home was custom-made for a large man are said to be exaggerated. Since Roussimoff could not easily go shopping due to his fame and size, he was known to spend hours watching QVC and made frequent purchases from the shopping channel. Roussimoff had a passion for card games, mainly cribbage.
Roussimoff died in his sleep of congestive heart failure on the night of January 27, 1993, in a Paris hotel room. He was found by his chauffeur. He was in Paris to attend his father's funeral. While there, Roussimoff decided to stay in France longer to be with his mother on her birthday. He spent the day before his death visiting and playing cards with some of his oldest friends in Molien.
In his will, Roussimoff specified that his remains be cremated and "disposed of". Upon his death in Paris, his family in France held a funeral for him, intending to bury him near his father. When they learned of his wish to be cremated, his body was flown to the United States, where he was cremated according to his wishes. His ashes were scattered at his ranch in Ellerbe, North Carolina. In accordance with his will, he left his estate to his sole beneficiary: his daughter Robin.
Roussimoff made numerous appearances as himself in video games, starting with "WWF WrestleMania". He also appears posthumously in "Virtual Pro Wrestling 64", "WWF No Mercy", "Legends of Wrestling", "Legends of Wrestling II", "WWE SmackDown! vs. Raw", "WWE SmackDown! vs. Raw 2006", "WWE Legends of WrestleMania", "WWE All Stars", "WWE 2K14", "WWE 2K15", "WWE 2K16", "WWE 2K17", "WWE 2K18", "WWE 2K19", "WWE 2K20" and many others.
In January 2005, WWE released "André The Giant", a DVD focusing on the life and career of Roussimoff. The DVD is a reissue of the out-of-print "André The Giant" VHS made by Coliseum Video in 1985, with commentary by Michael Cole and Tazz replacing Gorilla Monsoon and Jesse Ventura's commentary on his WrestleMania match with Big John Studd. The video is hosted by Lord Alfred Hayes. Later matches, including his battles against Hulk Hogan while a heel, are not included on this VHS.
On May 9, 2016, it was announced that a movie based on the 2015 authorized graphic novel biography "André the Giant: Closer to Heaven" was being planned by Lion Forge Comics along with producers Scott Steindorff and Dylan Russell, with Roussimoff's daughter, Robin Christensen-Roussimoff, serving as a consultant.
On April 10, 2018, HBO aired a documentary film called "André the Giant".
Apache HTTP Server
The Apache HTTP Server, colloquially called Apache, is free and open-source cross-platform web server software, released under the terms of the Apache License 2.0. Apache is developed and maintained by an open community of developers under the auspices of the Apache Software Foundation.
The vast majority of Apache HTTP Server instances run on a Linux distribution, but current versions also run on Microsoft Windows and a wide variety of Unix-like systems. Past versions also ran on OpenVMS, NetWare, OS/2 and other operating systems, including ports to mainframes.
Originally based on the NCSA HTTPd server, development of Apache began in early 1995 after work on the NCSA code stalled. Apache played a key role in the initial growth of the World Wide Web, quickly overtaking NCSA HTTPd as the dominant HTTP server, and has remained the most popular since April 1996. In 2009, it became the first web server software to serve more than 100 million websites. In later surveys, Netcraft estimated that Apache served 29.12% of the million busiest websites, while Nginx served 25.54%; according to W3Techs, Apache served 39.5% of the top 10 million sites and Nginx served 31.7%.
A number of explanations for the origin of the Apache name have been offered over the years.
From the inception of the Apache project in 1995 the official documentation stated:
In an April 2000 interview, Brian Behlendorf, one of the creators of Apache said:
Since 2013 the Apache Foundation has explained the origin of the name as:
When Apache is running under Unix, its process name is httpd, which is short for "HTTP daemon".
Apache supports a variety of features, many implemented as compiled modules which extend the core functionality. These range from authentication schemes to support for server-side programming languages such as Perl, Python, Tcl and PHP. Popular authentication modules include mod_access, mod_auth, mod_digest, and mod_auth_digest, the successor to mod_digest. Other features include Secure Sockets Layer and Transport Layer Security support (mod_ssl), a proxy module (mod_proxy), a URL rewriting module (mod_rewrite), custom log files (mod_log_config), and filtering support (mod_include and mod_ext_filter).
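To illustrate how such modules are enabled and used, here is a minimal configuration sketch, assuming hypothetical file paths and a placeholder realm name; it loads mod_rewrite for a redirect and protects one directory with basic authentication (which relies on the standard mod_auth_basic, mod_authn_file and mod_authz_user modules):

```apacheconf
# Load the URL rewriting module (path relative to ServerRoot; assumed layout).
LoadModule rewrite_module modules/mod_rewrite.so

# Permanently redirect one URL path to another.
RewriteEngine On
RewriteRule "^/old-page$" "/new-page" [R=301,L]

# Require a valid username/password for everything under this directory.
<Directory "/var/www/private">
    AuthType Basic
    AuthName "Restricted Area"
    AuthUserFile "/etc/httpd/passwords"
    Require valid-user
</Directory>
```

The password file referenced by AuthUserFile would typically be created and maintained with the htpasswd utility shipped with Apache.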
Popular compression methods on Apache include the external extension module, mod_gzip, implemented to help with reduction of the size (weight) of web pages served over HTTP. ModSecurity is an open source intrusion detection and prevention engine for Web applications. Apache logs can be analyzed through a Web browser using free scripts, such as AWStats/W3Perl or Visitors.
Virtual hosting allows one Apache installation to serve many different websites. For example, one computer with one Apache installation could simultaneously serve www.example.com, www.example.org and www.example.net.
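A minimal sketch of name-based virtual hosting, using reserved example domains and assumed document-root paths:

```apacheconf
# Two websites served by the same Apache instance on port 80.
# Apache selects the matching <VirtualHost> block by comparing the
# Host header of each incoming request against ServerName.
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot "/var/www/example.com"
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.org
    DocumentRoot "/var/www/example.org"
</VirtualHost>
```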
Apache features configurable error messages, DBMS-based authentication databases, content negotiation and supports several graphical user interfaces (GUIs).
It supports password authentication and digital certificate authentication. Because the source code is freely available, anyone can adapt the server for specific needs, and there is a large public library of Apache add-ons.
A more detailed list of features is provided below:
Instead of implementing a single architecture, Apache provides a variety of MultiProcessing Modules (MPMs), which allow it to run in either a process-based mode, a hybrid (process and thread) mode, or an event-hybrid mode, in order to better match the demands of each particular infrastructure. Choice of MPM and configuration is therefore important. Where compromises in performance must be made, Apache is designed to reduce latency and increase throughput relative to simply handling more requests, thus ensuring consistent and reliable processing of requests within reasonable time-frames.
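As a sketch, selecting and tuning the event MPM in an Apache 2.4 configuration might look like the following (the directive values are illustrative, not tuning recommendations):

```apacheconf
# Use the event MPM on builds where MPMs are loadable modules.
LoadModule mpm_event_module modules/mod_mpm_event.so

<IfModule mpm_event_module>
    StartServers             3    # child processes created at startup
    MinSpareThreads         25    # minimum idle worker threads kept ready
    MaxSpareThreads         75    # surplus idle threads are shut down
    ThreadsPerChild         25    # worker threads per child process
    MaxRequestWorkers      400    # cap on simultaneously served requests
</IfModule>
```

Swapping in the prefork or worker MPM instead is a matter of loading a different mpm_*_module and adjusting the corresponding process/thread directives.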
For delivering static pages, the Apache 2.2 series was considered significantly slower than nginx and varnish. To address this issue, the Apache developers created the Event MPM, which mixes the use of several processes and several threads per process in an asynchronous event-based loop. This architecture, as implemented in the Apache 2.4 series, performs at least as well as event-based web servers, according to Jim Jagielski and other independent sources. However, some independent but significantly outdated benchmarks show it still being half as fast as nginx.
The Apache HTTP Server codebase was relicensed to the Apache 2.0 License (from the previous 1.1 license) in January 2004, and Apache HTTP Server 1.3.31 and 2.0.49 were the first releases using the new license.
The OpenBSD project did not like the change and continued the use of pre-2.0 Apache versions, effectively forking Apache 1.3.x for its purposes. They initially replaced it with Nginx, and soon after made their own replacement, OpenBSD Httpd, based on the relayd project.
Version 1.1:
The Apache License 1.1 was approved by the ASF in 2000: The primary change from the 1.0 license is in the 'advertising clause' (section 3 of the 1.0 license); derived products are no longer required to include attribution in their advertising materials, only in their documentation.
Version 2.0:
The ASF adopted the Apache License 2.0 in January 2004. The stated goals of the license included making the license easier for non-ASF projects to use, improving compatibility with GPL-based software, allowing the license to be included by reference instead of listed in every file, clarifying the license on contributions, and requiring a patent license on contributions that necessarily infringe a contributor's own patents.
The Apache HTTP Server Project is a collaborative software development effort aimed at creating a robust, commercial-grade, feature-rich and freely available source code implementation of an HTTP (Web) server. The project is jointly managed by a group of volunteers located around the world, using the Internet and the Web to communicate, plan, and develop the server and its related documentation. This project is part of the Apache Software Foundation. In addition, hundreds of users have contributed ideas, code, and documentation to the project.
Apache 2.4 dropped support for BeOS, TPF and even older platforms.
Arbroath Abbey
Arbroath Abbey, in the Scottish town of Arbroath, was founded in 1178 by King William the Lion for a group of Tironensian Benedictine monks from Kelso Abbey. It was consecrated in 1197 with a dedication to the deceased Saint Thomas Becket, whom the king had met at the English court. It was William's only personal foundation — he was buried before the high altar of the church in 1214.
The last Abbot was Cardinal David Beaton, who in 1522 succeeded his uncle James to become Archbishop of St Andrews. The Abbey is cared for by Historic Environment Scotland and is open to the public throughout the year (entrance charge). The distinctive red sandstone ruins stand at the top of the High Street in Arbroath.
King William gave the Abbey independence from its mother church and endowed it generously, including income from 24 parishes, land in every royal burgh and more. The Abbey's monks were allowed to run a market and build a harbour. King John of England gave the Abbey permission to buy and sell goods anywhere in England (except London) toll-free.
The Abbey, which was the richest in Scotland, is most famous for its association with the 1320 Declaration of Scottish Independence believed to have been drafted by Abbot Bernard, who was the Chancellor of Scotland under King Robert I.
The Abbey fell into ruin after the Reformation. From 1590 onward, its stones were raided for buildings in the town of Arbroath. This continued until 1815 when steps were taken to preserve the remaining ruins.
On Christmas Day 1950, the Stone of Destiny was removed from Westminster Abbey. On April 11, 1951, the missing stone was found lying on the site of the Abbey's altar.
Since 1947, a major historical re-enactment commemorating the Declaration's signing has been held within the roofless remains of the Abbey church. The celebration is run by the local Arbroath Abbey Pageant Society, and tells the story of the events which led up to the signing. This is not an annual event (most recent performance 2005; next August 2009). However, a special event to mark the signing is held every year on the 6th of April and involves a street procession and short piece of street theatre.
In 2005 the Arbroath Abbey campaign was launched. The campaign seeks to gain World Heritage status for the iconic Angus landmark that was the birthplace of one of Scotland's most significant documents, the Declaration of Arbroath. Campaigners believe that the Abbey's historical pronouncement makes it a prime candidate for World Heritage status. MSP Alex Johnstone wrote, "Clearly, the Declaration of Arbroath is a literary work of outstanding universal significance by any stretch of the imagination." In 2008, the campaign group chairman, Councillor Jim Millar, launched a public petition to reinforce the bid, explaining, "We're simply asking people, local people especially, to sign up to the campaign to have the Declaration of Arbroath and Arbroath Abbey recognised by the United Nations. Essentially we need local people to sign up to this campaign simply because the United Nations demand it."
The Abbey was built over some sixty years using local red sandstone, but gives the impression of a single coherent, mainly 'Early English' architectural design, though the round-arched processional doorway in the western front looks back to late Norman or transitional work. The triforium (open arcade) above the door is unique in Scottish medieval architecture. It is flanked by twin towers decorated with blind arcading. The cruciform church measured long by wide. What remains of it today are the sacristy, added by Abbot Paniter in the 15th century, the southern transept, which features Scotland's largest lancet windows, part of the choir and presbytery, the southern half of the nave, parts of the western towers and the western doorway.
The church originally had a central tower and (probably) a spire. These would once have been visible for many miles over the surrounding countryside, and no doubt once acted as a sea-mark for ships. The soft sandstone of the walls was originally protected by plaster internally and render externally. These coatings are long gone and much of the architectural detail is sadly eroded, though detached fragments found in the ruins during consolidation give an impression of the original refined, rather austere, architectural effect.
The distinctive round window high in the south transept was originally lit up at night as a beacon for mariners. It is known locally as the 'Round O', and from this tradition inhabitants of Arbroath are colloquially known as 'Reid Lichties' (Scots reid = red).
Little remains of the claustral buildings of the Abbey except for the impressive gatehouse, which stretches between the south-west corner of the church and a defensive tower on the High Street, and the still complete Abbot's House, a building of the 13th, 15th and 16th centuries, which is the best-preserved of its type in Scotland.
In the summer of 2001 a new visitors' centre was opened to the public beside the Abbey's west front. This red sandstone-clad building, with its distinctive 'wave-shaped' organic roof, planted with sedum, houses displays on the history of the Abbey and some of the best surviving stonework and other relics. The upper storey features a scale model of the Abbey complex, a computer-generated 'fly-through' reconstruction of the church as it was when complete, and a viewing gallery with excellent views of the ruins. The centre won the 2002 Angus Design Award. An archaeological investigation of the site of the visitors' centre before building started revealed the foundations of the medieval precinct wall, with a gateway, and stonework discarded during manufacture, showing that the area was the site of the masons' yard while the Abbey was being built.
Accounting
Accounting or accountancy is the measurement, processing, and communication of financial and non-financial information about economic entities such as businesses and corporations. Accounting, which has been called the "language of business", measures the results of an organization's economic activities and conveys this information to a variety of users, including investors, creditors, management, and regulators. Practitioners of accounting are known as accountants. The terms "accounting" and "financial reporting" are often used as synonyms.
Accounting can be divided into several fields including financial accounting, management accounting, external auditing, tax accounting and cost accounting. Accounting information systems are designed to support accounting functions and related activities. Financial accounting focuses on the reporting of an organization's financial information, including the preparation of financial statements, to the external users of the information, such as investors, regulators and suppliers; and management accounting focuses on the measurement, analysis and reporting of information for internal use by management. The recording of financial transactions, so that summaries of the financials may be presented in financial reports, is known as bookkeeping, of which double-entry bookkeeping is the most common system.
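The defining invariant of double-entry bookkeeping is that every transaction is recorded twice, as a debit to one account and an equal credit to another, so the ledger as a whole always balances. A minimal Python sketch of the idea (the class and account names are illustrative, not a real accounting API):

```python
from collections import defaultdict

class Ledger:
    """Toy double-entry ledger: debits are positive, credits negative."""

    def __init__(self):
        self.balances = defaultdict(int)  # account name -> balance in cents

    def post(self, debit_account, credit_account, amount_cents):
        """Record one transaction as a matched debit/credit pair."""
        if amount_cents <= 0:
            raise ValueError("amount must be positive")
        self.balances[debit_account] += amount_cents
        self.balances[credit_account] -= amount_cents

    def trial_balance(self):
        """Sum of all balances; always zero for a well-formed ledger."""
        return sum(self.balances.values())

ledger = Ledger()
ledger.post("Cash", "Sales Revenue", 50_000)  # sell goods for $500.00
ledger.post("Inventory", "Cash", 20_000)      # restock for $200.00
assert ledger.trial_balance() == 0            # debits always equal credits
```

Because every posting touches two accounts with opposite signs, a single-sided entry is structurally impossible, which is what makes the system self-checking.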
Even though accounting has existed in various forms and levels of sophistication throughout many human societies, the double-entry accounting system in use today was developed in medieval Europe, particularly in Venice, and is usually attributed to the Italian mathematician and Franciscan friar Luca Pacioli. Today, accounting is facilitated by organizations such as standard-setters, accounting firms and professional bodies. Financial statements are usually audited by accounting firms, and are prepared in accordance with generally accepted accounting principles (GAAP). GAAP is set by various standard-setting organizations such as the Financial Accounting Standards Board (FASB) in the United States and the Financial Reporting Council in the United Kingdom. As of 2012, "all major economies" have plans to converge towards or adopt the International Financial Reporting Standards (IFRS).
The history of accounting is thousands of years old and can be traced to ancient civilizations. The early development of accounting dates back to ancient Mesopotamia, and is closely related to developments in writing, counting and money; there is also evidence of early forms of bookkeeping in ancient Iran, and early auditing systems by the ancient Egyptians and Babylonians. By the time of Emperor Augustus, the Roman government had access to detailed financial information.
Double-entry bookkeeping was pioneered in the Jewish community of the early-medieval Middle East and was further refined in medieval Europe. With the development of joint-stock companies, accounting split into financial accounting and management accounting.
The first published work on a double-entry bookkeeping system was the "Summa de arithmetica", published in Italy in 1494 by Luca Pacioli (the "Father of Accounting"). Accounting began to transition into an organized profession in the nineteenth century, with local professional bodies in England merging to form the Institute of Chartered Accountants in England and Wales in 1880.
Both the words accounting and accountancy were in use in Great Britain by the mid-1800s, and are derived from the words "accompting" and "accountantship" used in the 18th century. In Middle English (used roughly between the 12th and the late 15th century) the verb "to account" had the form "accounten", which was derived from the Old French word "aconter", which is in turn related to the Vulgar Latin word "computare", meaning "to reckon". The base of "computare" is "putare", which "variously meant to prune, to purify, to correct an account, hence, to count or calculate, as well as to think."
The word "accountant" is derived from the French word , which is also derived from the Italian and Latin word . The word was formerly written in English as "accomptant", but in process of time the word, which was always pronounced by dropping the "p", became gradually changed both in pronunciation and in orthography to its present form.
Accounting has variously been defined as the keeping or preparation of the financial records of an entity, the analysis, verification and reporting of such records and "the principles and procedures of accounting"; it also refers to the job of being an accountant.
Accountancy refers to the occupation or profession of an accountant, particularly in British English.
Accounting has several subfields or subject areas, including financial accounting, management accounting, auditing, taxation and accounting information systems.
Financial accounting focuses on the reporting of an organization's financial information to external users of the information, such as investors, potential investors and creditors. It calculates and records business transactions and prepares financial statements for the external users in accordance with generally accepted accounting principles (GAAP). GAAP, in turn, arises from the wide agreement between accounting theory and practice, and changes over time to meet the needs of decision-makers.
Financial accounting produces past-oriented reports—for example the financial statements prepared in 2006 reports on performance in 2005—on an annual or quarterly basis, generally about the organization as a whole.
This branch of accounting is also studied as part of the board exams for qualifying as an actuary. Accountants and actuaries have sometimes been characterized as professional rivals.
Management accounting focuses on the measurement, analysis and reporting of information that can help managers in making decisions to fulfill the goals of an organization. In management accounting, internal measures and reports are based on cost-benefit analysis, and are not required to follow the generally accepted accounting principle (GAAP). In 2014 CIMA created the Global Management Accounting Principles (GMAPs). The result of research from across 20 countries in five continents, the principles aim to guide best practice in the discipline.
Management accounting produces future-oriented reports—for example the budget for 2006 is prepared in 2005—and the time span of reports varies widely. Such reports may include both financial and non-financial information, and may, for example, focus on specific products and departments.
Auditing is the verification of assertions made by others regarding a payoff, and in the context of accounting it is the "unbiased examination and evaluation of the financial statements of an organization". Auditing is a systematic and conventional professional service.
An audit of financial statements aims to express or disclaim an independent opinion on the financial statements. The auditor expresses an independent opinion on the fairness with which the financial statements present the financial position, results of operations, and cash flows of an entity, in accordance with generally accepted accounting principles (GAAP) and "in all material respects". An auditor is also required to identify circumstances in which generally accepted accounting principles (GAAP) have not been consistently observed.
An accounting information system is a part of an organization's information system that focuses on processing accounting data.
Many corporations use artificial intelligence-based information systems. The banking and finance industry uses AI for fraud detection, the retail industry uses AI for customer service, and AI is also used in the cybersecurity industry. These systems involve computer hardware and software and make use of statistics and modeling.
Tax accounting in the United States concentrates on the preparation, analysis and presentation of tax payments and tax returns. The U.S. tax system requires the use of specialised accounting principles for tax purposes which can differ from the generally accepted accounting principles (GAAP) for financial reporting. U.S. tax law covers four basic forms of business ownership: sole proprietorship, partnership, corporation, and limited liability company. Corporate and personal income are taxed at different rates, both varying according to income levels and including varying marginal rates (taxed on each additional dollar of income) and average rates (set as a percentage of overall income).
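The distinction between marginal and average rates mentioned above can be made concrete with a small calculation. The bracket boundaries and rates below are hypothetical, purely for illustration, and do not reflect any jurisdiction's actual tax schedule.

```python
def tax_owed(income, brackets):
    """Progressive tax: each bracket's marginal rate applies only to
    the slice of income falling within that bracket."""
    owed = 0.0
    for lower, upper, rate in brackets:
        if income > lower:
            owed += (min(income, upper) - lower) * rate
    return owed

# Hypothetical brackets: (lower bound, upper bound, marginal rate)
BRACKETS = [
    (0,      10_000,       0.10),
    (10_000, 40_000,       0.20),
    (40_000, float("inf"), 0.30),
]

income = 50_000
owed = tax_owed(income, BRACKETS)  # 10000*0.10 + 30000*0.20 + 10000*0.30 = 10000
average_rate = owed / income       # 0.20, lower than the 0.30 marginal rate
```

Each additional dollar of this taxpayer's income is taxed at the 30% marginal rate, while the average rate over all income is only 20%, which is why the two figures are reported separately.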
Forensic accounting is a specialty practice area of accounting that describes engagements that result from actual or anticipated disputes or litigation. "Forensic" means "suitable for use in a court of law," and it is to that standard and potential outcome that forensic accountants generally have to work.
Professional accounting bodies include the American Institute of Certified Public Accountants (AICPA) and the other 179 members of the International Federation of Accountants (IFAC), including the Institute of Chartered Accountants of Scotland (ICAS), CPA Australia, the Institute of Chartered Accountants of India, the Association of Chartered Certified Accountants (ACCA) and the Institute of Chartered Accountants in England and Wales (ICAEW). Professional bodies for subfields of the accounting professions also exist, for example the Chartered Institute of Management Accountants (CIMA) in the UK and the Institute of Management Accountants in the United States. Many of these professional bodies offer education and training including qualification and administration for various accounting designations, such as certified public accountant (AICPA) and chartered accountant.
Depending on its size, a company may be legally required to have its financial statements audited by a qualified auditor, and audits are usually carried out by accounting firms.
Accounting firms grew in the United States and Europe in the late nineteenth and early twentieth century, and through several mergers there were large international accounting firms by the mid-twentieth century. Further large mergers in the late twentieth century led to the dominance of the auditing market by the "Big Five" accounting firms: Arthur Andersen, Deloitte, Ernst & Young, KPMG and PricewaterhouseCoopers. The demise of Arthur Andersen following the Enron scandal reduced the Big Five to the Big Four.
Generally accepted accounting principles (GAAP) are accounting standards issued by national regulatory bodies. In addition, the International Accounting Standards Board (IASB) issues the International Financial Reporting Standards (IFRS), implemented by 147 countries. Standards for international audit and assurance, ethics, education, and public sector accounting are all set by independent standard-setting boards supported by IFAC. The International Auditing and Assurance Standards Board sets international standards for auditing, assurance, and quality control; the International Ethics Standards Board for Accountants (IESBA) sets the internationally appropriate principles-based "Code of Ethics for Professional Accountants"; the International Accounting Education Standards Board (IAESB) sets professional accounting education standards; and the International Public Sector Accounting Standards Board (IPSASB) sets accrual-based international public sector accounting standards.
Organizations in individual countries may issue accounting standards unique to the countries. For example, in the United States the Financial Accounting Standards Board (FASB) issues the Statements of Financial Accounting Standards, which form the basis of US GAAP, and in the United Kingdom the Financial Reporting Council (FRC) sets accounting standards. However, as of 2012 "all major economies" have plans to converge towards or adopt the IFRS.
At least a bachelor's degree in accounting or a related field is required for most accountant and auditor job positions, and some employers prefer applicants with a master's degree. A degree in accounting may also be required for, or may be used to fulfill the requirements for, membership to professional accounting bodies. For example, the education during an accounting degree can be used to fulfill the American Institute of CPA's (AICPA) 150 semester hour requirement, and associate membership with the Certified Public Accountants Association of the UK is available after gaining a degree in finance or accounting.
A doctorate is required in order to pursue a career in accounting academia, for example to work as a university professor in accounting. The Doctor of Philosophy (PhD) and the Doctor of Business Administration (DBA) are the most popular degrees. The PhD is the most common degree for those wishing to pursue a career in academia, while DBA programs generally focus on equipping business executives for business or public careers requiring research skills and qualifications.
Professional accounting qualifications include the Chartered Accountant designations and other qualifications including certificates and diplomas.
In Scotland, chartered accountants of ICAS undergo Continuous Professional Development and abide by the ICAS code of ethics. In England and Wales, chartered accountants of the ICAEW undergo annual training, and are bound by the ICAEW's code of ethics and subject to its disciplinary procedures.
In the United States, the requirements for joining the AICPA as a Certified Public Accountant are set by the Board of Accountancy of each state, and members agree to abide by the AICPA's Code of Professional Conduct and Bylaws.
The ACCA is the largest global accountancy body with over 320,000 members, and the organisation provides an "IFRS stream" and a "UK stream". Students must pass a total of 14 exams, which are arranged across three papers.
Accounting research is research into the effects of economic events on the process of accounting, the effects of reported information on economic events, and the roles of accounting in organizations and society. It encompasses a broad range of research areas including financial accounting, management accounting, auditing and taxation.
Accounting research is carried out both by academic researchers and practicing accountants. Methodologies in academic accounting research include archival research, which examines "objective data collected from repositories"; experimental research, which examines data "the researcher gathered by administering treatments to subjects"; analytical research, which is "based on the act of formally modeling theories or substantiating ideas in mathematical terms"; interpretive research, which emphasizes the role of language, interpretation and understanding in accounting practice, "highlighting the symbolic structures and taken-for-granted themes which pattern the world in distinct ways"; critical research, which emphasizes the role of power and conflict in accounting practice; case studies; computer simulation; and field research.
Empirical studies document that leading accounting journals publish in total fewer research articles than comparable journals in economics and other business disciplines, and consequently, accounting scholars are relatively less successful in academic publishing than their business school peers. Due to different publication rates between accounting and other business disciplines, a recent study based on academic author rankings concludes that the competitive value of a single publication in a top-ranked journal is highest in accounting and lowest in marketing.
Many accounting practices have been simplified with the help of computer-based accounting software. An enterprise resource planning (ERP) system is commonly used by large organisations; it provides a comprehensive, centralized, integrated source of information that companies can use to manage all major business processes, from purchasing to manufacturing to human resources.
Accounting information systems have reduced the cost of accumulating, storing, and reporting managerial accounting information and have made it possible to produce a more detailed account of all data that is entered into any given system.
The year 2001 witnessed a series of financial information frauds involving Enron, auditing firm Arthur Andersen, the telecommunications company WorldCom, Qwest and Sunbeam, among other well-known corporations. These problems highlighted the need to review the effectiveness of accounting standards, auditing regulations and corporate governance principles. In some cases, management manipulated the figures shown in financial reports to indicate a better economic performance. In others, tax and regulatory incentives encouraged over-leveraging of companies and decisions to bear extraordinary and unjustified risk.
The Enron scandal deeply influenced the development of new regulations to improve the reliability of financial reporting, and increased public awareness about the importance of having accounting standards that show the financial reality of companies and the objectivity and independence of auditing firms.
In addition to being the largest bankruptcy reorganization in American history, the Enron scandal is arguably the biggest audit failure. It involved the Enron Corporation and its auditor Arthur Andersen, and was revealed in late 2001. The scandal caused the dissolution of Arthur Andersen, which at the time was one of the five largest accounting firms in the world. After a series of revelations involving irregular accounting procedures conducted throughout the 1990s, Enron filed for Chapter 11 bankruptcy protection in December 2001.
One consequence of these events was the passage of the Sarbanes–Oxley Act in the United States in 2002, as a result of the first admissions of fraudulent behavior made by Enron. The act significantly raises criminal penalties for securities fraud, and for destroying, altering or fabricating records in federal investigations or any scheme or attempt to defraud shareholders.
An accounting error is an unintentional error in an accounting entry, often immediately fixed when spotted. An accounting error should not be confused with fraud, which is an intentional act to hide or alter entries. | https://en.wikipedia.org/wiki?curid=2593 |
Ant
Ants are eusocial insects of the family Formicidae and, along with the related wasps and bees, belong to the order Hymenoptera. Ants appear in the fossil record across the globe in considerable diversity during the latest Early Cretaceous and early Late Cretaceous, suggesting an earlier origin. Ants evolved from vespoid wasp ancestors in the Cretaceous period, and diversified after the rise of flowering plants. More than 12,500 of an estimated total of 22,000 species have been classified. They are easily identified by their elbowed antennae and the distinctive node-like structure that forms their slender waists.
Ants form colonies that range in size from a few dozen predatory individuals living in small natural cavities to highly organised colonies that may occupy large territories and consist of millions of individuals. Larger colonies consist of various castes of sterile, wingless females, most of which are workers (ergates), as well as soldiers (dinergates) and other specialised groups. Nearly all ant colonies also have some fertile males called "drones" (aner) and one or more fertile females called "queens" (gynes). The colonies are described as superorganisms because the ants appear to operate as a unified entity, collectively working together to support the colony.
Ants have colonised almost every landmass on Earth. The only places lacking indigenous ants are Antarctica and a few remote or inhospitable islands. Ants thrive in most ecosystems and may form 15–25% of the terrestrial animal biomass. Their success in so many environments has been attributed to their social organisation and their ability to modify habitats, tap resources, and defend themselves. Their long co-evolution with other species has led to mimetic, commensal, parasitic, and mutualistic relationships.
Ant societies have division of labour, communication between individuals, and an ability to solve complex problems. These parallels with human societies have long been an inspiration and subject of study. Many human cultures make use of ants in cuisine, medication, and rituals. Some species are valued in their role as biological pest control agents. Their ability to exploit resources may bring ants into conflict with humans, however, as they can damage crops and invade buildings. Some species, such as the red imported fire ant ("Solenopsis invicta"), are regarded as invasive species, establishing themselves in areas where they have been introduced accidentally.
The word "ant" and its chiefly dialectal form "emmet" come from ', ' of Middle English, which come from ' of Old English, and these are all related to the dialectal Dutch ' and the Old High German ', from which comes the modern German '. All of these words come from West Germanic "*", and the original meaning of the word was "the biter" (from Proto-Germanic ', "off, away" + ' "cut"). The family name Formicidae is derived from the Latin ' ("ant") from which the words in other Romance languages, such as the Portuguese ', Italian ', Spanish ', Romanian ', and French ' are derived. It has been hypothesised that a Proto-Indo-European word *morwi- was used, cf. Sanskrit vamrah, Latin formīca, Greek μύρμηξ "mýrmēx", Old Church Slavonic "mraviji", Old Irish "moirb", Old Norse "maurr", Dutch "mier".
The family Formicidae belongs to the order Hymenoptera, which also includes sawflies, bees, and wasps. Ants evolved from a lineage within the stinging wasps, and a 2013 study suggests that they are a sister group of the Apoidea. In 1966, E. O. Wilson and his colleagues identified the fossil remains of an ant ("Sphecomyrma") that lived in the Cretaceous period. The specimen, trapped in amber dating back to around 92 million years ago, has features found in some wasps, but not found in modern ants. "Sphecomyrma" was possibly a ground forager, while "Haidomyrmex" and "Haidomyrmodes", related genera in subfamily Sphecomyrminae, are reconstructed as active arboreal predators. Older ants in the genus "Sphecomyrmodes" have been found in 99 million year-old amber from Myanmar. A 2006 study suggested that ants arose tens of millions of years earlier than previously thought, up to 168 million years ago. After the rise of flowering plants about 100 million years ago they diversified and assumed ecological dominance around 60 million years ago. Some groups, such as the Leptanillinae and Martialinae, are suggested to have diversified from early primitive ants that were likely to have been predators underneath the surface of the soil.
During the Cretaceous period, a few species of primitive ants ranged widely on the Laurasian supercontinent (the Northern Hemisphere). They were scarce in comparison to the populations of other insects, representing only about 1% of the entire insect population. Ants became dominant after adaptive radiation at the beginning of the Paleogene period. By the Oligocene and Miocene, ants had come to represent 20–40% of all insects found in major fossil deposits. Of the species that lived in the Eocene epoch, around one in 10 genera survive to the present. Genera surviving today comprise 56% of the genera in Baltic amber fossils (early Oligocene), and 92% of the genera in Dominican amber fossils (apparently early Miocene).
Termites live in colonies and are sometimes called "white ants", but termites are not ants. They are the sub-order Isoptera, and together with cockroaches they form the order Blattodea. Blattodeans are related to mantids, crickets, and other winged insects that do not undergo full metamorphosis. Like ants, termites are eusocial, with sterile workers, but they differ greatly in the genetics of reproduction. The similarity of their social structure to that of ants is attributed to convergent evolution. Velvet ants look like large ants, but are wingless female wasps.
Ants are found on all continents except Antarctica, and only a few large islands, such as Greenland, Iceland, parts of Polynesia and the Hawaiian Islands, lack native ant species. Ants occupy a wide range of ecological niches and exploit many different food resources as direct or indirect herbivores, predators and scavengers. Most ant species are omnivorous generalists, but a few are specialist feeders. Their ecological dominance is demonstrated by their biomass: ants are estimated to contribute 15–20% (on average, and nearly 25% in the tropics) of terrestrial animal biomass, exceeding that of the vertebrates.
Ants range in size from , the largest species being the fossil "Titanomyrma giganteum", the queen of which was long with a wingspan of . Ants vary in colour; most ants are red or black, but a few species are green and some tropical species have a metallic lustre. More than 12,000 species are currently known (with upper estimates of the potential existence of about 22,000) (see the article List of ant genera), with the greatest diversity in the tropics. Taxonomic studies continue to resolve the classification and systematics of ants. Online databases of ant species, including AntBase and the Hymenoptera Name Server, help to keep track of the known and newly described species. The relative ease with which ants may be sampled and studied in ecosystems has made them useful as indicator species in biodiversity studies.
Ants are distinct in their morphology from other insects in having elbowed antennae, metapleural glands, and a strong constriction of their second abdominal segment into a node-like petiole. The head, mesosoma, and metasoma are the three distinct body segments (formally tagmata). The petiole forms a narrow waist between their mesosoma (thorax plus the first abdominal segment, which is fused to it) and gaster (abdomen less the abdominal segments in the petiole). The petiole may be formed by one or two nodes (the second alone, or the second and third abdominal segments).
Like other insects, ants have an exoskeleton, an external covering that provides a protective casing around the body and a point of attachment for muscles, in contrast to the internal skeletons of humans and other vertebrates. Insects do not have lungs; oxygen and other gases, such as carbon dioxide, pass through their exoskeleton via tiny valves called spiracles. Insects also lack closed blood vessels; instead, they have a long, thin, perforated tube along the top of the body (called the "dorsal aorta") that functions like a heart, and pumps haemolymph toward the head, thus driving the circulation of the internal fluids. The nervous system consists of a ventral nerve cord that runs the length of the body, with several ganglia and branches along the way reaching into the extremities of the appendages.
An ant's head contains many sensory organs. Like most insects, ants have compound eyes made from numerous tiny lenses attached together. Ant eyes are good for acute movement detection, but do not offer a high resolution image. They also have three small ocelli (simple eyes) on the top of the head that detect light levels and polarization. Compared to vertebrates, ants tend to have blurrier eyesight, particularly in smaller species, and a few subterranean taxa are completely blind. However, some ants, such as Australia's bulldog ant, have excellent vision and are capable of discriminating the distance and size of objects moving nearly a metre away.
Two antennae ("feelers") are attached to the head; these organs detect chemicals, air currents, and vibrations; they also are used to transmit and receive signals through touch. The head has two strong jaws, the mandibles, used to carry food, manipulate objects, construct nests, and for defence. In some species, a small pocket (infrabuccal chamber) inside the mouth stores food, so it may be passed to other ants or their larvae.
Both the legs and wings of the ant are attached to the mesosoma ("thorax"). The legs terminate in a hooked claw which allows them to hook on and climb surfaces. Only reproductive ants (queens and males) have wings. Queens shed their wings after the nuptial flight, leaving visible stubs, a distinguishing feature of queens. In a few species, wingless queens (ergatoids) and males occur.
The metasoma (the "abdomen") of the ant houses important internal organs, including those of the reproductive, respiratory (tracheae), and excretory systems. Workers of many species have their egg-laying structures modified into stings that are used for subduing prey and defending their nests.
In the colonies of a few ant species, there are physical castes—workers in distinct size-classes, called minor, median, and major ergates. Often, the larger ants have disproportionately larger heads, and correspondingly stronger mandibles. These are known as macrergates while smaller workers are known as micrergates. Although formally known as dinergates, such individuals are sometimes called "soldier" ants because their stronger mandibles make them more effective in fighting, although they still are workers and their "duties" typically do not vary greatly from the minor or median workers. In a few species, the median workers are absent, creating a sharp divide between the minors and majors. Weaver ants, for example, have a distinct bimodal size distribution. Some other species show continuous variation in the size of workers. The smallest and largest workers in "Pheidologeton diversus" show nearly a 500-fold difference in their dry-weights.
Workers cannot mate; however, because of the haplodiploid sex-determination system in ants, workers of a number of species can lay unfertilised eggs that become fully fertile, haploid males. The role of workers may change with their age and in some species, such as honeypot ants, young workers are fed until their gasters are distended, and act as living food storage vessels. These food storage workers are called "repletes". For instance, these replete workers develop in the North American honeypot ant "Myrmecocystus mexicanus". Usually the largest workers in the colony develop into repletes; and, if repletes are removed from the colony, other workers become repletes, demonstrating the flexibility of this particular polymorphism. This polymorphism in morphology and behaviour of workers initially was thought to be determined by environmental factors such as nutrition and hormones that led to different developmental paths; however, genetic differences between worker castes have been noted in "Acromyrmex" sp. These polymorphisms are caused by relatively small genetic changes; differences in a single gene of "Solenopsis invicta" can decide whether the colony will have single or multiple queens. The Australian jack jumper ant ("Myrmecia pilosula") has only a single pair of chromosomes (with the males having just one chromosome as they are haploid), the lowest number known for any animal, making it an interesting subject for studies in the genetics and developmental biology of social insects.
The life of an ant starts from an egg; if the egg is fertilised, the progeny will be a diploid female, if not, a haploid male. Ants develop by complete metamorphosis, with the larval stage passing through a pupal stage before emerging as an adult. The larva is largely immobile and is fed and cared for by workers. Food is given to the larvae by trophallaxis, a process in which an ant regurgitates liquid food held in its crop. This is also how adults share food, stored in the "social stomach". Larvae, especially in the later stages, may also be provided solid food, such as trophic eggs, pieces of prey, and seeds brought by workers.
The larvae grow through a series of four or five moults and enter the pupal stage. The pupa has the appendages free and not fused to the body as in a butterfly pupa. The differentiation into queens and workers (which are both female), and different castes of workers, is influenced in some species by the nutrition the larvae obtain. Genetic influences and the control of gene expression by the developmental environment are complex and the determination of caste continues to be a subject of research. Winged male ants, called drones, emerge from pupae along with the usually winged breeding females. Some species, such as army ants, have wingless queens. Larvae and pupae need to be kept at fairly constant temperatures to ensure proper development, and so often are moved around among the various brood chambers within the colony.
A new ergate spends the first few days of its adult life caring for the queen and young. She then graduates to digging and other nest work, and later to defending the nest and foraging. These changes are sometimes fairly sudden, and define what are called temporal castes. An explanation for the sequence is suggested by the high casualties involved in foraging, making it an acceptable risk only for ants who are older and are likely to die soon of natural causes.
Ant colonies can be long-lived. The queens can live for up to 30 years, and workers live from 1 to 3 years. Males, however, are more transitory, being quite short-lived and surviving for only a few weeks. Ant queens are estimated to live 100 times as long as solitary insects of a similar size.
Ants are active all year long in the tropics, but, in cooler regions, they survive the winter in a state of dormancy known as hibernation. The forms of inactivity are varied and some temperate species have larvae going into the inactive state (diapause), while in others, the adults alone pass the winter in a state of reduced activity.
A wide range of reproductive strategies have been noted in ant species. Females of many species are known to be capable of reproducing asexually through thelytokous parthenogenesis. Secretions from the male accessory glands in some species can plug the female genital opening and prevent females from re-mating. Most ant species have a system in which only the queen and breeding females have the ability to mate. Contrary to popular belief, some ant nests have multiple queens, while others may exist without queens. Workers with the ability to reproduce are called "gamergates" and colonies that lack queens are then called gamergate colonies; colonies with queens are said to be queen-right.
Drones can also mate with existing queens by entering a foreign colony. When the drone is initially attacked by the workers, it releases a mating pheromone. If recognized as a mate, it will be carried to the queen to mate. Males may also patrol the nest and fight others by grabbing them with their mandibles, piercing their exoskeleton and then marking them with a pheromone. The marked male is interpreted as an invader by worker ants and is killed.
Most ants are univoltine, producing a new generation each year. During the species-specific breeding period, winged females and winged males, known to entomologists as alates, leave the colony in what is called a nuptial flight. The nuptial flight usually takes place in the late spring or early summer when the weather is hot and humid. Heat makes flying easier and freshly fallen rain makes the ground softer for mated queens to dig nests. Males typically take flight before the females. Males then use visual cues to find a common mating ground, for example, a landmark such as a pine tree to which other males in the area converge. Males secrete a mating pheromone that females follow. Males will mount females in the air, but the actual mating process usually takes place on the ground. Females of some species mate with just one male but in others they may mate with as many as ten or more different males, storing the sperm in their spermathecae.
Mated females then seek a suitable place to begin a colony. There, they break off their wings and begin to lay and care for eggs. The females can selectively fertilise future eggs with the sperm stored to produce diploid workers or lay unfertilized haploid eggs to produce drones. The first workers to hatch are known as nanitics, and are weaker and smaller than later workers, but they begin to serve the colony immediately. They enlarge the nest, forage for food, and care for the other eggs. Species that have multiple queens may have a queen leaving the nest along with some workers to found a colony at a new site, a process akin to swarming in honeybees.
Ants communicate with each other using pheromones, sounds, and touch. The use of pheromones as chemical signals is more developed in ants, such as the red harvester ant, than in other hymenopteran groups. Like other insects, ants perceive smells with their long, thin, and mobile antennae. The paired antennae provide information about the direction and intensity of scents. Since most ants live on the ground, they use the soil surface to leave pheromone trails that may be followed by other ants. In species that forage in groups, a forager that finds food marks a trail on the way back to the colony; this trail is followed by other ants, these ants then reinforce the trail when they head back with food to the colony. When the food source is exhausted, no new trails are marked by returning ants and the scent slowly dissipates. This behaviour helps ants deal with changes in their environment. For instance, when an established path to a food source is blocked by an obstacle, the foragers leave the path to explore new routes. If an ant is successful, it leaves a new trail marking the shortest route on its return. Successful trails are followed by more ants, reinforcing better routes and gradually identifying the best path.
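The trail-laying behaviour described above amounts to a positive-feedback loop: routes that are reinforced faster attract more ants, while unused trails evaporate. A minimal sketch of that dynamic (an illustration, not a biological model; route names, deposit rates, and the evaporation constant are all assumed values) shows how a shorter route comes to dominate:

```python
import random

# Two routes to the same food source, lengths 1 and 2 (arbitrary units).
# Ants pick a route with probability proportional to its pheromone level;
# shorter round trips deposit pheromone more often, and evaporation erases
# trails that stop being reinforced.
random.seed(0)
pheromone = {"short": 1.0, "long": 1.0}
lengths = {"short": 1, "long": 2}
EVAPORATION = 0.05  # fraction of pheromone lost per time step (assumed)

for step in range(2000):
    total = sum(pheromone.values())
    route = "short" if random.random() < pheromone["short"] / total else "long"
    # Deposit is inversely proportional to route length: a shorter route
    # supports more round trips per unit time, so it gains pheromone faster.
    pheromone[route] += 1.0 / lengths[route]
    for r in pheromone:
        pheromone[r] *= (1 - EVAPORATION)

# The feedback loop concentrates traffic on the shorter route.
print(pheromone["short"] > pheromone["long"])  # True
```

The same feedback-plus-evaporation structure is what lets real colonies adapt when a path is blocked: the old trail fades while newly successful detours are reinforced.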
Ants use pheromones for more than just making trails. A crushed ant emits an alarm pheromone that sends nearby ants into an attack frenzy and attracts more ants from farther away. Several ant species even use "propaganda pheromones" to confuse enemy ants and make them fight among themselves. Pheromones are produced by a wide range of structures including Dufour's glands, poison glands and glands on the hindgut, pygidium, rectum, sternum, and hind tibia. Pheromones also are exchanged, mixed with food, and passed by trophallaxis, transferring information within the colony. This allows other ants to detect what task group (e.g., foraging or nest maintenance) other colony members belong to. In ant species with queen castes, when the dominant queen stops producing a specific pheromone, workers begin to raise new queens in the colony.
Some ants produce sounds by stridulation, using the gaster segments and their mandibles. Sounds may be used to communicate with colony members or with other species.
Ants attack and defend themselves by biting and, in many species, by stinging, often injecting or spraying chemicals, such as formic acid in the case of formicine ants, alkaloids and piperidines in fire ants, and a variety of protein components in other ants. Bullet ants ("Paraponera"), located in Central and South America, are considered to have the most painful sting of any insect, although it is usually not fatal to humans. This sting is given the highest rating on the Schmidt sting pain index.
The sting of jack jumper ants can be fatal, and an antivenom has been developed for it.
Fire ants, "Solenopsis" spp., are unique in having a venom sac containing piperidine alkaloids. Their stings are painful and can be dangerous to hypersensitive people.
Trap-jaw ants of the genus "Odontomachus" are equipped with mandibles called trap-jaws, which snap shut faster than any other predatory appendages within the animal kingdom. One study of "Odontomachus bauri" recorded exceptionally high peak strike speeds, with the jaws closing within 130 microseconds on average.
The ants were also observed to use their jaws as a catapult to eject intruders or fling themselves backward to escape a threat. Before striking, the ant opens its mandibles extremely widely and locks them in this position by an internal mechanism. Energy is stored in a thick band of muscle and explosively released when triggered by the stimulation of sensory organs resembling hairs on the inside of the mandibles. The mandibles also permit slow and fine movements for other tasks. Trap-jaws also are seen in the following genera: "Anochetus", "Orectognathus", and "Strumigenys", plus some members of the Dacetini tribe, which are viewed as examples of convergent evolution.
A Malaysian species of ant in the "Camponotus" "cylindricus" group has enlarged mandibular glands that extend into their gaster. If combat takes a turn for the worse, a worker may perform a final act of suicidal altruism by rupturing the membrane of its gaster, causing the content of its mandibular glands to burst from the anterior region of its head, spraying a poisonous, corrosive secretion containing acetophenones and other chemicals that immobilise small insect attackers. The worker subsequently dies.
Suicidal defences by workers are also noted in a Brazilian ant, "Forelius pusillus", where a small group of ants leaves the security of the nest after sealing the entrance from the outside each evening.
In addition to defence against predators, ants need to protect their colonies from pathogens. Some worker ants maintain the hygiene of the colony and their activities include undertaking or "necrophory", the disposal of dead nest-mates. Oleic acid has been identified as the compound released from dead ants that triggers necrophoric behaviour in "Atta mexicana" while workers of "Linepithema humile" react to the absence of characteristic chemicals (dolichodial and iridomyrmecin) present on the cuticle of their living nestmates to trigger similar behaviour.
Nests may be protected from physical threats such as flooding and overheating by elaborate nest architecture. Workers of "Cataulacus muticus", an arboreal species that lives in plant hollows, respond to flooding by drinking water inside the nest, and excreting it outside. "Camponotus anderseni", which nests in the cavities of wood in mangrove habitats, deals with submergence under water by switching to anaerobic respiration.
Many animals can learn behaviours by imitation, but ants may be the only group apart from mammals where interactive teaching has been observed. A knowledgeable forager of "Temnothorax albipennis" can lead a naïve nest-mate to newly discovered food by the process of tandem running. The follower obtains knowledge through its leading tutor. The leader is acutely sensitive to the progress of the follower and slows down when the follower lags and speeds up when the follower gets too close.
Controlled experiments with colonies of "Cerapachys biroi" suggest that an individual may choose nest roles based on her previous experience. An entire generation of identical workers was divided into two groups whose outcome in food foraging was controlled. One group was continually rewarded with prey, while it was made certain that the other failed. As a result, members of the successful group intensified their foraging attempts while the unsuccessful group ventured out fewer and fewer times. A month later, the successful foragers continued in their role while the others had moved to specialise in brood care.
Complex nests are built by many ant species, but other species are nomadic and do not build permanent structures. Ants may form subterranean nests or build them on trees. These nests may be found in the ground, under stones or logs, inside logs, hollow stems, or even acorns. The materials used for construction include soil and plant matter, and ants carefully select their nest sites; "Temnothorax albipennis" will avoid sites with dead ants, as these may indicate the presence of pests or disease. They are quick to abandon established nests at the first sign of threats.
The army ants of South America, such as the "Eciton burchellii" species, and the driver ants of Africa do not build permanent nests, but instead, alternate between nomadism and stages where the workers form a temporary nest (bivouac) from their own bodies, by holding each other together.
Weaver ant ("Oecophylla" spp.) workers build nests in trees by attaching leaves together, first pulling them together with bridges of workers and then inducing their larvae to produce silk as they are moved along the leaf edges. Similar forms of nest construction are seen in some species of "Polyrhachis".
"Formica polyctena", among other ant species, constructs nests that maintain a relatively constant interior temperature that aids in the development of larvae. The ants maintain the nest temperature by choosing the location, nest materials, controlling ventilation and maintaining the heat from solar radiation, worker activity and metabolism, and in some moist nests, microbial activity in the nest materials.
Some ant species, such as those that use natural cavities, can be opportunistic and make use of the controlled micro-climate provided inside human dwellings and other artificial structures to house their colonies and nest structures.
Most ants are generalist predators, scavengers, and indirect herbivores, but a few have evolved specialised ways of obtaining nutrition. It is believed that many ant species that engage in indirect herbivory rely on specialized symbiosis with their gut microbes to upgrade the nutritional value of the food they collect and allow them to survive in nitrogen poor regions, such as rainforest canopies. Leafcutter ants ("Atta" and "Acromyrmex") feed exclusively on a fungus that grows only within their colonies. They continually collect leaves which are taken to the colony, cut into tiny pieces and placed in fungal gardens. Ergates specialise in related tasks according to their sizes. The largest ants cut stalks, smaller workers chew the leaves and the smallest tend the fungus. Leafcutter ants are sensitive enough to recognise the reaction of the fungus to different plant material, apparently detecting chemical signals from the fungus. If a particular type of leaf is found to be toxic to the fungus, the colony will no longer collect it. The ants feed on structures produced by the fungi called "gongylidia". Symbiotic bacteria on the exterior surface of the ants produce antibiotics that kill bacteria introduced into the nest that may harm the fungi.
Foraging ants can travel considerable distances from their nest, and scent trails allow them to find their way back even in the dark. In hot and arid regions, day-foraging ants face death by desiccation, so the ability to find the shortest route back to the nest reduces that risk. Diurnal desert ants of the genus "Cataglyphis", such as the Sahara desert ant, navigate by keeping track of direction as well as distance travelled. Distances travelled are measured using an internal pedometer that keeps count of the steps taken and also by evaluating the movement of objects in their visual field (optical flow). Directions are measured using the position of the sun.
They integrate this information to find the shortest route back to their nest.
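This path integration ("dead reckoning") can be sketched as vector summation: each leg of the outbound trip contributes a displacement (heading from the sun compass, distance from the step counter), and reversing the summed displacement yields the direct home vector. The function below is a hypothetical illustration of that idea, not a model of the ant's actual neural computation:

```python
import math

def home_vector(legs):
    """legs: list of (heading_degrees, distance) tuples for the outbound trip.

    Returns the (heading, distance) of the straight-line route home,
    i.e. the reverse of the net outbound displacement.
    """
    dx = sum(d * math.cos(math.radians(h)) for h, d in legs)
    dy = sum(d * math.sin(math.radians(h)) for h, d in legs)
    heading_home = math.degrees(math.atan2(-dy, -dx)) % 360
    distance_home = math.hypot(dx, dy)
    return heading_home, distance_home

# A meandering outbound trip: 10 units at 0 deg, 10 at 90 deg, 10 at 180 deg.
# Net displacement is 10 units at 90 deg, so home lies 10 units at 270 deg.
heading, distance = home_vector([(0, 10), (90, 10), (180, 10)])
print(round(heading), round(distance))  # 270 10
```

However winding the outbound path, the summed vector always points the ant straight home, which is exactly the behaviour observed in "Cataglyphis".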
Like all ants, they can also make use of visual landmarks when available as well as olfactory and tactile cues to navigate. Some species of ant are able to use the Earth's magnetic field for navigation. The compound eyes of ants have specialised cells that detect polarised light from the Sun, which is used to determine direction.
These polarization detectors are sensitive in the ultraviolet region of the light spectrum. In some army ant species, a group of foragers who become separated from the main column may sometimes turn back on themselves and form a circular ant mill. The workers may then run around continuously until they die of exhaustion.
The female worker ants do not have wings and reproductive females lose their wings after their mating flights in order to begin their colonies. Therefore, unlike their wasp ancestors, most ants travel by walking. Some species are capable of leaping. For example, Jerdon's jumping ant ("Harpegnathos saltator") is able to jump by synchronising the action of its mid and hind pairs of legs. There are several species of gliding ant including "Cephalotes atratus"; this may be a common trait among arboreal ants with small colonies. Ants with this ability are able to control their horizontal movement so as to catch tree trunks when they fall from atop the forest canopy.
Other species of ants can form chains to bridge gaps over water, underground, or through spaces in vegetation. Some species also form floating rafts that help them survive floods. These rafts may also have a role in allowing ants to colonise islands. "Polyrhachis sokolova", a species of ant found in Australian mangrove swamps, can swim and live in underwater nests. Since they lack gills, they go to trapped pockets of air in the submerged nests to breathe.
Not all ants have the same kind of societies. The Australian bulldog ants are among the biggest and most basal of ants. Like virtually all ants, they are eusocial, but their social behaviour is poorly developed compared to other species. Each individual hunts alone, using her large eyes instead of chemical senses to find prey.
Some species (such as "Tetramorium caespitum") attack and take over neighbouring ant colonies. Others are less expansionist, but just as aggressive; they invade colonies to steal eggs or larvae, which they either eat or raise as workers or slaves. Extreme specialists among these slave-raiding ants, such as the Amazon ants, are incapable of feeding themselves and need captured workers to survive. Captured workers of enslaved "Temnothorax" species have evolved a counter-strategy, destroying just the female pupae of the slave-making "Temnothorax americanus", but sparing the males (which do not take part in slave raids as adults).
Ants identify kin and nestmates through their scent, which comes from hydrocarbon-laced secretions that coat their exoskeletons. If an ant is separated from its original colony, it will eventually lose the colony scent. Any ant that enters a colony without a matching scent will be attacked. Two separate colonies will attack each other even if they are of the same species, because the genes responsible for pheromone production differ between them. The Argentine ant, however, lacks this characteristic, due to low genetic diversity, and has become a global pest because of it.
Parasitic ant species enter the colonies of host ants and establish themselves as social parasites; species such as "Strumigenys xenos" are entirely parasitic and do not have workers, but instead, rely on the food gathered by their "Strumigenys perplexa" hosts. This form of parasitism is seen across many ant genera, but the parasitic ant is usually a species that is closely related to its host. A variety of methods are employed to enter the nest of the host ant. A parasitic queen may enter the host nest before the first brood has hatched, establishing herself prior to development of a colony scent. Other species use pheromones to confuse the host ants or to trick them into carrying the parasitic queen into the nest. Some simply fight their way into the nest.
A conflict between the sexes of a species is seen in some species of ants with these reproducers apparently competing to produce offspring that are as closely related to them as possible. The most extreme form involves the production of clonal offspring. An extreme of sexual conflict is seen in "Wasmannia auropunctata", where the queens produce diploid daughters by thelytokous parthenogenesis and males produce clones by a process whereby a diploid egg loses its maternal contribution to produce haploid males who are clones of the father.
Ants form symbiotic associations with a range of species, including other ant species, other insects, plants, and fungi. They also are preyed on by many animals and even certain fungi. Some arthropod species spend part of their lives within ant nests, either preying on ants, their larvae, and eggs, consuming the food stores of the ants, or avoiding predators. These inquilines may bear a close resemblance to ants. The nature of this ant mimicry (myrmecomorphy) varies, with some cases involving Batesian mimicry, where the mimic reduces the risk of predation. Others show Wasmannian mimicry, a form of mimicry seen only in inquilines.
Aphids and other hemipteran insects secrete a sweet liquid called honeydew when they feed on plant sap. The sugars in honeydew are a high-energy food source, which many ant species collect. In some cases, the aphids secrete the honeydew in response to ants tapping them with their antennae. The ants in turn keep predators away from the aphids and will move them from one feeding location to another. When migrating to a new area, many colonies will take the aphids with them, to ensure a continued supply of honeydew. Ants also tend mealybugs to harvest their honeydew. Mealybugs may become a serious pest of pineapples if ants are present to protect them from their natural enemies.
Myrmecophilous (ant-loving) caterpillars of the butterfly family Lycaenidae (e.g., blues, coppers, or hairstreaks) are herded by the ants, led to feeding areas in the daytime, and brought inside the ants' nest at night. The caterpillars have a gland which secretes honeydew when the ants massage them. Some caterpillars produce vibrations and sounds that are perceived by the ants. A similar adaptation can be seen in grizzled skipper butterflies, which emit vibrations by expanding their wings in order to communicate with ants, which are natural predators of these butterflies. Other caterpillars have evolved from ant-loving to ant-eating: these myrmecophagous caterpillars secrete a pheromone that makes the ants act as if the caterpillar is one of their own larvae. The caterpillar is then taken into the ant nest, where it feeds on the ant larvae.

A number of specialized bacteria have been found as endosymbionts in ant guts. Some of the dominant bacteria belong to the order Rhizobiales, whose members are known for being nitrogen-fixing symbionts in legumes, but the species found in ants lack the ability to fix nitrogen.

Fungus-growing ants that make up the tribe Attini, including leafcutter ants, cultivate certain species of fungus in the genera "Leucoagaricus" or "Leucocoprinus" of the family Agaricaceae. In this ant-fungus mutualism, both species depend on each other for survival. The ant "Allomerus decemarticulatus" has evolved a three-way association with the host plant, "Hirtella physophora" (Chrysobalanaceae), and a sticky fungus which is used to trap their insect prey.
Lemon ants make devil's gardens by killing surrounding plants with their stings and leaving a pure patch of lemon ant trees ("Duroia hirsuta"). This modification of the forest provides the ants with more nesting sites inside the stems of the "Duroia" trees. Although some ants obtain nectar from flowers, pollination by ants is somewhat rare, one example being the pollination of the orchid "Leporella fimbriata", which induces male "Myrmecia urens" to pseudocopulate with the flowers, transferring pollen in the process. One theory that has been proposed for the rarity of pollination is that the secretions of the metapleural gland inactivate and reduce the viability of pollen.

Some plants have special nectar-exuding structures, extrafloral nectaries, that provide food for ants, which in turn protect the plant from more damaging herbivorous insects. Species such as the bullhorn acacia ("Acacia cornigera") in Central America have hollow thorns that house colonies of stinging ants ("Pseudomyrmex ferruginea") who defend the tree against insects, browsing mammals, and epiphytic vines. Isotopic labelling studies suggest that plants also obtain nitrogen from the ants. In return, the ants obtain food from protein- and lipid-rich Beltian bodies. In Fiji, "Philidris nagasau" (Dolichoderinae) are known to selectively grow species of epiphytic "Squamellaria" (Rubiaceae) which produce large domatia inside which the ant colonies nest. The ants plant the seeds, and the domatia of young seedlings are immediately occupied, with the ant faeces in them contributing to rapid growth. Similar dispersal associations are found with other dolichoderines in the region as well. Another example of this type of ectosymbiosis comes from the "Macaranga" tree, which has stems adapted to house colonies of "Crematogaster" ants.
Many plant species have seeds that are adapted for dispersal by ants. Seed dispersal by ants or myrmecochory is widespread, and new estimates suggest that nearly 9% of all plant species may have such ant associations. Often, seed-dispersing ants perform directed dispersal, depositing the seeds in locations that increase the likelihood of seed survival to reproduction. Some plants in arid, fire-prone systems are particularly dependent on ants for their survival and dispersal as the seeds are transported to safety below the ground. Many ant-dispersed seeds have special external structures, elaiosomes, that are sought after by ants as food.
A convergence, possibly a form of mimicry, is seen in the eggs of stick insects. They have an edible elaiosome-like structure and are taken into the ant nest where the young hatch.
Most ants are predatory and some prey on and obtain food from other social insects including other ants. Some species specialise in preying on termites ("Megaponera" and "Termitopone") while a few Cerapachyinae prey on other ants. Some termites, including "Nasutitermes corniger", form associations with certain ant species to keep away predatory ant species. The tropical wasp "Mischocyttarus drewseni" coats the pedicel of its nest with an ant-repellent chemical. It is suggested that many tropical wasps may build their nests in trees and cover them to protect themselves from ants. Other wasps, such as "A. multipicta", defend against ants by blasting them off the nest with bursts of wing buzzing. Stingless bees ("Trigona" and "Melipona") use chemical defences against ants.
Flies in the Old World genus "Bengalia" (Calliphoridae) prey on ants and are kleptoparasites, snatching prey or brood from the mandibles of adult ants. Wingless and legless females of the Malaysian phorid fly ("Vestigipoda myrmolarvoidea") live in the nests of ants of the genus "Aenictus" and are cared for by the ants.
Fungi in the genera "Cordyceps" and "Ophiocordyceps" infect ants. Ants react to their infection by climbing up plants and sinking their mandibles into plant tissue. The fungus kills the ants, grows on their remains, and produces a fruiting body. It appears that the fungus alters the behaviour of the ant to help disperse its spores in a microhabitat that best suits the fungus. Strepsipteran parasites also manipulate their ant host to climb grass stems, to help the parasite find mates.
A nematode ("Myrmeconema neotropicum") that infects canopy ants ("Cephalotes atratus") causes the black-coloured gasters of workers to turn red. The parasite also alters the behaviour of the ant, causing them to carry their gasters high. The conspicuous red gasters are mistaken by birds for ripe fruits, such as "Hyeronima alchorneoides", and eaten. The droppings of the bird are collected by other ants and fed to their young, leading to further spread of the nematode.
South American poison dart frogs in the genus "Dendrobates" feed mainly on ants, and the toxins in their skin may come from the ants.
Army ants forage in a wide roving column, attacking any animals in their path that are unable to escape. In Central and South America, "Eciton burchellii" is the swarming ant most commonly attended by "ant-following" birds such as antbirds and woodcreepers. This behaviour was once considered mutualistic, but later studies found the birds to be parasitic. Direct kleptoparasitism (birds stealing food from the ants' grasp) is rare and has been noted in Inca doves, which pick seeds at nest entrances as they are being transported by species of "Pogonomyrmex". Birds that follow ants eat many prey insects and thus decrease the foraging success of ants. Birds indulge in a peculiar behaviour called anting that, as yet, is not fully understood. Here birds rest on ant nests, or pick up and drop ants onto their wings and feathers; this may be a means to remove ectoparasites from the birds.
Anteaters, aardvarks, pangolins, echidnas and numbats have special adaptations for living on a diet of ants. These adaptations include long, sticky tongues to capture ants and strong claws to break into ant nests. Brown bears ("Ursus arctos") have been found to feed on ants. About 12%, 16%, and 4% of their faecal volume in spring, summer, and autumn, respectively, is composed of ants.
Ants perform many ecological roles that are beneficial to humans, including the suppression of pest populations and aeration of the soil. The use of weaver ants in citrus cultivation in southern China is considered one of the oldest known applications of biological control. On the other hand, ants may become nuisances when they invade buildings, or cause economic losses.
In some parts of the world (mainly Africa and South America), large ants, especially army ants, are used as surgical sutures. The wound is pressed together and ants are applied along it. The ant seizes the edges of the wound in its mandibles and locks in place. The body is then cut off and the head and mandibles remain in place to close the wound. The large heads of the dinergates (soldiers) of the leafcutting ant "Atta cephalotes" are also used by native surgeons in closing wounds.
Some ants have toxic venom and are of medical importance. The species include "Paraponera clavata" (tocandira) and "Dinoponera" spp. (false tocandiras) of South America and the "Myrmecia" ants of Australia.
In South Africa, ants are used to help harvest the seeds of rooibos ("Aspalathus linearis"), a plant used to make a herbal tea. The plant disperses its seeds widely, making manual collection difficult. Black ants collect and store these and other seeds in their nest, where humans can gather them "en masse". Up to half a pound (200 g) of seeds may be collected from one ant-heap.
Although most ants survive attempts by humans to eradicate them, a few are highly endangered. These tend to be island species that have evolved specialized traits and risk being displaced by introduced ant species. Examples include the critically endangered Sri Lankan relict ant ("Aneuretus simoni") and "Adetomyrma venatrix" of Madagascar.
It has been estimated by E.O. Wilson that the total number of individual ants alive in the world at any one time is between one and ten quadrillion (short scale) (i.e., between 10^15 and 10^16). According to this estimate, the total biomass of all the ants in the world is approximately equal to the total biomass of the entire human race. Also, according to this estimate, there are approximately 1 million ants for every human on Earth.
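The "million ants per human" figure follows directly from the population estimate. A back-of-the-envelope check (assumed values: the upper end of Wilson's range, 10^16 ants, and roughly 8 billion humans):

```python
# Illustrative arithmetic only; both figures are rough estimates.
ants = 10 ** 16        # upper end of Wilson's estimate
humans = 8 * 10 ** 9   # approximate current human population
print(ants // humans)  # 1250000 -> on the order of a million ants per human
```

With the lower end of the range (10^15 ants), the ratio is closer to a hundred thousand per human, so "approximately 1 million" corresponds to the upper estimate.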
Ants and their larvae are eaten in different parts of the world. The eggs of two species of ants are used in Mexican "escamoles". They are considered a form of insect caviar and can sell for as much as US$40 per pound ($90/kg) because they are seasonal and hard to find. In the Colombian department of Santander, "hormigas culonas" (roughly translated as "large-bottomed ants"), "Atta laevigata", are toasted alive and eaten.
In areas of India, and throughout Burma and Thailand, a paste of the green weaver ant ("Oecophylla smaragdina") is served as a condiment with curry. Weaver ant eggs and larvae, as well as the ants, may be used in a Thai salad, "yam", in a dish called "yam khai mot daeng" or red ant egg salad, a dish that comes from the Issan or north-eastern region of Thailand. Saville-Kent, in the "Naturalist in Australia" wrote "Beauty, in the case of the green ant, is more than skin-deep. Their attractive, almost sweetmeat-like translucency possibly invited the first essays at their consumption by the human species". Mashed up in water, after the manner of lemon squash, "these ants form a pleasant acid drink which is held in high favor by the natives of North Queensland, and is even appreciated by many European palates".
In his "First Summer in the Sierra", John Muir notes that the Digger Indians of California ate the tickling, acid gasters of the large jet-black carpenter ants. The Mexican Indians eat the replete workers, or living honey-pots, of the honey ant ("Myrmecocystus").
Some ant species are considered pests, primarily those that occur in human habitations, where their presence is often problematic. For example, the presence of ants would be undesirable in sterile places such as hospitals or kitchens. Some species or genera commonly categorized as pests include the Argentine ant, pavement ant, yellow crazy ant, banded sugar ant, pharaoh ant, red ant, carpenter ant, odorous house ant, red imported fire ant, and European fire ant. Some ants will raid stored food, some will seek water sources, others may damage indoor structures, and some may damage agricultural crops directly or by aiding sucking pests. Some will sting or bite. The adaptive nature of ant colonies makes it nearly impossible to eliminate entire colonies, so most pest management practices aim to control local populations and tend to be temporary solutions. Ant populations are managed by a combination of approaches that make use of chemical, biological, and physical methods. Chemical methods include the use of insecticidal bait, which is gathered by ants as food and brought back to the nest, where the poison is inadvertently spread to other colony members through trophallaxis. Management is based on the species, and techniques may vary according to the location and circumstance.
Observed by humans since the dawn of history, the behaviour of ants has been documented and the subject of early writings and fables passed from one century to another. Those using scientific methods, myrmecologists, study ants in the laboratory and in their natural conditions. Their complex and variable social structures have made ants ideal model organisms. Ultraviolet vision was first discovered in ants by Sir John Lubbock in 1881. Studies on ants have tested hypotheses in ecology and sociobiology, and have been particularly important in examining the predictions of theories of kin selection and evolutionarily stable strategies. Ant colonies may be studied by rearing or temporarily maintaining them in "formicaria", specially constructed glass framed enclosures. Individuals may be tracked for study by marking them with dots of colours.
The successful techniques used by ant colonies have been studied in computer science and robotics to produce distributed and fault-tolerant systems for solving problems, for example, ant colony optimization and ant robotics. This area of biomimetics has led to studies of ant locomotion, search engines that make use of "foraging trails", fault-tolerant storage, and networking algorithms.
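As an illustration of how pheromone-trail foraging maps onto an optimization algorithm, the sketch below implements a minimal ant-colony-style heuristic for the travelling salesman problem. It is a simplified sketch in the spirit of the classic Ant System; the function and parameter names are illustrative and not drawn from any particular library:

```python
import random

def ant_colony_tsp(dist, n_ants=20, n_iters=50, alpha=1.0, beta=2.0,
                   evaporation=0.5, q=1.0, seed=0):
    """Find a short tour over a symmetric distance matrix using simulated
    ants that deposit pheromone in proportion to tour quality."""
    rng = random.Random(seed)
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                cur = tour[-1]
                # Choose the next city with probability proportional to
                # pheromone^alpha * (1/distance)^beta, like a foraging ant.
                weights = [(j, pheromone[cur][j] ** alpha *
                            (1.0 / dist[cur][j]) ** beta) for j in unvisited]
                total = sum(w for _, w in weights)
                r, acc = rng.random() * total, 0.0
                for j, w in weights:
                    acc += w
                    if acc >= r:
                        tour.append(j)
                        unvisited.remove(j)
                        break
            tours.append(tour)
        # Evaporate old trails, then reinforce the edges of each tour.
        for i in range(n):
            for j in range(n):
                pheromone[i][j] *= (1 - evaporation)
        for tour in tours:
            length = tour_length(tour)
            if length < best_len:
                best_tour, best_len = tour, length
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                pheromone[a][b] += q / length
                pheromone[b][a] += q / length
    return best_tour, best_len
```

On a small instance, such as four cities at the corners of a unit square, the colony quickly converges on the perimeter tour, mirroring the way real trails to good food sources are reinforced while poor trails evaporate.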
From the late 1950s through the late 1970s, ant farms were popular educational children's toys in the United States. Some later commercial versions use transparent gel instead of soil, allowing greater visibility at the cost of stressing the ants with unnatural light.
Anthropomorphised ants have often been used in fables and children's stories to represent industriousness and cooperative effort. They also are mentioned in religious texts. In the Book of Proverbs in the Bible, ants are held up as a good example for humans for their hard work and cooperation. Aesop did the same in his fable The Ant and the Grasshopper. In the Quran, Sulayman is said to have heard and understood an ant warning other ants to return home to avoid being accidentally crushed by Sulayman and his marching army. In parts of Africa, ants are considered to be the messengers of the deities. Some Native American mythology, such as the Hopi mythology, considers ants as the very first animals. Ant bites are often said to have curative properties. The sting of some species of "Pseudomyrmex" is claimed to give fever relief. Ant bites are used in the initiation ceremonies of some Amazon Indian cultures as a test of endurance.
Ant society has always fascinated humans and has been written about both humorously and seriously. Mark Twain wrote about ants in his 1880 book "A Tramp Abroad". Some modern authors have used the example of the ants to comment on the relationship between society and the individual. Examples are Robert Frost in his poem "Departmental" and T. H. White in his fantasy novel "The Once and Future King". The plot in French entomologist and writer Bernard Werber's "Les Fourmis" science-fiction trilogy is divided between the worlds of ants and humans; ants and their behaviour is described using contemporary scientific knowledge. H.G. Wells wrote about intelligent ants destroying human settlements in Brazil and threatening human civilization in his 1905 science-fiction short story, "The Empire of the Ants." In more recent times, animated cartoons and 3-D animated films featuring ants have been produced including "Antz", "A Bug's Life", "The Ant Bully", "The Ant and the Aardvark", "Ferdy the Ant" and "Atom Ant." Renowned myrmecologist E. O. Wilson wrote a short story, "Trailhead" in 2010 for "The New Yorker" magazine, which describes the life and death of an ant-queen and the rise and fall of her colony, from an ants' point of view. The French neuroanatomist, psychiatrist and eugenicist Auguste Forel believed that ant societies were models for human society. He published a five volume work from 1921 to 1923 that examined ant biology and society.
In the early 1990s, the video game "SimAnt", which simulated an ant colony, won the 1992 Codie award for "Best Simulation Program".
Ants are also a popular inspiration for science-fiction insectoids, such as the Formics of "Ender's Game", the Bugs of "Starship Troopers", the giant ants in the films "Them!" and "Empire of the Ants", Marvel Comics' superhero Ant-Man, and ants mutated into super-intelligence in "Phase IV". In computer strategy games, ant-based species often benefit from increased production rates due to their single-minded focus, such as the Klackons in the "Master of Orion" series of games or the ChCht in "Deadlock II". These characters are often credited with a hive mind, a common misconception about ant colonies.
Arbitration in the United States
Arbitration, in the context of United States law, is a form of alternative dispute resolution. Specifically, arbitration is an alternative to litigation through which the parties to a dispute agree to submit their respective positions (through agreement or hearing) to a neutral third party (the arbitrator(s) or arbiter(s)) for resolution. In practice, arbitration is generally used as a substitute for litigation, particularly when the judicial process is perceived as too slow, expensive or biased. In some contexts, an arbitrator may be described as an umpire.
Agreements to arbitrate were not enforceable at common law, a rule traced back to a dictum of Lord Coke in "Vynor's Case", 8 Co. Rep. 81b, 77 Eng. Rep. 597 (1609), that agreements to arbitrate were revocable by either party.
During the Industrial Revolution, large corporations became increasingly opposed to this policy. They argued that too many valuable business relationships were being destroyed through years of expensive adversarial litigation, in courts whose rules differed significantly from the informal norms and conventions of businesspeople (the private law of commerce, or "lex mercatoria"). Arbitration was promoted as being faster, less adversarial, and cheaper.
The result was the New York Arbitration Act of 1920, followed by the United States Arbitration Act of 1925 (now known as the Federal Arbitration Act). Both made agreements to arbitrate valid and enforceable (unless one party could show fraud or unconscionability or some other ground for rescission which undermined the validity of the entire contract). Due to the subsequent judicial expansion of the meaning of interstate commerce, the U.S. Supreme Court reinterpreted the FAA in a series of cases in the 1980s and 1990s to cover almost the full scope of interstate commerce. In the process, the Court held that the FAA preempted many state laws covering arbitration, some of which had been passed by state legislatures to protect their consumers against powerful corporations.
Since commercial arbitration is based upon either contract law or the law of treaties, the agreement between the parties to submit their dispute to arbitration is a legally binding contract. All arbitral decisions are considered to be "final and binding." This does not, however, void the requirements of law. Any dispute not excluded from arbitration by virtue of law (for example, criminal proceedings) may be submitted to arbitration.
Furthermore, arbitration agreements can only bind parties who have agreed, expressly or impliedly, to arbitrate. Arbitration cannot bind nonsignatories to an arbitration contract, even if those nonsignatories later become involved with a signatory to a contract by accident (usually through the commission of a tort).
Arbitration may be used as a means of resolving labor disputes, an alternative to strikes and lockouts. Labor arbitration comes in two varieties: interest arbitration, in which a neutral arbitrator decides the terms of a collective bargaining agreement, and grievance arbitration, which resolves disputes arising under an existing collective bargaining agreement.
Arbitration has also been used as a means of resolving labor disputes for more than a century. Labor organizations in the United States, such as the National Labor Union, called for arbitration as early as 1866 as an alternative to strikes to resolve disputes over the wages, benefits and other rights that workers would enjoy.
Governments have relied on arbitration to resolve particularly large labor disputes, such as the Coal Strike of 1902. This type of arbitration, wherein a neutral arbitrator decides the terms of the collective bargaining agreement, is commonly known as interest arbitration. The United Steelworkers of America adopted an elaborate form of interest arbitration, known as the Experimental Negotiating Agreement, in the 1970s as a means of avoiding the long and costly strikes that had made the industry vulnerable to foreign competition. Major League Baseball uses a variant of interest arbitration, in which an arbitrator chooses between the two sides' final offers, to set the terms for contracts for players who are not eligible for free agency. Interest arbitration is now most frequently used by public employees who have no right to strike (for example, law enforcement and firefighters).
Unions and employers have also employed arbitration to resolve employee and union grievances arising under a collective bargaining agreement. The Amalgamated Clothing Workers of America made arbitration a central element of the "Protocol of Peace" it negotiated with garment manufacturers in the second decade of the twentieth century. Grievance arbitration became even more popular during World War II, when most unions had adopted a no-strike pledge. The War Labor Board, which attempted to mediate disputes over contract terms, pressed for inclusion of grievance arbitration in collective bargaining agreements. The Supreme Court subsequently made labor arbitration a key aspect of federal labor policy in three cases which came to be known as the Steelworkers' Trilogy. The Court held that grievance arbitration was a preferred dispute resolution technique and that courts could not overturn arbitrators' awards unless the award does not draw its essence from the collective bargaining agreement. State and federal statutes may allow vacating an award on narrow grounds ("e.g.", fraud). These protections for arbitrator awards are premised on the union-management system, which provides both parties with due process. Due process in this context means that both parties have experienced representation throughout the process, and that the arbitrators practice only as neutrals. "See" National Academy of Arbitrators.
In the United States securities industry, arbitration has long been the preferred method of resolving disputes between brokerage firms, and between firms and their customers. The arbitration process operates under its own rules, as defined by contract. Securities arbitrations are held primarily by the Financial Industry Regulatory Authority.
The securities industry uses pre-dispute arbitration agreements, through which the parties agree to arbitrate their disputes before any such dispute arises. Those agreements were upheld by the United States Supreme Court in "Shearson/American Express Inc. v. McMahon", 482 U.S. 220 (1987), and today nearly all disputes involving brokerage firms, other than securities class action claims, are resolved in arbitration.
The SEC has come under fire from members of the Senate Judiciary Committee for not fulfilling its statutory duty to protect individual investors, because all brokers require arbitration, and arbitration does not provide a court-supervised discovery process, require arbitrators to follow rules of evidence, result in written opinions establishing precedent (case law), or provide the efficiency gains it once did. Arbitrator selection bias, hidden conflicts of interest, and a case in which an arbitration panel refused to follow instructions handed down from a judge were also raised as issues.
Some state court systems have promulgated court-ordered arbitration; family law (particularly child custody) is the most prominent example. Judicial arbitration is often merely an advisory dispute-resolution technique, serving as the first step toward resolution but binding neither side and allowing for trial de novo. Litigation attorneys present their side of the case to a neutral third-party lawyer, who issues an opinion on settlement. Should the parties decide to continue the dispute-resolution process, sanctions may be imposed from the initial arbitration per the terms of the contract.
Although properly drafted arbitration clauses are generally valid, they are subject to challenge in court for compliance with laws and public policy. Arbitration clauses may potentially be challenged as unconscionable and, therefore, unenforceable.
Typically, the validity of an arbitration clause is decided by a court rather than an arbitrator. However, if the validity of the entire arbitration agreement is in dispute, then the issue may remain subject to arbitration. For example, in "Rent-A-Center, West, Inc. v. Jackson", the Supreme Court of the United States held that "under the FAA, where an agreement to arbitrate includes an agreement that the arbitrator will determine the enforceability of the agreement, if a party challenges specifically the enforceability of that particular agreement, the district court considers the challenge, but if a party challenges the enforceability of the agreement as a whole, the challenge is for the arbitrator."
In other words, the law typically allows federal courts to decide these types of "gateway" or validity questions, but the Supreme Court ruled that since Jackson targeted the entire contract rather than a specific clause, the arbitrator decided the validity. Public Citizen, an advocacy organization opposed to the enforcement of pre-dispute arbitration agreements, characterized the decision negatively: "the court said that companies can write their contracts so that the companies' own arbitrator decides whether it's fair to submit a case to that arbitrator."
In insurance law, arbitration is complicated by the fact that insurance is regulated at the state level under the McCarran–Ferguson Act. From a federal perspective, however, a circuit court ruling has determined that McCarran-Ferguson requires a state statute rather than administrative interpretations. The Missouri Department of Insurance attempted to block a binding arbitration agreement under its state authority, but since this action was based only on a policy of the department and not on a state statute, the United States district court found that the Department of Insurance did not have the authority to invalidate the arbitration agreement.
In "AT&T Mobility v. Concepcion" (2011), the Supreme Court of the United States upheld an arbitration clause in a consumer standard form contract which waived the right to a lawsuit and class action. However, this clause was relatively generous in that the business paid all fees unless the action was determined to be frivolous and a small-claims court action remained available; these types of protections are recommended for the contract to remain enforceable and not unconscionable.
Various bodies of rules have been developed that can be used for arbitration proceedings. The rules to be followed by the arbitrator are specified by the agreement establishing the arbitration.
In some cases, a party may comply with an award voluntarily. However, in other cases a party will have to petition to receive a court judgment for enforcement through various means such as a writ of execution, garnishment, or lien. If the property is in another state, then a sister-state judgment (relying on the Full Faith and Credit Clause) can be received by filing a form in the state where the property is located.
Under the Federal Arbitration Act, courts can only vacate awards for limited reasons set out in statute with similar language in the state model Uniform Arbitration Act.
The court will generally not change the arbitrator's findings of fact but will decide only whether the arbitrator was guilty of malfeasance, whether the arbitrator exceeded the limits of his or her authority in the arbitral award, or whether the award was made in manifest disregard of law or conflicts with well-established public policy.
Arbitrators have wide latitude in crafting remedies in the arbitral decision, with the only real limitation being that they may not exceed the limits of their authority in their award. An example of exceeding arbitral authority might be awarding one party to a dispute the personal automobile of the other party when the dispute concerns the specific performance of a business-related contract.
It is open to the parties to restrict the possible awards that the arbitrator can make. If this restriction requires a straight choice between the position of one party or the position of the other, then it is known as "pendulum arbitration" or "final offer arbitration". It is designed to encourage the parties to moderate their initial positions so as to make it more likely they receive a favorable decision.
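To make the moderating incentive concrete, here is a hypothetical numeric sketch in Python. The decision rule, names, and numbers are illustrative only, not drawn from any real procedure or statute; the tie-breaking choice is an arbitrary assumption:

```python
def final_offer_award(offer_a: float, offer_b: float, fair_value: float) -> float:
    """Pendulum ("final offer") arbitration: the arbitrator may not split
    the difference, but must adopt whichever complete final offer lies
    closer to the arbitrator's own independent assessment of a fair
    outcome (ties here arbitrarily go to offer_a)."""
    if abs(offer_a - fair_value) <= abs(offer_b - fair_value):
        return offer_a
    return offer_b

# A hypothetical wage dispute: the arbitrator privately values a fair
# wage at 52. An aggressive demand of 60 against an offer of 48 loses...
print(final_offer_award(60, 48, 52))  # -> 48
# ...but moderating the demand to 55 wins the award.
print(final_offer_award(55, 48, 52))  # -> 55
```

Because an extreme position is likely to lose outright rather than drag the award toward the middle, each side is pushed to submit a number close to what it believes the arbitrator will consider fair.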
No definitive statement can be made concerning the credentials or experience levels of arbitrators, although some jurisdictions have elected to establish standards for arbitrators in certain fields. Some independent organizations, such as the American Arbitration Association, offer arbitrator training programs, and arbitrators may cite their completion of that training as a credential. Generally speaking, however, the credibility of an arbitrator rests upon reputation, experience level in arbitrating particular issues, or expertise/experience in a particular field. Arbitrators are generally not required to be members of the legal profession.
To ensure effective arbitration and to increase the general credibility of the arbitral process, arbitrators will sometimes sit as a panel, usually consisting of three arbitrators. Often the three consist of an expert in the legal area within which the dispute falls (such as contract law in the case of a dispute over the terms and conditions of a contract), an expert in the industry within which the dispute falls (such as the construction industry, in the case of a dispute between a homeowner and his general contractor), and an experienced arbitrator.
The "judge shows" that have become popular in many countries, especially the United States, are actually binding arbitration. "The People's Court" and "Judge Judy" are notable examples.
For the relevant conflict of laws elements, see contract, forum selection clause, choice of law clause, proper law, and "lex loci arbitri".
Adversarial system
The adversarial system or adversary system is a legal system used in the common law countries where two advocates represent their parties' case or position before an impartial person or group of people, usually a judge or jury, who attempt to determine the truth and pass judgment accordingly. It is in contrast to the inquisitorial system used in some civil law systems (i.e. those deriving from Roman law or the Napoleonic code) where a judge investigates the case.
The adversarial system is the two-sided structure under which criminal trial courts operate, putting the prosecution against the defense.
As an accused is not compelled to give evidence in a criminal adversarial proceeding, they may not be questioned by a prosecutor or judge unless they choose to do so. However, should they decide to testify, they are subject to cross-examination and could be found guilty of perjury. Because the election to maintain an accused person's right to silence prevents any examination or cross-examination of that person's position, the decision of counsel as to what evidence will be called is a crucial tactic in any case in the adversarial system; hence it might be said that the system invites a lawyer's manipulation of the truth. Certainly, it requires the skills of counsel on both sides to be fairly equally pitted and subjected to an impartial judge.
By contrast, while defendants in most civil law systems can be compelled to give a statement, this statement is not subject to cross-examination by the prosecutor and is not given under oath. This allows the defendant to explain his side of the case without being subject to cross-examination by a skilled opposition. However, this is mainly because it is not the prosecutor but the judges who question the defendant. The concept of "cross"-examination is entirely due to the adversarial structure of the common law.
Judges in an adversarial system are impartial in ensuring the fair play of due process, or fundamental justice. Such judges decide, often when called upon by counsel rather than of their own motion, what evidence is to be admitted when there is a dispute; though in some common law jurisdictions judges play more of a role in deciding what evidence to admit into the record or reject. At worst, abusing judicial discretion would actually pave the way to a biased decision, rendering obsolete the judicial process in question—rule of law being illicitly subordinated by rule of man under such discriminating circumstances.
The rules of evidence are also developed based upon the system of objections of adversaries and on what basis it may tend to prejudice the trier of fact which may be the judge or the jury. In a way the rules of evidence can function to give a judge limited inquisitorial powers as the judge may exclude evidence he/she believes is not trustworthy or irrelevant to the legal issue at hand.
All evidence must be relevant and must not be hearsay.
Peter Murphy in his "Practical Guide to Evidence" recounts an instructive example. A frustrated judge in an English (adversarial) court finally asked a barrister after witnesses had produced conflicting accounts, 'Am I never to hear the truth?' 'No, my lord, merely the evidence', replied counsel.
The name "adversarial system" may be misleading in that it implies it is only within this type of system in which there are opposing prosecution and defense. This is not the case, and both modern adversarial and inquisitorial systems have the powers of the state separated between a prosecutor and the judge and allow the defendant the right to counsel. Indeed, the European Convention on Human Rights and Fundamental Freedoms in Article 6 requires these features in the legal systems of its signatory states.
The right to counsel in criminal trials was initially not accepted in some adversarial systems. It was believed that the facts should speak for themselves, and that lawyers would just blur the matters. As a consequence, it was only in 1836 that England gave suspects of felonies the formal right to have legal counsel (the Prisoners' Counsel Act 1836), although in practice English courts routinely allowed defendants to be represented by counsel from the mid-18th century. During the second half of the 18th century, advocates like Sir William Garrow and Thomas Erskine, 1st Baron Erskine, helped usher in the adversarial court system used in most common law countries today. In the United States, however, personally retained counsel have had a right to appear in all federal criminal cases since the adoption of the Constitution, and in state cases at least since the end of the Civil War, although nearly all states provided this right in their constitutions or laws much earlier. Appointment of counsel for indigent defendants was nearly universal in federal felony cases, though it varied considerably in state cases. It was not until 1963 that the U.S. Supreme Court declared that legal counsel must be provided at the expense of the state for indigent felony defendants, under the federal Sixth Amendment, in state courts; see "Gideon v. Wainwright".
One of the most significant differences between the adversarial system and the inquisitorial system occurs when a criminal defendant admits to the crime. In an adversarial system, there is no more controversy and the case proceeds to sentencing; though in many jurisdictions the defendant must allocute to her or his crime, and an obviously false confession will not be accepted even in common law courts. By contrast, in an inquisitorial system, the fact that the defendant has confessed is merely one more fact that is entered into evidence, and a confession by the defendant does not remove the requirement that the prosecution present a full case. This allows for plea bargaining in adversarial systems in a way that is difficult or impossible in inquisitorial systems, and many felony cases in the United States are handled without trial through such plea bargains.
In some adversarial legal systems, the court is permitted to make inferences from an accused's failure to face cross-examination or to answer a particular question. This obviously limits the usefulness of silence as a tactic by the defense. In England, the Criminal Justice and Public Order Act 1994 allowed such inferences to be made for the first time in England and Wales (it was already possible in Scotland under the rule of criminative circumstances). This change was disparaged by critics as an end to the 'right to silence', though in fact an accused still has the right to remain silent and cannot be compelled to take the stand. The criticism reflects the idea that if the accused can be inferred to be guilty by exercising their right to silence, it no longer confers the protection intended by such a right. In the United States, the Fifth Amendment has been interpreted to prohibit a jury from drawing a negative inference based on the defendant's invocation of his right not to testify, and the jury must be so instructed if the defendant requests.
Lord Devlin in "The Judge" said: "It can also be argued that two prejudiced searchers starting from opposite ends of the field will between them be less likely to miss anything than the impartial searcher starting at the middle."
There are many differences in the way cases are reviewed under the two systems. It is questionable whether the results would differ if cases were conducted under the differing approaches; in fact, no statistics exist that can show whether these systems reach the same results. However, these approaches are often a matter of national pride, and jurists hold differing opinions about the merits and drawbacks of each approach.
Proponents of the adversarial system often argue that the system is more fair and less prone to abuse than the inquisitional approach, because it allows less room for the state to be biased against the defendant. It also allows most private litigants to settle their disputes in an amicable manner through discovery and pre-trial settlements in which non-contested facts are agreed upon and not dealt with during the trial process.
In addition, adversarial procedure defenders argue that the inquisitorial court systems are overly institutionalized and removed from the average citizen. The common law trial lawyer has ample opportunity to uncover the truth in the courtroom. Most cases that go to trial are carefully prepared through a discovery process that aids in the review of evidence and testimony before it is presented to judge or jury. The lawyers involved have a very good idea of the scope of agreement and disagreement of the issues to present at trial which develops much in the same way as the role of investigative judges.
Proponents of inquisitorial justice dispute these points. They point out that many cases in adversarial systems, and most cases in the United States, are actually resolved by plea bargain or settlement. Plea bargaining as a system does not exist in an inquisitorial system. Many legal cases in adversarial systems, and most in the United States, do not go to trial, which may lead to injustice when the defendant has an unskilled or overworked attorney, which is likely to be the case when the defendant is poor. In addition, proponents of inquisitorial systems argue that the plea bargain system causes the participants in the system to act in perverse ways, in that it encourages prosecutors to bring charges far in excess of what is warranted and defendants to plead guilty even when they believe that they are not.
Abatis
An abatis, abattis, or abbattis is a field fortification consisting of an obstacle formed (in the modern era) of the branches of trees laid in a row, with the sharpened tops directed outwards, towards the enemy. The trees are usually interlaced or tied with wire. Abatis are used alone or in combination with wire entanglements and other obstacles.
There is evidence it was used as early as the Roman Imperial period, and as recently as the American Civil War and the Anglo-Zulu War of 1879.
A classic use of an abatis was at the Battle of Carillon (1758) during the Seven Years' War. The 3,600 French troops defeated a massive army of 16,000 British and Colonial troops by fronting their defensive positions with an extremely dense abatis. The British found the defences almost impossible to breach and were forced to withdraw with some 2,600 casualties. Other uses of an abatis can be found at the Battle of the Chateauguay, 26 October 1813, when approximately 1,300 Canadian Voltigeurs, under the command of Charles-Michel de Salaberry, defeated an American corps of approximately 4,000 men, or at the Battle of Plattsburgh.
An important weakness of abatis, in contrast to barbed wire, is that it can be destroyed by fire. Also, if the abatis is laced together with rope instead of wire, the rope can be destroyed quickly by such fires, after which the abatis can be pulled apart by grappling hooks thrown from a safe distance.
An important advantage is that an improvised abatis can be quickly formed in forested areas. This can be done by simply cutting down a row of trees so that they fall with their tops toward the enemy. An alternative is to place explosives so as to blow the trees down.
Abatis are rarely seen nowadays, having been largely replaced by wire obstacles. However, it may be used as a replacement or supplement when barbed wire is in short supply. A form of giant abatis, using whole trees instead of branches, can be used as an improvised anti-tank obstacle.
Though rarely used by modern conventional military units, abatises are still officially maintained in United States Army and Marine Corps training. Current US training instructs engineers or other constructors of such obstacles to fell trees, leaving a stump, in such a manner that the trees fall interlocked, pointing at a 45-degree angle towards the direction of approach of the enemy. Furthermore, it is recommended that the trees remain connected to the stumps and that the obstacle cover a substantial length of roadway. US military maps record an abatis by use of an inverted "V" with a short line extending from it to the right.
Antoine Thomson d'Abbadie
Antoine Thomson d'Abbadie d'Arrast (3 January 1810 – 19 March 1897) was an Irish-born French explorer, geographer, ethnologist, linguist and astronomer notable for his travels in Ethiopia during the first half of the 19th century. He was the older brother of Arnaud Michel d'Abbadie, with whom he travelled.
Antoine d'Abbadie was born a British subject in Dublin, Ireland, into a partially Basque noble family of the French province of Soule. His father, Michel Abbadie, was born in Arrast-Larrebieu, and his mother was Irish. His grandfather Jean-Pierre was a lay abbot and a notary in Soule. The family moved to France in 1818, where the brothers received a careful scientific education. In 1827, Antoine received a bachelor's degree in Toulouse. Starting in 1829, he continued his education in Paris, where he studied law.
He married Virginie Vincent de Saint-Bonnet on 21 February 1859 and settled in Hendaye, where he purchased 250 ha of land to build a castle; he served as mayor of the town from 1871 to 1875.
Abbadie was a knight of the Legion of Honour, which he received on 27 September 1850, and a member of the French Academy of Sciences. He died in 1897, and bequeathed the Abbadi domain and castle in Hendaye, yielding 40,000 francs a year, to the Academy of Sciences, on condition they produce a catalogue of half a million stars within fifty years.
In 1835 the French Academy sent Antoine on a scientific mission to Brazil, the results being published at a later date (1873) under the title of "Observations relatives à la physique du globe faites au Brésil et en Éthiopie". In 1837, the two brothers started for Ethiopia, landing at Massawa in February 1838. They journeyed throughout Ethiopia, travelling as far south as the Kingdom of Kaffa, sometimes together and sometimes separately. In addition to his studies in the sciences, he delved into the political fray, exerting influence in favour of France and the Catholic missionaries. The two brothers returned to France in 1848 with notes on the geography, geology, archaeology, and natural history of the region.
Antoine became involved in various controversies relating both to his geographical results and his political intrigues. He was especially attacked by Charles Tilstone Beke, who impugned his veracity, especially with reference to the journey to Kana. But time and the investigations of subsequent explorers have shown that Abbadie was quite trustworthy as to his facts, though wrong in his assertion — hotly contested by Beke — that the Blue Nile was the main stream. The topographical results of his explorations were published in Paris between 1860 and 1873 in "Géodésie d'Éthiopie", full of the most valuable information and illustrated by ten maps. Of the "Géographie de l'Éthiopie" (Paris, 1890) only one volume was published. In "Un Catalogue raisonné de manuscrits éthiopiens" (Paris, 1859) is a description of 234 Ethiopian manuscripts collected by Antoine. He also compiled various vocabularies, including a "Dictionnaire de la langue amariñña" (Paris, 1881), and prepared an edition of the "Shepherd of Hermas", with the Latin version, in 1860. He published numerous papers dealing with the geography of Ethiopia, Ethiopian coins and ancient inscriptions. Under the title of "Reconnaissances magnétiques" he published in 1890 an account of the magnetic observations made by him in the course of several journeys to the Red Sea and the Levant. The general account of the travels of the two brothers was published by Arnaud in 1868 under the title of "Douze ans dans la Haute Ethiopie".
Antoine was responsible for streamlining techniques in geodesy, along with inventing a new theodolite for measuring angles.
Basque through his father, Abbadie developed a particular interest in the Basque language after meeting Prince Louis Lucien Bonaparte in London. He started his academic work on Basque in 1852.
A speaker of both Souletin and Lapurdian, a resident of Lapurdi, Abbadie considered himself a Basque from Soule. The popularity of the motto "Zazpiak Bat" is attributed to Abbadie, coined in the framework of the "Lore Jokoak" Basque festivals that he fostered.
Abbadie gave his castle home the name "Abbadia", which is the name still used in Basque. However, in French it is usually referred to as "Château d'Abbadie" or "Domaine d'Abbadia", and locally it is not unusual for it to be called "le Château d'Antoine d'Abbadie".
The château was built between 1864 and 1879 on a cliff by the Atlantic Ocean, and was designed by Eugène Viollet-le-Duc in the Neo Gothic style. It is considered one of the most important examples of French Gothic Revival Architecture. It is divided in three parts: the observatory and library, the chapel, and the living quarters.
The château still belongs to the Academy of Science to which it was bequeathed in 1895 on condition of its producing a catalogue of half-a-million stars within fifty years' time, with the work to be carried out by members of religious orders.
The château was classified as a protected historical monument by France in 1984. Most of the château property now belongs to the Coastal Protection Agency, and is managed by the city of Hendaye.
Antoine received the French Legion of Honor on 27 September 1850 with the order of chevalier or knight. He was a member of the Bureau des Longitudes and also the French Academy of Sciences. Both brothers received the grand medal of the Paris Geographical Society in 1850. | https://en.wikipedia.org/wiki?curid=2607 |
Abba Mari
Rabbi Abba Mari ben Moses ben Joseph was a Provençal rabbi, born at Lunel, near Montpellier, towards the end of the 13th century. He is also known as Yarhi from his birthplace (Hebrew "Yerah", i.e. moon, lune), and he further took the name Astruc, Don Astruc or En Astruc of Lunel from the word "astruc" meaning lucky. He is the founder of the Astruc family.
The descendant of men learned in rabbinic lore, Abba Mari devoted himself to the study of theology and philosophy, and made himself acquainted with the writings of Moses Maimonides and Nachmanides as well as with the "Talmud".
In Montpellier, where he lived from 1303 to 1306, he was much distressed by the prevalence of Aristotelian rationalism, which (in his opinion) through the medium of the works of Maimonides, threatened the authority of the Old Testament, obedience to the law, and the belief in miracles and revelation. He therefore, in a series of letters (afterwards collected under the title "Minhat Kenaot", i.e., "Offering of Zealotry") called upon the famous rabbi Solomon ben Aderet of Barcelona to come to the aid of orthodoxy. Ben Aderet, with the approval of other prominent Spanish rabbis, sent a letter to the community at Montpellier proposing to forbid the study of philosophy to those who were less than twenty-five years of age, and, in spite of keen opposition from the liberal section, a decree in this sense was issued by Ben Aderet in 1305. The result was a great schism among the Jews of Spain and southern France, and a new impulse was given to the study of philosophy by the unauthorized interference of the Spanish rabbis.
Upon the expulsion of the Jews from France by Philip IV in 1306, Abba Mari settled at Perpignan, where he published the letters connected with the controversy. His subsequent history is unknown. Beside the letters, he was the author of liturgical poetry and works on civil law.
Leader of the opposition to the rationalism of the Maimonists in the Montpellier controversy of 1303–1306; born at Lunel—hence his name, Yarḥi (from Yeraḥ = Moon = Lune). He was a descendant of Meshullam ben Jacob of Lunel, one of whose five sons was Joseph, the grandfather of Abba Mari, who, like his son Moses, the father of Abba Mari, was highly respected for both his rabbinical learning and his general erudition. Abba Mari moved to Montpellier, where, to his chagrin, he found the study of rabbinical lore greatly neglected by the young, who devoted all of their time and zeal to science and philosophy. The rationalistic method pursued by the new school of Maimonists (including Levi ben Abraham ben Chayyim of Villefranche, near the town of Perpignan, and Jacob Anatolio) especially provoked his indignation; for the sermons preached and the works published by them seemed to resolve the entire Scriptures into allegory and threatened to undermine the Jewish faith and the observance of the Law and tradition. He was not without some philosophical training. He mentions even with reverence the name of Maimonides, whose work he possessed and studied; but he was more inclined toward the mysticism of Nachmanides. Above all, he was a thorough believer in revelation and in a divine providence, and was a sincere, law-observing follower of rabbinical Judaism. He would not allow Aristotle, "the searcher after God among the heathen," to be ranked with Moses.
Abba Mari possessed considerable Talmudic knowledge and some poetical talent; but his zeal for the Law made him an agitator and a persecutor of all the advocates of liberal thought. Being himself without sufficient authority, he appealed in a number of letters, afterward published under the title of "Minḥat Ḳenaot" ("Jealousy Offering"), to Solomon ben Adret of Barcelona, the most influential rabbi of the time, to use his powerful authority to check the source of evil by hurling his anathema against both the study of philosophy and the allegorical interpretations of the Bible, which did away with all belief in miracles. Ben Adret, while reluctant to interfere in the affairs of other congregations, was in perfect accord with Abba Mari as to the danger of the new rationalistic systems, and advised him to organize the conservative forces in defense of the Law. Abba Mari, through Ben Adret's aid, obtained allies eager to take up his cause, among whom were Don Bonafoux Vidal of Barcelona and his brother, Don Crescas Vidal, then in Perpignan. The proposition of the latter to prohibit, under penalty of excommunication, the study of philosophy and any of the sciences except medicine, by one under thirty years of age, met with the approval of Ben Adret. Accordingly, Ben Adret addressed to the congregation of Montpellier a letter, signed by fifteen other rabbis, proposing to issue a decree pronouncing the anathema against all those who should pursue the study of philosophy and science before due maturity in age and in rabbinical knowledge. On a Sabbath in September, 1304, the letter was to be read before the congregation, when Jacob Machir Don Profiat Tibbon, the renowned astronomical and mathematical writer, entered his protest against such unlawful interference by the Barcelona rabbis, and a schism ensued. 
Twenty-eight members signed Abba Mari's letter of approval; the others, under Tibbon's leadership, addressed another letter to Ben Adret, rebuking him and his colleagues for condemning a whole community without knowledge of the local conditions. Finally, the agitation for and against the liberal ideas brought about a schism in the entire Jewish population in southern France and Spain.
Encouraged, however, by letters signed by the rabbis of Argentière and Lunel, and particularly by the support of Kalonymus ben Todros, the "nasi" of Narbonne, and of the eminent Talmudist Asheri of Toledo, Ben Adret issued a decree, signed by thirty-three rabbis of Barcelona, excommunicating those who should, within the next fifty years, study physics or metaphysics before their thirtieth year of age (basing his action on the principle laid down by Maimonides, "Guide for the Perplexed" part one chapter 34), and had the order promulgated in the synagogue on Sabbath, July 26, 1305. When this heresy-decree, to be made effective, was forwarded to other congregations for approval, the friends of liberal thought, under the leadership of the Tibbonites, issued a counter-ban, and the conflict threatened to assume a serious character, as blind party zeal (this time on the liberal side) did not shrink from asking the civil powers to intervene. But an unlooked-for calamity brought the warfare to an end. The expulsion of the Jews from France by Philip IV ("the Fair") in 1306 caused the Jews of Montpellier to take refuge, partly in Provence, partly in Perpignan and partly in Majorca. Consequently, Abba Mari removed first to Arles, and, within the same year, to Perpignan, where he finally settled and disappeared from public view. There he published his correspondence with Ben Adret and his colleagues.
Abba Mari collected the correspondence and added to each letter a few explanatory notes. Of this collection, called "Minchat Kenaot", several manuscript copies survive (at Oxford; Paris; Günzburg Libr., Saint Petersburg; Parma; Ramsgate Montefiore College Library; and Turin). Some of these are mere fragments. The printed edition (Presburg, 1838), prepared by M. L. Bislichis, contains: (1) Preface; (2) a treatise of eighteen chapters on the incorporeality of God; (3) correspondence; (4) a treatise, called "Sefer ha-Yarḥi," included also in letter 58; (5) a defense of "The Guide" and its author by Shem-Tob Palquera.
As the three cardinal doctrines of Judaism, Abba Mari accentuates: (1) Recognition of God's existence and of His absolute sovereignty, eternity, unity, and incorporeality, as taught in revelation, especially in the "Ten Commandments"; (2) the world's creation by Him out of nothing, as evidenced particularly by the Sabbath; (3) special Divine providence, as manifested in the Biblical miracles. In the preface, Abba Mari explains his object in collecting the correspondence; and in the treatise which follows he shows that the study of philosophy, useful in itself as a help toward the acquisition of the knowledge of God, requires great caution, lest we be misled by the Aristotelian philosophy or its false interpretation, as regards the principles of creation "ex nihilo" and divine individual providence. The manuscripts include twelve letters which are not included in the printed edition of "Minḥat Ḳenaot."
The correspondence refers mainly to the proposed restriction of the study of the Aristotelian philosophy. Incidentally, other theological questions are also discussed. For example, letters 1, 5, and 8 contain a discussion on the question of whether the use of a piece of metal with the figure of a lion, as a talisman, is permitted by Jewish law for medicinal purposes, or is prohibited as idolatrous. In letter 131, Abba Mari mourns the death of Ben Adret, and in letter 132 he sends words of sympathy to the congregation of Perpignan on the death of Don Vidal Shlomo (the Meiri) and Rabbi Meshullam. Letter 33 contains the statement of Abba Mari that two letters which he desired to insert could not be discovered by him. MS. Ramsgate, No. 52, has the same statement, but also the two letters missing in the printed copies. In "Sefer haYarchi", Abba Mari refers to the great caution shown by the rabbis of old regarding the teaching of the philosophical mysteries, and recommended by men like Hai Gaon, Maimonides, and David Kimhi. A responsum of Abba Mari on a ritual question is contained in MS. Ramsgate, No. 136; and Zunz mentions a "ḳinah" composed by Abba Mari.
"Minchat Kenaot" is instructive reading for the historian because it throws much light upon the deeper problems which agitated Judaism, the question of the relation of religion to the philosophy of the age, which neither the zeal of the fanatic nor the bold attitude of the liberal-minded could solve in any fixed dogmatic form or by any anathema, as the independent spirit of the congregations refused to accord to the rabbis the power possessed by the Church of dictating to the people what they should believe or respect.
At the close of the work are added several eulogies written by Abba Mari on Ben Adret (who died in 1310), and on Don Vidal, Solomon of Perpignan, and Don Bonet Crescas of Lunel. | https://en.wikipedia.org/wiki?curid=2608 |
Abbas II of Egypt
Abbas II Helmy Bey (also known as "‘Abbās Ḥilmī Pasha") (14 July 1874 – 19 December 1944) was the last Khedive (Ottoman viceroy) of Egypt and Sudan, ruling from 8 January 1892 to 19 December 1914. In 1914, after the Ottoman Empire joined the Central Powers in World War I, the nationalist Khedive was removed by the British, then ruling Egypt, in favor of his more pro-British uncle, Hussein Kamel, marking the "de jure" end of Egypt's four-century era as a province of the Ottoman Empire, which had begun in 1517.
Abbas II (full name: Abbas Hilmy), the great-great-grandson of Muhammad Ali, was born in Alexandria, Egypt on 14 July 1874. He succeeded his father, Tewfik Pasha, as Khedive of Egypt and Sudan on 8 January 1892. In 1887 he was ceremonially circumcised together with his younger brother Mohammed Ali Tewfik. The festivities lasted for three weeks and were carried out under great pomp. As a boy he visited the United Kingdom, and he had a number of British tutors in Cairo including a governess who taught him English. In a profile of Abbas II, the boys' annual, "Chums", gives a lengthy account of his education. His father established a small school near the Abdin Palace in Cairo where European, Arab and Ottoman masters taught Abbas and his brother Mohammed Ali Tewfik. An American officer in the Egyptian army took charge of his military training. He attended school at Lausanne, Switzerland; then, at the age of twelve he was sent to the Haxius School in Geneva, in preparation for his entry into the Theresianum in Vienna. In addition to Arabic and Ottoman Turkish, he had good conversational knowledge of English, French and German.
He was still in college in Vienna when he assumed the throne of the Khedivate of Egypt upon the sudden death of his father on 8 January 1892. He was barely of age according to Egyptian law, which normally set majority at eighteen in cases of succession to the throne. For some time he did not cooperate very cordially with the British, whose army had occupied Egypt in 1882. As he was young and eager to exercise his new power, he resented the interference of the British Agent and Consul General in Cairo, Sir Evelyn Baring, later made Lord Cromer. At the outset of his reign, Khedive Abbas II surrounded himself with a coterie of European advisers who opposed the British occupation of Egypt and Sudan and encouraged the young khedive to challenge Cromer by replacing his ailing prime minister with an Egyptian nationalist. At Cromer's behest, Lord Rosebery, the British foreign secretary, sent Abbas II a letter stating that the Khedive was obliged to consult the British consul on such issues as cabinet appointments. In January 1894 Abbas II made an inspection tour of Sudanese and Egyptian frontier troops stationed near the southern border, the Mahdists being at the time still in control of the Sudan itself. At Wadi Halfa the Khedive made public remarks disparaging the Egyptian army units commanded by British officers. The British commander of the Egyptian army, Sir Herbert Kitchener, immediately threatened to resign. Kitchener further insisted on the dismissal of a nationalist under-secretary of war appointed by Abbas II and that an apology be made for the Khedive's criticism of the army and its officers.
By 1899 he had come to accept British counsels. Also in 1899 British diplomat Alfred Mitchell-Innes was appointed Under-Secretary of State for Finance in Egypt, and in 1900 Abbas II paid a second visit to Britain, during which he said he thought the British had done good work in Egypt, and declared himself ready to cooperate with the British officials administering Egypt and Sudan. He gave his formal approval for the establishment of a sound system of justice for Egyptian nationals, a great reduction in taxation, an expansion of affordable and sound education, the inauguration of substantial irrigation works such as the Aswan Low Dam and the Assiut Barrage, and the reconquest of Sudan. He displayed more interest in agriculture than in statecraft. His farm of cattle and horses at Qubbah, near Cairo, was a model for agricultural science in Egypt, and he created a similar establishment at Muntazah, just east of downtown Alexandria. He married the Princess Ikbal Hanem and had several children. Muhammad Abdul Mun'im, the heir-apparent, was born on 20 February 1899.
Although Abbas II no longer "publicly" opposed the British, he secretly created, supported, and sustained the Egyptian nationalist movement, which came to be led by Mustafa Kamil. He also funded the anti-British newspaper Al-Mu'ayyad. As Kamil's thrust was increasingly aimed at winning popular support for a National Party, Khedive Abbas publicly distanced himself from the Nationalists. Their demand for a constitutional government in 1906 was rebuffed by Abbas II, and the following year he formed the National Party, led by Mustafa Kamil Pasha, to counter the Ummah Party of the Egyptian moderates. However, in general, he had no real political power. When the Egyptian Army was sent to fight Abd al-Rahman al-Mahdi in Sudan in 1896, he only found out about it because the Austro-Hungarian Archduke Francis Ferdinand was in Egypt and told him after being informed of it by a British Army officer.
His relations with Cromer's successor, Sir Eldon Gorst, however, were excellent, and they co-operated in appointing the cabinets headed by Butrus Ghali in 1908 and Muhammad Sa'id in 1910 and in checking the power of the Nationalist Party. The appointment of Kitchener to succeed Gorst in 1912 displeased Abbas II, and relations between the Khedive and the British deteriorated. Kitchener, who exiled or imprisoned the leaders of the National party, often complained about "that wicked little Khedive" and wanted to depose him.
On 25 July 1914, at the onset of World War I, Abbas II was in Constantinople and was wounded in his hands and cheeks during a failed assassination attempt. On 5 November 1914, when Great Britain declared war on Turkey, he was accused of deserting Egypt by not returning home forthwith. The British also believed that he was plotting against their rule, as he had attempted to appeal to Egyptians and Sudanese to support the Central Powers against the British, so when the Ottoman Empire joined the Central Powers in World War I, the United Kingdom declared Egypt a Sultanate under British protection on 18 December 1914 and deposed Abbas II. During the war, Abbas II supported the Ottomans, including leading an attack on the Suez Canal. The British replaced him with his uncle Hussein Kamel, who reigned from 1914 to 1917 with the title of sultan. Hussein Kamel issued a series of restrictive orders to strip Abbas II of property in Egypt and Sudan and forbade contributions to him. These also barred Abbas from entering Egyptian territory and stripped him of the right to sue in Egyptian courts. This did not prevent his progeny, however, from exercising their rights. Abbas II finally accepted the new order on 12 May 1931 and formally abdicated. He retired to Switzerland, where he wrote "The Anglo-Egyptian Settlement" (1930). He died in Geneva on 19 December 1944, aged 70, 30 years to the day after the end of his reign as khedive.
His first marriage in Cairo on 19 February 1895 was to Ikbal Hanem (Crimean Peninsula, Russian Empire, 22 October 1876 – Jerusalem, 10 February 1941), and they had six children: two sons and four daughters.
His second marriage in Çubuklu, Turkey on 1 March 1910 was to Hungarian noblewoman Marianna Török de Szendrö, who took the name Zübeyde Cavidan Hanım (Philadelphia, Pennsylvania, U.S., 8 January 1874 – after 1951). They divorced in 1913 without issue. | https://en.wikipedia.org/wiki?curid=2609 |
George Abbot (bishop)
George Abbot (29 October 1562 – 4 August 1633) was an English divine who was Archbishop of Canterbury from 1611 to 1633. He also served as the fourth Chancellor of Trinity College, Dublin, from 1612 to 1633.
"Chambers Biographical Dictionary" describes him as "[a] sincere but narrow-minded Calvinist". Among his five brothers, Robert became Bishop of Salisbury and Maurice became Lord Mayor of London. He was a translator of the King James Version.
Born at Guildford in Surrey, where his father Maurice Abbot (died 1606) was a cloth-worker, he was taught at the Royal Grammar School, Guildford. According to an eighteenth-century biographical dictionary, when Abbot's mother was pregnant with him she had a dream in which she was told that if she ate a pike her child would be a son and rise to great prominence. Some time afterwards she accidentally caught a pike while fetching water from the River Wey and it "being reported to some gentlemen in the neighbourhood, they offered to stand sponsors for the child, and afterwards shewed him many marks of favour." He later studied, and then taught, under many eminent scholars, including Dr Thomas Holland, at Balliol College, Oxford; he was chosen Master of University College in 1597 and appointed Dean of Winchester in 1600. He was three times Vice-Chancellor of the University, and took a leading part in preparing the authorised version of the New Testament. In 1608, he went to Scotland with George Home, 1st Earl of Dunbar to arrange for a union between the churches of England and Scotland. He so pleased King James in this affair that he was made Bishop of Lichfield and Coventry in 1609 and was translated to the see of London a month afterwards.
On 4 March 1611, Abbot was raised to the position of Canterbury. As archbishop, he defended the apostolic succession of the Anglican archbishops and bishops and the validity of the Church's priesthood in 1614. In consequence of the Nag's Head Fable, the archbishop invited certain Roman Catholics to inspect the register in the presence of six of his own episcopal colleagues, the details of which inspection were preserved. It was agreed by all parties that:
In spite of his defence of the catholic nature of the priesthood, his Puritan instincts frequently led him not only into harsh treatment of Roman Catholics, but also into courageous resistance to the royal will, such as when he opposed the scandalous divorce suit of the Lady Frances Howard against the Earl of Essex, and again in 1618 when, at Croydon, he forbade the reading of the Declaration of Sports listing the permitted Sunday recreations.
He was naturally, therefore, a promoter of the match between the king's daughter, Princess Elizabeth, and Frederick V, Elector Palatine, and a firm opponent of the projected marriage of the new Prince of Wales (later Charles I) and the Spanish Infanta, Maria Anna. This policy brought upon the archbishop the hatred of William Laud (with whom he had previously come into collision at Oxford) and the king's court, although the King himself never forsook Abbot.
In July 1621, while he was hunting in Lord Zouch's park at Bramshill in Hampshire, a bolt from his crossbow aimed at a deer struck one of the keepers, who died within an hour. Abbot was so greatly distressed by the event that he fell into a state of settled melancholia.
His enemies maintained that the fatal issue of this accident disqualified him for his office, and argued that, though the homicide was involuntary, the sport of hunting which had led to it was one in which no clerical person could lawfully indulge.
The King had to refer the matter to a commission of ten, though he said that "an angel might have miscarried after this sort."
The commission was equally divided, and the King gave a casting vote in the Archbishop's favour, though signing also a formal pardon or dispensation. Gustavus Paine notes that Abbot was both the "only translator of the 1611 Bible and the only Archbishop of Canterbury ever to kill a human being."
After this the Archbishop seldom appeared at the Council, chiefly on account of his infirmities. In 1625 he attended the King constantly, however, in his last illness, and performed the ceremony of the coronation of King Charles I as king of England. His refusal to license the assize sermon preached by Dr Robert Sibthorp at Northampton on 22 February 1627, in which cheerful obedience was urged to the king's demand for a general loan, and the duty proclaimed of absolute non-resistance even to the most arbitrary royal commands, led Charles to deprive him of his functions as primate, putting them in commission. The need of summoning parliament, however, soon brought about a nominal restoration of the Archbishop's powers. His presence being unwelcome at court, he lived from that time in retirement, leaving Laud and his party in undisputed ascendancy.
He died at Croydon on 4 August 1633, and was buried at Guildford, his native place, where he had endowed a hospital with lands to the value of £300 a year.
Abbot was a conscientious prelate, though narrow in view and often harsh towards both separatists and Roman Catholics. He wrote a large number of works, the most interesting being his discursive "Exposition on the Prophet Jonah" (1600), which was reprinted in 1845. His "Geography, or a Brief Description of the Whole World" (1599), passed through numerous editions. The newest edition, edited by the current Master of the Abbot's Hospital, was published by Goldenford Publishers Ltd on 20 June 2011, to commemorate the 400th anniversary of his enthronement as Archbishop of Canterbury.
Guildford remembers the Archbishop with his hospital, a statue in the High Street, a pub and also a secondary school (George Abbot School) named after him. His tomb can be seen in Holy Trinity Church.
The best account of Abbot is in Samuel Rawson Gardiner's "History of England". | https://en.wikipedia.org/wiki?curid=2613 |
Adware
Adware, or advertising-supported software, is software that generates revenue for its developer by automatically generating online advertisements in the user interface of the software or on a screen presented to the user during the installation process. The software may generate two types of revenue: one for the display of the advertisement, and another on a "pay-per-click" basis if the user clicks on the advertisement. The software may implement advertisements in a variety of ways, including a static box display, a banner display, full screen, a video, pop-up ad or in some other form.
The 2003 "Microsoft Encyclopedia of Security" and some other sources use the term "adware" differently: "any software that installs itself on your system without your knowledge and displays advertisements when the user browses the Internet", i.e., a form of malware.
Some software developers offer their software free of charge, and rely on revenue from advertising to recoup their expenses and generate income. Some also offer a version of the software at a fee without advertising.
The software's functions may be designed to analyze the user's location and which Internet sites the user visits and to present advertising pertinent to the types of goods or services featured there.
In legitimate software, the advertising functions are integrated into or bundled with the program. Adware is usually seen by the developer as a way to recover development costs, and to generate revenue. In some cases, the developer may provide the software to the user free of charge or at a reduced price. The income derived from presenting advertisements to the user may allow or motivate the developer to continue to develop, maintain and upgrade the software product. The use of advertising-supported software in business is becoming increasingly popular, with a third of IT and business executives in a 2007 survey by McKinsey & Company planning to be using ad-funded software within the following two years. Advertisement-funded software is also one of the business models for open-source software.
Some software is offered in both an advertising-supported mode and a paid, advertisement-free mode. The latter is usually available by an online purchase of a license or registration code for the software that unlocks the mode, or the purchase and download of a separate version of the software.
Some software authors offer advertising-supported versions of their software as an alternative option to business organizations seeking to avoid paying large sums for software licenses, funding the development of the software with higher fees for advertisers.
Examples of advertising-supported software include Adblock Plus ("Acceptable Ads"), the Windows version of the Internet telephony application Skype, and the Amazon Kindle 3 family of e-book readers, which has versions called "Kindle with Special Offers" that display advertisements on the home page and in sleep mode in exchange for substantially lower pricing.
In 2012, Microsoft and its advertising division, Microsoft Advertising, announced that Windows 8, the major release of the Microsoft Windows operating system, would provide built-in methods for software authors to use advertising support as a business model. The idea had been considered since as early as 2005.
Support by advertising is a popular business model of software as a service (SaaS) on the Web. Notable examples include the email service Gmail and other Google Apps (now G Suite) products, and the social network Facebook. Microsoft has also adopted the advertising-supported model for many of its social software SaaS offerings. The Microsoft Office Live service was also available in an advertising-supported mode.
In the view of Federal Trade Commission staff, there appears to be general agreement that software should be considered "spyware" only if it is downloaded or installed on a computer without the user's knowledge and consent. However, unresolved issues remain concerning how, what, and when consumers need to be told about software installed on their computers for consent to be adequate. For instance, distributors often disclose in an end-user license agreement that there is additional software bundled with primary software, but some panelists and commenters did not view such disclosure as sufficient to infer consent to the installation of the bundled software.
The term "adware" is frequently used to describe a form of malware (malicious software) which presents unwanted advertisements to the user of a computer. The advertisements produced by adware are sometimes in the form of a pop-up or sometimes in an "unclosable window".
When the term is used in this way, the severity of its implication varies. While some sources rate adware only as an "irritant", others classify it as an "online threat" or even rate it as seriously as computer viruses and trojans. The precise definition of the term in this context also varies. Adware that observes the computer user's activities without their consent and reports them to the software's author is called spyware. However, most adware operates legally, and some adware manufacturers have even sued antivirus companies for blocking adware.
Programs have been developed to detect, quarantine, and remove advertisement-displaying malware, including Ad-Aware, Malwarebytes' Anti-Malware, Spyware Doctor and Spybot – Search & Destroy. In addition, almost all commercial antivirus software currently detect adware and spyware, or offer a separate detection module.
A more recent development is adware that uses stolen certificates to disable anti-malware and virus protection; technical remedies are available.
Adware has also been discovered in certain low-cost Android devices, particularly those made by small Chinese firms running on Allwinner systems-on-chip. In some cases the adware code is embedded deep in files stored on the system and boot partitions, so that removal requires extensive (and complex) modifications to the firmware.
Aedicula
In ancient Roman religion, an aedicula (plural aediculae) is a small shrine. The word "aedicula" is the diminutive of the Latin "aedes", a temple building, and can translate into English as "aedicule" or "edicule".
Many aediculae were household shrines that held small altars or statues of the Lares and Penates. The Lares were Roman deities protecting the house and the family household gods. The Penates were originally patron gods (really genii) of the storeroom, later becoming household gods guarding the entire house.
Other aediculae were small shrines within larger temples, usually set on a base, surmounted by a pediment and surrounded by columns. In ancient Roman architecture the aedicula served this representative function in public settings, installed in buildings such as the triumphal arch, the city gate, and the thermae. The Library of Celsus in Ephesus (2nd century AD) is a good example. From the 4th-century Christianization of the Roman Empire onwards, such shrines, or the framework enclosing them, are often called by the Biblical term tabernacle, which came to be extended to any elaborate framework for a niche, window or picture.
As in Classical architecture, in Gothic architecture, too, an aedicula or tabernacle frame is a structural framing device that gives importance to its contents, whether an inscribed plaque, a cult object, a bust or the like, by assuming the tectonic vocabulary of a little building that sets it apart from the wall against which it is placed. A tabernacle frame on a wall serves similar hieratic functions as a free-standing, three-dimensional architectural baldaquin or a ciborium over an altar.
In Late Gothic settings, altarpieces and devotional images were customarily crowned with gables and canopies supported by clustered-column piers, echoing in small the architecture of Gothic churches. Painted ædicules frame figures from sacred history in initial letters of illuminated manuscripts.
Classicizing architectonic structure and decor "all'antica", in the "ancient [Roman] mode", became a fashionable way to frame a painted or bas-relief portrait, or protect an expensive and precious mirror during the High Renaissance; Italian precedents were imitated in France, then in Spain, England and Germany during the later 16th century.
Aedicular door surrounds that are architecturally treated, with pilasters or columns flanking the doorway and an entablature even with a pediment over it came into use with the 16th century. In the neo-Palladian revival in Britain, architectonic aedicular or tabernacle frames, carved and gilded, are favourite schemes for English Palladian mirror frames of the late 1720s through the 1740s, by such designers as William Kent.
Similar small shrines, called "naiskoi", are found in Greek religion, but their use was strictly religious.
Aediculae exist today in Roman cemeteries as a part of funeral architecture.
Presently the most famous aedicula is situated inside the Church of the Holy Sepulchre in the city of Jerusalem.
Contemporary American architect Charles Moore (1925–1993) used the concept of aediculae in his work to create spaces within spaces and to evoke the spiritual significance of the home.
Aegean civilization
Aegean civilization is a general term for the Bronze Age civilizations of Greece around the Aegean Sea. There are three distinct but communicating and interacting geographic regions covered by this term: Crete, the Cyclades and the Greek mainland. Crete is associated with the Minoan civilization from the Early Bronze Age. The Cyclades converge with the mainland during the Early Helladic ("Minyan") period and with Crete in the Middle Minoan period. From ca. 1450 BC (Late Helladic, Late Minoan), the Greek Mycenaean civilization spreads to Crete.
Aegean farming populations brought agriculture to Western Europe before 5000 BC.
Recent DNA studies indicate that agriculture was brought to Western Europe by Aegean populations known as the "Aegean Neolithic farmers". These Neolithic groups arrived in northern France and Germany around 5000 BC. About 1,000 years later, they arrived in Britain.
When they left the Aegean, these populations quickly split into two groups with somewhat different cultures. One group went north along the Danube, while the other took a southerly route along the Mediterranean and reached Iberia. This latter group then arrived in Britain.
Prior to that, these territories were populated by hunter-gatherer cultures known as the "western hunter-gatherers", similar to the Cheddar Man.
Most of the ancestry of the British population after 4000 BC (74% on average) is attributable to the Aegean Neolithic farmers. This indicates a substantial shift in ancestry with the transition to farming.
The Chalcolithic (Copper) Age in Europe began around 3500 BC. This was also the period of Megalithic culture.
Commerce was practiced to some extent in very early times, as is proved by the distribution of Melian obsidian over all the Aegean area. We find Cretan vessels exported to Melos, Egypt and the Greek mainland. Melian vases came in their turn to Crete. After 1600 BC there is very close commerce with Egypt, and Aegean things find their way to all coasts of the Mediterranean. No traces of currency have come to light, unless certain axeheads, too slight for practical use, had that character. Standard weights have been found, as well as representations of ingots. The Aegean written documents have not yet proved (by being found outside the area) to be epistolary (letter writing) correspondence with other countries. Representations of ships are not common, but several have been observed on Aegean gems, gem-sealings, frying pans and vases. They are vessels of low free-board, with masts and oars. Familiarity with the sea is proved by the free use of marine motifs in decoration. The most detailed illustrations are to be found on the 'ship fresco' at Akrotiri on the island of Thera (Santorini) preserved by the ash fall from the volcanic eruption which destroyed the town there.
Discoveries, later in the 20th century, of sunken trading vessels such as those at Uluburun and Cape Gelidonya off the south coast of Turkey have brought forth an enormous amount of new information about that culture.
For details of monumental evidence the articles on Crete, Mycenae, Tiryns, Troad, Cyprus, etc., must be consulted. The most representative site explored up to now is Knossos (see Crete) which has yielded not only the most various but the most continuous evidence from the Neolithic age to the twilight of classical civilization. Next in importance come Hissarlik, Mycenae, Phaestus, Hagia Triada, Tiryns, Phylakope, Palaikastro and Gournia.
Mycenae and Tiryns are the two principal sites on which evidence of a prehistoric civilization was remarked long ago by the ancient Greeks.
The curtain-wall and towers of the Mycenaean citadel, its gate with heraldic lions, and the great "Treasury of Atreus" had borne silent witness for ages before Heinrich Schliemann's time; but they were supposed only to speak to the Homeric, or, at farthest, a rude Heroic beginning of purely Hellenic civilization. It was not until Schliemann exposed the contents of the graves which lay just inside the gate, that scholars recognized the advanced stage of art which prehistoric dwellers in the Mycenaean citadel had attained.
There had been, however, a good deal of other evidence available before 1876, which, had it been collated and seriously studied, might have discounted the sensation that the discovery of the citadel graves eventually made. Although it was recognized that certain tributaries, represented for example, in the XVIIIth Dynasty tomb of Rekhmara at Egyptian Thebes as bearing vases of peculiar forms, were of some Mediterranean race, neither their precise habitat nor the degree of their civilization could be determined while so few actual prehistoric remains were known in the Mediterranean lands. Nor did the Aegean objects which were lying obscurely in museums in 1870, or thereabouts, provide a sufficient test of the real basis underlying the Hellenic myths of the Argolid, the Troad and Crete, to cause these to be taken seriously. Aegean vases have been exhibited both at Sèvres and Neuchatel since about 1840, the provenance (i.e. source or origin) being in the one case Phylakope in Melos, in the other Cephalonia.
Ludwig Ross, the German archaeologist appointed Curator of the Antiquities of Athens at the time of the establishment of the Kingdom of Greece, by his explorations in the Greek islands from 1835 onwards, called attention to certain early intaglios, since known as Inselsteine; but it was not until 1878 that C. T. Newton demonstrated these to be no strayed Phoenician products. In 1866 primitive structures were discovered on the island of Therasia by quarrymen extracting pozzolana, a siliceous volcanic ash, for the Suez Canal works. When this discovery was followed up in 1870, on the neighbouring Santorini (Thera), by representatives of the French School at Athens, much pottery of a class now known immediately to precede the typical late Aegean ware, and many stone and metal objects, were found. These were dated by the geologist Ferdinand A. Fouqué, somewhat arbitrarily, to 2000 BC, by consideration of the superincumbent eruptive stratum.
Meanwhile, in 1868, tombs at Ialysus in Rhodes had yielded to Alfred Biliotti many fine painted vases of styles which were called later the third and fourth "Mycenaean"; but these, bought by John Ruskin, and presented to the British Museum, excited less attention than they deserved, being supposed to be of some local Asiatic fabric of uncertain date. Nor was a connection immediately detected between them and the objects found four years later in a tomb at Menidi in Attica and a rock-cut "bee-hive" grave near the Argive Heraeum.
Even Schliemann's first excavations at Hissarlik in the Troad did not excite surprise. But the "Burnt City" of his second stratum, revealed in 1873, with its fortifications and vases, and a hoard of gold, silver and bronze objects, which the discoverer connected with it, began to arouse a curiosity which was destined presently to spread far outside the narrow circle of scholars. As soon as Schliemann came on the Mycenae graves three years later, light poured from all sides on the prehistoric period of Greece. It was recognized that the character of both the fabric and the decoration of the Mycenaean objects was not that of any well-known art. A wide range in space was proved by the identification of the Inselsteine and the Ialysus vases with the new style, and a wide range in time by collation of the earlier Theraean and Hissarlik discoveries. A relationship between objects of art described by Homer and the Mycenaean treasure was generally allowed, and a correct opinion prevailed that, while certainly posterior, the civilization of the Iliad was reminiscent of the Mycenaean.
Schliemann got to work again at Hissarlik in 1878, and greatly increased our knowledge of the lower strata, but did not recognize the Aegean remains in his "Lydian" city of the sixth stratum. These were not to be fully revealed until Dr. Wilhelm Dorpfeld, who had become Schliemann's assistant in 1879, resumed the work at Hissarlik in 1892 after the first explorer's death. But by laying bare in 1884 the upper stratum of remains on the rock of Tiryns, Schliemann made a contribution to our knowledge of prehistoric domestic life which was amplified two years later by Christos Tsountas's discovery of the palace at Mycenae. Schliemann's work at Tiryns was not resumed till 1905, when it was proved, as had long been suspected, that an earlier palace underlies the one he had exposed.
From 1886 dates the finding of Mycenaean sepulchres outside the Argolid, from which, and from the continuation of Tsountas's exploration of the buildings and lesser graves at Mycenae, a large treasure, independent of Schliemann's princely gift, has been gathered into the National Museum at Athens. In that year tholos-tombs, most already pillaged but retaining some of their furniture, were excavated at Arkina and Eleusis in Attica, at Dimini near Volos in Thessaly, at Kampos on the west of Mount Taygetus, and at Maskarata in Cephalonia. The richest grave of all was explored at Vaphio in Laconia in 1889, and yielded, besides many gems and miscellaneous goldsmiths' work, two golden goblets chased with scenes of bull-hunting, and certain broken vases painted in a large bold style which remained an enigma until the excavation of Knossos.
In 1890 and 1893, Staes cleared out certain less rich tholos-tombs at Thoricus in Attica; and other graves, either rock-cut "bee-hives" or chambers, were found at Spata and Aphidna in Attica, in Aegina and Salamis, at the Argive Heraeum and Nauplia in the Argolid, near Thebes and Delphi, and not far from the Thessalian Larissa. During the Acropolis excavations in Athens, which terminated in 1888, many potsherds of the Mycenaean style were found; but Olympia had yielded either none, or such as had not been recognized before being thrown away, and the temple site at Delphi produced nothing distinctively Aegean (in dating). The American explorations of the Argive Heraeum, concluded in 1895, also failed to prove that site to have been important in the prehistoric time, though, as was to be expected from its neighbourhood to Mycenae itself, there were traces of occupation in the later Aegean periods.
Prehistoric research had now begun to extend beyond the Greek mainland. Certain central Aegean islands, Antiparos, Ios, Amorgos, Syros and Siphnos, were all found to be singularly rich in evidence of the Middle-Aegean period. The series of Syran-built graves, containing crouching corpses, is the best and most representative that is known in the Aegean. Melos, long marked as a source of early objects but not systematically excavated until taken in hand by the British School at Athens in 1896, yielded at Phylakope remains of all the Aegean periods, except the Neolithic.
A map of Cyprus in the later Bronze Age (such as is given by J. L. Myres and M. O. Richter in the Catalogue of the Cyprus Museum) shows more than 25 settlements in and about the Mesaorea district alone, of which one, that at Enkomi, near the site of Salamis, has yielded the richest Aegean treasure in precious metal found outside Mycenae. E. Chantre in 1894 picked up lustreless ware, like that of Hissarlik, in central Phrygia and at Pteria, and the English archaeological expeditions sent subsequently into north-western Anatolia have never failed to bring back ceramic specimens of Aegean appearance from the valleys of the Rhyndacus, Sangarius and Halys.
In Egypt in 1887, Flinders Petrie found painted sherds of Cretan style at Kahun in the Fayum, and farther up the Nile, at Tell el-Amarna, chanced on bits of no fewer than 800 Aegean vases in 1889. There have now been recognized in the collections at Cairo, Florence, London, Paris and Bologna several Egyptian imitations of the Aegean style which can be set off against the many debts which the centres of Aegean culture owed to Egypt. Two Aegean vases were found at Sidon in 1885, and many fragments of Aegean and especially Cypriot pottery have been found during recent excavations of sites in Philistia by the Palestine Fund.
Sicily, ever since P. Orsi excavated the Sicel cemetery near Lentini in 1877, has proved a mine of early remains, among which appear in regular succession Aegean fabrics and motives of decoration from the period of the second stratum at Hissarlik. Sardinia has Aegean sites, for example, at Abini near Teti; and Spain has yielded objects recognized as Aegean from tombs near Cadiz and from Saragossa.
One land, however, has eclipsed all others in the Aegean by the wealth of its remains of all the prehistoric ages: Crete; and so much so that, for the present, we must regard it as the fountainhead of Aegean civilization, and probably for long its political and social centre. The island first attracted the notice of archaeologists by the remarkable archaic Greek bronzes found in a cave on Mount Ida in 1885, as well as by epigraphic monuments such as the famous law of Gortyna (also called Gortyn). But the first undoubted Aegean remains reported from it were a few objects extracted from Cnossus by Minos Kalokhairinos of Candia in 1878. These were followed by certain discoveries made in the southern Messara plain by F. Halbherr. Unsuccessful attempts at Cnossus were made by both W. J. Stillman and H. Schliemann, and A. J. Evans, coming on the scene in 1893, travelled in succeeding years about the island picking up trifles of unconsidered evidence, which gradually convinced him that greater things would eventually be found. He obtained enough to enable him to forecast the discovery of written characters, till then not suspected in Aegean civilization. The revolution of 1897–1898 opened the door to wider knowledge, and much exploration has ensued, for which see Crete.
Thus the "Aegean Area" has now come to mean the Archipelago with Crete and Cyprus, the Hellenic peninsula with the Ionian islands, and Western Anatolia. Evidence is still wanting for the Macedonian and Thracian coasts. Offshoots are found in the western Mediterranean area, in Sicily, Italy, Sardinia and Spain, and in the eastern Mediterranean area in Syria and Egypt. Regarding the Cyrenaica, we are still insufficiently informed.
The final collapse of the Mycenaean civilisation appears to have occurred about 1000 BC. The palace at Knossos was once more destroyed, and never rebuilt or re-inhabited. Iron took the place of bronze, and Aegean art, as a living thing, ceased on the Greek mainland and in the Aegean isles including Crete, together with Aegean writing. In Cyprus, and perhaps on the south-west Anatolian coasts, there is some reason to think that the cataclysm was not complete, and Aegean art continued to languish, cut off from its fountain-head. Such artistic faculty as survived elsewhere issued in the lifeless geometric style which is reminiscent of the later Aegean, but wholly unworthy of it. Cremation took the place of burial of the dead. This great disaster, which cleared the ground for a new growth of local art, was probably due to yet another incursion of northern tribes in possession of superior iron weapons: the tribes which later Greek tradition and Homer knew as the Dorians. They crushed a civilization already hard hit; and it took two or three centuries for the artistic spirit, innate in the Aegean area, and probably preserved in suspended animation by the survival of Aegean racial elements, to blossom anew. On this conquest seems to have ensued a long period of unrest and popular movements, known to Greek tradition as the Ionian Migration and the Aeolic and Dorian "colonizations", and when once more we see the Aegean area clearly, it is dominated by Hellenes, though it has not lost all memory of its earlier culture.
Aegina
Aegina ("Aígina") is one of the Saronic Islands of Greece in the Saronic Gulf, near Athens. Tradition derives the name from Aegina, the mother of the hero Aeacus, who was born on the island and became its king.
The municipality of Aegina consists of the island of Aegina and a few offshore islets. It is part of the Islands regional unit, Attica region. The municipality is subdivided into five communities.
The capital is the town of Aegina, situated at the northwestern end of the island. Due to its proximity to Athens, it is a popular vacation place during the summer months, with quite a few Athenians owning second houses on the island.
The province of Aegina () was one of the provinces of the Attica Prefecture and was created in 1833 as part of Attica and Boeotia Prefecture. Its territory corresponded with that of the current municipalities Aegina and Agkistri. It was abolished in 2006.
Aegina is roughly triangular in shape, somewhat wider from east to west than from north to south.
An extinct volcano constitutes two-thirds of Aegina. The northern and western sides consist of stony but fertile plains, which are well cultivated and produce luxuriant crops of grain, with some cotton, vines, almonds, olives and figs, but the most characteristic crop of Aegina today (2000s) is pistachio. Economically, the sponge fisheries are of notable importance. The southern volcanic part of the island is rugged and mountainous, and largely barren. Its highest rise is the conical Mount Oros (531 m) in the south, and the Panhellenian ridge stretches northward with narrow fertile valleys on either side.
The beaches are also a popular tourist attraction. Hydrofoil ferries from Piraeus take only forty minutes to reach Aegina; the regular ferry takes about an hour, with ticket prices for adults within the 4–15 euro range. There are regular bus services from Aegina town to destinations throughout the island such as Agia Marina. Portes is a fishing village on the east coast.
Aegina, according to Herodotus, was a colony of Epidaurus, to which state it was originally subject. Its placement between Attica and the Peloponnesus made it a site of trade even earlier, and its earliest inhabitants allegedly came from Asia Minor. Minoan ceramics have been found in early contexts. The famous Aegina Treasure, now in the British Museum, is estimated to date between 1700 and 1500 BC. The discovery on the island of a number of gold ornaments belonging to the last period of Mycenaean art suggests that Mycenaean culture existed in Aegina for some generations after the Dorian conquest of Argos and Lacedaemon. It is probable that the island was not Doricised before the 9th century BC.
One of the earliest historical facts is its membership in the Amphictyony or League of Calauria, attested around the 8th century BC. This ostensibly religious league included—besides Aegina—Athens, the Minyan (Boeotian) Orchomenos, Troezen, Hermione, Nauplia, and Prasiae. It was probably an organisation of city-states that were still Mycenaean, for the purpose of suppressing piracy in the Aegean that began as a result of the decay of the naval supremacy of the Mycenaean princes.
Aegina seems to have belonged to the Eretrian league during the Lelantine War; this, perhaps, may explain the war with Samos, a major member of the rival Chalcidian league during the reign of King Amphicrates (Herod. iii. 59), i.e. not later than the earlier half of the 7th century BC.
Its early history reveals that the maritime importance of the island dates back to pre-Dorian times. It is usually stated on the authority of Ephorus, that Pheidon of Argos established a mint in Aegina, the first city-state to issue coins in Europe, the Aeginetic stater. One stamped stater (having the mark of some authority in the form of a picture or words) can be seen in the Bibliothèque Nationale of Paris. It is an electrum stater of a turtle, an animal sacred to Aphrodite, struck at Aegina that dates from 700 BC. Therefore, it is thought that the Aeginetes, within 30 or 40 years of the invention of coinage in Asia Minor by the Ionian Greeks or the Lydians (c. 630 BC), might have been the ones to introduce coinage to the Western world. The fact that the Aeginetic standard of weights and measures (developed during the mid-7th century) was one of the two standards in general use in the Greek world (the other being the Euboic-Attic) is sufficient evidence of the early commercial importance of the island. The Aeginetic weight standard of about 12.3 grams was widely adopted in the Greek world during the 7th century BC. The Aeginetic stater was divided into three drachmae of 4.1 grams of silver. Staters depicting a sea-turtle were struck up to the end of the 5th century BC. Following the end of the Peloponnesian War, 404 BC, it was replaced by the land tortoise.
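The relation between the weight figures quoted above can be sketched as a quick arithmetic check (the 12.3-gram stater and its division into three drachmae are taken from the text; the snippet itself is only an illustration):

```python
# Aeginetic weight standard, per the figures given in the text above.
STATER_GRAMS = 12.3          # weight of one Aeginetic silver stater
DRACHMAE_PER_STATER = 3      # one stater was divided into three drachmae

drachma_grams = STATER_GRAMS / DRACHMAE_PER_STATER
print(f"1 Aeginetic drachma = {drachma_grams:.1f} g of silver")  # 4.1 g
```

This reproduces the 4.1-gram drachma stated in the text as one third of the 12.3-gram stater.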
During the naval expansion of Aegina during the Archaic Period, Kydonia was an ideal maritime stop for Aegina's fleet on its way to other Mediterranean ports controlled by the emerging sea-power Aegina. During the next century Aegina was one of the three principal states trading at the emporium of Naucratis in Egypt, and it was the only Greek state near Europe that had a share in this factory. At the beginning of the 5th century BC it seems to have been an entrepôt of the Pontic grain trade, which, at a later date, became an Athenian monopoly.
Unlike the other commercial states of the 7th and 6th centuries BC, such as Corinth, Chalcis, Eretria and Miletus, Aegina did not found any colonies. The settlements to which Strabo refers (viii. 376) cannot be regarded as any real exceptions to this statement.
The known history of Aegina is almost exclusively a history of its relations with the neighbouring state of Athens, which began to compete with the thalassocracy (sea power) of Aegina about the beginning of the 6th century BC. Solon passed laws limiting Aeginetan commerce in Attica. The legendary history of these relations, as recorded by Herodotus (v. 79–89; vi. 49–51, 73, 85–94), involves critical problems of some difficulty and interest. He traces the hostility of the two states back to a dispute about the images of the goddesses Damia and Auxesia, which the Aeginetes had carried off from Epidauros, their parent state.
The Epidaurians had been accustomed to make annual offerings to the Athenian deities Athena and Erechtheus in payment for the Athenian olive-wood of which the statues were made. Upon the refusal of the Aeginetes to continue these offerings, the Athenians endeavoured to carry away the images. Their design was frustrated miraculously – according to the Aeginetan version, the statues fell upon their knees – and only a single survivor returned to Athens. There he fell victim to the fury of his comrades' widows, who pierced him with their brooch-pins. No date is assigned by Herodotus for this "old feud"; recent writers, such as J. B. Bury and R. W. Macan, suggest the period between Solon and Peisistratus. It is possible that the whole episode is mythical. A critical analysis of the narrative seems to reveal little else than a series of aetiological traditions (explanatory of cults and customs), such as of the kneeling posture of the images of Damia and Auxesia, of the use of native ware instead of Athenian in their worship, and of the change in women's dress at Athens from the Dorian to the Ionian style.
The account which Herodotus gives of the hostilities between the two states during the early years of the 5th century BC is to the following effect. The Thebans, after the defeat by Athens about 507 BC, appealed to Aegina for assistance. The Aeginetans at first contented themselves with sending the images of the Aeacidae, the tutelary heroes of their island. Subsequently, however, they contracted an alliance, and ravaged the seaboard of Attica. The Athenians were preparing to make reprisals, in spite of the advice of the Delphic oracle that they should desist from attacking Aegina for thirty years, and content themselves meanwhile with dedicating a precinct to Aeacus, when their projects were interrupted by the Spartan intrigues for the restoration of Hippias.
In 491 BC Aegina was one of the states which gave the symbols of submission ("earth and water") to Achaemenid Persia. Athens at once appealed to Sparta to punish this act of medism, and Cleomenes I, one of the Spartan kings, crossed over to the island, to arrest those who were responsible for it. His attempt was at first unsuccessful; but, after the deposition of Demaratus, he visited the island a second time, accompanied by his new colleague Leotychides, seized ten of the leading citizens and deposited them at Athens as hostages.
After the death of Cleomenes and the refusal of the Athenians to restore the hostages to Leotychides, the Aeginetes retaliated by seizing a number of Athenians at a festival at Sunium. Thereupon the Athenians concerted a plot with Nicodromus, the leader of the democratic party in the island, for the betrayal of Aegina. He was to seize the old city, and they were to come to his aid on the same day with seventy vessels. The plot failed owing to the late arrival of the Athenian force, when Nicodromus had already fled the island. An engagement followed in which the Aeginetes were defeated. Subsequently, however, they succeeded in winning a victory over the Athenian fleet.
All the incidents subsequent to the appeal of Athens to Sparta are referred expressly by Herodotus to the interval between the sending of the heralds in 491 BC and the invasion of Datis and Artaphernes in 490 BC (cf. Herod. vi. 49 with 94).
There are difficulties with this story, of which the following are the principal elements:
As the final victory of Athens over Aegina was in 458 BC, the thirty years of the oracle would carry us back to the year 488 BC as the date of the dedication of the precinct and the beginning of hostilities. This inference is supported by the date of the building of the 200 triremes "for the war against Aegina" on the advice of Themistocles, which is given in the "Constitution of Athens" as 483–482 BC. It is probable, therefore, that Herodotus is in error both in tracing back the beginning of hostilities to an alliance between Thebes and Aegina and in claiming the episode of Nicodromus occurred prior to the battle of Marathon.
Overtures were unquestionably made by Thebes for an alliance with Aegina, but they came to nothing. The refusal of Aegina was in the diplomatic guise of "sending the Aeacidae." The real occasion of the beginning of the war was the refusal of Athens to restore the hostages some twenty years later. There was but one war, and it lasted from 488 to 481 BC. That Athens had the worst of it in this war is certain. Herodotus had no Athenian victories to record after the initial success, and the fact that Themistocles was able to carry his proposal to devote the surplus funds of the state to the building of so large a fleet seems to imply that the Athenians were themselves convinced that a supreme effort was necessary.
It may be noted, in confirmation of this opinion, that the naval supremacy of Aegina is assigned by the ancient writers on chronology to precisely this period, i.e. the years 490–480 BC.
In the repulse of Xerxes I it is possible that the Aeginetans played a larger part than is conceded to them by Herodotus. The Athenian tradition, which he follows in the main, would naturally seek to obscure their services. It was to Aegina rather than Athens that the prize of valour at Salamis was awarded, and the destruction of the Persian fleet appears to have been as much the work of the Aeginetan contingent as of the Athenian (Herod. viii. 91). There are other indications, too, of the importance of the Aeginetan fleet in the Greek scheme of defence. In view of these considerations it becomes difficult to credit the number of the vessels that is assigned to them by Herodotus (30 as against 180 Athenian vessels, cf. Greek History, sect. Authorities). During the next twenty years the philo-Laconian policy of Cimon secured Aegina, as a member of the Spartan league, from attack. The change in Athenian foreign policy consequent upon the ostracism of Cimon in 461 BC resulted in what is sometimes called the First Peloponnesian War, during which most of the fighting was experienced by Corinth and Aegina. The latter state was forced to surrender to Athens after a siege and to accept the position of a subject-ally. The tribute was fixed at 30 talents.
By the terms of the Thirty Years' Peace (445 BC) Athens promised to restore to Aegina her autonomy, but the clause remained ineffective. During the first winter of the Peloponnesian War (431 BC) Athens expelled the Aeginetans and established a cleruchy in their island. The exiles were settled by Sparta in Thyreatis, on the frontiers of Laconia and Argolis. Even in their new home they were not safe from Athenian rancour. A force commanded by Nicias landed in 424 BC, and killed most of them. At the end of the Peloponnesian War Lysander restored the scattered remnants of the old inhabitants to the island, which was used by the Spartans as a base for operations against Athens during the Corinthian War. Its greatness, however, was at an end. The part which it plays henceforward is insignificant.
It would be a mistake to attribute the demise of Aegina solely to the development of the Athenian navy. It is probable that the power of Aegina had steadily declined during the twenty years after Salamis, and that it had declined absolutely, as well as relatively to that of Athens. Commerce was the source of Aegina's greatness, and her trade, which seems to have been principally with the Levant, must have suffered seriously from the war with Persia. Aegina's medism in 491 BC is to be explained by its commercial relations with the Persian Empire. It was forced into patriotism in spite of itself, and the glory won by the battle of Salamis was paid for by the loss of its trade and the decay of its marine. The completeness of the ruin of so powerful a state is explained by the economic conditions of the island, the prosperity of which was based on slave labour. It is impossible, indeed, to accept Aristotle's (cf. Athenaeus vi. 272) estimate of 470,000 as the number of the slave population; it is clear, however, that the number must have been much greater than that of the free inhabitants. In this respect the history of Aegina does but anticipate the history of Greece as a whole.
The constitutional history of Aegina is unusually simple. So long as the island retained its independence the government was an oligarchy. There is no trace of heroic monarchy and no tradition of a "tyrannis". The story of Nicodromus, while it proves the existence of a democratic party, suggests, at the same time, that it could count upon little support.
Aegina, with the rest of Greece, was dominated successively by the Macedonians (322–229 BC), the Achaeans (229–211 BC), the Aetolians (211–210 BC), Attalus of Pergamum (210–133 BC) and the Romans (after 133 BC). A sign at the Archaeological Museum of Aegina is reported to say that a Jewish community is believed to have been established in Aegina "at the end of the second and during the 3rd century AD" by Jews fleeing the barbarian invasions of the time in Greece. However, the first phases of those invasions began in the 4th century. Local Christian tradition has it that a Christian community was established there in the 1st century, having as its bishop Crispus, the ruler of the Corinthian synagogue, who became a Christian and was baptised by Paul the Apostle. There are written records of participation by later bishops of Aegina, Gabriel and Thomas, in the Councils of Constantinople in 869 and 879. The see was at first a suffragan of the metropolitan see of Corinth, but was later given the rank of archdiocese. No longer a residential bishopric, Aegina is today listed by the Catholic Church as a titular see.
Aegina belonged to the East Roman (Byzantine) Empire after the division of the Roman Empire in 395. It remained Eastern Roman during the period of crisis of the 7th–8th centuries, when most of the Balkans and the Greek mainland were overrun by Slavic invasions. Indeed, according to the "Chronicle of Monemvasia", the island served as a refuge for the Corinthians fleeing these incursions. The island flourished during the early 9th century, as evidenced by church construction activity, but suffered greatly from Arab raids originating from Crete. Various hagiographies record a large-scale raid that resulted in the flight of much of the population to the Greek mainland. During that time, some of the population sought refuge in the island's hinterland, establishing the settlement of Palaia Chora.
According to the 12th-century bishop of Athens, Michael Choniates, by his time the island had become a base for pirates. This is corroborated by Benedict of Peterborough's graphic account of Greece, as it was in 1191; he states that many of the islands were uninhabited for fear of pirates and that Aegina, along with Salamis and Makronisos, were their strongholds.
After the dissolution and partition of the Byzantine Empire by the Fourth Crusade in 1204, Aegina was accorded to the Republic of Venice. In the event, it came under the control of the Duchy of Athens. The Catalan Company seized control of Athens, and with it Aegina, in 1317, and in 1425 the island came under Venetian control, when Alioto Caopena, at that time ruler of Aegina, placed himself by treaty under the Republic's protection to escape the danger of a Turkish raid. The island must then have been fruitful, for one of the conditions by which Venice accorded him protection was that he should supply grain to Venetian colonies. He agreed to surrender the island to Venice if his family became extinct. Antonio II Acciaioli opposed the treaty, for one of his adopted daughters had married the future lord of Aegina, Antonello Caopena.
In 1451, Aegina became Venetian. The islanders welcomed Venetian rule; the claims of Antonello's uncle Arnà, who had lands in Argolis, were satisfied by a pension. A Venetian governor ("rettore") was appointed, who was dependent on the authorities of Nauplia. After Arnà's death, his son Alioto renewed his claim to the island but was told that the republic was resolved to keep it. He and his family were pensioned and one of them aided in the defence of Aegina against the Turks in 1537, was captured with his family, and died in a Turkish dungeon.
In 1463 the Turco-Venetian war began, which was destined to cost the Venetians Negroponte (Euboea), the island of Lemnos, most of the Cyclades islands, Scudra and their colonies in the Morea. Peace was concluded in 1479. Venice still retained Aegina, Lepanto (Naupactus), Nauplia, Monemvasia, Modon, Navarino, Coron, and the islands Crete, Mykonos and Tinos. Aegina remained subject to Nauplia.
Aegina obtained money for its defences by reluctantly sacrificing its cherished relic, the head of St. George, which had been carried there from Livadia by the Catalans. In 1462, the Venetian Senate ordered the relic to be removed to St. Giorgio Maggiore in Venice and on 12 November, it was transported from Aegina by Vettore Cappello, the famous Venetian commander. In return, the Senate gave the Aeginetans 100 ducats apiece towards fortifying the island.
In 1519, the government was reformed. The system of having two rectors was found to result in frequent quarrels and the republic thenceforth sent out a single official styled Bailie and Captain, assisted by two councillors, who performed the duties of camerlengo by turns. The Bailie's authority extended over the rector of Aegina, whereas Kastri (opposite the island Hydra) was granted to two families, the Palaiologoi and the Alberti.
Society at Nauplia was divided into three classes: nobles, citizens and plebeians, and it was customary for nobles alone to possess the much-coveted local offices, such as the judge of the inferior court and inspector of weights and measures. The populace now demanded its share and the home government ordered that at least one of the three inspectors should be a non-noble.
Aegina had always been exposed to the raids of corsairs and had oppressive governors during these last 30 years of Venetian rule. Venetian nobles were not willing to go to this island. In 1533, three rectors of Aegina were punished for their acts of injustice, and there is a graphic account of the reception given by the Aeginetans to the captain of Nauplia, who came to conduct an enquiry into the administration of these delinquents (vid. inscription over the entrance of St. George the Catholic in Paliachora). The rectors had spurned the islanders' ancient right to elect an islander to keep one key of the money-chest. The islanders had also threatened to leave the island en masse with the commissioner, unless the captain avenged their wrongs. To spare the economy of the community, it was ordered that appeals from the governor's decision should be made on Crete, instead of in Venice. The republic was to pay a bakshish to the Turkish governor of the Morea and to the voivode who was stationed at the frontier of Thermisi (opposite Hydra). The fortifications, too, were allowed to become decrepit and were inadequately guarded.
After the end of the Duchy of Athens and the principality of Achaia, the only Latin possessions left on the mainland of Greece were the papal city of Monemvasia, the fortress of Vonitsa, the Messenian stations Coron and Modon, Lepanto, Pteleon, Navarino, and the castles of Argos and Nauplia, to which the island of Aegina was subordinate.
In 1502–03, the new peace treaty left Venice with nothing but Cephalonia, Monemvasia and Nauplia, with their appurtenances in the Morea. Moreover, as the price of the sack of Megara, it had to endure the temporary capture of the castle of Aegina by Kemal Reis and the abduction of 2,000 inhabitants. This treaty was renewed in 1513 and 1521. All supplies of grain from Nauplia and Monemvasia had to be imported from Turkish possessions, while corsairs rendered dangerous all traffic by sea.
In 1537, Sultan Suleiman declared war upon Venice and his admiral Hayreddin Barbarossa devastated much of the Ionian Islands, and in October invaded the island of Aegina. On the fourth day Palaiochora was captured, but the Latin church of St George was spared. Hayreddin Barbarossa had the adult male population massacred and took away 6,000 surviving women and children as slaves. Then Barbarossa sailed to Naxos, whence he carried off an immense booty, compelling the Duke of Naxos to purchase his further independence by paying a tribute of 5,000 ducats.
With the peace of 1540, Venice ceded Nauplia and Monemvasia. For nearly 150 years afterwards, Venice ruled no part of the mainland of Greece except Parga and Butrinto (subordinate politically to the Ionian Islands), but it still retained its insular dominions Cyprus, Crete, Tenos and six Ionian islands.
The island was attacked and left desolate by Francesco Morosini during the Cretan War (1654).
In 1684, the beginning of the Morean War between Venice and the Ottoman Empire resulted in the temporary reconquest of a large part of the country by the Republic. In 1687 the Venetian army arrived in Piraeus and captured Attica. The number of the Athenians at that time exceeded 6,000, excluding the Albanians from the villages of Attica, whilst in 1674 the population of Aegina did not seem to exceed 3,000 inhabitants, two thirds of whom were women. The Aeginetans had been reduced to poverty to pay their taxes. The most significant plague epidemic began in Attica in 1688, causing a massive migration of Athenians toward the south; most of them settled in Aegina. In 1693 Morosini resumed command, but his only acts were to refortify the castle of Aegina, which he had demolished during the Cretan war in 1655, the cost of upkeep being paid by the Athenians as long as the war lasted, and to place it and Salamis under Malipiero as governor. This caused the Athenians to send him a request for the renewal of Venetian protection and an offer of an annual tribute. He died in 1694 and Zeno was appointed in his place.
In 1699, thanks to English mediation, the war ended with the Peace of Karlowitz, by which Venice retained possession of the seven Ionian islands as well as Butrinto and Parga, the Morea, Spinalonga and Suda, Tenos, Santa Maura and Aegina, and ceased to pay tribute for Zante, but which restored Lepanto to the Ottoman sultan. Since the peace, Cerigo and Aegina were administratively united with the Morea, which not only paid all the expenses of administration but furnished a substantial balance for the naval defence of Venice, in which it was directly interested.
During the early part of the Ottoman–Venetian War of 1714–1718, the Ottoman fleet commanded by Canum Hoca captured Aegina. Ottoman rule in Aegina and the Morea was resumed and confirmed by the Treaty of Passarowitz, and the Ottomans retained control of the island, with the exception of a brief Russian occupation during the Orlov Revolt (early 1770s), until the beginning of the Greek War of Independence in 1821.
During the Greek War of Independence, Aegina became an administrative centre for the Greek revolutionary authorities, and Ioannis Kapodistrias was briefly established here.
In 1896, the physician Nikolaos Peroglou introduced the systematic cultivation of pistachios, which soon became popular among the inhabitants of the island. By 1950, pistachio cultivation had significantly displaced the rest of the island's agriculture, owing to its high profitability but also to the phylloxera that threatened the vineyards at that time. As a result, in the early 1960s, the first pistachio-peeling factory was established in the Plakakia area by Grigorios Konidaris. The quality of "Fistiki Aeginis" (Aegina pistachios), a name registered as a Protected Designation of Origin (PDO) in 1996, is considered internationally excellent and superior to several foreign varieties, due to the special climatic conditions of the island (drought) as well as its volcanic soil. Pistachios have made Aegina famous all over the world. Today, half of the pistachio growers are members of the Agricultural Cooperative of Aegina's Pistachio Producers. It is estimated that pistachio cultivation covers 29,000 acres of the island, while total production reaches 2,700 tons per year. In recent years, the Pistachio Festival, known as "Fistiki Fest", has been organized every year in mid-September.
In Greek mythology, Aegina was a daughter of the river god Asopus and the nymph Metope. She bore at least two children: Menoetius by Actor, and Aeacus by the god Zeus. When Zeus abducted Aegina, he took her to Oenone, an island close to Attica. Here, Aegina gave birth to Aeacus, who would later become king of Oenone; thenceforth, the island's name was Aegina.
Aegina was the gathering place of the Myrmidons; in Aegina they gathered and trained. Zeus needed an elite army and at first thought that Aegina, which at the time did not have any inhabitants, was a good place. So he changed some ants ("Myrmigia") into warriors who had six hands and wore black armour. Later, the Myrmidons, commanded by Achilles, were known as the most fearsome fighting unit in Greece.
Aegis
The aegis ("aigis"), as stated in the "Iliad", is carried by Athena and Zeus, but its nature is uncertain. It has been interpreted as an animal skin or a shield, sometimes bearing the head of a Gorgon. There may be a connection with a deity named Aex or Aix, a daughter of Helios and a nurse of Zeus or alternatively a mistress of Zeus (Hyginus, "Astronomica" 2. 13). The aegis of Athena is referred to in several places in the "Iliad": it "produced a sound as from a myriad roaring dragons" ("Iliad", 4.17) and was borne by Athena in battle: "and among them went bright-eyed Athene, holding the precious aegis which is ageless and immortal: a hundred tassels of pure gold hang fluttering from it, tight-woven each of them, and each the worth of a hundred oxen."
The modern concept of doing something "under someone's aegis" means doing it under the protection of a powerful, knowledgeable, or benevolent source. The word "aegis", identified with protection by a strong force, has its roots in Greek mythology and was adopted by the Romans; there are parallels in Norse mythology and in Egyptian mythology as well, where the Greek word "aegis" is applied by extension.
Virgil imagines the Cyclopes in Hephaestus' forge, who "busily burnished the aegis Athena wears in her angry moods—a fearsome thing with a surface of gold like scaly snake-skin, and the linked serpents and the Gorgon herself upon the goddess's breast—a severed head rolling its eyes", furnished with golden tassels and bearing the "Gorgoneion" (Medusa's head) in the central boss. Some of the Attic vase-painters retained an archaic tradition that the tassels had originally been serpents in their representations of the aegis. When the Olympian deities overtook the older deities of Greece and she was born of Metis (inside Zeus who had swallowed the goddess) and "re-born" through the head of Zeus fully clothed, Athena already wore her typical garments.
When the Olympian shakes the aegis, Mount Ida is wrapped in clouds, the thunder rolls and men are struck down with fear. "Aegis-bearing Zeus", as he is called in the "Iliad", sometimes lends the fearsome aegis to Athena. In the "Iliad", when Zeus sends Apollo to revive the wounded Hector, Apollo, holding the aegis, charges the Achaeans, pushing them back to their ships drawn up on the shore. According to Edith Hamilton's "Mythology: Timeless Tales of Gods and Heroes", the aegis is the breastplate of Zeus, and was "awful to behold". However, Zeus is normally portrayed in classical sculpture holding a thunderbolt or lightning, bearing neither a shield nor a breastplate.
Classical Greece interpreted the Homeric aegis usually as a cover of some kind borne by Athena. It was supposed by Euripides ("Ion", 995) that the aegis borne by Athena was the skin of the slain Gorgon, yet the usual understanding is that the "Gorgoneion" was "added" to the aegis, a votive offering from a grateful Perseus.
In a similar interpretation, Aex, a daughter of Helios, represented as a great fire-breathing chthonic serpent similar to the Chimera, was slain and flayed by Athena, who afterwards wore its skin, the aegis, as a cuirass (Diodorus Siculus iii. 70), or as a chlamys. The Douris cup shows that the aegis was represented exactly as the skin of the great serpent, with its scales clearly delineated.
John Tzetzes says that aegis was the skin of the monstrous giant Pallas whom Athena overcame and whose name she attached to her own.
In a late rendering by Gaius Julius Hyginus ("Poetical Astronomy" ii. 13), Zeus is said to have used the skin of a pet goat owned by his nurse Amalthea ("aigis" "goat-skin") which suckled him in Crete, as a shield when he went forth to do battle against the Titans.
The aegis appears in works of art sometimes as an animal's skin thrown over Athena's shoulders and arms, occasionally with a border of snakes, usually also bearing the Gorgon head, the "gorgoneion". In some pottery it appears as a tasselled cover over Athena's dress. It is sometimes represented on the statues of Roman emperors, heroes, and warriors, and on cameos and vases. A vestige of that appears in a portrait of Alexander the Great in a fresco from Pompeii dated to the first century BC, which shows the image of the head of a woman on his armor that resembles the Gorgon.
Herodotus thought he had identified the source of the ægis in ancient Libya, which was always a distant territory of ancient magic for the Greeks. "Athene's garments and ægis were borrowed by the Greeks from the Libyan women, who are dressed in exactly the same way, except that their leather garments are fringed with thongs, not serpents."
Robert Graves in "The Greek Myths" (1955) asserts that the ægis in its Libyan sense had been a shamanic pouch containing various ritual objects, bearing the device of a monstrous serpent-haired visage with tusk-like teeth and a protruding tongue which was meant to frighten away the uninitiated. In this context, Graves identifies the aegis as clearly belonging first to Athena.
One current interpretation is that the Hittite sacral hieratic hunting bag ("kursas"), a rough and shaggy goatskin that has been firmly established in literary texts and iconography by H.G. Güterbock, was a source of the aegis.
The Greek "aigis" has many meanings, including:
The original meaning may have been the first, and "Zeus Aigiokhos" = "Zeus who holds the aegis" may have originally meant "Sky/Heaven, who holds the thunderstorm". The transition to the meaning "shield" or "goatskin" may have come by folk etymology among a people familiar with draping an animal skin over the left arm as a shield.
Aelia Capitolina
Aelia Capitolina was a Roman colony built under Emperor Hadrian on the site of Jewish Jerusalem, which had been almost totally razed after the siege of 70 CE; the colony's foundation was one apparent cause of the Bar Kokhba revolt of 132–136 CE. "Aelia Capitolina" remained the official name of pagan Jerusalem until the rise of Christianity under Emperor Constantine I, who restored the name Jerusalem in 324. The first part of the Roman pagan name was still in use in Arabic in 638 CE, when the Muslim armies conquered the city, which they called 'إلياء', Iliyā'.
"Aelia" came from Hadrian's "nomen gentile", "Aelius", while "Capitolina" meant that the new city was dedicated to "Jupiter Capitolinus", to whom a temple was built on the Temple Mount. Under the rule of Antiochus IV Epiphanes, before King Herod's reign, the site of the Second Temple at the Temple Mount had already been reconsecrated to Zeus. This led to the Maccabean Revolt, which resulted in the Jewish-Roman alliance.
The Latin name "Aelia" is the source of the much later Arabic term Iliyā' (إلياء), a 7th-century Islamic name for Jerusalem.
Jerusalem, once heavily rebuilt by Herod, was still in ruins following the decisive siege of the city, as part of the First Jewish–Roman War in AD 70. Josephus—a contemporary historian and proponent of the Judean cause who was born in Jerusalem and fought the Romans in that war—reports that "Jerusalem ... was so thoroughly razed to the ground by those that demolished it to its foundations, that nothing was left that could ever persuade visitors that it had once been a place of habitation." The Talmud (Makkot) tells of Rabbi Akiva and several other sages visiting the ruins of Jerusalem. His colleagues were aggrieved at seeing a fox scuttling out of what had been the Temple's Holy of Holies as an indication of the desolation, while Akiva laughed, telling them through what many believe to be divine inspiration that one day the Temple will be rebuilt.
According to Eusebius, the Jerusalem church was scattered twice, in 70 and 135, with the difference that from 70–130 the bishops of Jerusalem have evidently Jewish names, whereas after 135 the bishops of Aelia Capitolina appear to be Greeks. Eusebius' evidence for continuation of a church at Aelia Capitolina is confirmed by the Bordeaux Pilgrim.
According to rabbinic sources, when the Roman emperor Hadrian vowed to rebuild Jerusalem from the wreckage in AD 130, he considered reconstructing Jerusalem as a gift to the Jewish people. The Jews awaited with hope, but after Hadrian visited Jerusalem, he was discouraged from doing so by a Samaritan. He then decided to rebuild the city as a Roman colony, which would be inhabited by his legionaries. Hadrian's new city was to be dedicated to himself and certain Roman gods, in particular Jupiter.
There is controversy as to whether Hadrian's anti-Jewish decrees followed the Jewish Bar Kokhba revolt or preceded it and were the cause of the revolt. The older view is that the Bar Kokhba revolt, which took the Romans three years to suppress, enraged Hadrian, and he became determined to erase Judaism from the province. Circumcision was forbidden and Jews were expelled from the city. Hadrian renamed Iudaea Province to "Syria Palaestina", dispensing with the name of Judaea.
Jerusalem was renamed "Aelia Capitolina" and rebuilt in the style of a typical Roman town. Jews were prohibited from entering the city on pain of death, except for one day each year, during the holiday of Tisha B'Av. Taken together, these measures (which also affected Jewish Christians) essentially secularized the city. The ban was maintained until the 7th century, though Christians would soon be granted an exemption: during the 4th century, the Roman Emperor Constantine I ordered the construction of Christian holy sites in the city, including the Church of the Holy Sepulchre. Burial remains from the Byzantine period are exclusively Christian, suggesting that the population of Jerusalem in Byzantine times probably consisted only of Christians.
In the fifth century, the eastern continuation of the Roman Empire, ruled from Constantinople, maintained control of the city. In the early seventh century, within the span of a few decades, the city shifted from Byzantine to Persian rule, then back to Roman-Byzantine dominion. Following Sassanid Khosrau II's early seventh-century push through Syria, his generals Shahrbaraz and Shahin attacked Jerusalem, aided by the Jews of Palaestina Prima, who had risen up against the Byzantines. In the Siege of Jerusalem of 614 AD, after 21 days of relentless siege warfare, Jerusalem was captured. Byzantine chronicles relate that the Sassanids and Jews slaughtered tens of thousands of Christians in the city, many at the Mamilla Pool, and destroyed their monuments and churches, including the Church of the Holy Sepulchre. The conquered city would remain in Sassanid hands for some fifteen years until the Byzantine Emperor Heraclius reconquered it in 629.
Byzantine Jerusalem was conquered by the Arab armies of Umar ibn al-Khattab in AD 638, which resulted in the removal of the restrictions on Jews living in the city. Among Muslims of Islam's earliest era it was referred to as "Madinat bayt al-Maqdis", 'City of the Temple', a name restricted to the Temple Mount. The rest of the city was called "Iliya", reflecting the Roman name Aelia Capitolina.
The city was without walls, protected by a light garrison of the Tenth Legion, during the Late Roman Period. The detachment at Jerusalem, which apparently encamped all over the city's western hill, was responsible for preventing Jews from returning to the city. Roman enforcement of this prohibition continued through the 4th century.
The urban plan of Aelia Capitolina was that of a typical Roman town wherein main thoroughfares crisscrossed the urban grid lengthwise and widthwise. The urban grid was based on the usual central north–south road ("cardo") and central east–west route ("decumanus"). However, as the main cardo ran up the western hill, and the Temple Mount blocked the eastward route of the main decumanus, a second pair of main roads was added; the secondary cardo ran down the Tyropoeon Valley, and the secondary decumanus ran just to the north of the Temple Mount. The main Hadrianic cardo terminated not far beyond its junction with the decumanus, where it reached the Roman garrison's encampment, but in the Byzantine era it was extended over the former camp to reach the southern walls of the city.
The two cardines converged near the "Damascus Gate", and a semicircular piazza covered the remaining space; in the piazza a columnar monument was constructed, hence the Arabic name for the gate - "Bab el-Amud" ("Gate of the Column"). Tetrapylones were constructed at the other junctions between the main roads.
This street pattern has been preserved in the Old City of Jerusalem to the present. The original thoroughfare, flanked by rows of columns and shops, was about 73 feet (22 meters) wide, but buildings have extended onto the streets over the centuries, and the modern lanes replacing the ancient grid are now quite narrow. The substantial remains of the western cardo have now been exposed to view near the junction with Suq el-Bazaar, and remnants of one of the tetrapylones are preserved in the 19th century Franciscan chapel at the junction of the Via Dolorosa and Suq Khan ez-Zeit.
As was standard for new Roman cities, Hadrian placed the city's main forum at the junction of the main cardo and decumanus, now the location for the (smaller) Muristan. Adjacent to the forum, at the junction of the same cardo and the other decumanus, Hadrian built a large temple to Venus, which later became the Church of the Holy Sepulchre; despite 11th-century destruction, which resulted in the modern church having a much smaller footprint, several boundary walls of Hadrian's temple have been found among the archaeological remains beneath the church. The "Struthion Pool" lay in the path of the northern decumanus, so Hadrian placed vaulting over it, added a large pavement on top, and turned it into a secondary forum; the pavement can still be seen under the Convent of the Sisters of Zion.
Near the Struthion Pool, Hadrian built a triple-arched gateway as an entrance to the eastern forum of Aelia Capitolina. Traditionally, this was thought to be the gate of Herod's Antonia Fortress, which itself was alleged to be the location of Jesus' trial and Pontius Pilate's "Ecce homo" speech as described in . This was due in part to the 1864 discovery of a game etched on a flagstone of the pool. According to the nuns of the convent, the game was played by Roman soldiers and ended in the execution of a 'monk king'. It is possible that following its destruction, the Antonia Fortress's pavement tiles were brought to the cistern of Hadrian's plaza.
When later constructions narrowed the "Via Dolorosa", the two arches on either side of the central arch became incorporated into a succession of more modern buildings. The Basilica of Ecce Homo now preserves the northern arch. The southern arch was incorporated into a monastery for Uzbek dervishes belonging to the Order of the Golden Chain in the 16th century, but these were demolished in the 19th century in order to found a mosque.
Footnotes
Citations
Agarose
Agarose is a polysaccharide, generally extracted from certain red seaweed. It is a linear polymer made up of the repeating unit of agarobiose, which is a disaccharide made up of D-galactose and 3,6-anhydro-L-galactopyranose. Agarose is one of the two principal components of agar, and is purified from agar by removing agar's other component, agaropectin.
Agarose is frequently used in molecular biology for the separation of large molecules, especially DNA, by electrophoresis. Slabs of agarose gels (usually 0.7 - 2%) for electrophoresis are readily prepared by pouring the warm, liquid solution into a mold. A wide range of different agaroses of varying molecular weights and properties are commercially available for this purpose. Agarose may also be formed into beads and used in a number of chromatographic methods for protein purification.
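Gel concentrations such as "0.7 - 2%" follow the % w/v convention (grams of agarose per 100 mL of buffer). As a minimal sketch of that arithmetic (the function name and the TAE-buffer example are illustrative, not from the source):

```python
def agarose_mass_g(percent_w_v: float, volume_ml: float) -> float:
    """Grams of agarose powder needed for a gel of the given
    % w/v concentration, i.e. grams per 100 mL of buffer."""
    if percent_w_v <= 0 or volume_ml <= 0:
        raise ValueError("concentration and volume must be positive")
    return percent_w_v * volume_ml / 100.0

# A typical 1% mini-gel cast in 50 mL of TAE buffer:
print(agarose_mass_g(1.0, 50.0))  # 0.5 (grams)
```

Higher percentages within the 0.7-2% range give denser gels, which resolve smaller DNA fragments.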
Agarose is a linear polymer with a molecular weight of about 120,000, consisting of alternating D-galactose and 3,6-anhydro-L-galactopyranose linked by α-(1→3) and β-(1→4) glycosidic bonds. The 3,6-anhydro-L-galactopyranose is an L-galactose with an anhydro bridge between the 3 and 6 positions, although some L-galactose units in the polymer may not contain the bridge. Some D-galactose and L-galactose units can be methylated, and pyruvate and sulfate are also found in small quantities.
Each agarose chain contains ~800 galactose units, and the agarose polymer chains form helical fibres that aggregate into a supercoiled structure with a radius of 20–30 nm. The fibres are quasi-rigid and have a wide range of lengths depending on the agarose concentration. When solidified, the fibres form a three-dimensional mesh of channels with diameters ranging from 50 nm to >200 nm depending on the concentration of agarose used; higher concentrations yield smaller average pore diameters. The 3-D structure is held together by hydrogen bonds and can therefore be disrupted by heating back to a liquid state.
Agarose is available as a white powder which dissolves in near-boiling water and forms a gel when it cools. Agarose exhibits the phenomenon of thermal hysteresis in its liquid-to-gel transition, i.e. it gels and melts at different temperatures, and these temperatures vary depending on the type of agarose. Standard agaroses derived from "Gelidium" have a lower gelling temperature than those derived from "Gracilaria", whose higher content of methoxy substituents raises the gelling temperature. The melting and gelling temperatures may also depend on the concentration of the gel, particularly at low gel concentrations of less than 1%, and are therefore given at a specified agarose concentration.
Natural agarose contains uncharged methyl groups, and the extent of natural methylation is directly proportional to the gelling temperature. Synthetic methylation, however, has the reverse effect: increased methylation lowers the gelling temperature. A variety of chemically modified agaroses with different melting and gelling temperatures are available.
The agarose in the gel forms a meshwork that contains pores, and the size of the pores depends on the concentration of agarose added. On standing the agarose gels are prone to syneresis (extrusion of water through the gel surface), but the process is slow enough to not interfere with the use of the gel.
Agarose gel can have high gel strength at low concentration, making it suitable as an anti-convection medium for gel electrophoresis. Agarose gels as dilute as 0.15% can form slabs for gel electrophoresis. The agarose polymer contains charged groups, in particular pyruvate and sulfate. These negatively charged groups can slow down the movement of DNA molecules in a process called electroendosmosis (EEO), and low EEO agarose is therefore generally preferred for use in agarose gel electrophoresis of nucleic acids. Zero EEO agaroses are also available but these may be undesirable for some applications as they may be made by adding positively charged groups that can affect subsequent enzyme reactions. Electroendosmosis is a reason agarose is used preferentially over agar as agaropectin in agar contains a significant amount of negatively charged sulphate and carboxyl groups. The removal of agaropectin in agarose substantially reduces the EEO, as well as reducing the non-specific adsorption of biomolecules to the gel matrix. However, for some applications such as the electrophoresis of serum protein, a high EEO may be desirable, and agaropectin may be added in the gel used.
The melting and gelling temperatures of agarose can be modified by chemical modification, most commonly by hydroxyethylation, which reduces the number of intrastrand hydrogen bonds, resulting in lower melting and setting temperatures than standard agaroses. The exact temperature is determined by the degree of substitution, and many available low-melting-point (LMP) agaroses remain fluid at relatively low temperatures. This property allows enzymatic manipulations to be carried out directly after DNA gel electrophoresis by adding slices of melted gel containing the DNA fragment of interest to a reaction mixture. LMP agarose contains fewer sulphates, which can affect some enzymatic reactions, and is therefore preferred for some applications. Hydroxyethylation may reduce the pore size by reducing the packing density of the agarose bundles, so LMP gels can also affect the timing and separation during electrophoresis. Ultra-low melting or gelling temperature agaroses gel only at much lower temperatures.
Agarose is a preferred matrix for work with proteins and nucleic acids as it has a broad range of physical, chemical and thermal stability, and its lower degree of chemical complexity also makes it less likely to interact with biomolecules. Agarose is most commonly used as the medium for analytical-scale electrophoretic separation in agarose gel electrophoresis. Gels made from purified agarose have a relatively large pore size, making them useful for the separation of large molecules, such as proteins and protein complexes >200 kilodaltons, as well as DNA fragments >100 basepairs. Agarose is also used widely for a number of other applications, for example immunodiffusion and immunoelectrophoresis, as the agarose fibres function as anchors for immunocomplexes.
Agarose gel electrophoresis is the routine method for resolving DNA in the laboratory. Agarose gels have lower resolving power for DNA than acrylamide gels, but they have a greater range of separation, and are therefore usually used for DNA fragments with lengths of 50–20,000 bp (base pairs), although resolution of over 6 Mb is possible with pulsed field gel electrophoresis (PFGE). It can also be used to separate large protein molecules, and it is the preferred matrix for the gel electrophoresis of particles with effective radii larger than 5–10 nm.
The pore size of the gel affects the size of the DNA that can be sieved. The lower the concentration of the gel, the larger the pore size, and the larger the DNA that can be sieved. However, low-concentration gels (0.1–0.2%) are fragile and therefore hard to handle, and the electrophoresis of large DNA molecules can take several days. The limit of resolution for standard agarose gel electrophoresis is around 750 kb. This limit can be overcome by PFGE, where alternating orthogonal electric fields are applied to the gel. The DNA fragments reorient themselves when the applied field switches direction, but larger DNA molecules take longer to realign than smaller ones when the field is altered, so the DNA can be fractionated according to size.
Agarose gels are cast in a mold, and when set, usually run horizontally submerged in a buffer solution. Tris-acetate-EDTA and Tris-Borate-EDTA buffers are commonly used, but other buffers such as Tris-phosphate, barbituric acid-sodium barbiturate or Tris-barbiturate buffers may be used in other applications. The DNA is normally visualized by staining with ethidium bromide and then viewed under a UV light, but other methods of staining are available, such as SYBR Green, GelRed, methylene blue, and crystal violet. If the separated DNA fragments are needed for further downstream experiment, they can be cut out from the gel in slices for further manipulation.
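In routine work with such gels, the size of an unknown band is commonly estimated from a ladder of fragments of known size, exploiting the approximately linear relation between migration distance and the logarithm of fragment size within the gel's resolving range. The sketch below illustrates this with invented migration distances; the ladder values are placeholders, not measurements from any particular gel.

```python
import math

def fit_loglinear(ladder):
    """Least-squares fit of log10(size) = a + b * distance for (distance, size) pairs."""
    xs = [d for d, _ in ladder]
    ys = [math.log10(s) for _, s in ladder]
    n = len(ladder)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def estimate_size(distance, a, b):
    """Convert a band's migration distance back to an estimated fragment size (bp)."""
    return 10 ** (a + b * distance)

# Hypothetical ladder: (migration distance in mm, fragment size in bp)
ladder = [(10.0, 10000), (20.0, 3000), (30.0, 1000), (40.0, 300)]
a, b = fit_loglinear(ladder)
print(round(estimate_size(25.0, a, b)))  # size estimate for a band at 25 mm
```

Larger fragments migrate less far, so the fitted slope is negative; the semi-log fit only holds within the resolving range of the chosen gel concentration.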
Agarose gel matrix is often used for protein purification, for example in column-based preparative-scale separation as in gel filtration chromatography, affinity chromatography and ion exchange chromatography. It is, however, not used as a continuous gel; rather, it is formed into porous beads or resins of varying fineness. The beads are highly porous so that protein may flow freely through them. These agarose-based beads are generally soft and easily crushed, so they should be used under gravity-flow, low-speed centrifugation, or low-pressure procedures. The strength of the resins can be improved by increased cross-linking and chemical hardening of the agarose resins; however, such changes may also result in a lower binding capacity for protein in some separation procedures such as affinity chromatography.
Agarose is a useful material for chromatography because it does not adsorb biomolecules to any significant extent, has good flow properties, and can tolerate extremes of pH and ionic strength as well as high concentrations of denaturants such as 8M urea or 6M guanidine HCl. Examples of agarose-based matrices for gel filtration chromatography are Sepharose and WorkBeads 40 SEC (cross-linked beaded agarose), "Praesto" and Superose (highly cross-linked beaded agaroses), and Superdex (dextran covalently linked to agarose).
For affinity chromatography, beaded agarose is the most commonly used matrix resin for the attachment of the ligands that bind protein. The ligands are linked covalently through a spacer to activated hydroxyl groups of the agarose bead polymer. Proteins of interest can then be selectively bound to the ligands to separate them from other proteins, after which they can be eluted. The agarose beads used are typically of 4% and 6% densities with a high binding capacity for protein.
Agarose plates may sometimes be used instead of agar for culturing organisms, as agar may contain impurities that can affect the growth of the organism or some downstream procedures such as polymerase chain reaction (PCR). Agarose is also harder than agar and may therefore be preferable where greater gel strength is necessary, and its lower gelling temperature may avoid thermal shock to the organism when the cells are suspended in liquid before gelling. It may be used for the culture of strict autotrophic bacteria, plant protoplasts, "Caenorhabditis elegans", other organisms and various cell lines.
Agarose is often used as a support for the three-dimensional culture of human and animal cells. Because agarose forms a non-cytotoxic hydrogel, it can be utilized to reproduce the natural environment of cells in the human body, the extracellular matrix. However, agarose forms a stiff, inert hydrogel that does not carry any biological information, so human and animal cells cannot adhere to the polysaccharide. Because of these specific properties, agarose hydrogel mimics the natural environment of cartilage cells and has been shown to support the differentiation of chondrocytes into cartilage. In order to modify the mechanical properties of agarose to reproduce the natural environment of other human cells, agarose can be chemically modified through the precise oxidation of the primary alcohol of D-galactose into a carboxylic acid. This chemical modification provides a novel class of materials named carboxylated agarose. By controlling the number of carboxylated D-galactose units on the polysaccharide backbone, the mechanical properties of the resulting hydrogel can be precisely controlled. These carboxylated agarose hydrogels can then be covalently bonded to peptides to form hydrogels on which cells can adhere. Such carboxylated agarose hydrogels have been shown to direct the organization of human endothelial cells into polarized lumens.
Mixing of fully carboxylated agarose with natural agarose can be used to make hydrogels that span a whole range of mechanical properties.
Agarose is sometimes used instead of agar to measure microorganism motility and mobility. Motile species will be able to migrate, albeit slowly, throughout the porous gel and infiltration rates can then be visualized. The gel's porosity is directly related to the concentration of agar or agarose in the medium, so different concentration gels may be used to assess a cell's swimming, swarming, gliding and twitching motility. Under-agarose cell migration assay may be used to measure chemotaxis and chemokinesis. A layer of agarose gel is placed between a cell population and a chemoattractant. As a concentration gradient develops from the diffusion of the chemoattractant into the gel, various cell populations requiring different stimulation levels to migrate can then be visualized over time using microphotography as they tunnel upward through the gel against gravity along the gradient. | https://en.wikipedia.org/wiki?curid=2635 |
Atomic absorption spectroscopy
Atomic absorption spectroscopy (AAS) is a spectroanalytical procedure for the quantitative determination of chemical elements using the absorption of optical radiation (light) by free atoms in the gaseous state. Atomic absorption spectroscopy is based on the absorption of light by free atoms of the element of interest.
In analytical chemistry the technique is used for determining the concentration of a particular element (the analyte) in a sample to be analyzed. AAS can be used to determine over 70 different elements in solution, or directly in solid samples via electrothermal vaporization, and is used in pharmacology, biophysics,
archaeology and toxicology research.
Atomic emission spectroscopy was first used as an analytical technique, and the underlying principles were established in the second half of the 19th century by Robert Wilhelm Bunsen and Gustav Robert Kirchhoff, both professors at the University of Heidelberg, Germany.
The modern form of AAS was largely developed during the 1950s by a team of Australian chemists. They were led by Sir Alan Walsh at the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Division of Chemical Physics, in Melbourne, Australia.
Atomic absorption spectrometry has many uses in different areas of chemistry, such as the clinical analysis of metals in biological fluids and tissues such as whole blood, plasma, urine, saliva, brain tissue, liver, hair, and muscle tissue.
Atomic absorption spectrometry can be used in both qualitative and quantitative analysis.
The technique makes use of the atomic absorption spectrum of a sample in order to assess the concentration of specific analytes within it. It requires standards with known analyte content to establish the relation between the measured absorbance and the analyte concentration, and therefore relies on the Beer-Lambert law.
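As a minimal illustration of how quantification via the Beer-Lambert law works in practice, the sketch below fits a through-origin linear calibration from standards of known concentration and converts a measured absorbance into a concentration. All numbers are invented for illustration and do not correspond to any particular element or instrument.

```python
def fit_calibration(standards):
    """Least-squares slope through the origin for (concentration, absorbance) pairs.

    Assumes the Beer-Lambert relation A = k * c holds over the calibration range
    (fixed path length, linear response)."""
    num = sum(c * a for c, a in standards)
    den = sum(c * c for c, _ in standards)
    return num / den  # absorbance per unit concentration

def concentration(absorbance, slope):
    """Invert the calibration: c = A / k."""
    return absorbance / slope

# Hypothetical standards: (concentration in mg/L, measured absorbance)
standards = [(1.0, 0.11), (2.0, 0.20), (4.0, 0.41)]
k = fit_calibration(standards)
print(round(concentration(0.30, k), 2))  # concentration of a sample with A = 0.30
```

In real instruments the calibration often includes an intercept and may curve at high absorbance, which is why standards bracketing the expected sample concentration are used.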
In order to analyze a sample for its atomic constituents, it has to be atomized. The atomizers most commonly used nowadays are flames and electrothermal (graphite tube) atomizers. The atoms should then be irradiated by optical radiation, and the radiation source could be an element-specific line radiation source or a continuum radiation source. The radiation then passes through a monochromator in order to separate the element-specific radiation from any other radiation emitted by the radiation source, which is finally measured by a detector.
Other atomizers, such as glow-discharge atomization, hydride atomization, or cold-vapor atomization, might be used for special purposes.
The oldest and most commonly used atomizers in AAS are flames, principally the air-acetylene flame with a temperature of about 2300 °C and the nitrous oxide (N2O)-acetylene flame with a temperature of about 2700 °C. The latter flame also offers a more reducing environment, being ideally suited for analytes with high affinity to oxygen.
Liquid or dissolved samples are typically used with flame atomizers. The sample solution is aspirated by a pneumatic analytical nebulizer, transformed into an aerosol, which is introduced into a spray chamber, where it is mixed with the flame gases and conditioned in a way that only the finest aerosol droplets (< 10 μm) enter the flame. This conditioning process reduces interference, but only about 5% of the aerosolized solution reaches the flame because of it.
On top of the spray chamber is a burner head that produces a flame that is laterally long (usually 5–10 cm) and only a few mm deep. The radiation beam passes through this flame at its longest axis, and the flame gas flow-rates may be adjusted to produce the highest concentration of free atoms. The burner height may also be adjusted, so that the radiation beam passes through the zone of highest atom cloud density in the flame, resulting in the highest sensitivity.
The processes in a flame include the stages of desolvation (drying), in which the solvent is evaporated and the dry sample nano-particles remain; vaporization (transfer to the gaseous phase), in which the solid particles are converted into gaseous molecules; atomization, in which the molecules are dissociated into free atoms; and ionization, where (depending on the ionization potential of the analyte atoms and the energy available in a particular flame) atoms may be in part converted to gaseous ions.
Each of these stages includes the risk of interference in case the degree of phase transfer is different for the analyte in the calibration standard and in the sample. Ionization is generally undesirable, as it reduces the number of atoms that are available for measurement, i.e., the sensitivity.
In flame AAS a steady-state signal is generated during the time period when the sample is aspirated. This technique is typically used for determinations in the mg L−1 range, and may be extended down to a few μg L−1 for some elements.
Electrothermal AAS (ET AAS) using graphite tube atomizers was pioneered by Boris V. L’vov at the Saint Petersburg Polytechnical Institute, Russia, since the late 1950s, and investigated in parallel by Hans Massmann at the Institute of Spectrochemistry and Applied Spectroscopy (ISAS) in Dortmund, Germany.
Although a wide variety of graphite tube designs have been used over the years, the dimensions nowadays are typically 20–25 mm in length and 5–6 mm inner diameter. With this technique liquid/dissolved, solid and gaseous samples may be analyzed directly. A measured volume (typically 10–50 μL) or a weighed mass (typically around 1 mg) of a solid sample is introduced into the graphite tube and subjected to a temperature program. This typically consists of stages such as drying – the solvent is evaporated; pyrolysis – the majority of the matrix constituents are removed; atomization – the analyte element is released to the gaseous phase; and cleaning – eventual residues in the graphite tube are removed at high temperature.
The graphite tubes are heated via their ohmic resistance using a low-voltage high-current power supply; the temperature in the individual stages can be controlled very closely, and temperature ramps between the individual stages facilitate separation of sample components. Tubes may be heated transversely or longitudinally, where the former ones have the advantage of a more homogeneous temperature distribution over their length. The so-called stabilized temperature platform furnace (STPF) concept, proposed by Walter Slavin, based on research of Boris L’vov, makes ET AAS essentially free from interference. The major components of this concept are atomization of the sample from a graphite platform inserted into the graphite tube (L’vov platform) instead of from the tube wall in order to delay atomization until the gas phase in the atomizer has reached a stable temperature; use of a chemical modifier in order to stabilize the analyte to a pyrolysis temperature that is sufficient to remove the majority of the matrix components; and integration of the absorbance over the time of the transient absorption signal instead of using peak height absorbance for quantification.
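The staged furnace program described above can be represented as a simple schedule of target temperatures with ramp and hold times. The sketch below is illustrative only: the stage names follow the text, but every temperature and time is a hypothetical placeholder, not a recommended setting for any element.

```python
# Illustrative ET AAS graphite-furnace temperature program.
# (name, target_temp_C, ramp_s, hold_s) -- all values hypothetical.
STAGES = [
    ("drying",       110, 10, 30),  # evaporate the solvent gently
    ("pyrolysis",   1200, 10, 20),  # remove the bulk of the matrix
    ("atomization", 2300,  0,  5),  # fast heating; transient signal read here
    ("cleaning",    2450,  1,  3),  # burn off residues at high temperature
]

def total_cycle_time(stages):
    """Total program duration in seconds (ramps plus holds)."""
    return sum(ramp + hold for _, _, ramp, hold in stages)

def check_program(stages):
    """Sanity check: target temperatures must increase monotonically."""
    temps = [t for _, t, _, _ in stages]
    return all(a < b for a, b in zip(temps, temps[1:]))

print(total_cycle_time(STAGES), check_program(STAGES))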
In ET AAS a transient signal is generated, the area of which is directly proportional to the mass of analyte (not its concentration) introduced into the graphite tube. This technique has the advantage that any kind of sample, solid, liquid or gaseous, can be analyzed directly. Its sensitivity is 2–3 orders of magnitude higher than that of flame AAS, so that determinations in the low μg L−1 range (for a typical sample volume of 20 μL) and ng g−1 range (for a typical sample mass of 1 mg) can be carried out. It shows a very high degree of freedom from interferences, so that ET AAS might be considered the most robust technique available nowadays for the determination of trace elements in complex matrices.
While flame and electrothermal vaporizers are the most common atomization techniques, several other atomization methods are utilized for specialized use.
A glow-discharge device (GD) serves as a versatile source, as it can simultaneously introduce and atomize the sample. The glow discharge occurs in a low-pressure argon gas atmosphere between 1 and 10 torr. In this atmosphere lies a pair of electrodes applying a DC voltage of 250 to 1000 V to break down the argon gas into positively charged ions and electrons. These ions, under the influence of the electric field, are accelerated into the cathode surface containing the sample, bombarding the sample and causing neutral sample atom ejection through the process known as sputtering. The atomic vapor produced by this discharge is composed of ions, ground state atoms, and a fraction of excited atoms. When the excited atoms relax back into their ground state, a low-intensity glow is emitted, giving the technique its name.
The requirement for samples of glow discharge atomizers is that they are electrical conductors. Consequently, glow-discharge atomizers are most commonly used in the analysis of metals and other conducting samples. However, with proper modifications, they can be utilized to analyze liquid samples as well as nonconducting materials by mixing them with a conductor (e.g. graphite).
Hydride generation techniques are specialized methods for solutions of specific elements. The technique provides a means of introducing samples containing arsenic, antimony, selenium, bismuth, and lead into an atomizer in the gas phase. With these elements, hydride atomization enhances detection limits by a factor of 10 to 100 compared to alternative methods. Hydride generation occurs by adding an acidified aqueous solution of the sample to a 1% aqueous solution of sodium borohydride, all of which is contained in a glass vessel. The volatile hydride generated by the reaction that occurs is swept into the atomization chamber by an inert gas, where it undergoes decomposition. This process forms an atomized form of the analyte, which can then be measured by absorption or emission spectrometry.
The cold-vapor technique is an atomization method limited to the determination of mercury, due to it being the only metallic element to have a large enough vapor pressure at ambient temperature. Because of this, it has an important use in determining organic mercury compounds in samples and their distribution in the environment. The method initiates by converting mercury into Hg2+ by oxidation with nitric and sulfuric acids, followed by a reduction of Hg2+ with tin(II) chloride. The mercury is then swept into a long-pass absorption tube by bubbling a stream of inert gas through the reaction mixture. The concentration is determined by measuring the absorbance of this gas at 253.7 nm. Detection limits for this technique are in the parts-per-billion range, making it an excellent mercury detection atomization method.
Two types of burners are used: the total consumption burner and the premix burner.
We have to distinguish between line source AAS (LS AAS) and continuum source AAS (CS AAS). In classical LS AAS, as it has been proposed by Alan Walsh, the high spectral resolution required for AAS measurements is provided by the radiation source itself that emits the spectrum of the analyte in the form of lines that are narrower than the absorption lines. Continuum sources, such as deuterium lamps, are only used for background correction purposes. The advantage of this technique is that only a medium-resolution monochromator is necessary for measuring AAS; however, it has the disadvantage that usually a separate lamp is required for each element that has to be determined. In CS AAS, in contrast, a single lamp, emitting a continuum spectrum over the entire spectral range of interest is used for all elements. Obviously, a high-resolution monochromator is required for this technique, as will be discussed later.
Hollow cathode lamps (HCL) are the most common radiation source in LS AAS. Inside the sealed lamp, filled with argon or neon gas at low pressure, is a cylindrical metal cathode containing the element of interest and an anode. A high voltage is applied across the anode and cathode, resulting in ionization of the fill gas. The gas ions are accelerated towards the cathode and, upon impact on the cathode, sputter cathode material that is excited in the glow discharge to emit the radiation of the sputtered material, i.e., the element of interest. In the majority of cases single-element lamps are used, where the cathode is pressed predominantly from compounds of the target element. Multi-element lamps are available with combinations of compounds of the target elements pressed into the cathode. Multi-element lamps produce slightly lower sensitivity than single-element lamps, and the combinations of elements have to be selected carefully to avoid spectral interferences. Most multi-element lamps combine a handful of elements, e.g. 2–8. Atomic absorption spectrometers can feature as few as 1–2 hollow cathode lamp positions, while automated multi-element spectrometers typically provide 8–12 lamp positions.
Electrodeless discharge lamps (EDL) contain a small quantity of the analyte as a metal or a salt in a quartz bulb together with an inert gas, typically argon gas, at low pressure. The bulb is inserted into a coil that is generating an electromagnetic radio frequency field, resulting in a low-pressure inductively coupled discharge in the lamp. The emission from an EDL is higher than that from an HCL, and the line width is generally narrower, but EDLs need a separate power supply and might need a longer time to stabilize.
Deuterium HCL or even hydrogen HCL and deuterium discharge lamps are used in LS AAS for background correction purposes. The radiation intensity emitted by these lamps decreases significantly with increasing wavelength, so that they can be only used in the wavelength range between 190 and about 320 nm.
When a continuum radiation source is used for AAS, it is necessary to use a high-resolution monochromator, as will be discussed later. In addition, it is necessary that the lamp emits radiation of intensity at least an order of magnitude above that of a typical HCL over the entire wavelength range from 190 nm to 900 nm. A special high-pressure xenon short arc lamp, operating in a hot-spot mode has been developed to fulfill these requirements.
As already pointed out above, there is a difference between medium-resolution spectrometers that are used for LS AAS and high-resolution spectrometers that are designed for CS AAS. The spectrometer includes the spectral sorting device (monochromator) and the detector.
In LS AAS the high resolution that is required for the measurement of atomic absorption is provided by the narrow line emission of the radiation source, and the monochromator simply has to resolve the analytical line from other radiation emitted by the lamp. This can usually be accomplished with a band pass between 0.2 and 2 nm, i.e., a medium-resolution monochromator. Another feature to make LS AAS element-specific is modulation of the primary radiation and the use of a selective amplifier that is tuned to the same modulation frequency, as already postulated by Alan Walsh. This way any (unmodulated) radiation emitted for example by the atomizer can be excluded, which is imperative for LS AAS. Simple monochromators of the Littrow or (better) the Czerny-Turner design are typically used for LS AAS. Photomultiplier tubes are the most frequently used detectors in LS AAS, although solid state detectors might be preferred because of their better signal-to-noise ratio.
When a continuum radiation source is used for AAS measurement it is indispensable to work with a high-resolution monochromator. The resolution has to be equal to or better than the half width of an atomic absorption line (about 2 pm) in order to avoid losses of sensitivity and linearity of the calibration graph. The research with high-resolution (HR) CS AAS was pioneered by the groups of O’Haver and Harnly in the US, who also developed the (up until now) only simultaneous multi-element spectrometer for this technique. The breakthrough, however, came when the group of Becker-Ross in Berlin, Germany, built a spectrometer entirely designed for HR-CS AAS. The first commercial equipment for HR-CS AAS was introduced by Analytik Jena (Jena, Germany) at the beginning of the 21st century, based on the design proposed by Becker-Ross and Florek. These spectrometers use a compact double monochromator with a prism pre-monochromator and an echelle grating monochromator for high resolution. A linear charge coupled device (CCD) array with 200 pixels is used as the detector. The second monochromator does not have an exit slit; hence the spectral environment at both sides of the analytical line becomes visible at high resolution. As typically only 3–5 pixels are used to measure the atomic absorption, the other pixels are available for correction purposes. One of these corrections is that for lamp flicker noise, which is independent of wavelength, resulting in measurements with very low noise level; other corrections are those for background absorption, as will be discussed later.
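The pixel-based correction idea described above can be illustrated with a toy calculation: assuming the atomic line occupies a few central detector pixels, neighbouring pixels are used to estimate a roughly flat background contribution (such as lamp flicker) that is subtracted out. The pixel counts and absorbance values below are invented for illustration and do not model any real spectrometer.

```python
def net_absorbance(pixels, line_lo, line_hi, wing=5):
    """Subtract a flat baseline, estimated from `wing` pixels on either side
    of the line window [line_lo, line_hi), from the line pixels and sum."""
    left = pixels[line_lo - wing:line_lo]
    right = pixels[line_hi:line_hi + wing]
    baseline = sum(left + right) / len(left + right)
    line = pixels[line_lo:line_hi]
    return sum(a - baseline for a in line)

# Toy 21-pixel window: flat background of 0.02 plus an atomic line on 3 pixels
spectrum = [0.02] * 21
spectrum[9:12] = [0.12, 0.32, 0.12]
print(round(net_absorbance(spectrum, 9, 12), 2))
```

Real HR-CS AAS correction is more elaborate (wavelength-dependent backgrounds are handled separately), but the use of off-line pixels to remove wavelength-independent noise follows this basic pattern.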
The relatively small number of atomic absorption lines (compared to atomic emission lines) and their narrow width (a few pm) make spectral overlap rare; there are only a few known cases of an absorption line from one element overlapping with that of another. Molecular absorption, in contrast, is much broader, so it is more likely that some molecular absorption band will overlap with an atomic line. This kind of absorption might be caused by un-dissociated molecules of concomitant elements of the sample or by flame gases. We have to distinguish between the spectra of di-atomic molecules, which exhibit a pronounced fine structure, and those of larger (usually tri-atomic) molecules that do not show such fine structure. Another source of background absorption, particularly in ET AAS, is scattering of the primary radiation at particles that are generated in the atomization stage, when the matrix could not be removed sufficiently in the pyrolysis stage.
All these phenomena, molecular absorption and radiation scattering, can result in artificially high absorption and an improperly high (erroneous) calculation for the concentration or mass of the analyte in the sample. There are several techniques available to correct for background absorption, and they are significantly different for LS AAS and HR-CS AAS.
In LS AAS background absorption can only be corrected using instrumental techniques, and all of them are based on two sequential measurements: firstly, total absorption (atomic plus background), secondly, background absorption only. The difference of the two measurements gives the net atomic absorption. Because of this, and because of the use of additional devices in the spectrometer, the signal-to-noise ratio of background-corrected signals is always significantly inferior compared to uncorrected signals. It should also be pointed out that in LS AAS there is no way to correct for (the rare case of) a direct overlap of two atomic lines. In essence there are three techniques used for background correction in LS AAS:
This is the oldest and still most commonly used technique, particularly for flame AAS. In this case, a separate source (a deuterium lamp) with broad emission is used to measure the background absorption over the entire width of the exit slit of the spectrometer. The use of a separate lamp makes this technique the least accurate one, as it cannot correct for any structured background. It also cannot be used at wavelengths above about 320 nm, as the emission intensity of the deuterium lamp becomes very weak. The use of deuterium HCL is preferable compared to an arc lamp due to the better fit of the image of the former lamp with that of the analyte HCL.
This technique, known as Smith–Hieftje background correction after its inventors, is based on the line-broadening and self-reversal of emission lines from the HCL when a high current is applied. Total absorption is measured with normal lamp current, i.e., with a narrow emission line, and background absorption after application of a high-current pulse with the profile of the self-reversed line, which has little emission at the original wavelength but strong emission on both sides of the analytical line. The advantage of this technique is that only one radiation source is used; among the disadvantages are that the high-current pulses reduce lamp lifetime, and that the technique can only be used for relatively volatile elements, as only those exhibit sufficient self-reversal to avoid a dramatic loss of sensitivity. Another problem is that background is not measured at the same wavelength as total absorption, making the technique unsuitable for correcting structured background.
In Zeeman-effect background correction, an alternating magnetic field is applied at the atomizer (graphite furnace) to split the absorption line into three components: the π component, which remains at the same position as the original absorption line, and two σ components, which are shifted to higher and lower wavelengths, respectively. Total absorption is measured without the magnetic field and background absorption with the magnetic field on. In the latter case the π component has to be removed, e.g. using a polarizer; the σ components do not overlap with the emission profile of the lamp, so that only the background absorption is measured. The advantage of this technique is that total and background absorption are measured with the same emission profile of the same lamp, so that any kind of background, including background with fine structure, can be corrected accurately, unless the molecule responsible for the background is also affected by the magnetic field. The disadvantages are the increased complexity of the spectrometer and the power supply needed to run the powerful magnet required to split the absorption line, as well as the reduced signal-to-noise ratio when a chopper is used as the polarizer.
In HR-CS AAS background correction is carried out mathematically in the software using information from detector pixels that are not used for measuring atomic absorption; hence, in contrast to LS AAS, no additional components are required for background correction.
It has already been mentioned that in HR-CS AAS lamp flicker noise is eliminated using correction pixels. In fact, any increase or decrease in radiation intensity that is observed to the same extent at all pixels chosen for correction is eliminated by the correction algorithm. This obviously also includes a reduction of the measured intensity due to radiation scattering or molecular absorption, which is corrected in the same way. As measurement of total and background absorption, and correction for the latter, are strictly simultaneous (in contrast to LS AAS), even the fastest changes of background absorption, as they may be observed in ET AAS, do not cause any problem. In addition, as the same algorithm is used for background correction and elimination of lamp noise, the background corrected signals show a much better signal-to-noise ratio compared to the uncorrected signals, which is also in contrast to LS AAS.
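As a rough illustration of this simultaneous correction, the sketch below simulates a broadband attenuation that affects all detector pixels equally and removes it using the mean absorbance at correction pixels away from the analytical line. The pixel layout and all numbers are invented for the example; real HR-CS AAS software applies a more elaborate algorithm:

```python
import numpy as np

# Simulated intensities over 200 detector pixels of a CCD line (arbitrary units)
pixels = np.arange(200)
i0 = np.full(200, 1000.0)                    # reference (unattenuated) intensity

# Broadband attenuation (scatter / molecular background) affects ALL pixels
# equally; the atomic line attenuates only a few pixels around its centre (100)
broadband = 0.8
line = 0.3 * np.exp(-0.5 * ((pixels - 100) / 1.5) ** 2)
transmitted = i0 * broadband * (1.0 - line)

absorbance = -np.log10(transmitted / i0)

# Correction pixels: regions well away from the analytical line
corr = np.r_[0:80, 120:200]
background = absorbance[corr].mean()         # common-mode (background) component

net = absorbance - background                # background-corrected spectrum
```

Because the common-mode component is estimated from the same exposure that carries the atomic signal, the correction is strictly simultaneous, in line with the description above.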
The above technique obviously cannot correct for a background with fine structure, as in this case the absorbance will be different at each of the correction pixels. For this situation HR-CS AAS offers the possibility of measuring correction spectra of the molecule(s) responsible for the background and storing them in the computer. These spectra are then multiplied by a factor to match the intensity of the sample spectrum and subtracted pixel by pixel and spectrum by spectrum from the sample spectrum using a least-squares algorithm. This might sound complex, but firstly the number of diatomic molecules that can exist at the temperatures of the atomizers used in AAS is relatively small, and secondly the correction is performed by the computer within a few seconds. The same algorithm can also be used to correct for direct line overlap of two atomic absorption lines, making HR-CS AAS the only AAS technique that can correct for this kind of spectral interference.
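A minimal NumPy sketch of such a least-squares correction follows; the band shape, line position, and scale factor are all synthetic stand-ins for a stored reference spectrum and a measured sample spectrum:

```python
import numpy as np

# Synthetic spectra over 100 detector pixels of one analytical window.
# 'reference' stands for a stored spectrum of the molecule causing the
# structured background; 'sample' contains the same band at unknown
# intensity (0.35 here) plus an atomic line centred at pixel 50.
pixels = np.arange(100)
reference = np.sin(pixels / 6.0) ** 2
atomic_line = 0.4 * np.exp(-0.5 * ((pixels - 50) / 1.2) ** 2)
sample = 0.35 * reference + atomic_line

# Least-squares scale factor, fitted on pixels away from the atomic line:
# f = argmin || sample - f * reference ||^2  ->  f = <ref, sample> / <ref, ref>
mask = np.abs(pixels - 50) > 5
f = (np.dot(reference[mask], sample[mask])
     / np.dot(reference[mask], reference[mask]))

# Subtract the scaled reference spectrum pixel by pixel
corrected = sample - f * reference
```

The fit recovers the unknown intensity of the molecular band and the subtraction leaves the atomic line essentially untouched, which is the point of the pixel-by-pixel least-squares approach.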
Confucianism
Confucianism, also known as Ruism, is a system of thought and behavior originating in ancient China. Variously described as a tradition, a philosophy, a religion, a humanistic or rationalistic religion, a way of governing, or simply a way of life, Confucianism developed from what was later called the Hundred Schools of Thought from the teachings of the Chinese philosopher Confucius (551–479 BCE).
Confucius considered himself a recodifier and retransmitter of the theology and values inherited from the Shang (c. 1600–1046 BCE) and Zhou dynasties (c. 1046–256 BCE) for the Warring States period. Confucianism was suppressed during the Legalist and autocratic Qin dynasty (221–206 BCE), but survived. During the Han dynasty (206 BCE–220 CE), Confucian approaches edged out the "proto-Taoist" Huang–Lao as the official ideology, while the emperors mixed both with the realist techniques of Legalism.
A Confucian revival began during the Tang dynasty (618–907). In the late Tang, Confucianism developed in response to Buddhism and Taoism and was reformulated as Neo-Confucianism. This reinvigorated form was adopted as the basis of the imperial exams and the core philosophy of the scholar-official class in the Song dynasty (960–1279). The abolition of the examination system in 1905 marked the end of official Confucianism. The intellectuals of the New Culture Movement of the early twentieth century blamed Confucianism for China's weaknesses. They searched for new doctrines to replace Confucian teachings; some of these new ideologies include the "Three Principles of the People" with the establishment of the Republic of China, and then Maoism under the People's Republic of China. In the late twentieth century, the Confucian work ethic was credited with the rise of the East Asian economy.
With particular emphasis on the importance of the family and social harmony, rather than on an otherworldly source of spiritual values, the core of Confucianism is humanistic. According to Herbert Fingarette's conceptualisation of Confucianism as a philosophical system which regards "the secular as sacred", Confucianism transcends the dichotomy between religion and humanism, considering the ordinary activities of human life—and especially human relationships—as a manifestation of the sacred, because they are the expression of humanity's moral nature ("xìng" ), which has a transcendent anchorage in Heaven ("Tiān" ). While "Tiān" has some characteristics that overlap the category of godhead, it is primarily an "impersonal" absolute principle, like the "Dào" () or the "Brahman". Confucianism focuses on the practical order that is given by a this-worldly awareness of the "Tiān". Confucian liturgy (called "rú", or sometimes , meaning 'orthoprax') led by Confucian priests or "sages of rites" () to worship the gods in public and ancestral Chinese temples is preferred on certain occasions, by Confucian religious groups and for civil religious rites, over Taoist or popular ritual.
The worldly concern of Confucianism rests upon the belief that human beings are fundamentally good, and teachable, improvable, and perfectible through personal and communal endeavor, especially self-cultivation and self-creation. Confucian thought focuses on the cultivation of virtue in a morally organised world. Some of the basic Confucian ethical concepts and practices include "rén", "yì", "lǐ", and "zhì". "Rén" (, 'benevolence' or 'humaneness') is the essence of the human being which manifests as compassion. It is the virtue-form of Heaven. "Yì" () is the upholding of righteousness and the moral disposition to do good. "Lǐ" () is a system of ritual norms and propriety that determines how a person should properly act in everyday life in harmony with the law of Heaven. "Zhì" () is the ability to see what is right and fair, or the converse, in the behaviors exhibited by others. Confucianism holds one in contempt, either passively or actively, for failure to uphold the cardinal moral values of "rén" and "yì".
Traditionally, cultures and countries in the Chinese cultural sphere are strongly influenced by Confucianism, including mainland China, Taiwan, Hong Kong, Macau, Korea, Japan, and Vietnam, as well as various territories settled predominantly by Chinese people, such as Singapore. Today, it has been credited for shaping East Asian societies and Chinese communities, and to some extent, other parts of Asia. In the last decades there have been talks of a "Confucian Revival" in the academic and the scholarly community, and there has been a grassroots proliferation of various types of Confucian churches. In late 2015 many Confucian personalities formally established a national Holy Confucian Church () in China to unify the many Confucian congregations and civil society organisations.
Strictly speaking, there is no term in Chinese which directly corresponds to "Confucianism". In the Chinese language, the character "rú" meaning "scholar" or "learned" or "refined man" is generally used both in the past and the present to refer to things related to Confucianism. The character "rú" in ancient China had diverse meanings. Some examples include "to tame", "to mould", "to educate", "to refine". Several different terms, some of modern origin, are used in different situations to express different facets of Confucianism, including:
Three of them use "rú". These names do not use the name "Confucius" at all, but instead focus on the ideal of the Confucian man. The use of the term "Confucianism" has been avoided by some modern scholars, who favor "Ruism" and "Ruists" instead. Robert Eno argues that the term has been "burdened... with the ambiguities and irrelevant traditional associations". Ruism, as he states, is more faithful to the original Chinese name for the school.
According to Zhou Youguang, "rú" originally referred to shamanic methods of holding rites and existed before Confucius's times, but with Confucius it came to mean devotion to propagating such teachings to bring civilisation to the people. Confucianism was initiated by Confucius, developed by Mencius (~372–289 BCE) and inherited by later generations, undergoing constant transformations and restructuring since its establishment, but preserving the principles of humaneness and righteousness at its core.
Traditionally, Confucius was thought to be the author or editor of the Five Classics which were the basic texts of Confucianism. The scholar Yao Xinzhong allows that there are good reasons to believe that Confucian classics took shape in the hands of Confucius, but that "nothing can be taken for granted in the matter of the early versions of the classics". Professor Yao says that perhaps most scholars today hold the "pragmatic" view that Confucius and his followers, although they did not intend to create a system of classics, "contributed to their formation". In any case, it is undisputed that for most of the last 2,000 years, Confucius was believed to have either written or edited these texts.
The scholar Tu Weiming explains these classics as embodying "five visions" which underlie the development of Confucianism:
Confucianism revolves around the pursuit of the unity of the individual self and the God of Heaven ("Tiān" ), or, otherwise said, around the relationship between humanity and Heaven. The principle of Heaven ("Lǐ" or "Dào" ), is the order of the creation and the source of divine authority, monistic in its structure. Individuals may realise their humanity and become one with Heaven through the contemplation of such order. This transformation of the self may be extended to the family and society to create a harmonious fiduciary community. Joël Thoraval studied Confucianism as a diffused civil religion in contemporary China, finding that it expresses itself in the widespread worship of five cosmological entities: Heaven and Earth ("Di" ), the sovereign or the government ("jūn" ), ancestors ("qīn" ) and masters ("shī" ).
Heaven is not some being pre-existing the temporal world. According to the scholar Stephan Feuchtwang, in Chinese cosmology, which is not merely Confucian but shared by all Chinese religions, "the universe creates itself out of a primary chaos of material energy" ("hundun" and "qi" ), organising through the polarity of yin and yang which characterises any thing and life. Creation is therefore a continuous ordering; it is not a creation "ex nihilo". Yin and yang are the invisible and visible, the receptive and the active, the unshaped and the shaped; they characterise the yearly cycle (winter and summer), the landscape (shady and bright), the sexes (female and male), and even sociopolitical history (disorder and order). Confucianism is concerned with finding "middle ways" between yin and yang at every new configuration of the world.
Confucianism conciliates both the inner and outer polarities of spiritual cultivation, that is to say self-cultivation and world redemption, synthesised in the ideal of "sageliness within and kingliness without". "Rén", translated as "humaneness" or the essence proper of a human being, is the character of compassionate mind; it is the virtue endowed by Heaven and at the same time the means by which man may achieve oneness with Heaven comprehending his own origin in Heaven and therefore divine essence. In the "Dàtóng shū" () it is defined as "to form one body with all things" and "when the self and others are not separated ... compassion is aroused".
"Tiān" (), a key concept in Chinese thought, refers to the God of Heaven, the northern culmen of the skies and its spinning stars, earthly nature and its laws which come from Heaven, to "Heaven and Earth" (that is, "all things"), and to the awe-inspiring forces beyond human control. There are such a number of uses in Chinese thought that it is not possible to give one translation into English.
Confucius used the term in a mystical way. He wrote in the "Analects" (7.23) that Tian gave him life, and that Tian watched and judged (6.28; 9.12). In 9.5 Confucius says that a person may know the movements of the Tian, and this provides him with the sense of having a special place in the universe. In 17.19 Confucius says that Tian spoke to him, though not in words. The scholar Ronnie Littlejohn warns that Tian was not to be interpreted as a personal God comparable to that of the Abrahamic faiths, in the sense of an otherworldly or transcendent creator. Rather it is similar to what Taoists meant by "Dao": "the way things are" or "the regularities of the world", which Stephan Feuchtwang equates with the ancient Greek concept of "physis", "nature" as the generation and regeneration of things and of the moral order. Tian may also be compared to the "Brahman" of Hindu and Vedic traditions. The scholar Promise Hsu, in the wake of Robert B. Louden, explained 17:19 ("What does Tian ever say? Yet there are four seasons going round and there are the hundred things coming into being. What does Tian say?") as implying that even though Tian is not a "speaking person", it constantly "does" through the rhythms of nature, and communicates "how human beings ought to live and act", at least to those who have learnt to carefully listen to it.
Zigong, a disciple of Confucius, said that Tian had set the master on the path to become a wise man (9.6). In 7.23 Confucius says that he has no doubt left that the Tian gave him life, and from it he had developed right virtue ( "dé"). In 8.19 he says that the lives of the sages are interwoven with Tian.
Regarding personal gods ("shén", energies who emanate from and reproduce the Tian) enlivening nature, in the "Analects" Confucius says that it is appropriate () for people to worship ( "jìng") them, though through proper rites (), implying respect of positions and discretion. Confucius himself was a ritual and sacrificial master. Answering a disciple who asked whether it was better to sacrifice to the god of the stove or to the god of the family (a popular saying), in 3.13 Confucius says that in order to pray appropriately to the gods one should first know and respect Heaven. In 3.12 he explains that religious rituals produce meaningful experiences, and one has to offer sacrifices in person, acting in presence, otherwise "it is the same as not having sacrificed at all". Rites and sacrifices to the gods have an ethical importance: they generate good life, because taking part in them leads to the overcoming of the self. Analects 10.11 relates that Confucius always took a small part of his food and placed it on the sacrificial bowls as an offering to his ancestors.
Other movements, such as Mohism which was later absorbed by Taoism, developed a more theistic idea of Heaven. Feuchtwang explains that the difference between Confucianism and Taoism primarily lies in the fact that the former focuses on the realisation of the starry order of Heaven in human society, while the latter on the contemplation of the Dao which spontaneously arises in nature.
As explained by Stephan Feuchtwang, the order coming from Heaven preserves the world, and has to be followed by humanity finding a "middle way" between yin and yang forces in each new configuration of reality. Social harmony or morality is identified as patriarchy, which is expressed in the worship of ancestors and deified progenitors in the male line, at ancestral shrines.
Confucian ethical codes are described as humanistic. They may be practiced by all the members of a society. Confucian ethics is characterised by the promotion of virtues, encompassed by the Five Constants, "Wǔcháng" () in Chinese, elaborated by Confucian scholars out of the inherited tradition during the Han dynasty. The Five Constants are:
These are accompanied by the classical "Sìzì" (), that singles out four virtues, one of which is included among the Five Constants:
There are many other elements, such as "chéng" (, honesty), "shù" (, kindness and forgiveness), "lián" (, honesty and cleanness), "chǐ" (, shame, sense of right and wrong), "yǒng" (, bravery), "wēn" (, kind and gentle), "liáng" (, good, kindhearted), "gōng" (, respectful, reverent), "jiǎn" (, frugal), "ràng" (, modesty, self-effacement).
"Rén" () is the Confucian virtue denoting the good feeling a virtuous human experiences when being altruistic. It is exemplified by a normal adult's protective feelings for children. It is considered the essence of the human being, endowed by Heaven, and at the same time the means by which man may act according to the principle of Heaven (, "Tiān lǐ") and become one with it.
Yán Huí, Confucius's most outstanding student, once asked his master to describe the rules of "rén" and Confucius replied, "one should see nothing improper, hear nothing improper, say nothing improper, do nothing improper." Confucius also defined "rén" in the following way: "wishing to be established himself, seeks also to establish others; wishing to be enlarged himself, he seeks also to enlarge others."
Another meaning of "rén" is "not to do to others as you would not wish done to yourself." Confucius also said, ""rén" is not far off; he who seeks it has already found it." "Rén" is close to man and never leaves him.
"Li" () is a classical Chinese word which finds its most extensive use in Confucian and post-Confucian Chinese philosophy. "Li" is variously translated as "rite" or "reason," "ratio" in the pure sense of Vedic "ṛta" ("right," "order") when referring to the cosmic law, but when referring to its realisation in the context of human social behaviour it has also been translated as "customs", "measures" and "rules", among other terms. "Li" also means religious rites which establish relations between humanity and the gods.
According to Stephan Feuchtwang, rites are conceived as "what makes the invisible visible", making possible for humans to cultivate the underlying order of nature. Correctly performed rituals move society in alignment with earthly and heavenly (astral) forces, establishing the harmony of the three realms—Heaven, Earth and humanity. This practice is defined as "centring" ( "yāng" or "zhōng"). Among all things of creation, humans themselves are "central" because they have the ability to cultivate and centre natural forces.
"Li" embodies the entire web of interaction between humanity, human objects, and nature. Confucius includes in his discussions of "li" such diverse topics as learning, tea drinking, titles, mourning, and governance. Xunzi cites "songs and laughter, weeping and lamentation... rice and millet, fish and meat... the wearing of ceremonial caps, embroidered robes, and patterned silks, or of fasting clothes and mourning clothes... spacious rooms and secluded halls, soft mats, couches and benches" as vital parts of the fabric of "li".
Confucius envisioned proper government being guided by the principles of "li". Some Confucians proposed that all human beings may pursue perfection by learning and practising "li". Overall, Confucians believe that governments should place more emphasis on "li" and rely much less on penal punishment when they govern.
Loyalty (, "zhōng") is particularly relevant for the social class to which most of Confucius's students belonged, because the most important way for an ambitious young scholar to become a prominent official was to enter a ruler's civil service.
Confucius himself did not propose that "might makes right," but rather that a superior should be obeyed because of his moral rectitude. In addition, loyalty does not mean subservience to authority. This is because reciprocity is demanded from the superior as well. As Confucius stated "a prince should employ his minister according to the rules of propriety; ministers should serve their prince with faithfulness (loyalty)."
Similarly, Mencius also said that "when the prince regards his ministers as his hands and feet, his ministers regard their prince as their belly and heart; when he regards them as his dogs and horses, they regard him as another man; when he regards them as the ground or as grass, they regard him as a robber and an enemy." Moreover, Mencius indicated that if the ruler is incompetent, he should be replaced. If the ruler is evil, then the people have the right to overthrow him. A good Confucian is also expected to remonstrate with his superiors when necessary. At the same time, a proper Confucian ruler should also accept his ministers' advice, as this will help him govern the realm better.
In later ages, however, emphasis was often placed more on the obligations of the ruled to the ruler, and less on the ruler's obligations to the ruled. Like filial piety, loyalty was often subverted by the autocratic regimes in China. Nonetheless, throughout the ages, many Confucians continued to fight against unrighteous superiors and rulers. Many of these Confucians suffered and sometimes died because of their conviction and action. During the Ming-Qing era, prominent Confucians such as Wang Yangming promoted individuality and independent thinking as a counterweight to subservience to authority. The famous thinker Huang Zongxi also strongly criticised the autocratic nature of the imperial system and wanted to keep imperial power in check.
Many Confucians also realised that loyalty and filial piety have the potential of coming into conflict with one another. This may be true especially in times of social chaos, such as during the period of the Ming-Qing transition.
In Confucian philosophy, filial piety (, "xiào") is a virtue of respect for one's parents and ancestors, and of the hierarchies within society: father–son, elder–junior and male–female. The Confucian classic "Xiaojing" ("Classic of Filial Piety"), thought to be written around the Qin-Han period, has historically been the authoritative source on the Confucian tenet of "xiào". The book, a conversation between Confucius and his disciple Zeng Shen, is about how to set up a good society using the principle of "xiào".
In more general terms, filial piety means to be good to one's parents; to take care of one's parents; to engage in good conduct not just towards parents but also outside the home so as to bring a good name to one's parents and ancestors; to perform the duties of one's job well so as to obtain the material means to support parents as well as carry out sacrifices to the ancestors; not be rebellious; show love, respect and support; display courtesy; ensure male heirs, uphold fraternity among brothers; wisely advise one's parents, including dissuading them from moral unrighteousness, for blindly following the parents' wishes is not considered to be "xiao"; display sorrow for their sickness and death; and carry out sacrifices after their death.
Filial piety is considered a key virtue in Chinese culture, and it is the main concern of a large number of stories. One of the most famous collections of such stories is "The Twenty-four Filial Exemplars". These stories depict how children exercised their filial piety in the past. While China has always had a diversity of religious beliefs, filial piety has been common to almost all of them; historian Hugh D.R. Baker calls respect for the family the only element common to almost all Chinese believers.
Social harmony results in part from every individual knowing his or her place in the natural order, and playing his or her part well. Reciprocity or responsibility ("renqing") extends beyond filial piety and involves the entire network of social relations, even the respect for rulers. When Duke Jing of Qi asked about government, by which he meant proper administration so as to bring social harmony, Confucius replied:
There is government, when the prince is prince, and the minister is minister; when the father is father, and the son is son.
Particular duties arise from one's particular situation in relation to others. The individual stands simultaneously in several different relationships with different people: as a junior in relation to parents and elders, and as a senior in relation to younger siblings, students, and others. While juniors are considered in Confucianism to owe their seniors reverence, seniors also have duties of benevolence and concern toward juniors. The same is true with the husband and wife relationship where the husband needs to show benevolence towards his wife and the wife needs to respect the husband in return. This theme of mutuality still exists in East Asian cultures even to this day.
The Five Bonds are: ruler to ruled, father to son, husband to wife, elder brother to younger brother, friend to friend. Specific duties were prescribed to each of the participants in these sets of relationships. Such duties are also extended to the dead, where the living stand as sons to their deceased family. The only relationship where respect for elders isn't stressed was the friend to friend relationship, where mutual equal respect is emphasised instead. All these duties take the practical form of prescribed rituals, for instance wedding and death rituals.
The "junzi" (, "jūnzǐ", "lord's son") is a Chinese philosophical term often translated as "gentleman" or "superior person" and employed by Confucius in his works to describe the ideal man. In the "I Ching" it is used by the Duke of Wen.
In Confucianism, the sage or wise is the ideal personality; however, it is very hard to become one of them. Confucius created the model of "junzi", gentleman, which may be achieved by any individual. Later, Zhu Xi defined "junzi" as second only to the sage. There are many characteristics of the "junzi": he may live in poverty, he does more and speaks less, he is loyal, obedient and knowledgeable. The "junzi" disciplines himself. "Ren" is fundamental to become a "junzi".
As the potential leader of a nation, a son of the ruler is raised to have a superior ethical and moral position while gaining inner peace through his virtue. To Confucius, the "junzi" sustained the functions of government and social stratification through his ethical values. Despite its literal meaning, any righteous man willing to improve himself may become a "junzi".
By contrast, the "xiaoren" (, "xiăorén", "small or petty person") does not grasp the value of virtues and seeks only immediate gains. The petty person is egotistic and does not consider the consequences of his action in the overall scheme of things. Should the ruler be surrounded by "xiaoren" as opposed to "junzi", his governance and his people will suffer due to their small-mindedness. Examples of such "xiaoren" individuals may range from those who continually indulge in sensual and emotional pleasures all day to the politician who is interested merely in power and fame; neither sincerely aims for the long-term benefit of others.
The "junzi" enforces his rule over his subjects by acting virtuously himself. It is thought that his pure virtue would lead others to follow his example. The ultimate goal is that the government behaves much like a family, the "junzi" being a beacon of filial piety.
Confucius believed that social disorder often stemmed from failure to perceive, understand, and deal with reality. Fundamentally, then, social disorder may stem from the failure to call things by their proper names, and his solution to this was "zhèngmíng" (). He gave an explanation of "zhengming" to one of his disciples.
Zi-lu said, "The vassal of Wei has been waiting for you, in order with you to administer the government. What will you consider the first thing to be done?"
The Master replied, "What is necessary to rectify names."
"So! indeed!" said Zi-lu. "You are wide off the mark! Why must there be such rectification?"
The Master said, "How uncultivated you are, Yu! The superior man [junzi] cannot care about everything, just as he cannot go to check everything himself!
If names be not correct, language is not in accordance with the truth of things.
If language be not in accordance with the truth of things, affairs cannot be carried on to success.
When affairs cannot be carried on to success, proprieties and music do not flourish.
When proprieties and music do not flourish, punishments will not be properly awarded.
When punishments are not properly awarded, the people do not know how to move hand or foot.
Therefore a superior man considers it necessary that the names he uses may be spoken appropriately, and also that what he speaks may be carried out appropriately. What the superior man requires is just that in his words there may be nothing incorrect."
The "Xunzi" chapter "On the Rectification of Names" (22) claims the ancient sage-kings chose names () that directly corresponded with actualities (), but later generations confused terminology, coined new nomenclature, and thus could no longer distinguish right from wrong. Since social harmony is of utmost importance, without the proper rectification of names, society would essentially crumble and "undertakings [would] not [be] completed."
According to He Guanghu, Confucianism may be identified as a continuation of the Shang-Zhou (~1600–256 BCE) official religion, or the Chinese aboriginal religion which has lasted uninterrupted for three thousand years. Both dynasties worshipped the supreme godhead, called "Shangdi" ( "Highest Deity") or simply "Dì" () by the Shang and "Tian" ( "Heaven") by the Zhou. Shangdi was conceived as the first ancestor of the Shang royal house, an alternate name for him being the "Supreme Progenitor" ( "Shàngjiǎ"). In Shang theology, the multiplicity of gods of nature and ancestors were viewed as parts of Di, and the four "fāng" ("directions" or "sides") and their "fēng" ("winds") as his cosmic will. With the Zhou dynasty, which overthrew the Shang, the name for the supreme godhead became "Tian" ( "Heaven"). While the Shang identified Shangdi as their ancestor-god to assert their claim to power by divine right, the Zhou transformed this claim into a legitimacy based on moral power, the Mandate of Heaven. In Zhou theology, Tian had no singular earthly progeny, but bestowed divine favour on virtuous rulers. Zhou kings declared that their victory over the Shang was because they were virtuous and loved their people, while the Shang were tyrants and thus were deprived of power by Tian.
John C. Didier and David Pankenier relate the shapes of both the ancient Chinese characters for Di and Tian to the patterns of stars in the northern skies, drawn either, in Didier's theory, by connecting the constellations bracketing the north celestial pole as a square, or, in Pankenier's theory, by connecting some of the stars which form the constellations of the Big Dipper and broader Ursa Major, and Ursa Minor (Little Dipper). Cultures in other parts of the world have also conceived these stars or constellations as symbols of the origin of things, the supreme godhead, divinity and royal power. The supreme godhead was also identified with the dragon, symbol of unlimited power ("qi"), of the "protean" primordial power which embodies both yin and yang in unity, associated with the constellation Draco, which winds around the north ecliptic pole and slithers between the Little and Big Dipper.
By the 6th century BCE the power of Tian and the symbols that represented it on earth (architecture of cities, temples, altars and ritual cauldrons, and the Zhou ritual system) became "diffuse" and were claimed by different potentates in the Zhou states to legitimise economic, political, and military ambitions. Divine right was no longer an exclusive privilege of the Zhou royal house, but might be bought by anyone able to afford the elaborate ceremonies and the old and new rites required to access the authority of Tian.
Besides the waning Zhou ritual system, what may be defined as "wild" ("yě") traditions, or traditions "outside of the official system", developed as attempts to access the will of Tian. The population had lost faith in the official tradition, which was no longer perceived as an effective way to communicate with Heaven. The traditions of the "Nine Fields" and of the "Yijing" flourished. Chinese thinkers, faced with this challenge to legitimacy, diverged in a "Hundred Schools of Thought", each proposing its own theories for the reconstruction of the Zhou moral order.
Confucius (551–479 BCE) appeared in this period of political decadence and spiritual questioning. He was educated in Shang-Zhou theology, which he helped to transmit and reformulate, giving centrality to self-cultivation and the agency of humans, and to the educational power of the self-established individual in assisting others to establish themselves (the principle of "àirén", "loving others"). As the Zhou reign collapsed, traditional values were abandoned, resulting in a period of moral decline. Confucius saw an opportunity to reinforce values of compassion and tradition in society. Disillusioned with the widespread vulgarisation of the rituals to access Tian, he began to preach an ethical interpretation of traditional Zhou religion. In his view, the power of Tian is immanent, and responds positively to the sincere heart driven by humaneness and rightness, decency and altruism. Confucius conceived these qualities as the foundation needed to restore socio-political harmony. Like many contemporaries, Confucius saw ritual practices as efficacious ways to access Tian, but he thought that the crucial knot was the state of meditation that participants enter prior to engaging in the ritual acts. Confucius amended and recodified the classical books inherited from the Xia-Shang-Zhou dynasties, and composed the "Spring and Autumn Annals".
Philosophers in the Warring States period, both "inside the square" (focused on state-endorsed ritual) and "outside the square" (non-aligned to state ritual), built upon Confucius's legacy, compiled in the "Analects", and formulated the classical metaphysics that became central to Confucianism. In accordance with the Master, they identified mental tranquility as the state of Tian, or the One (一 "Yī"), which in each individual is the Heaven-bestowed divine power to rule one's own life and the world. Going beyond the Master, they theorised the oneness of production and reabsorption into the cosmic source, and the possibility of understanding and therefore reattaining it through meditation. This line of thought would influence all Chinese individual and collective-political mystical theories and practices thereafter.
Since the 2000s, there has been a growing identification of the Chinese intellectual class with Confucianism. In 2003, the Confucian intellectual Kang Xiaoguang published a manifesto in which he made four suggestions: Confucian education should enter official education at any level, from elementary to high school; the state should establish Confucianism as the state religion by law; Confucian religion should enter the daily life of ordinary people through standardisation and development of doctrines, rituals, organisations, churches and activity sites; the Confucian religion should be spread through non-governmental organisations. Another modern proponent of the institutionalisation of Confucianism in a state church is Jiang Qing.
In 2005, the Center for the Study of Confucian Religion was established, and "guoxue" started to be implemented in public schools on all levels. Well received by the population, Confucian preachers have even appeared on television since 2006. The most enthusiastic New Confucians proclaim the uniqueness and superiority of Confucian Chinese culture, and have generated some popular sentiment against Western cultural influences in China.
The idea of a "Confucian Church" as the state religion of China has roots in the thought of Kang Youwei, an exponent of the early New Confucian search for a regeneration of the social relevance of Confucianism, at a time when it was de-institutionalised with the collapse of the Qing dynasty and the Chinese empire. Kang modeled his ideal "Confucian Church" after European national Christian churches, as a hierarchic and centralised institution, closely bound to the state, with local church branches, devoted to the worship and the spread of the teachings of Confucius.
In contemporary China, the Confucian revival has developed into various interwoven directions: the proliferation of Confucian schools or academies ("shuyuan"), the resurgence of Confucian rites ("chuántǒng lǐyí"), and the birth of new forms of Confucian activity on the popular level, such as the Confucian communities ("shèqū rúxué"). Some scholars also consider the reconstruction of lineage churches and their ancestral temples, as well as cults and temples of natural and national gods within broader Chinese traditional religion, as part of the renewal of Confucianism.
Other forms of revival are salvationist folk religious movements with a specifically Confucian focus, or Confucian churches, for example the "Yidan xuetang" of Beijing, the "Mengmutang" of Shanghai, Confucian Shenism ("Rúzōng Shénjiào") or the phoenix churches, the Confucian Fellowship ("Rújiào Dàotán") in northern Fujian, which has spread rapidly over the years after its foundation, and ancestral temples of the Kong kin (the lineage of the descendants of Confucius himself) operating as Confucian-teaching churches.
Also, the Hong Kong Confucian Academy, one of the direct heirs of Kang Youwei's Confucian Church, has expanded its activities to the mainland, with the construction of statues of Confucius, Confucian hospitals, restoration of temples and other activities. In 2009, Zhou Beichen founded another institution which inherits the idea of Kang Youwei's Confucian Church, the Holy Hall of Confucius ("Kǒngshèngtáng") in Shenzhen, affiliated with the Federation of Confucian Culture of Qufu City. It was the first of a nationwide movement of congregations and civil organisations that was unified in 2015 in the Holy Confucian Church ("Kǒngshènghuì"). The first spiritual leader of the Holy Church is the renowned scholar Jiang Qing, the founder and manager of the Yangming Confucian Abode ("Yángmíng jīngshě"), a Confucian academy in Guiyang, Guizhou.
Chinese folk religious temples and kinship ancestral shrines may, on particular occasions, choose Confucian liturgy (called "rú" or "zhèngtǒng", "orthoprax") led by Confucian ritual masters ("lǐshēng") to worship the gods, instead of Taoist or popular ritual. "Confucian businessmen" ("rúshāngrén", also "refined businessman") is a recently rediscovered concept defining people of the economic-entrepreneurial elite who recognise their social responsibility and therefore apply Confucian culture to their business.
To govern by virtue, let us compare it to the North Star: it stays in its place, while the myriad stars wait upon it. ("Analects" 2.1)
A key Confucian concept is that in order to govern others one must first govern oneself according to the universal order. When genuine, the king's personal virtue ("de") spreads beneficent influence throughout the kingdom. This idea is developed further in the Great Learning, and is tightly linked with the Taoist concept of "wu wei": the less the king does, the more gets done. By being the "calm center" around which the kingdom turns, the king allows everything to function smoothly and avoids having to tamper with the individual parts of the whole.
This idea may be traced back to the ancient shamanic beliefs of the king being the axle between the sky, human beings, and the Earth. The emperors of China were considered agents of Heaven, endowed with the Mandate of Heaven. They held the power to define the hierarchy of divinities by bestowing titles upon mountains, rivers and dead people, acknowledging them as powerful and therefore establishing their cults.
Confucianism, despite supporting the importance of obeying national authority, places this obedience under absolute moral principles that curb the willful exercise of power, rather than making it unconditional. Submission to authority ("tsun wang") was only taken within the context of the moral obligations that rulers had toward their subjects, in particular benevolence ("jen"). From the earliest periods of Confucianism, the right of revolution against tyranny was always recognised, even by the most pro-authoritarian scholars such as Xunzi.
In teaching, there should be no distinction of classes. ("Analects" 15.39)
Although Confucius claimed that he never invented anything but was only transmitting ancient knowledge ("Analects" 7.1), he did produce a number of new ideas. Many European and American admirers such as Voltaire and Herrlee G. Creel point to the revolutionary idea of replacing nobility of blood with nobility of virtue. "Jūnzǐ" (lit. "lord's child"), which originally signified the younger, non-inheriting, offspring of a noble, became, in Confucius's work, an epithet having much the same meaning and evolution as the English "gentleman."
A virtuous commoner who cultivates his qualities may be a "gentleman", while a shameless son of the king is only a "small man." That he admitted students of different classes as disciples is a clear demonstration that he fought against the feudal structures that defined pre-imperial Chinese society.
Another new idea, that of meritocracy, led to the introduction of the imperial examination system in China. This system allowed anyone who passed an examination to become a government officer, a position which would bring wealth and honour to the whole family. The Chinese imperial examination system started in the Sui dynasty. Over the following centuries the system grew until finally almost anyone who wished to become an official had to prove his worth by passing a set of written government examinations. The practice of meritocracy still exists across China and East Asia today.
The works of Confucius were translated into European languages through the agency of Jesuit scholars stationed in China. Matteo Ricci was among the very earliest to report on the thoughts of Confucius, and Father Prospero Intorcetta wrote about the life and works of Confucius in Latin in 1687.
Translations of Confucian texts influenced European thinkers of the period, particularly among the Deists and other philosophical groups of the Enlightenment who were interested in the integration of the system of morality of Confucius into Western civilisation.
Confucianism influenced Gottfried Leibniz, who was attracted to the philosophy because of its perceived similarity to his own. It is postulated that certain elements of Leibniz's philosophy, such as "simple substance" and "preestablished harmony," were borrowed from his interactions with Confucianism. The French philosopher Voltaire was also influenced by Confucius, seeing the concept of Confucian rationalism as an alternative to Christian dogma. He praised Confucian ethics and politics, portraying the sociopolitical hierarchy of China as a model for Europe.
From the late 17th century onwards a whole body of literature known as the Han Kitab developed amongst the Hui Muslims of China, who infused Islamic thought with Confucianism. The works of Liu Zhi in particular, such as "Tiānfāng Diǎnlǐ", sought to harmonise Islam not only with Confucianism but also with Taoism, and are considered to be among the crowning achievements of Chinese Islamic culture.
Important military and political figures in modern Chinese history continued to be influenced by Confucianism, like the Muslim warlord Ma Fuxiang. The New Life Movement in the early 20th century was also influenced by Confucianism.
Referred to variously as the Confucian hypothesis and as a debated component of the more all-encompassing Asian Development Model, there exists among political scientists and economists a theory that Confucianism plays a large latent role in the ostensibly non-Confucian cultures of modern-day East Asia, in the form of the rigorous work ethic it endowed those cultures with. These scholars have held that, if not for Confucianism's influence on these cultures, many of the people of the East Asia region would not have been able to modernise and industrialise as quickly as Singapore, Malaysia, Hong Kong, Taiwan, Japan, South Korea and even China have done.
For example, the impact of the Vietnam War on Vietnam was devastating, but over the last few decades Vietnam has been redeveloping at a very fast pace. Most scholars attribute the origins of this idea to futurologist Herman Kahn's "World Economic Development: 1979 and Beyond".
Other studies, for example Cristobal Kay's "Why East Asia Overtook Latin America: Agrarian Reform, Industrialization, and Development", have attributed the Asian growth to other factors, for example the character of agrarian reforms, "state-craft" (state capacity), and interaction between agriculture and industry.
After Confucianism had become the official 'state religion' in China, its influence penetrated all walks of life and all streams of thought in Chinese society for the generations to come. This did not exclude martial arts culture. Though in his own day Confucius had rejected the practice of martial arts (with the exception of archery), he did serve under rulers who used military power extensively to achieve their goals. In later centuries, Confucianism heavily influenced many educated martial artists of great influence, such as Sun Lutang, especially from the 19th century onwards, when bare-handed martial arts in China became more widespread and had begun to more readily absorb philosophical influences from Confucianism, Buddhism and Daoism. Some argue therefore that despite Confucius's disdain for martial culture, his teachings became of much relevance to it.
Confucius and Confucianism were opposed or criticised from the start, including in Laozi's philosophy and Mozi's critique, and Legalists such as Han Fei ridiculed the idea that virtue would lead people to be orderly. In modern times, waves of opposition and vilification showed that Confucianism, instead of taking credit for the glories of Chinese civilisation, now had to take blame for its failures. The Taiping Rebellion described Confucian sages, as well as gods in Taoism and Buddhism, as devils. In the New Culture Movement, Lu Xun criticised Confucianism for shaping Chinese people into the condition they had reached by the late Qing dynasty: his criticisms are dramatically portrayed in "A Madman's Diary," which implies that Confucian society was cannibalistic. Leftists during the Cultural Revolution described Confucius as the representative of the class of slave owners.
In South Korea, Confucianism has long been criticised. Some South Koreans believe Confucianism has not contributed to the modernisation of South Korea. For example, South Korean writer Kim Kyong-il wrote an essay entitled "Confucius Must Die For the Nation to Live" ("gongjaga jug-eoya naraga sanda"). Kim said that filial piety is one-sided and blind, and that if it continues social problems will continue as the government keeps forcing Confucian filial obligations onto families.
Confucianism "largely defined the mainstream discourse on gender in China from the Han dynasty onward." The gender roles prescribed in the Three Obediences and Four Virtues became a cornerstone of the family, and thus, societal stability. Starting from the Han period, Confucians began to teach that a virtuous woman was supposed to follow the males in her family: the father before her marriage, the husband after she marries, and her sons in widowhood. In the later dynasties, more emphasis was placed on the virtue of chastity. The Song dynasty Confucian Cheng Yi stated that: "To starve to death is a small matter, but to lose one's chastity is a great matter." Chaste widows were revered and memorialised during the Ming and Qing periods. This "cult of chastity" accordingly condemned many widows to poverty and loneliness by placing a social stigma on remarriage.
For years, many modern scholars have regarded Confucianism as a sexist, patriarchal ideology that was historically damaging to Chinese women. It has also been argued by some Chinese and Western writers that the rise of neo-Confucianism during the Song dynasty led to a decline in the status of women. Some critics have also accused the prominent Song neo-Confucian scholar Zhu Xi of believing in the inferiority of women and holding that men and women need to be kept strictly separate, while Sima Guang also believed that women should remain indoors and not deal with the matters of men in the outside world. Finally, scholars have discussed the attitudes toward women in Confucian texts such as the Analects. In a much-discussed passage, women are grouped together with "xiaoren" (literally "small people", meaning people of low status or low morals) and described as being difficult to cultivate or deal with. Many traditional commentators and modern scholars have debated the precise meaning of the passage, and whether Confucius referred to all women or just certain groups of women.
Further analysis suggests, however, that women's place in Confucian society may be more complex. During the Han dynasty period, the influential Confucian text "Lessons for Women" ("Nüjie"), was written by Ban Zhao (45–114 CE) to instruct her daughters how to be proper Confucian wives and mothers, that is, to be silent, hard-working, and compliant. She stresses the complementarity and equal importance of the male and female roles according to yin-yang theory, but she clearly accepts the dominance of the male. However, she does present education and literary power as important for women. In later dynasties, a number of women took advantage of the Confucian acknowledgment of education to become independent in thought.
Joseph A. Adler points out that "Neo-Confucian writings do not necessarily reflect either the prevailing social practices or the scholars' own attitudes and practices in regard to actual women." Matthew Sommers has also indicated that the Qing dynasty government began to realise the utopian nature of enforcing the "cult of chastity" and began to allow practices such as widow remarriage to stand. Moreover, some Confucian texts like the "Chunqiu Fanlu" have passages that suggest a more equal relationship between a husband and his wife. More recently, some scholars have also begun to discuss the viability of constructing a "Confucian feminism".
Ever since Europeans first encountered Confucianism, the issue of how Confucianism should be classified has been subject to debate. In the 16th and the 17th centuries, the earliest European arrivals in China, the Christian Jesuits, considered Confucianism to be an ethical system, not a religion, and one that was compatible with Christianity. The Jesuits, including Matteo Ricci, saw Chinese rituals as "civil rituals" that could co-exist alongside the spiritual rituals of Catholicism.
By the early 18th century, this initial portrayal was rejected by the Dominicans and Franciscans, creating a dispute among Catholics in East Asia that was known as the "Rites Controversy." The Dominicans and Franciscans argued that Chinese ancestral worship was a form of idolatry that was contradictory to the tenets of Christianity. This view was reinforced by Pope Benedict XIV, who ordered a ban on Chinese rituals.
Some critics view Confucianism as definitely pantheistic and nontheistic, in that it is not based on the belief in the supernatural or in a personal god existing separate from the temporal plane. Confucius's views about Tiān 天 and about the divine providence ruling the world can be found above and in "Analects" 6:26, 7:22, and 9:12, for example. On spirituality, Confucius said to Chi Lu, one of his students: "You are not yet able to serve men, how can you serve spirits?" Attributes such as ancestor worship, ritual, and sacrifice were advocated by Confucius as necessary for social harmony; these attributes may be traced to the traditional Chinese folk religion.
Scholars recognise that classification ultimately depends on how one defines religion. Using stricter definitions of religion, Confucianism has been described as a moral science or philosophy. But using a broader definition, such as Frederick Streng's characterisation of religion as "a means of ultimate transformation," Confucianism could be described as a "sociopolitical doctrine having religious qualities." With the latter definition, Confucianism is religious, even if non-theistic, in the sense that it "performs some of the basic psycho-social functions of full-fledged religions."
Chinese philosophy
Chinese philosophy originates in the Spring and Autumn period and Warring States period, during a period known as the "Hundred Schools of Thought", which was characterized by significant intellectual and cultural developments. Although much of Chinese philosophy begins in the Warring States period, elements of Chinese philosophy have existed for several thousand years; some can be found in the Yi Jing (the "Book of Changes"), an ancient compendium of divination, which dates back to at least 672 BCE. It was during the Warring States era that what Sima Tan termed the major philosophical schools of China (Confucianism, Legalism, and Taoism) arose, along with philosophies that later fell into obscurity, like Agriculturalism, Mohism, Chinese Naturalism, and the Logicians.
Early Shang dynasty thought was based upon cycles. This notion stems from what the people of the Shang Dynasty could observe around them: day and night cycled, the seasons progressed again and again, and even the moon waxed and waned until it waxed again. Thus, this notion, which remained relevant throughout Chinese history, reflects the order of nature. In juxtaposition, it also marks a fundamental distinction from western philosophy, in which the dominant view of time is a linear progression. During the Shang, fate could be manipulated by great deities, commonly translated as gods. Ancestor worship was present and universally recognized. There was also human and animal sacrifice.
When the Shang were overthrown by the Zhou, a new political, religious and philosophical concept was introduced called the "Mandate of Heaven". This mandate was said to be taken when rulers became unworthy of their position and provided a shrewd justification for Zhou rule. During this period, archaeological evidence points to an increase in literacy and a partial shift away from the faith placed in Shangdi (the Supreme Being in traditional Chinese religion), with ancestor worship becoming commonplace and a more worldly orientation coming to the fore.
Confucianism developed during the Spring and Autumn period from the teachings of the Chinese philosopher Confucius (551–479 BCE), who considered himself a retransmitter of Zhou values. His philosophy concerns the fields of ethics and politics, emphasizing personal and governmental morality, correctness of social relationships, justice, traditionalism, and sincerity. The Analects stress the importance of ritual, but also the importance of 'ren', which loosely translates as 'human-heartedness'. Confucianism, along with Legalism, is responsible for creating the world's first meritocracy, which holds that one's status should be determined by education and character rather than ancestry, wealth, or friendship. Confucianism was and continues to be a major influence in Chinese culture, the state of China and the surrounding areas of East Asia.
Before the Han dynasty the largest rivals to Confucianism were Chinese Legalism and Mohism. Confucianism largely became the dominant philosophical school of China during the early Han dynasty following the replacement of its contemporary, the more Taoistic Huang-Lao. Legalism as a coherent philosophy disappeared largely due to its relationship with the unpopular authoritarian rule of Qin Shi Huang; however, many of its ideas and institutions would continue to influence Chinese philosophy until the end of Imperial rule during the Xinhai Revolution.
Mohism, though initially popular due to its emphasis on brotherly love versus harsh Qin Legalism, fell out of favour during the Han dynasty due to the efforts of Confucians in establishing their views as political orthodoxy. The Six Dynasties era saw the rise of the Xuanxue philosophical school and the maturation of Chinese Buddhism, which had entered China from India during the late Han dynasty. By the time of the Tang dynasty, five hundred years after Buddhism's arrival in China, it had transformed into a thoroughly Chinese religious philosophy dominated by the school of Zen Buddhism. Neo-Confucianism became highly popular during the Song and Ming dynasties, due in large part to the eventual combination of Confucian and Zen philosophy.
During the 19th and 20th centuries, Chinese philosophy integrated concepts from Western philosophy. Anti-Qing dynasty revolutionaries, involved in the Xinhai Revolution, saw Western philosophy as an alternative to traditional philosophical schools; students in the May Fourth Movement called for completely abolishing the old imperial institutions and practices of China. During this era, Chinese scholars attempted to incorporate Western philosophical ideologies such as democracy, Marxism, socialism, liberalism, republicanism, anarchism and nationalism into Chinese philosophy. The most notable examples are Sun Yat-Sen's Three Principles of the People ideology and Mao Zedong's Maoism, a variant of Marxism–Leninism. In the modern People's Republic of China, the official ideology is Deng Xiaoping's "market economy socialism".
Although the People's Republic of China has been historically hostile to the philosophy of ancient China, the influences of the past are still deeply ingrained in Chinese culture. In the post-Chinese economic reform era, modern Chinese philosophy has reappeared in forms such as the "New Confucianism". As in Japan, philosophy in China has become a melting pot of ideas. It accepts new concepts, while attempting also to accord old beliefs their due. Chinese philosophy still carries profound influence amongst the people of East Asia, and even Southeast Asia.
Around 500 BCE, after the Zhou state weakened and China moved into the Spring and Autumn period, the classic period of Chinese philosophy began. This is known as the Hundred Schools of Thought ("zhūzǐ bǎijiā"; "various scholars, hundred schools"). This period is considered the golden age of Chinese philosophy. Of the many schools founded at this time and during the subsequent Warring States period, the four most influential ones were Confucianism, Daoism (often spelled "Taoism"), Mohism and Legalism.
Confucianism is a philosophical school developed from the teachings of Confucius collected and written by his disciples after his death in "The Analects", and in the Warring States period, Mencius in "The Mencius" and Xunzi in "The Xunzi". It is a system of moral, social, political, and religious thought that has had tremendous influence on Chinese history, thought, and culture down to the 20th century. Some Westerners have considered it to have been the "state religion" of imperial China because of its lasting influence on Asian culture. Its influence also spread to Korea, Japan, Vietnam and many other Asian countries.
Confucianism reached its peak of influence during the Tang and Song dynasties under a rebranded Confucianism called Neo-Confucianism. Confucius expanded on the already present ideas of Chinese religion and culture to reflect the time period and environment of political chaos during the Warring States period. Because Confucius embedded Chinese culture so heavily into his philosophy, it was able to resonate with the people of China. This high approval of Confucianism can be seen through the reverence of Confucius in modern-day China.
The major Confucian concepts include "rén" (humanity or humaneness), "zhèngmíng" (rectification of names; e.g. a ruler who rules unjustly is no longer a ruler and may be dethroned), "zhōng" (loyalty), "xiào" (filial piety), and "li" (ritual). Confucius taught both positive and negative versions of the Golden Rule. The concepts of yin and yang represent two opposing forces that are permanently in conflict with each other, leading to perpetual contradiction and change. The Confucian idea of "rid of the two ends, take the middle" is a Chinese equivalent of Hegel's idea of "thesis, antithesis, and synthesis", a way of reconciling opposites by arriving at some middle ground combining the best of both. Confucius heavily emphasized the idea that the success of society's microcosms (subunits such as family and community) was the foundation for a successful state or country. He believed in the use of education to further the people's knowledge of ethics, societal behavior, and reverence for other humans. With the combination of education, successful families, and his ethical teachings, he believed he could help establish a well-ordered society in China.
Taoism arose as a philosophy and later also developed into a religion based on the texts the "Tao Te Ching" ("Dào Dé Jīng"; ascribed to Laozi) and the "Zhuangzi" (partly ascribed to Zhuangzi). The character "Dao" literally means 'path' or 'way'. However, in Taoism it refers more often to a metaphysical force that encompasses the entire universe but which cannot be described nor felt. All major Chinese philosophical schools have investigated the correct "Way" to go about a moral life, but in Taoism it takes on the most abstract meanings, leading this school to be named after it. It advocated nonaction ("wu wei"), the strength of softness, spontaneity, and relativism. Although it serves as a rival to Confucianism, a school of active morality, this rivalry is compromised and given perspective by the idiom "practice Confucianism on the outside, Taoism on the inside."
Most of Taoism's focus is on the notion that human attempts to make the world better actually make the world worse. Therefore, it is better to strive for harmony, minimising potentially harmful interference with nature or in human affairs.
The philosopher Han Fei synthesized the methods of his predecessors, which the famous historian Sima Tan posthumously termed Legalism. With an essential principle like "when the epoch changed, the ways changed", late pre-Han dynasty reformers emphasized rule by law.
In Han Fei's philosophy, a ruler should govern his subjects by the following trinity:
What has been termed by some the intrastate Realpolitik of the Warring States period was highly progressive, and extremely critical of the Confucian and Mohist schools. The Legalism of the Qin dynasty, however, would be blamed for creating a totalitarian society, and the school thereby fell into decline. Its main motto was: "Set clear strict laws, or deliver harsh punishment". In Han Fei's philosophy the ruler possessed authority regarding reward and penalty, enacted through law. Shang Yang and Han Fei promoted absolute adherence to the law, regardless of the circumstances or the person. Ministers were to be rewarded only if their words matched the results of their proposals. Legalism, in accordance with Shang Yang's interpretation, could encourage the state to be a militaristic autarky.
The School of Naturalists or the School of Yin-yang () was a Warring States era philosophy that synthesized the concepts of yin-yang and the Wu Xing; Zou Yan is considered the founder of this school. His theory attempted to explain the universe in terms of basic forces in nature: the complementary agents of yin (dark, cold, female, negative) and yang (light, hot, male, positive) and the Five Elements or Five Phases (water, fire, wood, metal, and earth). In its early days, this theory was most strongly associated with the states of Yan and Qi. In later periods, these epistemological theories came to hold significance in both philosophy and popular belief. This school was absorbed into Taoism's alchemic and magical dimensions as well as into the Chinese medical framework. The earliest surviving recordings of this are in the Ma Wang Dui texts and Huang Di Nei Jing.
Mohism (Moism), founded by Mozi (), promotes universal love with the aim of mutual benefit. Everyone must love each other equally and impartially to avoid conflict and war. Mozi was strongly against Confucian ritual, instead emphasizing pragmatic survival through farming, fortification, and statecraft. Tradition is inconsistent, and human beings need an extra-traditional guide to identify which traditions are acceptable. The moral guide must then promote and encourage social behaviors that maximize general benefit. As motivation for his theory, Mozi brought in the "Will of Heaven", but rather than being religious, his philosophy parallels utilitarianism.
The logicians (School of Names) were concerned with logic, paradoxes, names and actuality (similar to Confucian rectification of names). The logician Hui Shi was a friendly rival to Zhuangzi, arguing against Taoism in a light-hearted and humorous manner. Another logician, Gongsun Long, originated the famous When a White Horse is Not a Horse dialogue. This school did not thrive because the Chinese regarded sophistry and dialectic as impractical.
Agriculturalism was an early agrarian social and political philosophy that advocated peasant utopian communalism and egalitarianism. The philosophy is founded on the notion that human society originates with the development of agriculture, and societies are based upon "people's natural propensity to farm."
The Agriculturalists believed that the ideal government, modeled after the semi-mythical governance of Shennong, is led by a benevolent king, one who works alongside the people in tilling the fields. The Agriculturalist king is not paid by the government through its treasuries; his livelihood is derived from the profits he earns working in the fields, not his leadership. Unlike the Confucians, the Agriculturalists did not believe in the division of labour, arguing instead that the economic policies of a country need to be based upon an egalitarian self sufficiency. The Agriculturalists supported the fixing of prices, in which all similar goods, regardless of differences in quality and demand, are set at exactly the same, unchanging price.
The short-lived Qin dynasty, where Legalism was the official philosophy, quashed the Mohist and Confucianist schools. Legalism remained influential during the early Han dynasty under the Taoist-Realist ideology Huang-Lao, until Emperor Wu of Han adopted Confucianism as official doctrine. Confucianism and Taoism became the determining forces of Chinese thought until the introduction of Buddhism.
Confucianism was particularly strong during the Han dynasty, whose greatest thinker was Dong Zhongshu, who integrated Confucianism with the thought of the Yin-Yang school and the theory of the Five Elements. He was also a promoter of the New Text school, which considered Confucius a divine figure and a spiritual ruler of China who foresaw and started the evolution of the world towards Universal Peace. In contrast, the Old Text school advocated the use of Confucian works written in the ancient script (hence the name "Old Text"), which it considered more reliable. In particular, it rejected the treatment of Confucius as a godlike figure, regarding him as the greatest of sages, but still a human and mortal.
The 3rd and 4th centuries saw the rise of the "Xuanxue" (mysterious learning), also called "Neo-Taoism". The most important philosophers of this movement were Wang Bi, Xiang Xiu and Guo Xiang. The main question of this school was whether Being came before Not-Being (in Chinese, "ming" and "wuming"). A peculiar feature of these Taoist thinkers, like the Seven Sages of the Bamboo Grove, was the concept of "feng liu" (lit. wind and flow), a sort of romantic spirit which encouraged following the natural and instinctive impulse.
Buddhism arrived in China around the 1st century AD, but it was not until the Northern and Southern, Sui and Tang dynasties that it gained considerable influence and acknowledgement. At the beginning, it was considered a sort of Taoist sect. Mahayana Buddhism was far more successful in China than its rival Hinayana, and both Indian schools and local Chinese sects arose from the 5th century. Two chiefly important monk philosophers were Sengzhao and Daosheng. But probably the most influential and original of these schools was the Chan sect, which had an even stronger impact in Japan as the Zen sect.
In the mid-Tang Buddhism reached its peak, and reportedly there were 4,600 monasteries, 40,000 hermitages and 260,500 monks and nuns. The power of the Buddhist clergy was so great and the wealth of the monasteries so impressive, that it instigated criticism from Confucian scholars, who considered Buddhism as a foreign religion. In 845 Emperor Wuzong ordered the Great Anti-Buddhist Persecution, confiscating the riches and returning monks and nuns to lay life. From then on, Buddhism lost much of its influence.
Xuanxue was a philosophical school that combined elements of Confucianism and Taoism to reinterpret the "I Ching", "Tao Te Ching", and "Zhuangzi".
Buddhism is a religion, a practical philosophy, and arguably a psychology, focusing on the teachings of Gautama Buddha, who lived on the Indian subcontinent most likely from the mid-6th to the early 5th century BCE. When used in a generic sense, a Buddha is generally considered to be someone who discovers the true nature of reality.
Until the 4th century AD, Buddhism had little impact on China, but in the 4th century its teachings hybridized with those of Taoism. Buddhism brought to China the idea of many hells, where sinners went, but whose souls could be saved by pious acts. Since Chinese traditional thought focused more on ethics than on metaphysics, the merging of Buddhist and Taoist concepts developed several schools distinct from the originating Indian schools. The most prominent examples with philosophical merit are Sanlun, Tiantai, Huayan, and Chán (a.k.a. Zen). They investigate consciousness, levels of truth, whether reality is ultimately empty, and how enlightenment is to be achieved. Buddhism has a spiritual aspect that complements the action of Neo-Confucianism, with prominent Neo-Confucians advocating certain forms of meditation.
Neo-Confucianism was a revived version of old Confucian principles that appeared around the Song dynasty, with Buddhist, Taoist, and Legalist features. The first philosophers, such as Shao Yong, Zhou Dunyi and Zhang Zai, were cosmologists and worked on the Yi Jing. The Cheng brothers, Cheng Yi and Cheng Hao, are considered the founders of the two main schools of thought of Neo-Confucianism: Cheng Yi of the School of Principle, Cheng Hao of the School of Mind. The School of Principle gained supremacy during the Song dynasty with the philosophical system elaborated by Zhu Xi, which became mainstream and was officially adopted by the government for the Imperial examinations under the Yuan dynasty. The School of Mind was developed by Lu Jiuyuan, Zhu Xi's main rival, but was soon forgotten. Only during the Ming dynasty was the School of Mind revived by Wang Shouren, whose influence is equal to that of Zhu Xi. This school was particularly important in Japan.
During the Qing dynasty many philosophers objected to Neo-Confucianism, and there was a return to the Han dynasty Confucianism, as well as a revival of the controversy between the Old Text and New Text schools. In this period Western culture also began to penetrate China, but most Chinese thought that the Westerners were perhaps more advanced in technology and warfare, while China held primacy in moral and intellectual fields.
Despite Confucianism losing popularity to Taoism and Buddhism, Neo-Confucianism combined those ideas into a more metaphysical framework. Its concepts include "li" (principle, akin to Plato's forms), "qi" (vital or material force), "taiji" (the Great Ultimate), and "xin" (mind). Song dynasty philosopher Zhou Dunyi (1017–1073) is commonly seen as the first true "pioneer" of Neo-Confucianism, using Daoist metaphysics as a framework for his ethical philosophy. Neo-Confucianism developed both as a renaissance of traditional Confucian ideas, and as a reaction to the ideas of Buddhism and religious Daoism. Although the Neo-Confucianists denounced Buddhist metaphysics, Neo-Confucianism did borrow Daoist and Buddhist terminology and concepts.
Neo-Confucianist philosophers like Zhu Xi and Wang Yangming are seen as the most important figures of Neo-Confucianism.
During the industrial and modern ages, Chinese philosophy also began to integrate concepts of Western philosophy as steps toward modernization. Notably, Chinese philosophy never developed the concept of human rights, so classical Chinese lacked words for it. In 1864, W.A.P. Martin had to invent the word "quanli" () to translate the Western concept of "rights" when translating Henry Wheaton's "Elements of International Law" into classical Chinese.
By the time of the Xinhai Revolution in 1911, there were many calls such as the May Fourth Movement to completely abolish the old imperial institutions and practices of China. There have been attempts to incorporate democracy, republicanism, and industrialism into Chinese philosophy, notably by Sun Yat-Sen at the beginning of the 20th century. Mao Zedong added Marxism, Stalinism, Chinese Marxist Philosophy and other communist thought.
When the Communist Party of China came to power, previous schools of thought, with the notable exception of Legalism, were denounced as backward, and were later even purged during the Cultural Revolution; nevertheless, their influence on Chinese thought remains to this day. The current government of the People's Republic of China is trying to encourage a form of market socialism.
Since the radical movement of the Cultural Revolution, the Chinese government has become much more tolerant of the practice of traditional beliefs. The 1978 Constitution of the People's Republic of China guarantees "freedom of religion" with a number of restrictions. Spiritual and philosophical institutions have been allowed to be established or re-established, as long as they are not perceived to be a threat to the power of the CPC. Moreover, those organizations are heavily monitored. The influences of the past are still deeply ingrained in Chinese culture.
New Confucianism is an intellectual movement of Confucianism that began in the early 20th century in Republican China, and revived in post-Mao era contemporary China. It is deeply influenced by, but not identical with, the Neo-Confucianism of the Song and Ming dynasties.
Although the individual philosophical schools differ considerably, they nevertheless share a common vocabulary and set of concerns.
Among the terms commonly found in Chinese philosophy are:
Among the commonalities of Chinese philosophies are:
Confucius
Confucius (551 BC – 479 BC) was a Chinese philosopher and politician of the Spring and Autumn period.
The philosophy of Confucius, also known as Confucianism, emphasized personal and governmental morality, correctness of social relationships, justice, kindness, and sincerity. His followers competed successfully with many other schools during the Hundred Schools of Thought era only to be suppressed in favor of the Legalists during the Qin dynasty. Following the victory of Han over Chu after the collapse of Qin, Confucius's thoughts received official sanction in the new government and were further developed into a system known in the West as Neo-Confucianism, and later New Confucianism (Modern Neo-Confucianism).
Confucius is traditionally credited with having authored or edited many of the Chinese classic texts including all of the Five Classics, but modern scholars are cautious of attributing specific assertions to Confucius himself. Aphorisms concerning his teachings were compiled in the "Analects", but only many years after his death.
Confucius's principles have commonality with Chinese tradition and belief. He championed strong family loyalty, ancestor veneration, and respect of elders by their children and of husbands by their wives, recommending family as a basis for ideal government. He espoused the well-known principle "Do not do unto others what you do not want done to yourself", the Golden Rule. He is also a traditional deity in Daoism.
Confucius is widely considered as one of the most important and influential individuals in human history. His teaching and philosophy greatly impacted people around the world and remain influential today.
The name "Confucius" is a Latinized form of the Mandarin Chinese "Kǒng Fūzǐ" (, meaning "Master Kǒng"), and was coined in the late 16th century by the early Jesuit missionaries to China. Confucius's clan name was "Kǒng" (; Old Chinese: ), and his given name was "Qiū" (; OC: ). His "capping name", given upon reaching adulthood and by which he would have been known to all but his older family members, was "Zhòngní" (), the "Zhòng" indicating that he was the second son in his family.
It is thought that Confucius was born on September 28, 551 BC, in Zou (, in modern Shandong province). The area was notionally controlled by the kings of Zhou but effectively independent under the local lords of Lu, who ruled from the nearby city of Qufu. His father Kong He (or Shuliang He) was an elderly commandant of the local Lu garrison. His ancestry traced back through the dukes of Song to the Shang dynasty, which had preceded the Zhou. Traditional accounts of Confucius's life relate that Kong He's grandfather had migrated the family from Song to Lu.
Kong He died when Confucius was three years old, and Confucius was raised by his mother Yan Zhengzai () in poverty. His mother would later die at less than 40 years of age. At age 19 he married Qiguan (), and a year later the couple had their first child, Kong Li (). Qiguan and Confucius would later have two daughters together, one of whom is thought to have died as a child.
Confucius was educated at schools for commoners, where he studied and learned the Six Arts.
Confucius was born into the class of "shi" (), between the aristocracy and the common people. He is said to have worked in various government jobs during his early 20s, and as a bookkeeper and a caretaker of sheep and horses, using the proceeds to give his mother a proper burial. When his mother died, Confucius (aged 23) is said to have mourned for three years, as was the tradition.
In Confucius's time, the state of Lu was headed by a ruling ducal house. Under the duke were three aristocratic families, whose heads bore the title of viscount and held hereditary positions in the Lu bureaucracy. The Ji family held the position "Minister over the Masses", who was also the "Prime Minister"; the Meng family held the position "Minister of Works"; and the Shu family held the position "Minister of War". In the winter of 505 BC, Yang Hu—a retainer of the Ji family—rose up in rebellion and seized power from the Ji family. However, by the summer of 501 BC, the three hereditary families had succeeded in expelling Yang Hu from Lu. By then, Confucius had built up a considerable reputation through his teachings, while the families came to see the value of proper conduct and righteousness, so they could achieve loyalty to a legitimate government. Thus, that year (501 BC), Confucius came to be appointed to the minor position of governor of a town. Eventually, he rose to the position of Minister of Crime.
Confucius desired to return the authority of the state to the duke by dismantling the fortifications of the city—strongholds belonging to the three families. This way, he could establish a centralized government. However, Confucius relied solely on diplomacy as he had no military authority himself. In 500 BC, Hou Fan—the governor of Hou—revolted against his lord of the Shu family. Although the Meng and Shu families unsuccessfully besieged Hou, a loyalist official rose up with the people of Hou and forced Hou Fan to flee to the Qi state. The situation may have been in favor for Confucius as this likely made it possible for Confucius and his disciples to convince the aristocratic families to dismantle the fortifications of their cities. Eventually, after a year and a half, Confucius and his disciples succeeded in convincing the Shu family to raze the walls of Hou, the Ji family in razing the walls of Bi, and the Meng family in razing the walls of Cheng. First, the Shu family led an army towards their city Hou and tore down its walls in 498 BC.
Soon thereafter, Gongshan Furao (also known as Gongshan Buniu), a retainer of the Ji family, revolted and took control of the forces at Bi. He immediately launched an attack and entered the capital Lu. Earlier, Gongshan had approached Confucius to join him, which Confucius considered as he wanted the opportunity to put his principles into practice but he gave up on the idea in the end. Confucius disapproved the use of a violent revolution by principle, even though the Ji family dominated the Lu state by force for generations and had exiled the previous duke. Creel (1949) states that, unlike the rebel Yang Hu before him, Gongshan may have sought to destroy the three hereditary families and restore the power of the duke. However, Dubs (1946) is of the view that Gongshan was encouraged by Viscount Ji Huan to invade the Lu capital in an attempt to avoid dismantling the Bi fortified walls. Whatever the situation may have been, Gongshan was considered an upright man who continued to defend the state of Lu, even after he was forced to flee.
During the revolt by Gongshan, Zhong You had managed to keep the duke and the three viscounts together at the court. Zhong You was one of the disciples of Confucius and Confucius had arranged for him to be given the position of governor by the Ji family. When Confucius heard of the raid, he requested that Viscount Ji Huan allow the duke and his court to retreat to a stronghold on his palace grounds. Thereafter, the heads of the three families and the duke retreated to the Ji's palace complex and ascended the Wuzi Terrace. Confucius ordered two officers to lead an assault against the rebels. At least one of the two officers was a retainer of the Ji family, but they were unable to refuse the orders while in the presence of the duke, viscounts, and court. The rebels were pursued and defeated at Gu. Immediately after the revolt was defeated, the Ji family razed the Bi city walls to the ground.
The attackers retreated after realizing that they would have to become rebels against the state and their lord. Through Confucius' actions, the Bi officials had inadvertently revolted against their own lord, thus forcing Viscount Ji Huan's hand in having to dismantle the walls of Bi (as it could have harbored such rebels) or confess to instigating the event by going against proper conduct and righteousness as an official. Dubs (1949) suggests that the incident brought to light Confucius' foresight, practical political ability, and insight into human character.
When it was time to dismantle the city walls of the Meng family, the governor was reluctant to have his city walls torn down and convinced the head of the Meng family not to do so. The "Zuozhuan" recalls that the governor advised against razing the walls to the ground as he said that it made Cheng vulnerable to the Qi state and cause the destruction of the Meng family. Even though Viscount Meng Yi gave his word not to interfere with an attempt, he went back on his earlier promise to dismantle the walls.
Later in 498 BC, Duke Ding personally went with an army to lay siege to Cheng in an attempt to raze its walls to the ground, but he did not succeed. Thus, Confucius could not achieve the idealistic reforms that he wanted including restoration of the legitimate rule of the duke. He had made powerful enemies within the state, especially with Viscount Ji Huan, due to his successes so far. According to accounts in the "Zuozhuan" and "Shiji", Confucius departed his homeland in 497 BC after his support for the failed attempt of dismantling the fortified city walls of the powerful Ji, Meng, and Shu families. He left the state of Lu without resigning, remaining in self-exile and unable to return as long as Viscount Ji Huan was alive.
The "Shiji" stated that the neighboring Qi state was worried that Lu was becoming too powerful while Confucius was involved in the government of the Lu state. According to this account, Qi decided to sabotage Lu's reforms by sending 100 good horses and 80 beautiful dancing girls to the duke of Lu. The duke indulged himself in pleasure and did not attend to official duties for three days. Confucius was disappointed and resolved to leave Lu and seek better opportunities, yet to leave at once would expose the misbehavior of the duke and therefore bring public humiliation to the ruler Confucius was serving. Confucius therefore waited for the duke to make a lesser mistake. Soon after, the duke neglected to send to Confucius a portion of the sacrificial meat that was his due according to custom, and Confucius seized upon this pretext to leave both his post and the Lu state.
After Confucius's resignation, he began a long journey or set of journeys around the principality states of north-east and central China including Wey, Song, Zheng, Cao, Chu, Qi, Chen, and Cai (and a failed attempt to go to Jin). At the courts of these states, he expounded his political beliefs but did not see them implemented.
According to the "Zuozhuan", Confucius returned home to his native Lu when he was 68, after he was invited to do so by Ji Kangzi, the chief minister of Lu. The "Analects" depict him spending his last years teaching 72 or 77 disciples and transmitting the old wisdom via a set of texts called the Five Classics.
During his return, Confucius sometimes acted as an advisor to several government officials in Lu, including Ji Kangzi, on matters including governance and crime.
Burdened by the loss of both his son and his favorite disciples, Confucius died of natural causes at the age of 71 or 72. He was buried in the Kong Lin cemetery, which lies in the historical part of Qufu in Shandong Province. The original tomb erected there in memory of Confucius on the bank of the Sishui River had the shape of an axe. A raised brick platform at the front of the memorial holds offerings such as sandalwood incense and fruit.
Although Confucianism is often followed in a religious manner by the Chinese, many argue that its values are secular and that it is, therefore, less a religion than a secular morality. Proponents argue, however, that despite the secular nature of Confucianism's teachings, it is based on a worldview that is religious. Confucianism discusses elements of the afterlife and views concerning Heaven, but it is relatively unconcerned with some spiritual matters often considered essential to religious thought, such as the nature of souls. However, Confucius is said to have believed in astrology, saying: "Heaven sends down its good or evil symbols and wise men act accordingly".
In the "Analects", Confucius presents himself as a "transmitter who invented nothing". He puts the greatest emphasis on the importance of study, and it is the Chinese character for study () that opens the text. Far from trying to build a systematic or formalist theory, he wanted his disciples to master and internalize older classics, so that their deep thought and thorough study would allow them to relate the moral problems of the present to past political events (as recorded in the "Annals") or the past expressions of commoners' feelings and noblemen's reflections (as in the poems of the "Book of Odes").
One of the deepest teachings of Confucius may have been the superiority of personal exemplification over explicit rules of behavior. His moral teachings emphasized self-cultivation, emulation of moral exemplars, and the attainment of skilled judgment rather than knowledge of rules. Confucian ethics may, therefore, be considered a type of virtue ethics. His teachings rarely rely on reasoned argument, and ethical ideals and methods are conveyed indirectly, through allusion, innuendo, and even tautology. His teachings require examination and context to be understood. A good example is found in this famous anecdote:
By not asking about the horses, Confucius demonstrates that the sage values human beings over property; readers are led to reflect on whether their response would follow Confucius's, and to pursue self-improvement if it would not. Confucius serves not as an all-powerful deity or a universally true set of abstract principles, but rather as the ultimate model for others. For these reasons, according to many commentators, Confucius's teachings may be considered a Chinese example of humanism.
One of his teachings was a variant of the Golden Rule, sometimes called the "Silver Rule" owing to its negative form:
Often overlooked in Confucian ethics are the virtues to the self: sincerity and the cultivation of knowledge. Virtuous action towards others begins with virtuous and sincere thought, which begins with knowledge. A virtuous disposition without knowledge is susceptible to corruption, and virtuous action without sincerity is not true righteousness. Cultivating knowledge and sincerity is also important for one's own sake; the superior person loves learning for the sake of learning and righteousness for the sake of righteousness.
The Confucian theory of ethics as exemplified in "lǐ" () is based on three important conceptual aspects of life: (a) ceremonies associated with sacrifice to ancestors and deities of various types, (b) social and political institutions, and (c) the etiquette of daily behavior. It was believed by some that "lǐ" originated from the heavens, but Confucius stressed the development of "lǐ" through the actions of sage leaders in human history. His discussions of "lǐ" seem to redefine the term to refer to all actions committed by a person to build the ideal society, rather than those simply conforming with canonical standards of ceremony.
In the early Confucian tradition, "lǐ" was doing the proper thing at the proper time, balancing between maintaining existing norms to perpetuate an ethical social fabric, and violating them in order to accomplish ethical good. Training in the "lǐ" of past sages cultivates in people virtues that include ethical judgment about when "lǐ" must be adapted in light of situational contexts.
In Confucianism, the concept of "li" is closely related to "yì" (), which is based upon the idea of reciprocity. "Yì" can be translated as righteousness, though it may simply mean what is ethically best to do in a certain context. The term contrasts with action done out of self-interest. While pursuing one's own self-interest is not necessarily bad, one would be a better, more righteous person if one's life was based upon following a path designed to enhance the greater good. Thus an outcome of "yì" is doing the right thing for the right reason.
Just as action according to "lǐ" should be adapted to conform to the aspiration of adhering to "yì", so "yì" is linked to the core value of "rén" (). "Rén" consists of five basic virtues: seriousness, generosity, sincerity, diligence and kindness. "Rén" is the virtue of perfectly fulfilling one's responsibilities toward others, most often translated as "benevolence" or "humaneness"; translator Arthur Waley calls it "Goodness" (with a capital "G"), and other translations that have been put forth include "authoritativeness" and "selflessness." Confucius's moral system was based upon empathy and understanding others, rather than divinely ordained rules. To develop one's spontaneous responses of "rén" so that these could guide action intuitively was even better than living by the rules of "yì". Confucius asserts that virtue is a mean between extremes. For example, the properly generous person gives the right amount—not too much and not too little.
Confucius's political thought is based upon his ethical thought. He argued that the best government is one that rules through "rites" ("lǐ") and people's natural morality, and "not" by using bribery and coercion. He explained that this is one of the most important analects: "If the people be led by laws, and uniformity sought to be given them by punishments, they will try to avoid the punishment, but have no sense of shame. If they be led by virtue, and uniformity sought to be given them by the rules of propriety, they will have the sense of the shame, and moreover will become good." (Translated by James Legge) in the Great Learning (). This "sense of shame" is an internalisation of duty, where the punishment precedes the evil action, instead of following it in the form of laws as in Legalism.
Confucius looked nostalgically upon earlier days, and urged the Chinese, particularly those with political power, to model themselves on earlier examples. In times of division, chaos, and endless wars between feudal states, he wanted to restore the Mandate of Heaven () that could unify the "world" (, "all under Heaven") and bestow peace and prosperity on the people. Because his vision of personal and social perfections was framed as a revival of the ordered society of earlier times, Confucius is often considered a great proponent of conservatism, but a closer look at what he proposes often shows that he used (and perhaps twisted) past institutions and rites to push a new political agenda of his own: a revival of a unified royal state, whose rulers would succeed to power on the basis of their moral merits instead of lineage. These would be rulers devoted to their people, striving for personal and social perfection, and such a ruler would spread his own virtues to the people instead of imposing proper behavior with laws and rules.
Confucius did not believe in the concept of "democracy", itself an Athenian concept unknown in ancient China, and his principles recommend against individuals electing their own political leaders to govern them, or the idea that anyone is capable of self-government. He expressed fears that the masses lacked the intellect to make decisions for themselves, and that, in his view, since not everyone is created equal, not everyone has a right of self-government.
While he supported the idea of government ruling by a virtuous king, his ideas contained a number of elements to limit the power of rulers. He argued for representing truth in language, and honesty was of paramount importance. Even in facial expression, truth must always be represented. Confucius believed that if a ruler leads correctly, by his own actions, explicit orders are unnecessary, since others will follow the proper actions of their ruler. In discussing the relationship between a king and his subject (or a father and his son), he underlined the need to give due respect to superiors. This demanded that subordinates must advise their superiors if the superiors are considered to be taking a course of action that is wrong. Confucius believed in ruling by example: if the ruler leads correctly, orders backed by force or punishment are not necessary.
Confucius's teachings were later turned into an elaborate set of rules and practices by his numerous disciples and followers, who organized his teachings into the Analects. Confucius's disciples and his only grandson, Zisi, continued his philosophical school after his death. These efforts spread Confucian ideals to students who then became officials in many of the royal courts in China, thereby giving Confucianism the first wide-scale test of its dogma.
Two of Confucius's most famous later followers emphasized radically different aspects of his teachings. In the centuries after his death, Mencius () and Xun Zi () both composed important teachings elaborating in different ways on the fundamental ideas associated with Confucius. Mencius (4th century BC) articulated the innate goodness in human beings as a source of the ethical intuitions that guide people towards "rén", "yì", and "lǐ", while Xun Zi (3rd century BC) underscored the realistic and materialistic aspects of Confucian thought, stressing that morality was inculcated in society through tradition and in individuals through training. In time, their writings, together with the "Analects" and other core texts came to constitute the philosophical corpus of Confucianism.
This realignment in Confucian thought was parallel to the development of Legalism, which saw filial piety as self-interest and not a useful tool for a ruler to create an effective state. A disagreement between these two political philosophies came to a head in 223 BC when the Qin state conquered all of China. Li Si, Prime Minister of the Qin dynasty, convinced Qin Shi Huang to abandon the Confucians' recommendation of awarding fiefs, as the Zhou dynasty had done before them, which he saw as contrary to the Legalist idea of centralizing the state around the ruler. When the Confucian advisers pressed their point, Li Si had many Confucian scholars killed and their books burned, a huge blow to the philosophy and to Chinese scholarship.
Under the succeeding Han and Tang dynasties, Confucian ideas gained even more widespread prominence. Under Wudi, the works of Confucius were made the official imperial philosophy and required reading for civil service examinations in 140 BC, a practice that continued nearly unbroken until the end of the 19th century. As Mohism lost support by the time of the Han, the main philosophical contenders were Legalism, which Confucian thought somewhat absorbed; the teachings of Laozi, whose focus on more spiritual ideas kept it from direct conflict with Confucianism; and the new Buddhist religion, which gained acceptance during the Southern and Northern Dynasties era. Both Confucian ideas and Confucian-trained officials were relied upon in the Ming Dynasty and even the Yuan Dynasty, although Kublai Khan distrusted handing over provincial control to them.
During the Song dynasty, the scholar Zhu Xi (AD 1130–1200) added ideas from Daoism and Buddhism into Confucianism. In his life, Zhu Xi was largely ignored, but not long after his death, his ideas became the new orthodox view of what Confucian texts actually meant. Modern historians view Zhu Xi as having created something rather different and call his way of thinking "Neo-Confucianism". Neo-Confucianism held sway in China, Japan, Korea, and Vietnam until the 19th century.
The works of Confucius were first translated into European languages by Jesuit missionaries in the 16th century during the late Ming dynasty. The first known effort was by Michele Ruggieri, who returned to Italy in 1588 and carried on his translations while residing in Salerno. Matteo Ricci started to report on the thoughts of Confucius, and a team of Jesuits—Prospero Intorcetta, Philippe Couplet, and two others—published a translation of several Confucian works and an overview of Chinese history in Paris in 1687. François Noël, after failing to persuade Clement XI that Chinese veneration of ancestors and Confucius did not constitute idolatry, completed the Confucian canon at Prague in 1711, with more scholarly treatments of the other works and the first translation of the collected works of Mencius. It is thought that such works had considerable influence on European thinkers of the period, particularly among the Deists and other philosophical groups of the Enlightenment who were interested in the integration of the system of morality of Confucius into Western civilization.
In the modern era Confucian movements, such as New Confucianism, still exist, but during the Cultural Revolution, Confucianism was frequently attacked by leading figures in the Communist Party of China. This was partially a continuation of the condemnations of Confucianism by intellectuals and activists in the early 20th century as a cause of the ethnocentric close-mindedness and refusal of the Qing Dynasty to modernize that led to the tragedies that befell China in the 19th century.
Confucius's works are studied by scholars in many other Asian countries, particularly those in the Chinese cultural sphere, such as Korea, Japan, and Vietnam. Many of those countries still hold the traditional memorial ceremony every year.
Among Tibetans, Confucius is often worshipped as a holy king and master of magic, divination and astrology. Tibetan Buddhists see him as learning divination from the buddha Manjushri (and that knowledge subsequently reaching Tibet through Princess Wencheng), while Bon practitioners see him as being a reincarnation of Tonpa Shenrab Miwoche, the legendary founder of Bon.
The Ahmadiyya Muslim Community believes Confucius was a Divine Prophet of God, as were Lao-Tzu and other eminent Chinese personages.
In modern times, Asteroid 7853, "Confucius", was named after the Chinese thinker.
Confucius began teaching after he turned 30, and taught more than 3,000 students in his life, about 70 of whom were considered outstanding. His disciples and the early Confucian community they formed became the most influential intellectual force in the Warring States period. The Han dynasty historian Sima Qian dedicated a chapter in his "Records of the Grand Historian" to the biographies of Confucius's disciples, accounting for the influence they exerted in their time and afterward. Sima Qian recorded the names of 77 disciples in his collective biography, while "Kongzi Jiayu", another early source, records 76, not completely overlapping. The two sources together yield the names of 96 disciples. 22 of them are mentioned in the "Analects", while the "Mencius" records 24.
Confucius did not charge any tuition, and only requested a symbolic gift of a bundle of dried meat from any prospective student. According to his disciple Zigong, his master treated students like doctors treated patients and did not turn anybody away. Most of them came from Lu, Confucius's home state, with 43 recorded, but he accepted students from all over China, with six from the state of Wey (such as Zigong), three from Qin, two each from Chen and Qi, and one each from Cai, Chu, and Song. Confucius considered his students' personal background irrelevant, and accepted noblemen, commoners, and even former criminals such as Yan Zhuoju and Gongye Chang. His disciples from richer families would pay a sum commensurate with their wealth which was considered a ritual donation.
Confucius's favorite disciple was Yan Hui, most probably one of the most impoverished of them all. Sima Niu, in contrast to Yan Hui, was from a hereditary noble family hailing from the Song state. Under Confucius's teachings, the disciples became well-learned in the principles and methods of government. He often engaged in discussion and debate with his students and gave high importance to their studies in history, poetry, and ritual. Confucius advocated loyalty to principle rather than to individual acumen, in which reform was to be achieved by persuasion rather than violence. Even though Confucius denounced them for their practices, the aristocracy was likely attracted to the idea of having trustworthy officials who were studied in morals as the circumstances of the time made it desirable. In fact, the disciple Zilu even died defending his ruler in Wey.
Yang Hu, who was a subordinate of the Ji family, had dominated the Lu government from 505 to 502 and even attempted a coup, which narrowly failed. As a likely consequence, it was after this that the first disciples of Confucius were appointed to government positions. A few of Confucius's disciples went on to attain official positions of some importance, some of which were arranged by Confucius. By the time Confucius was 50 years old, the Ji family had consolidated their power in the Lu state over the ruling ducal house. Even though the Ji family had practices with which Confucius disagreed and disapproved, they nonetheless gave Confucius's disciples many opportunities for employment. Confucius continued to remind his disciples to stay true to their principles and renounced those who did not, all the while being openly critical of the Ji family.
No contemporary painting or sculpture of Confucius survives, and it was only during the Han Dynasty that he was portrayed visually. Carvings often depict his legendary meeting with Laozi. Since that time there have been many portraits of Confucius as the ideal philosopher. The oldest known portrait of Confucius has been unearthed in the tomb of the Han dynasty ruler Marquis of Haihun (died 59 BC). The picture was painted on the wooden frame of a polished bronze mirror.
In former times, it was customary to have a portrait in Confucius Temples; however, during the reign of Hongwu Emperor (Taizu) of the Ming dynasty, it was decided that the only proper portrait of Confucius should be in the temple in his home town, Qufu in Shandong. In other temples, Confucius is represented by a memorial tablet. In 2006, the China Confucius Foundation commissioned a standard portrait of Confucius based on the Tang dynasty portrait by Wu Daozi.
The South Wall Frieze in the courtroom of the Supreme Court of the United States depicts Confucius as a teacher of harmony, learning, and virtue.
There have been two film adaptations of Confucius' life: "Confucius" (1940) starring Tang Huaiqiu, and "Confucius" (2010) starring Chow Yun-fat.
In music, Tori Amos imagines Confucius as working on a crossword puzzle in her 1992 song "Happy Phantom."
Soon after Confucius's death, Qufu, his home town, became a place of devotion and remembrance. The Han dynasty "Records of the Grand Historian" records that it had already become a place of pilgrimage for ministers. It is still a major destination for cultural tourism, and many people visit his grave and the surrounding temples. In Sinic cultures, there are many temples where representations of the Buddha, Laozi, and Confucius are found together. There are also many temples dedicated to him, which have been used for Confucian ceremonies.
Followers of Confucianism have a tradition of holding spectacular memorial ceremonies of Confucius every year, using ceremonies that supposedly derived from Zhou-era rites ("Zhou Li") as recorded by Confucius, on the date of Confucius's birth. In the 20th century, this tradition was interrupted for several decades in mainland China, where the official stance of the Communist Party and the State was that Confucius and Confucianism represented reactionary feudalist beliefs which held that the subservience of the people to the aristocracy is a part of the natural order. All such ceremonies and rites were therefore banned. Only after the 1990s did the ceremony resume. As it is now considered a veneration of Chinese history and tradition, even Communist Party members may be found in attendance.
In Taiwan, where the Nationalist Party (Kuomintang) strongly promoted Confucian beliefs in ethics and behavior, the tradition of the memorial ceremony of Confucius is supported by the government and has continued without interruption. While not a national holiday, it does appear on all printed calendars, much as Father's Day or Christmas Day do in the Western world.
In South Korea, a grand-scale memorial ceremony called Seokjeon Daeje is held twice a year on Confucius's birthday and the anniversary of his death, at Confucian academies across the country and Sungkyunkwan in Seoul.
Confucius's descendants were repeatedly identified and honored by successive imperial governments with titles of nobility and official posts. They were honored with the rank of a marquis 35 times since Gaozu of the Han dynasty, and they were promoted to the rank of duke 42 times from the Tang dynasty to the Qing dynasty. Emperor Xuanzong of Tang first bestowed the title of "Duke Wenxuan" on Kong Suizhi of the 35th generation. In 1055, Emperor Renzong of Song first bestowed the title of "Duke Yansheng" on Kong Zongyuan of the 46th generation.
During the Southern Song dynasty, the Duke Yansheng Kong Duanyou fled south with the Song Emperor to Quzhou in Zhejiang, while the newly established Jin dynasty (1115–1234) in the north appointed Kong Duanyou's brother Kong Duancao, who remained in Qufu, as Duke Yansheng. From that time up until the Yuan dynasty, there were two Duke Yanshengs, one in the north in Qufu and the other in the south at Quzhou. An invitation to come back to Qufu was extended to the southern Duke Yansheng Kong Zhu by the Yuan-dynasty Emperor Kublai Khan. The title was taken away from the southern branch after Kong Zhu rejected the invitation, so the northern branch of the family kept the title of Duke Yansheng. The southern branch remained in Quzhou, where they live to this day. Confucius's descendants in Quzhou alone number 30,000. The Hanlin Academy rank of Wujing boshi (五經博士) was awarded to the southern branch at Quzhou by a Ming Emperor, while the northern branch at Qufu held the title Duke Yansheng. The leader of the southern branch is Kong Xiangkai (孔祥楷).
In 1351, during the reign of Emperor Toghon Temür of the Yuan dynasty, 53rd-generation descendant Kong Huan ()'s 2nd son Kong Shao () moved from China to Korea during the Goryeo Dynasty, and was received courteously by Princess Noguk (the Mongolian-born wife of the future king Gongmin). After being naturalized as a Korean citizen, he changed the hanja of his name from "昭" to "紹" (both pronounced "so" in Korean), married a Korean woman and bore a son (Gong Yeo (), 1329–1397), therefore establishing the Changwon Gong clan (), whose ancestral seat was located in Changwon, South Gyeongsang Province.
The clan then received an aristocratic rank during the succeeding Joseon Dynasty. In 1794, during the reign of King Jeongjo, the clan changed its name to Gokbu Gong clan in honor of Confucius's birthplace Qufu. Famous descendants include actors such as Gong Yoo (real name Gong Ji-cheol, 공지철) and Gong Hyo-jin (공효진), and artists such as male idol group B1A4 member Gongchan (real name Gong Chan-sik, 공찬식), singer-songwriter Minzy (real name Gong Min-ji, 공민지), as well as her great-aunt, the traditional folk dancer Gong Ok-jin (공옥진).
Despite repeated dynastic change in China, the title of Duke Yansheng was bestowed upon successive generations of descendants until it was abolished by the Nationalist Government in 1935. The last holder of the title, Kung Te-cheng of the 77th generation, was appointed Sacrificial Official to Confucius. Kung Te-cheng died in October 2008, and his son, Kung Wei-yi, the 78th lineal descendant, had died in 1989. Kung Te-cheng's grandson, Kung Tsui-chang, the 79th lineal descendant, was born in 1975; his great-grandson, Kung Yu-jen, the 80th lineal descendant, was born in Taipei on January 1, 2006. Te-cheng's sister, Kong Demao, lives in mainland China and has written a book about her experiences growing up at the family estate in Qufu. Another sister, Kong Deqi, died as a young woman. Many descendants of Confucius still live in Qufu today.
A descendant of Confucius, H. H. Kung was the Premier of the Republic of China. One of his sons, Kong Lingjie (孔令傑), married Debra Paget, who gave birth to Gregory Kung.
Confucius's family, the Kongs, have the longest recorded extant pedigree in the world today. The father-to-son family tree, now in its 83rd generation, has been recorded since the death of Confucius. According to the Confucius Genealogy Compilation Committee (CGCC), he has two million known and registered descendants, and there are an estimated three million in all. Of these, several tens of thousands live outside of China. In the 14th century, a Kong descendant went to Korea, where an estimated 34,000 descendants of Confucius live today. One of the main lineages fled from the Kong ancestral home in Qufu during the Chinese Civil War in the 1940s and eventually settled in Taiwan. There are also branches of the Kong family who have converted to Islam after marrying Muslim women, in Dachuan in Gansu province in the 1800s, and in 1715 in Xuanwei in Yunnan province. Many of the Muslim Confucius descendants are descended from the marriage of Ma Jiaga (), a Muslim woman, and Kong Yanrong (), 59th generation descendant of Confucius in the year 1480 and are found among the Hui and Dongxiang peoples. The new genealogy includes the Muslims. Kong Dejun () is a prominent Islamic scholar and Arabist from Qinghai province and a 77th generation descendant of Confucius.
Because of the huge interest in the Confucius family tree, there was a project in China to test the DNA of known family members of the collateral branches in mainland China. Among other things, this would allow scientists to identify a common Y chromosome in male descendants of Confucius. If the descent were truly unbroken, father-to-son, since Confucius's lifetime, the males in the family would all have the same Y chromosome as their direct male ancestor, with slight mutations due to the passage of time. The aim of the genetic test was to help members of collateral branches in China who lost their genealogical records to prove their descent. However, in 2009, many of the collateral branches decided not to agree to DNA testing. Bryan Sykes, professor of genetics at Oxford University, understands this decision: "The Confucius family tree has an enormous cultural significance," he said. "It's not just a scientific question." The DNA testing was originally proposed to add new members, many of whose family record books were lost during 20th-century upheavals, to the Confucian family tree. The main branch of the family which fled to Taiwan was never involved in the proposed DNA test at all.
In 2013, a DNA test performed on members of multiple different families who claimed descent from Confucius found that they shared the same Y chromosome, as reported by Fudan University.
The fifth and most recent edition of the Confucius genealogy was printed by the CGCC. It was unveiled in a ceremony at Qufu on September 24, 2009. Women are now included for the first time.
Complex number
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers, and i is a solution of the equation x² = −1. Because no real number satisfies this equation, i is called an imaginary number. For the complex number a + bi, a is called the "real part", and b is called the "imaginary part". Despite the historical nomenclature "imaginary", complex numbers are regarded in the mathematical sciences as just as "real" as the real numbers, and are fundamental in many aspects of the scientific description of the natural world.
Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation
(x + 1)² = −9
has no real solution, since the square of a real number cannot be negative. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with an indeterminate i (sometimes called the imaginary unit) that is taken to satisfy the relation i² = −1, so that solutions to equations like the preceding one can be found. In this case the solutions are −1 + 3i and −1 − 3i, as can be verified using the fact that i² = −1: ((−1 + 3i) + 1)² = (3i)² = 9i² = −9, and likewise for −1 − 3i.
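As a quick numerical check, the arithmetic above can be reproduced with Python's built-in complex type, which writes the imaginary unit i as j (a sketch, using nothing beyond standard Python):

```python
# Verify that -1 + 3i and -1 - 3i both solve (x + 1)^2 = -9.
for z in (-1 + 3j, -1 - 3j):
    assert (z + 1) ** 2 == -9      # (±3i)² = 9·i² = −9

print((-1 + 3j + 1) ** 2)          # (-9+0j)
```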
According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. In contrast, some polynomial equations with real coefficients have no solution in real numbers. The 16th-century Italian mathematician Gerolamo Cardano is credited with introducing complex numbers in his attempts to find solutions to cubic equations.
Formally, the complex number system can be defined as the algebraic extension of the ordinary real numbers by an imaginary number i. This means that complex numbers can be added, subtracted, and multiplied, as polynomials in the variable i, with the rule i² = −1 imposed. Furthermore, complex numbers can also be divided by nonzero complex numbers. Overall, the complex number system is a field.
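The field operations can be illustrated with Python's complex type; division amounts to multiplying by the conjugate of the denominator and dividing by its squared modulus (a minimal sketch):

```python
a = 2 + 3j
b = 1 - 1j

print(a + b)   # (3+2j)
print(a - b)   # (1+4j)
print(a * b)   # (2*1 - 3*(-1)) + (2*(-1) + 3*1)i = (5+1j)
print(a / b)   # equals a * conj(b) / |b|^2 = (-0.5+2.5j)
assert (a / b) * b == a   # division is the inverse of multiplication
```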
Geometrically, complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary; the points for these numbers lie on the vertical axis of the complex plane. A complex number whose imaginary part is zero can be viewed as a real number; its point lies on the horizontal axis of the complex plane. Complex numbers can also be represented in polar form, which associates each complex number with its distance from the origin (its magnitude) and with a particular angle known as the argument of this complex number.
The geometric identification of the complex numbers with the complex plane, which is a Euclidean plane (ℝ²), makes their structure as a real 2-dimensional vector space evident. Real and imaginary parts of a complex number may be taken as components of a vector with respect to the canonical standard basis. The addition of complex numbers is thus immediately depicted as the usual component-wise addition of vectors. However, the complex numbers allow for a richer algebraic structure, comprising additional operations, that are not necessarily available in a vector space; for example, the multiplication of two complex numbers always yields again a complex number, and should not be mistaken for the usual "products" involving vectors, like the "scalar multiplication", the "scalar product" or other (sesqui)linear forms, available in many vector spaces; and the broadly exploited "vector product" exists only in an orientation-dependent form in three dimensions.
Based on the concept of real numbers, a complex number is a number of the form a + bi, where a and b are real numbers and i is an indeterminate satisfying i² = −1. For example, 2 + 3i is a complex number.
This way, a complex number is defined as a polynomial with real coefficients in the single indeterminate i, for which the relation i² = −1 is imposed. Based on this definition, complex numbers can be added and multiplied, using the addition and multiplication for polynomials. The relation i² = −1 induces the equalities i^(4k) = 1, i^(4k+1) = i, i^(4k+2) = −1, and i^(4k+3) = −i, which hold for all integers k; these allow the reduction of any polynomial that results from the addition and multiplication of complex numbers to a linear polynomial in i, again of the form a + bi with real coefficients a, b.
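The reduction rule rests on the powers of i cycling with period 4, which a short Python sketch confirms:

```python
# i^2 = -1 forces the powers of i to cycle with period 4:
# i^0, i^1, i^2, i^3, i^4, ... = 1, i, -1, -i, 1, ...
cycle = [1, 1j, -1, -1j]
for k in range(12):
    assert 1j ** k == cycle[k % 4]
```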
The real number a is called the "real part" of the complex number a + bi; the real number b is called its "imaginary part". To emphasize, the imaginary part does not include a factor i; that is, the imaginary part is b, not bi.
Formally, the complex numbers are defined as the quotient ring of the polynomial ring in the indeterminate X, by the ideal generated by the polynomial X² + 1 (see below).
A real number a can be regarded as a complex number a + 0i, whose imaginary part is 0. A purely imaginary number bi is a complex number 0 + bi, whose real part is zero. As with polynomials, it is common to write a for a + 0i and bi for 0 + bi. Moreover, when the imaginary part is negative, that is, b = −|b| < 0, it is common to write a − |b|i instead of a + (−|b|)i; for example, for b = −4, 3 − 4i can be written instead of 3 + (−4)i.
Since in polynomials with real coefficients the multiplication of the indeterminate i and a real number is commutative, the polynomial a + bi may be written as a + ib. This is often expedient for imaginary parts denoted by expressions, for example, when b is a radical.
The real part of a complex number z is denoted by Re(z) or ℜ(z); the imaginary part of a complex number z is denoted by Im(z) or ℑ(z). For example, Re(2 + 3i) = 2 and Im(2 + 3i) = 3.
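In Python these are the .real and .imag attributes of a complex number (returned as floats; note the imaginary part is b, not bi):

```python
z = 2 + 3j
print(z.real)   # 2.0
print(z.imag)   # 3.0  -- the imaginary part is 3, not 3i
assert z == complex(z.real, z.imag)
```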
The set of all complex numbers is denoted by C (upright bold) or ℂ (blackboard bold).
In some disciplines, in particular electromagnetism and electrical engineering, j is used instead of i since i is frequently used to represent electric current. In these cases complex numbers are written as a + bj or a + jb.
A complex number can thus be identified with an ordered pair of real numbers, which in turn may be interpreted as coordinates of a point in a two-dimensional space. The most immediate space is the Euclidean plane with suitable coordinates, which is then called complex plane or Argand diagram, named after Jean-Robert Argand. Another prominent space on which the coordinates may be projected is the two-dimensional surface of a sphere, which is then called Riemann sphere.
The definition of the complex numbers involving two arbitrary real values immediately suggests the use of Cartesian coordinates in the complex plane. The horizontal ("real") axis is generally used to display the real part, with increasing values to the right, and the imaginary part marks the vertical ("imaginary") axis, with increasing values upwards.
A charted number may be either viewed as the coordinatized point, or as a position vector from the origin to this point. The coordinate values of a complex number are said to give its "Cartesian", "rectangular", or "algebraic" form.
Notably, the operations of addition and multiplication take on a very natural geometric character when complex numbers are viewed as position vectors: addition corresponds to vector addition, while multiplication (see below) corresponds to multiplying their magnitudes and adding the angles they make with the real axis. Viewed in this way, the multiplication of a complex number by i corresponds to rotating the position vector counterclockwise by a quarter turn (90°) about the origin.
An alternative option for coordinates in the complex plane is the polar coordinate system that uses the distance of the point from the origin ("O"), and the angle subtended between the positive real axis and the line segment "Oz" in a counterclockwise sense. This leads to the polar form of complex numbers.
The "absolute value" (or "modulus" or "magnitude") of a complex number z = x + yi is |z| = √(x² + y²).
If z is a real number (that is, if y = 0), then |z| = |x|. That is, the absolute value of a real number equals its absolute value as a complex number.
By Pythagoras' theorem, the absolute value of a complex number is the distance to the origin of the point representing the complex number in the complex plane.
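Python's built-in abs computes exactly this Euclidean distance for complex arguments; a sketch using a 3-4-5 right triangle:

```python
import math

z = 3 + 4j
assert abs(z) == 5.0                          # sqrt(3^2 + 4^2)
assert abs(z) == math.hypot(z.real, z.imag)   # same Pythagorean distance
assert abs(-7 + 0j) == abs(-7)                # for real z, |z| is the usual absolute value
```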
The "argument" of z (in many applications referred to as the "phase" φ) is the angle of the radius Oz with the positive real axis, and is written as arg(z). As with the modulus, the argument can be found from the rectangular form x + yi by applying the inverse tangent to the quotient of imaginary-by-real parts. By using a half-angle identity, a single branch of the arctan suffices to cover the range of the arg-function, (−π, π], and avoids a more subtle case-by-case analysis: φ = arg(x + yi) = 2 arctan(y / (√(x² + y²) + x)) for z not on the negative real axis, with arg(z) = π for negative real z.
Normally, as given above, the principal value in the interval (−π, π] is chosen. Values in the range [0, 2π) are obtained by adding 2π if the value is negative. The value of φ is expressed in radians in this article. It can increase by any integer multiple of 2π and still give the same angle, viewed as subtended by the rays of the positive real axis and from the origin through "z". Hence, the arg function is sometimes considered as multivalued. The polar angle for the complex number 0 is indeterminate, but arbitrary choice of the polar angle 0 is common.
The value of φ equals the result of atan2: φ = atan2(y, x).
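In Python, math.atan2 and cmath.phase both return this principal value in (−π, π]; a small sketch:

```python
import cmath
import math

z = -1 + 1j
phi = math.atan2(z.imag, z.real)           # atan2(y, x), principal value in (-pi, pi]
assert math.isclose(phi, 3 * math.pi / 4)
assert math.isclose(phi, cmath.phase(z))   # cmath.phase is defined as atan2(z.imag, z.real)
```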
Together, r and φ give another way of representing complex numbers, the "polar form", as the combination of modulus and argument fully specify the position of a point on the plane. Recovering the original rectangular coordinates from the polar form is done by the formula called "trigonometric form": z = r(cos φ + i sin φ).
Using Euler's formula this can be written as z = r e^(iφ).
Using the cis function, this is sometimes abbreviated to z = r cis φ.
In angle notation, often used in electronics to represent a phasor with amplitude r and phase φ, it is written as r∠φ.
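The polar and rectangular representations can be converted back and forth with Python's cmath module; cmath.polar returns the pair (r, φ) and cmath.rect inverts it (a sketch):

```python
import cmath
import math

z = 1 + 1j
r, phi = cmath.polar(z)                  # modulus and argument: (sqrt(2), pi/4)
assert math.isclose(r, math.sqrt(2))
assert math.isclose(phi, math.pi / 4)

# Trigonometric form r(cos phi + i sin phi) recovers the rectangular form.
assert cmath.isclose(r * (math.cos(phi) + 1j * math.sin(phi)), z)

# Euler's formula: z = r * e^(i*phi).
assert cmath.isclose(r * cmath.exp(1j * phi), z)
assert cmath.isclose(cmath.rect(r, phi), z)
```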
When visualizing complex functions, both a complex input and output are needed. Because each complex number is represented in two dimensions, visually graphing a complex function would require the perception of a four dimensional space, which is possible only in projections. Because of this, other ways of visualizing complex functions have been designed.
In domain coloring the output dimensions are represented by color and brightness, respectively. Each point in the complex plane as domain is "ornated", typically with "color" representing the argument of the complex number, and "brightness" representing the magnitude. Dark spots mark moduli near zero, brighter spots are farther away from the origin; the gradation may be discontinuous, but is assumed to be monotonic. The colors often vary in steps of π/3 for 0 to 2π from red, yellow, green, cyan, blue, to magenta. These plots are called color wheel graphs. This provides a simple way to visualize the functions without losing information.
Riemann surfaces are another way to visualize complex functions. Riemann surfaces can be thought of as deformations of the complex plane; while the horizontal axes represent the real and imaginary inputs, the single vertical axis only represents either the real or imaginary output. However, Riemann surfaces are built in such a way that rotating them 180 degrees shows the imaginary output, and vice versa. Unlike domain coloring, Riemann surfaces can represent multivalued functions like the complex square root.
The solution in radicals (without trigonometric functions) of a general cubic equation contains the square roots of negative numbers when all three roots are real numbers, a situation that cannot be rectified by factoring aided by the rational root test if the cubic is irreducible (the so-called "casus irreducibilis"). This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545, though his understanding was rudimentary.
Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root.
Many mathematicians contributed to the development of complex numbers. The rules for addition, subtraction, multiplication, and root extraction of complex numbers were developed by the Italian mathematician Rafael Bombelli. A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.
The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his "Stereometrica" he considers, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term formula_20 in his calculations, although negative quantities were not conceived of in Hellenistic mathematics and Hero merely replaced it by its positive (formula_21).
The impetus to study complex numbers as a topic in itself first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolò Fontana Tartaglia, Gerolamo Cardano). It was soon realized (but proved much later) that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. As an example, Tartaglia's formula for a cubic equation of the form x^3 = px + q gives the solution to the equation as
x = \sqrt[3]{q/2 + \sqrt{(q/2)^2-(p/3)^3}} + \sqrt[3]{q/2 - \sqrt{(q/2)^2-(p/3)^3}}.
When (q/2)^2-(p/3)^3 is negative ("casus irreducibilis"), the second cube root should be regarded as the complex conjugate of the first one.
At first glance this looks like nonsense. However, formal calculations with complex numbers show that the equation has solutions, among them formula_25 and formula_26. Substituting these in turn for formula_27 in Tartaglia's cubic formula and simplifying, one gets 0, 1 and −1 as the solutions of x^3 = x. Of course this particular equation can be solved at sight, but it does illustrate that when general formulas are used to solve cubic equations with real roots, then, as later mathematicians showed rigorously, the use of complex numbers is unavoidable. Rafael Bombelli was the first to explicitly address these seemingly paradoxical solutions of cubic equations, and he developed the rules for complex arithmetic in trying to resolve these issues.
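Bombelli-style formal computation can be reproduced numerically. A minimal sketch in Python, applying Tartaglia's formula to x^3 = x and choosing, as described above, the second cube root as the complex conjugate of the first:

```python
import cmath

# Tartaglia's formula for x**3 = p*x + q in the casus irreducibilis:
# the inner square root is imaginary, and the two cube roots are
# chosen as complex conjugates of one another.
p, q = 1.0, 0.0                        # the equation x**3 = x
s = cmath.sqrt((q/2)**2 - (p/3)**3)    # sqrt(-1/27): purely imaginary
u = (q/2 + s) ** (1/3)                 # principal complex cube root
x = u + u.conjugate()                  # a conjugate pair sums to a real number
print(x.real)  # ≈ 1.0, one of the real roots 0, 1, -1
```

The imaginary parts cancel exactly, and a real root emerges from purely "imaginary" intermediate quantities.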
The term "imaginary" for these quantities was coined by René Descartes in 1637, although he was at pains to stress their imaginary nature.
A further source of confusion was that the equation formula_28 seemed to be capriciously inconsistent with the algebraic identity formula_29, which is valid for non-negative real numbers, and which was also used in complex number calculations with one of the two numbers positive and the other negative. The incorrect use of this identity (and the related identity formula_30) in the case when both numbers are negative even bedeviled Euler. This difficulty eventually led to the convention of using the special symbol i in place of √−1 to guard against this mistake. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra text book, "Elements of Algebra", he introduces these numbers almost at once and then uses them in a natural way throughout.
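The failure of the identity for two negative numbers can be seen directly in any system with principal square roots; a small sketch with Python's cmath:

```python
import cmath

# sqrt(a)*sqrt(b) == sqrt(a*b) holds for non-negative reals, but fails
# when both a and b are negative and principal roots are used.
a, b = -1, -1
lhs = cmath.sqrt(a) * cmath.sqrt(b)   # i * i
rhs = cmath.sqrt(a * b)               # sqrt(1)
print(lhs)  # (-1+0j)
print(rhs)  # (1+0j)
```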
In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the complicated identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be simply re-expressed by the following well-known formula which bears his name, de Moivre's formula:
In 1748 Leonhard Euler went further and obtained Euler's formula of complex analysis:
by formally manipulating complex power series and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities.
The idea of a complex number as a point in the complex plane (above) was first described by Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's "De Algebra tractatus".
Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Carl Friedrich Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology.
If one formerly contemplated this subject from a false point of view and therefore found a mysterious darkness, this is in large part attributable to clumsy terminology. Had one not called +1, −1, √−1 positive, negative, or imaginary (or even impossible) units, but instead, say, direct, inverse, or lateral units, then there could scarcely have been talk of such darkness. - Gauss
In the beginning of the 19th century, other mathematicians discovered independently the geometrical representation of the complex numbers: Buée, Mourey, Warren, Français and his brother, Bellavitis.
The English mathematician G.H. Hardy remarked that Gauss was the first mathematician to use complex numbers in 'a really confident and scientific way' although mathematicians such as Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.
Augustin Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.
The common terms used in the theory are chiefly due to the founders. Argand called formula_33 the "direction factor", and formula_34 the "modulus"; Cauchy (1828) called formula_33 the "reduced form" (l'expression réduite) and apparently introduced the term "argument"; Gauss used i for formula_36, introduced the term "complex number" for a + bi, and called a^2 + b^2 the "norm". The expression "direction coefficient", often used for formula_33, is due to Hankel (1867), and "absolute value", for "modulus", is due to Weierstrass.
Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others.
Two complex numbers are equal if and only if both their real and imaginary parts are equal. That is, complex numbers formula_42 and formula_43 are equal if and only if
formula_44 and formula_45. Nonzero complex numbers written in polar form are equal if and only if they have the same magnitude and their arguments differ by an integer multiple of 2π.
Since complex numbers are naturally thought of as existing on a two-dimensional plane, there is no natural linear ordering on the set of complex numbers. In fact, there is no linear ordering on the complex numbers that is compatible with addition and multiplication – the complex numbers cannot have the structure of an ordered field. This is because any square in an ordered field is at least 0, but i^2 = −1.
The "complex conjugate" of the complex number is given by . It is denoted by either formula_46 or . This unary operation on complex numbers cannot be expressed by applying only their basic operations addition, subtraction, multiplication and division.
Geometrically, formula_46 is the "reflection" of about the real axis. Conjugating twice gives the original complex number
which makes this operation an involution. The reflection leaves both the real part and the magnitude of formula_49 unchanged, that is
The imaginary part and the argument of a complex number formula_49 change their sign under conjugation
For details on argument and magnitude, see the section on Polar form.
The product of a complex number formula_55 and its conjugate is known as the absolute square. It is always a non-negative real number and equals the square of the magnitude of each:
This property can be used to convert a fraction with a complex denominator to an equivalent fraction with a real denominator by expanding both numerator and denominator of the fraction by the conjugate of the given denominator. This process is sometimes called "rationalization" of the denominator (although the denominator in the final expression might be an irrational real number), because it resembles the method to remove roots from simple expressions in a denominator.
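A sketch of this rationalization (the helper name `divide` is illustrative, not a standard API): expanding by the conjugate of the denominator makes the new denominator the real number |w|^2.

```python
# Dividing by a complex number via the conjugate: multiply numerator and
# denominator by conj(w), so the denominator becomes the real number |w|**2.
def divide(z: complex, w: complex) -> complex:
    if w == 0:
        raise ZeroDivisionError("complex division by zero")
    d = (w * w.conjugate()).real   # |w|**2, a real number
    n = z * w.conjugate()
    return complex(n.real / d, n.imag / d)

z, w = complex(3, 2), complex(1, -1)
print(divide(z, w))   # (0.5+2.5j)
print(z / w)          # Python's built-in division agrees
```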
The real and imaginary parts of a complex number can be extracted using the conjugation:
Moreover, a complex number is real if and only if it equals its own conjugate.
Conjugation distributes over the basic complex arithmetic operations:
Conjugation is also employed in inversive geometry, a branch of geometry studying reflections more general than ones about a line. In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance when the maximum power transfer theorem is applied.
Two complex numbers formula_61 and formula_62 are most easily added by separately adding the real and imaginary parts of the summands. That is to say:
Similarly, subtraction can be performed as
Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers formula_61 and formula_62, interpreted as points in the complex plane, is the point obtained by building a parallelogram from the three vertices formula_67, and the points of the arrows labeled formula_61 and formula_62 (provided that they are not on a line). Equivalently, calling these points formula_70 respectively and the fourth point of the parallelogram formula_71 the triangles formula_72 and formula_73 are congruent. A visualization of the subtraction can be achieved by considering addition of the negative subtrahend.
Since the real part, the imaginary part, and the indeterminate formula_74 in a complex number are all considered as numbers in themselves, two complex numbers, given as formula_75 and formula_76 are multiplied under the rules of the distributive property, the commutative properties and the defining property formula_77 in the following way
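As a sketch, the resulting rule (ac − bd) + (ad + bc)i implemented directly and compared with Python's built-in complex product (the helper name `mul` is illustrative):

```python
# Multiplying (a+bi)(c+di) by expanding with i*i = -1:
# real part a*c - b*d, imaginary part a*d + b*c.
def mul(z: complex, w: complex) -> complex:
    a, b = z.real, z.imag
    c, d = w.real, w.imag
    return complex(a*c - b*d, a*d + b*c)

print(mul(complex(1, 2), complex(3, 4)))   # (-5+10j)
```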
Using the conjugation, the reciprocal of a nonzero complex number can always be broken down to
since "non-zero" implies that formula_80 is greater than zero.
This can be used to express a division of an arbitrary complex number formula_76 by a non-zero complex number formula_49 as
Formulas for multiplication, division and exponentiation are simpler in polar form than the corresponding formulas in Cartesian coordinates. Given two complex numbers in polar form, because of the trigonometric identities
we may derive
In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. For example, multiplying by i corresponds to a quarter-turn counter-clockwise; applying it twice gives i^2 = −1, a half-turn. The picture at the right illustrates the multiplication of (2 + i)(3 + i) = 5 + 5i.
Since the real and imaginary parts of 5 + 5i are equal, the argument of that number is 45 degrees, or π/4 (in radians). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula
π/4 = arctan(1/3) + arctan(1/2)
holds. As the arctan function can be approximated highly efficiently, formulas like this – known as Machin-like formulas – are used for high-precision approximations of π.
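The identity can be checked numerically; a small sketch:

```python
import cmath
import math

# (2+i)(3+i) = 5+5i has argument pi/4, so arctan(1/2) + arctan(1/3) = pi/4,
# a Machin-like identity usable to approximate pi.
z = (2 + 1j) * (3 + 1j)
print(z)                                       # (5+5j)
print(cmath.phase(z))                          # ≈ 0.785398... (pi/4)
print(4 * (math.atan(1/2) + math.atan(1/3)))   # ≈ 3.141592653589793
```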
Similarly, division is given by
The square roots of a + bi (with b ≠ 0) are formula_90, where
and
where sgn is the signum function. This can be seen by squaring formula_90 to obtain a + bi. Here formula_94 is called the modulus of a + bi, and the square root sign indicates the square root with non-negative real part, called the principal square root; also formula_95, where formula_96.
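A sketch of this closed form (the helper name is illustrative), using math.copysign for sgn and checked against cmath.sqrt:

```python
import cmath
import math

# Principal square root of a + bi from the closed form
#   gamma = sqrt((a + |z|)/2),  delta = sgn(b) * sqrt((|z| - a)/2),
# i.e. the root with non-negative real part.
def principal_sqrt(z: complex) -> complex:
    a, b = z.real, z.imag
    m = abs(z)                                        # the modulus |z|
    gamma = math.sqrt((a + m) / 2)
    delta = math.copysign(1, b) * math.sqrt((m - a) / 2)
    return complex(gamma, delta)

print(principal_sqrt(complex(3, 4)))   # (2+1j)
print(cmath.sqrt(complex(3, 4)))       # agrees
```

Note that math.copysign(1, 0.0) is 1, which matches the principal root on the negative real axis (e.g. the root 2i of −4).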
The exponential function formula_97 can be defined for every complex number by the power series
which has an infinite radius of convergence.
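A sketch of the partial sums of this series; since the radius of convergence is infinite, they converge for any complex argument (compared here against cmath.exp):

```python
import cmath

# Partial sums of exp(z) = sum of z**n / n! over n >= 0.
def exp_series(z: complex, terms: int = 30) -> complex:
    total, term = 0 + 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # next term z**(n+1)/(n+1)!
    return total

z = 1 + 1j
print(exp_series(z))
print(cmath.exp(z))   # the two agree to high precision
```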
The value at 1 of the exponential function is Euler's number
If z is real, one has
formula_100
Analytic continuation allows extending this equality to every complex value of z, and thus defines the complex exponentiation with base e as
The exponential function satisfies the functional equation
formula_102
This can be proved either by comparing the power series expansion of both members or by applying analytic continuation from the restriction of the equation to real arguments.
Euler's formula states that, for any real number φ,
The functional equation thus implies that, writing the argument as x + iy with x and y real, one has
which is the decomposition of the exponential function into its real and imaginary parts.
If the base is a positive real number and the exponent a complex number, the exponentiation is defined as
where ln denotes the natural logarithm.
It seems natural to extend this formula to complex values of the base, but there are some difficulties resulting from the fact that the complex logarithm is not really a function, but a multivalued function.
In the real case, the natural logarithm can be defined as the inverse of the exponential function. For extending this to the complex domain, one can start from Euler's formula. It implies that, if a complex number is written in polar form
then its complex logarithm should be
However, because cosine and sine are periodic functions, the addition to formula_108 of an integer multiple of 2π does not change the resulting complex number. For example, formula_109, so both formula_110 and formula_111 are possible values for the natural logarithm of formula_112.
Therefore, the complex logarithm must be defined as a multivalued function:
Alternatively, a branch cut can be used to define a true function. If the argument is not a negative real number, the principal value of the complex logarithm is obtained with formula_114. This is an analytic function outside the negative real numbers, but it cannot be extended to a function that is continuous at any negative real number.
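Python's cmath.log returns this principal value, with imaginary part in (−π, π]; a sketch of the multivaluedness:

```python
import cmath
import math

# The complex logarithm is multivalued: log z = ln|z| + i*(arg z + 2*pi*k).
# cmath.log returns the principal value.
z = -1 + 0j
principal = cmath.log(z)
print(principal)                 # ≈ 3.141592653589793j (= i*pi)

# Any branch differs by an integer multiple of 2*pi*i and is still
# a logarithm of z:
other = principal + 2j * math.pi
print(cmath.exp(other))          # ≈ (-1+0j)
```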
It follows that if the base is as above, and the exponent is another complex number, then the "exponentiation" is the multivalued function
If, in the preceding formula, the exponent is an integer, then the sine and the cosine are independent of the branch chosen for the logarithm. Thus, if the exponent is an integer, then formula_116 is well defined, and the exponentiation formula simplifies to de Moivre's formula:
The nth roots of a complex number are given by
for k = 0, 1, …, n − 1. (Here formula_119 is the usual (positive) nth root of the positive real number in question.) Because sine and cosine are periodic, other integer values of k do not give other values.
While the nth root of a positive real number is chosen to be the "positive" real solution, there is no natural way of distinguishing one particular complex nth root of a complex number. Therefore, the nth root is an n-valued function of its argument. This implies that, contrary to the case of positive real numbers, one has
since the left-hand side consists of n values, and the right-hand side is a single value.
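A sketch computing all n roots from the polar form (the helper name is illustrative):

```python
import cmath
import math

# The n distinct nth roots of z = r*exp(i*phi):
#   r**(1/n) * exp(i*(phi + 2*pi*k)/n),  k = 0, ..., n-1.
def nth_roots(z: complex, n: int) -> list[complex]:
    r, phi = abs(z), cmath.phase(z)
    return [r ** (1 / n) * cmath.exp(1j * (phi + 2 * math.pi * k) / n)
            for k in range(n)]

for w in nth_roots(8 + 0j, 3):   # the three cube roots of 8
    print(w)                     # 2, and a conjugate pair; each w**3 ≈ 8
```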
The set C of complex numbers is a field. Briefly, this means that the following facts hold: first, any two complex numbers can be added and multiplied to yield another complex number. Second, for any complex number, its additive inverse is also a complex number; and third, every nonzero complex number has a reciprocal complex number. Moreover, these operations satisfy a number of laws, for example the law of commutativity of addition and multiplication for any two complex numbers:
These two laws and the other requirements on a field can be proven by the formulas given above, using the fact that the real numbers themselves form a field.
Unlike the reals, C is not an ordered field, that is to say, it is not possible to define an order relation that is compatible with the addition and multiplication. In fact, in any ordered field the square of any element is necessarily positive, so i^2 = −1 precludes the existence of an ordering on C.
When the underlying field for a mathematical topic or construct is the field of complex numbers, the topic's name is usually modified to reflect that fact. For example: complex analysis, complex matrix, complex polynomial, and complex Lie algebra.
Given any complex numbers a_0, …, a_n (called coefficients), the equation
a_n z^n + ⋯ + a_1 z + a_0 = 0
has at least one complex solution "z", provided that at least one of the higher coefficients a_1, …, a_n is nonzero. This is the statement of the "fundamental theorem of algebra", due to Carl Friedrich Gauss and Jean le Rond d'Alembert. Because of this fact, C is called an algebraically closed field. This property does not hold for the field of rational numbers Q (the polynomial x^2 − 2 does not have a rational root, since √2 is not a rational number) nor the real numbers R (the polynomial x^2 + a does not have a real root for a > 0, since the square of x is positive for any real number x).
There are various proofs of this theorem, either by analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of "odd" degree has at least one real root.
Because of this fact, theorems that hold "for any algebraically closed field", apply to C. For example, any non-empty complex square matrix has at least one (complex) eigenvalue.
The field C has the following three properties: first, it has characteristic 0. This means that 1 + 1 + ⋯ + 1 ≠ 0 for any number of summands (all of which equal one). Second, its transcendence degree over Q, the prime field of C, is the cardinality of the continuum. Third, it is algebraically closed (see above). It can be shown that any field having these properties is isomorphic (as a field) to C. For example, the algebraic closure of the field Q_p of p-adic numbers also satisfies these three properties, so these two fields are isomorphic (as fields, but not as topological fields). Also, C is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that C contains many proper subfields that are isomorphic to C.
The preceding characterization of C describes only the algebraic aspects of C. That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. The following description of C as a topological field (that is, a field that is equipped with a topology, which allows the notion of convergence) does take into account the topological properties. C contains a subset (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions:
Moreover, C has a nontrivial involutive automorphism (namely the complex conjugation), such that the product of any nonzero element of C with its conjugate lies in that subset.
Any field with these properties can be endowed with a topology by taking certain sets, defined in terms of the subset above, as a base, with centers ranging over the field and radii ranging over that subset. With this topology, the field is isomorphic as a "topological" field to C.
The only connected locally compact topological fields are R and C. This gives another characterization of C as a topological field, since C can be distinguished from R because the nonzero complex numbers are connected, while the nonzero real numbers are not.
William Rowan Hamilton introduced the approach to define the set C of complex numbers as the set of ordered pairs (a, b) of real numbers, in which the following rules for addition and multiplication are imposed:
It is then just a matter of notation to express (a, b) as a + bi.
Though this low-level construction does accurately describe the structure of the complex numbers, the following equivalent definition reveals the algebraic nature of more immediately. This characterization relies on the notion of fields and polynomials. A field is a set endowed with addition, subtraction, multiplication and division operations that behave as is familiar from, say, rational numbers. For example, the distributive law
must hold for any three elements of a field. The set of real numbers does form a field. A polynomial with real coefficients is an expression of the form
where the coefficients are real numbers. The usual addition and multiplication of polynomials endows the set of all such polynomials with a ring structure. This ring is called the polynomial ring over the real numbers.
The set of complex numbers is defined as the quotient ring R[X]/(X^2 + 1). This extension field contains two square roots of −1, namely (the cosets of) X and −X, respectively. (The cosets of) 1 and X form a basis of the quotient as a real vector space, which means that each element of the extension field can be uniquely written as a linear combination in these two elements. Equivalently, elements of the extension field can be written as ordered pairs of real numbers. The quotient ring is a field, because X^2 + 1 is irreducible over R, so the ideal it generates is maximal.
The formulas for addition and multiplication in this ring, modulo the relation X^2 = −1, correspond to the formulas for addition and multiplication of complex numbers defined as ordered pairs. So the two definitions of the field C are isomorphic (as fields).
Accepting that C is algebraically closed, since it is an algebraic extension of R in this approach, C is therefore the algebraic closure of R.
Complex numbers a + bi can also be represented by 2 × 2 matrices of the form
\begin{pmatrix} a & -b \\ b & a \end{pmatrix}
Here the entries a and b are real numbers. The sum and product of two such matrices is again of this form, and the sum and product of complex numbers corresponds to the sum and product of such matrices, the product being:
The geometric description of the multiplication of complex numbers can also be expressed in terms of rotation matrices by using this correspondence between complex numbers and such matrices. Moreover, the square of the absolute value of a complex number expressed as a matrix is equal to the determinant of that matrix:
The conjugate formula_130 corresponds to the transpose of the matrix.
Though this representation of complex numbers with matrices is the most common, many other representations arise from matrices "other than" formula_131 that square to the negative of the identity matrix. See the article on 2 × 2 real matrices for other representations of complex numbers.
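A sketch of this correspondence with plain nested lists (no external libraries): the matrix of a + bi is [[a, −b], [b, a]], matrix products match complex products, and the determinant equals |z|^2.

```python
# Represent a + bi as the 2x2 real matrix [[a, -b], [b, a]].
def to_matrix(z: complex):
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def mat_mul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

z, w = 1 + 2j, 3 - 1j
# The matrix of a product is the product of the matrices:
assert mat_mul(to_matrix(z), to_matrix(w)) == to_matrix(z * w)
# The determinant is the squared absolute value a**2 + b**2:
assert det(to_matrix(z)) == z.real**2 + z.imag**2
print(to_matrix(z * w))   # [[5.0, -5.0], [5.0, 5.0]]
```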
The study of functions of a complex variable is known as complex analysis and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane.
The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by that of complex numbers. From a more abstract point of view, C, endowed with the metric
is a complete metric space, which notably includes the triangle inequality
for any two complex numbers and .
Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the "exponential function" exp(z), also written e^z, is defined as the infinite series
The series defining the real trigonometric functions sine and cosine, as well as the hyperbolic functions sinh and cosh, also carry over to complex arguments without change. For the other trigonometric and hyperbolic functions, such as tangent, things are slightly more complicated, as the defining series do not converge for all complex values. Therefore, one must define them either in terms of sine, cosine and exponential, or, equivalently, by using the method of analytic continuation.
"Euler's formula" states:
for any real number φ, in particular e^{iπ} = −1 (Euler's identity).
Unlike in the situation of real numbers, there is an infinitude of complex solutions z of the equation
for any nonzero complex number w. It can be shown that any such solution z – called a complex logarithm of w – satisfies
where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2π, log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−π, π].
Complex exponentiation is defined as
and is multi-valued, except when formula_140 is an integer. For formula_140 = 1/n for some natural number n, this recovers the non-uniqueness of nth roots mentioned above.
Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy
Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right.
A function "f" : C → C is called holomorphic if it satisfies the Cauchy–Riemann equations. For example, any R-linear map C → C can be written in the form
with complex coefficients a and b. This map is holomorphic if and only if b = 0. The second summand formula_143 is real-differentiable, but does not satisfy the Cauchy–Riemann equations.
Complex analysis shows some features not apparent in real analysis. For example, any two holomorphic functions that agree on an arbitrarily small open subset of C necessarily agree everywhere. Meromorphic functions, functions that can locally be written as a holomorphic function divided by a power of (z − z_0), still share some of the features of holomorphic functions. Other functions have essential singularities, such as sin(1/z) at z = 0.
Complex numbers have applications in many scientific areas, including signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. Some of these applications are described below.
Three non-collinear points formula_144 in the plane determine the shape of the triangle formula_145. Locating the points in the complex plane, this shape of a triangle may be expressed by complex arithmetic as
The shape formula_147 of a triangle will remain the same, when the complex plane is transformed by translation or dilation (by an affine transformation), corresponding to the intuitive notion of shape, and describing similarity. Thus each triangle formula_145 is in a similarity class of triangles with the same shape.
The Mandelbrot set is a popular example of a fractal formed on the complex plane. It is defined by plotting every location formula_149 where the sequence formula_150 does not diverge under infinite iteration. Julia sets follow the same rule, except that formula_149 remains constant.
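A minimal sketch of the membership test (the escape radius 2 and the iteration cap are the usual practical choices):

```python
# Mandelbrot membership: iterate z -> z**2 + c from z = 0 and check
# whether the orbit stays within the escape radius 2.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))    # True  (orbit stays at 0)
print(in_mandelbrot(-1))   # True  (orbit cycles 0, -1, 0, -1, ...)
print(in_mandelbrot(1))    # False (orbit 0, 1, 2, 5, ... diverges)
```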
Every triangle has a unique Steiner inellipse – an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem: Denote the triangle's vertices in the complex plane as , , and . Write the cubic equation formula_152, take its derivative, and equate the (quadratic) derivative to zero. Marden's Theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse.
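The recipe above amounts to solving one quadratic; a sketch (the helper name is illustrative):

```python
import cmath

# Marden's theorem: the foci of the Steiner inellipse of the triangle with
# vertices a, b, c are the roots of the derivative of (z-a)(z-b)(z-c),
# i.e. of 3z**2 - 2(a+b+c)z + (ab+bc+ca).
def steiner_foci(a: complex, b: complex, c: complex):
    s1 = a + b + c                  # sum of the vertices
    s2 = a*b + b*c + c*a            # sum of pairwise products
    d = cmath.sqrt(s1*s1 - 3*s2)    # reduced quadratic formula
    return (s1 + d) / 3, (s1 - d) / 3

# For an equilateral triangle the inellipse is the incircle,
# so both foci coincide at the centroid.
w = cmath.exp(2j * cmath.pi / 3)
f1, f2 = steiner_foci(1, w, w**2)
print(abs(f1), abs(f2))   # both ≈ 0
```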
As mentioned above, any nonconstant polynomial equation (with complex coefficients) has a solution in C. A fortiori, the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers – they are a principal object of study in algebraic number theory. Compared to the algebraic closure of Q, which also contains all algebraic numbers, C has the advantage of being easily understandable in geometric terms. In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge – a purely geometric problem.
Another example is the Gaussian integers, that is, numbers of the form a + bi, where a and b are integers, which can be used to classify sums of squares.
Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta function is related to the distribution of prime numbers.
In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration.
In differential equations, it is common to first find all complex roots of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form f(t) = e^{rt}. Likewise, in difference equations, the complex roots of the characteristic equation of the difference equation system are used, to attempt to solve the system in terms of base functions of the form f(t) = r^t.
In control theory, systems are often transformed from the time domain to the frequency domain using the Laplace transform. The system's zeros and poles are then analyzed in the "complex plane". The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane.
In the root locus method, it is important whether zeros and poles are in the left or right half plane, that is, have real part greater than or less than zero. If a linear, time-invariant (LTI) system has poles in the right half plane, it will be unstable; if they are all in the left half plane, it will be stable; and if they lie on the imaginary axis, it will have marginal stability.
If a system has zeros in the right half plane, it is a nonminimum phase system.
Complex numbers are used in signal analysis and other fields for a convenient description of periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value of the corresponding complex quantity is the amplitude and its argument is the phase.
If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex valued functions of the form
and
where ω represents the angular frequency and the complex number "A" encodes the phase and amplitude as explained above.
This use is also extended into digital signal processing and digital image processing, which utilize digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals.
Another example, relevant to the two side bands of amplitude modulation of AM radio, is:
In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus.
In electrical engineering, the imaginary unit is denoted by j, to avoid confusion with I, which is generally in use to denote electric current, or, more particularly, i, which is generally in use to denote instantaneous electric current.
Since the voltage in an AC circuit is oscillating, it can be represented as
To obtain the measurable quantity, the real part is taken:
The complex-valued signal formula_158 is called the analytic representation of the real-valued, measurable signal formula_159.
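A sketch with hypothetical values (the amplitude, phase and frequency below are chosen purely for illustration): the phasor V encodes amplitude and phase in one complex number, and the measurable signal is the real part of the rotating complex signal.

```python
import cmath
import math

# Hypothetical AC voltage v(t) = V0*cos(w*t + phi), as a phasor.
V0, phi, w = 10.0, math.pi / 6, 2 * math.pi * 50   # amplitude, phase, 50 Hz
V = V0 * cmath.exp(1j * phi)                       # complex amplitude (phasor)

def v(t: float) -> float:
    # Measurable signal: real part of the analytic (complex) signal.
    return (V * cmath.exp(1j * w * t)).real

print(abs(V))           # ≈ 10.0, the amplitude
print(cmath.phase(V))   # ≈ 0.5236, the phase pi/6
print(v(0.0))           # ≈ 8.66 = 10*cos(pi/6)
```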
In fluid dynamics, complex functions are used to describe potential flow in two dimensions.
The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – the Schrödinger equation and Heisenberg's matrix mechanics – make use of complex numbers.
In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time component of the spacetime continuum to be imaginary. (This approach is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity.
The process of extending the field R of reals to C is known as the Cayley–Dickson construction. It can be carried further to higher dimensions, yielding the quaternions H and octonions O which (as a real vector space) are of dimension 4 and 8, respectively.
In this context the complex numbers have been called the binarions.
Just as by applying the construction to reals the property of ordering is lost, properties familiar from real and complex numbers vanish with each extension. The quaternions lose commutativity, that is, x·y ≠ y·x for some quaternions x, y. The multiplication of octonions, in addition to not being commutative, fails to be associative: (x·y)·z ≠ x·(y·z) for some octonions x, y, z.
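A minimal sketch of quaternion multiplication (Hamilton's rules, coded here for illustration rather than taken from a library) makes the loss of commutativity concrete:

```python
# Quaternions as 4-tuples (a, b, c, d) standing for a + b*i + c*j + d*k,
# multiplied by Hamilton's rules: i² = j² = k² = ijk = -1.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

assert qmul(i, j) == (0, 0, 0, 1)    # i*j = k
assert qmul(j, i) == (0, 0, 0, -1)   # j*i = -k, so multiplication
assert qmul(i, j) != qmul(j, i)      # is not commutative
```

A corresponding demonstration for octonions would need the Fano-plane multiplication table; the quaternion case already shows how each Cayley–Dickson step sacrifices a familiar property.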
Reals, complex numbers, quaternions and octonions are all normed division algebras over R. By Hurwitz's theorem they are the only ones; the sedenions, the next step in the Cayley–Dickson construction, fail to have this structure.
The Cayley–Dickson construction is closely related to the regular representation of C, thought of as an R-algebra (an R-vector space with a multiplication), with respect to the basis (1, i). This means the following: the R-linear map

C → C, z ↦ wz

for some fixed complex number w can be represented by a 2 × 2 matrix (once a basis has been chosen). With respect to the basis (1, i), this matrix is

( Re(w)  −Im(w) )
( Im(w)   Re(w) )
that is, the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of C in the 2 × 2 real matrices, it is not the only one. Any matrix
J = ( p  q ; r  −p ),  with p² + qr + 1 = 0,

has the property that its square is the negative of the identity matrix: J² = −I. Then

{ aI + bJ : a, b ∈ R }
is also isomorphic to the field C, and gives an alternative complex structure on R2. This is generalized by the notion of a linear complex structure.
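The standard choice J = [[0, -1], [1, 0]] can be checked directly. The sketch below (an illustration using plain nested lists rather than a matrix library) verifies J² = −I and that the map a + bi ↦ aI + bJ respects multiplication:

```python
# 2×2 real matrices as nested lists; matrix product by hand.
def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
J = [[0, -1], [1, 0]]                      # plays the role of i
assert mmul(J, J) == [[-1, 0], [0, -1]]    # J² = -I

def rep(z):
    """Regular representation: a + bi  ↦  a*I + b*J."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

# Matrix multiplication matches complex multiplication.
z, w = 2 + 3j, -1 + 4j
assert mmul(rep(z), rep(w)) == rep(z * w)
```

Any other J with J² = −I (for instance J = [[1, 1], [-2, -1]]) would work the same way, which is the content of the remark about alternative complex structures on R².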
Hypercomplex numbers also generalize R, C, H, and O. For example, this notion contains the split-complex numbers, which are elements of the ring R[x]/(x² − 1) (as opposed to R[x]/(x² + 1) for the complex numbers). In this ring, the equation a² = 1 has four solutions.
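A sketch of split-complex arithmetic (pairs (a, b) standing for a + b·j with j² = +1; an illustration, not a standard library type) exhibits the four square roots of 1:

```python
# Split-complex numbers: (a, b) represents a + b*j with j*j = +1.
# Product: (a1 + b1 j)(a2 + b2 j) = (a1 a2 + b1 b2) + (a1 b2 + b1 a2) j
def smul(x, y):
    a1, b1 = x
    a2, b2 = y
    return (a1 * a2 + b1 * b2, a1 * b2 + b1 * a2)

# In this ring the equation x² = 1 has four solutions: ±1 and ±j,
# unlike in a field, where a quadratic has at most two roots.
solutions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
assert all(smul(s, s) == (1, 0) for s in solutions)
```

The extra roots exist because R[x]/(x² − 1) has zero divisors, so it is a ring but not a field.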
The field R is the completion of Q, the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on Q lead to the fields Q"p" of "p"-adic numbers (for any prime number "p"), which are thereby analogous to R. There are no other nontrivial ways of completing Q than R and Q"p", by Ostrowski's theorem. The algebraic closures of Q"p" still carry a norm, but (unlike C) are not complete with respect to it. The completion C"p" of the algebraic closure of Q"p" turns out to be algebraically closed. This field is called the "p"-adic complex numbers by analogy.
The fields R and Q"p" and their finite field extensions, including C, are local fields.
Cryptozoology
Cryptozoology is a pseudoscience and subculture that aims to prove the existence of entities from the folklore record, such as Bigfoot, the chupacabra, or Mokele-mbembe. Cryptozoologists refer to these entities as "cryptids", a term coined by the subculture. Because it does not follow the scientific method, cryptozoology is considered a pseudoscience by the academic world: it is neither a branch of zoology nor folkloristics. It was originally founded in the 1950s by zoologists Bernard Heuvelmans and Ivan T. Sanderson.
Scholars have noted that the pseudoscience rejected mainstream approaches from an early date, and that adherents often express hostility to mainstream science. Scholars have studied cryptozoologists and their influence (including the pseudoscience's association with Young Earth creationism), noted parallels in cryptozoology and other pseudosciences such as ghost hunting and ufology, and highlighted uncritical media propagation of cryptozoologist claims.
As a field, cryptozoology originates from the works of Bernard Heuvelmans, a Belgian zoologist, and Ivan T. Sanderson, a Scottish zoologist. Notably, Heuvelmans published "On the Track of Unknown Animals" (French "Sur la Piste des Bêtes Ignorées") in 1955, a landmark work among cryptozoologists that was followed by numerous other like works. Similarly, Sanderson published a series of books that assisted in developing hallmarks of cryptozoology, including "Abominable Snowmen: Legend Come to Life" (1961).
The term "cryptozoology" dates from 1959 or before – Heuvelmans attributes the coinage of the term "cryptozoology" ('the study of hidden animals') to Sanderson. Patterned after "cryptozoology", the term "cryptid" was coined in 1983 by cryptozoologist J. E. Wall in the summer issue of the International Society of Cryptozoology newsletter. According to Wall "[It has been] suggested that new terms be coined to replace sensational and often misleading terms like 'monster'. My suggestion is 'cryptid', meaning a living thing having the quality of being hidden or unknown ... describing those creatures which are (or may be) subjects of cryptozoological investigation." The "Oxford English Dictionary" defines the noun "cryptid" as "an animal whose existence or survival to the present day is disputed or unsubstantiated; any animal of interest to a cryptozoologist". While used by most cryptozoologists, the term "cryptid" is not used by academic zoologists. In a textbook aimed at undergraduates, academics Caleb W. Lack and Jacques Rousseau note that the subculture's focus on what it deems to be "cryptids" is a pseudoscientific extension of older belief in monsters and other similar entities from the folklore record, yet with a "new, more scientific-sounding name: cryptids".
While biologists regularly identify new species, cryptozoologists often focus on creatures from the folklore record. Most famously, these include the Loch Ness Monster, Bigfoot, the chupacabra, as well as other "imposing beasts that could be labeled as monsters". In their search for these entities, cryptozoologists may employ devices such as motion-sensitive cameras, night-vision equipment, and audio-recording equipment. While there have been attempts to codify cryptozoological approaches, unlike biologists, zoologists, botanists, and other academic disciplines, however, "there are no accepted, uniform, or successful methods for pursuing cryptids". Some scholars have identified precursors to modern cryptozoology in certain medieval approaches to the folklore record, and the psychology behind the cryptozoology approach has been the subject of academic study.
Few cryptozoologists have a formal science education, and fewer still have a science background directly relevant to cryptozoology. Adherents often misrepresent the academic backgrounds of cryptozoologists. According to writer Daniel Loxton and paleontologist Donald Prothero, "Cryptozoologists have often promoted 'Professor Roy Mackal, PhD.' as one of their leading figures and one of the few with a legitimate doctorate in biology. What is rarely mentioned, however, is that he had no training that would qualify him to undertake competent research on exotic animals. This raises the specter of 'credential mongering', by which an individual or organization feints a person's graduate degree as proof of expertise, even though his or her training is not specifically relevant to the field under consideration." Besides Heuvelmans, Sanderson, and Mackal, other notable cryptozoologists with academic backgrounds include Grover Krantz, Karl Shuker, and Richard Greenwell.
Historically, notable cryptozoologists have often identified instances featuring "irrefutable evidence" (such as Sanderson and Krantz), only for the evidence to be revealed as the product of a hoax. This may occur during a closer examination by experts or upon confession of the hoaxer.
A subset of cryptozoology promotes the pseudoscience of Young Earth creationism, rejecting conventional science in favor of a Biblical interpretation and promoting concepts such as "living dinosaurs". Science writer Sharon A. Hill observes that the Young Earth creationist segment of cryptozoology is "well-funded and able to conduct expeditions with a goal of finding a living dinosaur that they think would invalidate evolution." Anthropologist Jeb J. Card says that "Creationists have embraced cryptozoology and some cryptozoological expeditions are funded by and conducted by creationists hoping to disprove evolution." In a 2013 interview, paleontologist Donald Prothero notes an uptick in creationist cryptozoologists. He observes that "[p]eople who actively search for Loch Ness monsters or Mokele Mbembe do it entirely as creationist ministers. They think that if they found a dinosaur in the Congo it would overturn all of evolution. It wouldn't. It would just be a late-occurring dinosaur, but that's their mistaken notion of evolution."
Citing a 2013 exhibit at the Petersburg, Kentucky-based Creation Museum, which is broadly dedicated to Young Earth creationism and which claimed that dragons were once biological creatures who walked the earth alongside humanity, religious studies academic Justin Mullis notes that "Cryptozoology has a long and curious history with Young Earth Creationism, with this new exhibit being just one of the most recent examples".
Media outlets have often uncritically disseminated information from cryptozoologist sources, including newspapers that repeat false claims made by cryptozoologists or television shows that feature cryptozoologists as monster hunters (such as the popular and purportedly nonfiction American television show "MonsterQuest", which aired from 2007 to 2010). Media coverage of purported "cryptids" often fails to provide more likely explanations, further propagating claims made by cryptozoologists.
The 2003 discovery of the fossil remains of "Homo floresiensis" was cited by paleontologist Henry Gee, a senior editor at the journal "Nature", as possible evidence that "in geological terms, makes it more likely that stories of other mythical, human-like creatures such as yetis are founded on grains of truth." "Cryptozoology," Gee says, "can come in from the cold."
However, cryptozoology is widely criticized for an array of reasons and is rejected by the academic world. There is a broad consensus among academics that cryptozoology is a pseudoscience. The field is regularly criticized for its reliance on anecdotal information and because, in the course of investigating animals that most scientists believe are unlikely to have existed, cryptozoologists do not follow the scientific method. Hill notes that "there is no academic course of study in cryptozoology or no university degree program that will bestow the title 'cryptozoologist'."
Anthropologist Jeb J. Card summarizes cryptozoology in a survey of pseudoscience and pseudoarchaeology:
Card notes that "cryptozoologists often show their disdain and even hatred for professional scientists, including those who enthusiastically participated in cryptozoology", which he traces back to Heuvelmans's early "rage against critics of cryptozoology". He finds parallels with cryptozoology and other pseudosciences, such as ghost hunting and ufology, and compares the approach of cryptozoologists to colonial big-game hunters, and to aspects of European imperialism. According to Card, "Most cryptids are framed as the subject of indigenous legends typically collected in the heyday of comparative folklore, though such legends may be heavily modified or worse. Cryptozoology's complicated mix of sympathy, interest, and appropriation of indigenous culture (or non-indigenous construction of it) is also found in New Age circles and dubious "Indian burial grounds" and other legends ... invoked in hauntings such as the "Amityville" hoax ...".
In a 2011 foreword for "The American Biology Teacher", then National Association of Biology Teachers president Dan Ward uses cryptozoology as an example of "technological pseudoscience" that may confuse students about the scientific method. Ward says that "Cryptozoology ... is not valid science or even science at all. It is monster hunting." Historian of science Brian Regal includes an entry for cryptozoology in his "Pseudoscience: A Critical Encyclopedia" (2009). Regal says that "as an intellectual endeavor, cryptozoology has been studied as much as cryptozoologists have sought hidden animals".
In a 1992 issue of "Folklore", folklorist Véronique Campion-Vincent says:
Campion-Vincent says that "four currents can be distinguished in the study of mysterious animal appearances": "Forteans" ("compiler[s] of anomalies" such as via publications like the "Fortean Times"), "occultists" (which she describes as related to "Forteans"), "folklorists", and "cryptozoologists". Regarding cryptozoologists, Campion-Vincent says that "this movement seems to deserve the appellation of parascience, like parapsychology: the same corpus is reviewed; many scientists participate, but for those who have an official status of university professor or researcher, the participation is a private hobby".
In her "Encyclopedia of American Folklore", academic Linda Watts says that "folklore concerning unreal animals or beings, sometimes called monsters, is a popular field of inquiry" and describes cryptozoology as an example of "American narrative traditions" that "feature many monsters".
In his analysis of cryptozoology, folklorist Peter Dendle says that "cryptozoology devotees consciously position themselves in defiance of mainstream science" and that:
In a paper published in 2013, Dendle refers to cryptozoologists as "contemporary monster hunters" that "keep alive a sense of wonder in a world that has been very thoroughly charted, mapped, and tracked, and that is largely available for close scrutiny on Google Earth and satellite imaging" and that "on the whole the devotion of substantial resources for this pursuit betrays a lack of awareness of the basis for scholarly consensus (largely ignoring, for instance, evidence of evolutionary biology and the fossil record)."
According to historian Mike Dash, few scientists doubt there are thousands of unknown animals, particularly invertebrates, awaiting discovery; however, cryptozoologists are largely uninterested in researching and cataloging newly discovered species of ants or beetles, instead focusing their efforts towards "more elusive" creatures that have often defied decades of work aimed at confirming their existence.
Paleontologist George Gaylord Simpson (1984) lists cryptozoology among examples of human gullibility, along with creationism:
Paleontologist Donald Prothero (2007) cites cryptozoology as an example of pseudoscience, and categorizes it along with Holocaust denial and UFO abductions claims as aspects of American culture that are "clearly baloney".
In "Scientifical Americans: The Culture of Amateur Paranormal Researchers" (2017), Hill surveys the field and discusses aspects of the subculture, noting internal attempts at creating more scientific approaches and the involvement of Young Earth creationists and a prevalence of hoaxes. She concludes that many cryptozoologists are "passionate and sincere in their belief that mystery animals exist. As such, they give deference to every report of a sighting, often without critical questioning. As with the ghost seekers, cryptozoologists are convinced that they will be the ones to solve the mystery and make history. With the lure of mystery and money undermining diligent and ethical research, the field of cryptozoology has serious credibility problems."
There have been several organizations, of varying types, dedicated or related to cryptozoology. These include:
Craig Charles
Craig Joseph Charles (born 11 July 1964) is a British actor, television presenter, DJ and poet. He plays Dave Lister in the science fiction sitcom "Red Dwarf" and played Lloyd Mullaney in the soap opera "Coronation Street". As a funk and soul DJ he regularly appears on BBC Radio 6 Music and BBC Radio 2, and was the presenter of the gladiator-style game show "Robot Wars" from 1998 to 2004.
Charles first appeared on television as a performance poet, which led to minor presenting roles. After finding fame in "Red Dwarf", he regularly featured on national television with celebrity appearances on many popular shows while he continued to host a wide variety of programmes.
Charles also narrated the comedy endurance show "Takeshi's Castle". From 2017, he has hosted "The Gadget Show" for Channel 5. His acting credits include playing inmate Eugene Buffy in the ITV drama "The Governor", and leading roles in the British films "Fated" and "Clubbing to Death". He has toured the UK extensively as a stand-up comedian.
Charles has hosted "The Craig Charles Funk and Soul Show" on BBC radio since 2002, and performs DJ sets at numerous clubs and festivals, nationally and internationally. In September 2015, he left "Coronation Street" after ten years, to film new episodes of "Red Dwarf".
Charles was born in Liverpool to a Guyanese father and Irish mother. He grew up on the Cantril Farm housing estate with his older brother, Dean (who died in 2014), and his two other brothers, Jimmy and Emile. He attended West Derby Comprehensive School followed by Childwall Hall College of Further Education, studying A-levels in History, Government and Politics, English Literature and General Studies. Charles won a national competition, run by "The Guardian" newspaper, for a poem he wrote when he was 12 years old. On leaving school Charles spent time working in a studio at Central Hall, Renshaw Street, Liverpool.
Charles began his career as a contemporary and urban performance poet on the British cabaret circuit. His performances were considered original, with Charles described as having a natural ironic wit which appealed to talent scouts. In 1981, Charles climbed on stage at a Teardrop Explodes concert and recited a humorous, but derogatory, poem about the band's singer, Julian Cope. He was invited to open subsequent gigs for the group and went on to perform as a support act in pubs and clubs for the following three years, and at events such as the "Larks in the Park" music festival at Sefton Park (1982). He performed poetry at Liverpool's Everyman Theatre (1983) with such poets as Roger McGough and Adrian Henri.
Charles was involved in the Liverpool music scene, writing and singing lyrics for a number of local rock bands. In 1980, he played keyboards and bass and provided vocals in the rock band "Watt 4". He performed his political rap lyrics as a 'Wordsmith'. In 1983, Charles was invited to record a session on the John Peel BBC Radio show, performing his poems backed by a band. This was his first professional engagement. He recorded a further Peel Session in 1984.
Charles realised he was using poetry as a vehicle for his sense of humour and progressed into stand-up comedy. He was part of the "Red Wedge" comedy tour in 1986, which aimed to raise awareness of the social problems of the time, in support of the Labour Party. He also performed his first one-man show in 1986, which premiered in Edinburgh, and then toured internationally. Charles was a guest on programmes including Janice Long's Radio 1 show, and was a regular panellist on Ned Sherrin's chat show "Loose Ends" (1987–88) on BBC Radio 4.
Charles first appeared on television as the resident poet on the arts programme "Riverside" on BBC2 and on the day-time BBC1 chat show "Pebble Mill at One". Charles was the resident poet on the Channel 4 programme "Black on Black" (1985) and its entertainment-based successor "Club Mix" (1986), and he appeared, weekly, as a John Cooper Clarke-style 'punk poet' on the BBC2 pop music programme "Oxford Road Show" (ORS). Charles performed his political poems as stand-up comedy on the late-night show "Saturday Live" (1985–87) and on the prime-time BBC1 chat show "Wogan" (1986–87), where he performed a topical poem in a weekly feature. He also appeared as a guest on shows including "Open Air" (1988). Charles included significant acting in his performance style, enabling him to put the emotion across.
In September 2015, Charles performed his "epic" poem "Scary Fairy and the Tales of the Dark Wood" live with the BBC Philharmonic orchestra, in a concert to be broadcast on BBC Radio 2's "Friday Night is Music Night" at Halloween.
Charles' first television acting role was the Liverpudlian slob Dave Lister in the science fiction comedy series "Red Dwarf". Charles has appeared in all twelve series. Charles' younger brother, Emile Charles, guest-starred in the Series III episode "Timeslides", and the songs "Bad News" and "Cash" in this episode were written by Charles and performed by his band. The role has involved Charles playing a variety of alternative characters, including a gangster, a cowboy and angelic and evil versions of Lister, and has required him to carry out a wide range of stunts and acting involving special effects. All series, except VII and IX, were recorded in front of a studio audience. Along with Danny John-Jules [Cat], Charles is one of only two cast members to appear in every episode of "Red Dwarf" to date.
Charles reads the audiobook editions of both the "Red Dwarf" novel "Last Human" and his book "The Log: A Dwarfer's Guide to Everything", and he regularly attends sci-fi, comedy and memorabilia conventions in connection with the "Red Dwarf" franchise. During "", Lister visits the set of "Coronation Street", where he meets the actor Craig Charles.
Charles presented "Robot Wars" on BBC2 (1998–2003) and Channel 5 (2003–04), from series 2 until series 7, which included two "Extreme" series and numerous 'specials'. Charles was the main host and presided over the arena in which teams of amateur engineers battled their home-made radio-controlled robots against each other, and against the house robots. Charles introduced the show, enthusiastically announced the results of the battles and spoke to the contestants after the main events. He ended each episode with a short "Robot Wars"-themed poem. Charles' son, Jack, appeared on the show on several occasions, and was a contestant on "Team Nemesis" during series 4. Charles also hosted the "Robot Wars Live" UK tour in 2001, including shows performed at Wembley Arena.
On 13 January 2016, the BBC announced that the show would be rebooted with six episodes that year. On 14 January 2016, Charles posted a tweet saying that he would love to present the new series, but he received no reply from the BBC. On 3 February 2016, Dara Ó Briain was announced as Charles' replacement as presenter; when asked about Charles' request to reprise his role, the BBC said only that they do not comment on individual casting enquiries.
Charles provided the English voice-over commentary for the Challenge (2002–04) rebroadcast of the popular game show "Takeshi's Castle", originally by Tokyo Broadcasting System in Japan. In each episode, between 100 and 142 contestants attempted to pass a series of wacky and near-impossible physical challenges to reach the Show Down at the castle against Japanese actor Takeshi Kitano for a chance to win large cash prizes. Charles co-wrote the programme and commentated throughout all 122 episodes of the four series, and also some special and "best of" episodes. He provided comedy insights into the contestants' abilities, which were designed to appeal to adult audiences as well as younger viewers, and coined the term "Keshi Heads" to describe fans of the show.
Charles' commentary was so well received that, according to complaints Challenge received via social media, the 2013 reboot featuring Dick and Dom on the voice-overs did not fare as well without him. To this day, the Challenge rebroadcasts of "Takeshi's Castle" regularly place in the weekly top 10 for ratings, with an average of 130,000 viewers per episode.
In 2005, Charles joined the main cast of "Coronation Street", playing philandering taxicab driver, Lloyd Mullaney. Charles introduced aspects of the character himself, making Lloyd a Northern Soul DJ and record collector, and funk music enthusiast. Charles has chosen funk and soul songs playing as backing tracks during scenes, and posters for The Craig Charles Funk & Soul Club and Red Dwarf have appeared in the background.
Charles portrayed Lloyd as tough, but kind hearted and romantic, and the character was popular with viewers. Charles added a comedy element to the role, but was also involved in traumatic and emotional scenes with intricate storylines. In 2010, his character was involved in the show's dramatic 50th anniversary tram crash storyline, which was broadcast live. Charles presented documentaries for the show, including "50 Years of Corrie Stunts" (2010), which is included on the "Tram Crash" DVD. In November 2011, Charles took time off from "Coronation Street" to film a new series of "Red Dwarf", returning in April 2012. In February 2014 an online mini-series, "Steve & Lloyd's Streetcar Stories", ran alongside the television show's storyline.
In May 2015, Craig announced he would be leaving "Coronation Street" for "Red Dwarf." Lloyd left in a red Cadillac during the live episode on 23 September, although his final pre-recorded farewell scenes with Steve were shown during the following episode.
Charles has acted in episodes of popular dramas such as "The Bill" (1995), "EastEnders" (2002) and "Holby City" (2003) and in the comedy "The 10 Percenters" (1996). Charles played the emotionally disturbed and violent prisoner, Eugene Buffy, in the highly successful Lynda La Plante drama series "The Governor" (1996); the title role in the Channel 4 pirate sitcom "Captain Butler" (1997); the warden of a women's prison in the Canadian sci-fi fantasy "Lexx" (2001); Detective Chief Inspector Mercer in seven episodes of the BBC soap opera "Doctors" (2003); and soccer agent, Joel Brooks, in the Sky TV football soap "Dream Team" (2004).
Charles has presented children's television programmes, including "What's That Noise?" (1989) and "Parallel 9" (1992) on BBC1 and "Go Getters" (1994) on ITV. He was the travelling reporter for the highly acclaimed, but controversial, BBC 'mockumentary' "Ghostwatch", which tricked viewers into believing it was a live investigation into ghost sightings in a suburban home on Halloween night (1992). Charles presented the virtual reality game show "Cyberzone" (1993) on BBC2; the late-night entertainment show "Funky Bunker" (1997) on ITV; the reality show "Jailbreak" (2000) on Channel 5; the discussion show "Amazing Space: The Pub Guide to the Universe" (2001) on National Geographic; and the late-night current affairs chat show "Weapons of Mass Distraction" (2004) on ITV.
Charles has appeared on celebrity editions of "University Challenge" (1998), "Can't Cook, Won't Cook" (1998), "The Weakest Link" (2004), "The Chase" (2012) and "Pointless" (2013), and comedy panel shows such as "Have I Got News for You" (1995), "Just a Minute" (1995) and "They Think It's All Over" (1996) and Keith Lemon's Through the Keyhole (2014). He was a team captain on the sci-fi quiz series "Space Cadets" (1997) on Channel 4, which guest starred William Shatner. Charles has opened the National Lottery draw (1997) and his home has featured on "Through the Keyhole". Charles was a contestant in the "Celebrity Poker Club" tournament (2004) on Challenge, where he reached the semi-finals, and in the Channel 4 reality game show, "The Games" (2005), which documented the contestants' intensive training regime and each live Olympic Games-style sporting event.
From 16 November 2014, Charles took part in the fourteenth series of "I'm a Celebrity...Get Me Out of Here!". However, on 20 November, Charles left the series soon after learning of his brother Dean's death from a heart attack.
As well as his early appearances on shows such as Radio 4's "Loose Ends" (1987–88), and "Kaleidoscope", in the early 1990s, Charles could be heard on the London Radio Station Kiss 100 (Kiss FM) as the Breakfast show presenter. In 1995, Charles played the Porter in Steven Berkoff's adaptation of Shakespeare's "Macbeth", on Radio 4.
Since 2002, Charles has been a DJ on BBC Radio 6 Music presenting "The Craig Charles Funk and Soul Show", on air on Saturday evenings 6pm to 9pm, where he plays a diverse range of funk and soul music, from classic tracks to the latest releases, and provides publicity for less familiar bands. Charles explains the context for the music and carries out interviews with guest musicians. He was with the station at its launch, and while it was being tested during the previous year, under the name Network Y. Charles has also hosted the station's Breakfast Show (2004), and sits in for other presenters including Andrew Collins, Phil Wilding, Phill Jupitus, and Radcliffe & Maconie.
From January until November 2014, Charles also broadcast the Funk and Soul Show live on BBC Radio 2, immediately after his 6 Music show. He regularly sits in for Janice Long, Steve Wright and Jo Whiley, and has presented numerous programmes on the station, including "The Craig Charles Soul All-Nighter" (2011), which he hosted continuously for 12 hours, and the "Beatleland" (2012) documentary on "The Beatles". Charles has also chosen music as a guest of other broadcasters such as Ken Bruce on Radio 2 and Liz Kershaw on 6 Music. Charles covered for Graham Norton on Radio 2's Saturday mid-morning show during Norton's 10-week 2015 summer break.
From 16 April 2016, Charles has presented the House Party on Saturday nights on BBC Radio 2, with the show airing between 10pm and midnight. For eight weeks from April to June 2020, he also presented "Craig Charles At Teatime" between 4pm and 7pm on weekdays on Radio 6 Music. The show was sometimes billed as "Craig Charles Weekend Workout" on Fridays.
Charles has been involved in the music industry through much of his career. His bands have included "Watt 4" (1980), in which he played keyboards and sang, "Craig Charles and the Beat Burglars" (1989), "The Sons of Gordon Gekko" (1989), where he wrote lyrics and also composed tunes and "The Eye" (2000–02), with whom he recorded the rock album "Giving You The Eye, Live at the Edinburgh Festival". Charles plays guitar and piano.
In 1987, Charles provided the poem track used for the opening credits of the BBC series "The Marksman" (in which he also acted), which is included on the album "The Marksman: Music from the BBC TV series". Charles wrote lyrics for Suzanne Rhatigan's album "To Hell With Love" (1992). In 1993, Charles was signed to the Acid Jazz record label.
In 2009, Charles formed the Fantasy Funk Band from the leading British musicians in the genre, and has presented the band at festivals, including Glastonbury and the BBC's Proms in the Park. As a continuation of his 6 Music show, Charles regularly takes the Craig Charles Funk & Soul Club to varied venues across the UK and abroad, and to the major UK music festivals. He performs live DJ sets, occasionally comperes and curates events, including his own Craig Charles Fantasy Weekender, and has broadcast the radio show live from festival locations.
In 2012, Charles released the compilation album "The Craig Charles Funk & Soul Club", on CD and as a digital download, as part of a three-album deal with Freestyle Records. The second volume was released in the same format in 2013, and the third in 2014. He followed these with the three-CD "Craig Charles Funk and Soul Classics" album in 2015.
Charles returned to stand-up comedy between 1995 and 2001, regularly touring his one-man adult-rated shows nationally and releasing the videos "Craig Charles: Live on Earth!" (1995), "Live Official Bootleg" (1996) and "Sickbag" (2000). International performances included the Great Norwegian Comedy Festival and the Melbourne International Comedy Festival.
Charles appeared in the John Godber comedy play "Teechers", in which he swapped in and out of various roles, at the Arts Theatre, London, and at the Edinburgh Festival (1989), and he played Idle Jack in the pantomime "Dick Whittington", at the Hull New Theatre (1997). In 2000, he performed the show "Craig Charles and His Band" at the Edinburgh Festival.
Charles has a regular three-night slot of DJing, comedy and singing at the Butlins Minehead House of Fun Weekend every November.
Charles played Eddie in the 1987 political drama "Business as Usual". In 2006, Charles starred in two British feature films: the fantasy film "Fated" and the gangster film "Clubbing to Death". Charles voiced Zipper the Cat in the animation "Prince Cinders" (1993) and Asterix in "Asterix Conquers America" (1994). Roles in short films include playing Keith Dennis in the comedy "The Colour of Funny" (1999) and Mark in the drama "Ten Minutes" (2004).
In 1993, Charles worked with Russell Bell on the "Craig Charles Almanac of Total Knowledge", writing about his 'streetwise' sense of humour on a range of topics, from the world's most embarrassing stories to how to explain the mysteries of the universe. In 1997, Charles and Bell wrote Charles' "Red Dwarf" character's book "The Log", in which Lister decides to leave a log detailing mankind's greatest achievements. In 1998, Charles published "No Other Blue", a collection of his poetry, with illustrations by Philippa Drakeford, on diverse personal subjects including prison, his mother's final illness, love and politics at home and abroad. More recently he has written a series of nursery rhymes titled "Scary Fairy and the Tales of the Dark Wood".
In 2000, Charles wrote his first autobiography about his experiences growing up in Liverpool, titled "No Irish, No Niggers". In 2007, he announced he would release his autobiography, planned for March 2008, published by Hodder Headline and titled "On the Rocks", which would cover the recent incidents in his life and be based on much of his journal, which Charles said he kept while in rehab.
Charles has been involved in journalism and has had a column in "Time Out" magazine. In 1994, he launched a single issue of "Comedy" magazine, with articles dedicated to the comedy circuit. In 2005 and 2006, Charles was a monthly columnist for the "Liverpool Echo" newspaper. His television writing credits include The Easter Stories (1994), Funky Bunker (1997) and Takeshi's Castle (2002). He is also involved in music journalism, having written liner notes for the funk and soul producer Mr. Confuse's albums "Feel The Fire" (2008), "Do You Realize" (2012) and "Only A Man" (2018), in connection with his work presenting "The Craig Charles Funk and Soul Show" on BBC Radio 6 Music.
He has three children: a son, Jack, from his first marriage to actress Cathy Tyson, and two daughters, Anna-Jo and Nellie, from his second marriage to Jackie Fleming.
In July 1994, Charles and a friend were arrested and remanded in custody for three months on a rape charge. In March 1995, both men were acquitted in their trial. After being cleared, Charles spoke of the need to restore anonymity for those accused of rape. He stated that "the fact that my name and address along with my picture can appear on the front of the papers before the so-called 'victim' has even signed a statement proves that anonymity for rape defendants is a must and that the law must be changed." While in prison, Charles was attacked by a man wielding a makeshift knife.
In June 2006, newspaper allegations of crack cocaine use resulted in Charles being suspended from both "Coronation Street" and BBC Radio 6 Music. In August, Charles was arrested and released on bail pending further enquiries, and in September he accepted a caution for possession of a Class A drug. Charles returned to hosting his 6 Music show from November 2006 and filming "Coronation Street" from January 2007.
County Mayo
County Mayo (from Irish "Maigh Eo", meaning "plain of the yew trees") is a county in Ireland. Located in the West of Ireland, in the province of Connacht, it is named after the village of Mayo, now generally known as Mayo Abbey. Mayo County Council is the local authority. The population was 130,507 at the 2016 census. The boundaries of the county, which was formed in 1585, reflect the Mac William Íochtar lordship at that time.
It is bounded on the north and west by the Atlantic Ocean; to the south by County Galway; the east by County Roscommon; and the northeast by County Sligo. Mayo is the third-largest of Ireland's 32 counties in area and 15th-largest in terms of population. It is the second-largest of Connacht's five counties in both size and population. Mayo has the longest coastline of any county in Ireland, at approximately 21% of the total coastline of the State. There is a distinct geological difference between the west and the east of the county. The west consists largely of poor subsoils and is covered with large areas of extensive Atlantic blanket bog, whereas the east is largely a limestone landscape. Agricultural land is therefore more productive in the east than in the west.
There are nine historic baronies, four in the northern area and five in the south of the county:
North Mayo
South Mayo
A survey of the terrestrial and freshwater algae of Clare Island was made between 1990 and 2005 and published in 2007. A record of "Gunnera tinctoria" is also noted.
Consultants working for the Corrib gas project have carried out extensive surveys of wildlife flora and fauna in Kilcommon Parish, Erris between 2002 and 2009. This information is published in the Corrib Gas Proposal Environmental impact statements 2009 and 2010.
There is evidence of human occupation of what is now County Mayo going far back into prehistory. At Belderrig on the north Mayo coast, there is evidence for Mesolithic (Middle Stone Age) communities around 4500 BC, while throughout the county there is a wealth of archaeological remains from the Neolithic (New Stone Age) period (ca. 4,000 BC to 2,500 BC), particularly in terms of megalithic tombs and ritual stone circles.
The first people who came to Ireland – mainly to coastal areas as the interior was heavily forested – arrived during the Middle Stone Age, as far back as eleven thousand years ago. Artefacts of hunter/gatherers are sometimes found in middens, rubbish pits around hearths where people would have rested and cooked over large open fires. Once cliffs erode, midden-remains become exposed as blackened areas containing charred stones, bones, and shells. They are usually found a metre below the surface. Mesolithic people did not have major rituals associated with burial, unlike those of the Neolithic (New Stone Age) period.
The Neolithic period followed the Mesolithic around 6,000 years ago. People began to farm the land, domesticate animals for food and milk, and settle in one place for longer periods. These people had skills such as making pottery, building houses from wood, weaving, and knapping (stone tool working). The first farmers cleared forestry to graze livestock and grow crops. In North Mayo, where the ground cover was fragile, thin soils washed away and blanket bog covered the land farmed by the Neolithic people.
Extensive pre-bog field systems have been discovered under the blanket bog, particularly along the North Mayo coastline in Erris and north Tyrawley at sites such as the Céide Fields, centred on the northeast coast.
The Neolithic people developed rituals associated with burying their dead; this is why they built huge, elaborate, galleried stone tombs for their dead leaders, known nowadays as megalithic tombs. There are over 160 recorded megaliths in County Mayo, such as Faulagh.
There are four distinct types of Irish megalithic tombs—court tombs, portal tombs, passage tombs and wedge tombs—examples of all of which can be found in County Mayo. Areas particularly rich in megalithic tombs include Achill, Kilcommon, Ballyhaunis, Moygownagh, Killala and the Behy/Glenurla area around the Céide Fields.
Megalithic tomb building continued into the Bronze Age when metal began to be used for tools alongside the stone tools. The Bronze Age lasted approximately from 4,500 years ago to 2,500 years ago (2,500 BC to 500 BC). Archaeological remains from this period include stone alignments, stone circles and fulachta fiadh (early cooking sites). They continued to bury their chieftains in megalithic tombs which changed design during this period, more being of the wedge tomb type and cist burials.
Around 2,500 years ago the Iron Age took over from the Bronze Age as more and more metalworking took place. This is thought to have coincided with the arrival of Celtic-speaking peoples and the introduction of the ancestor of the Irish language. Towards the end of this period, the Roman Empire was at its height in Britain, but it is not thought that Roman rule extended into Ireland. Remains from this period, which lasted until the Early Christian period began around the start of the 5th century (with the arrival of St. Patrick in Ireland, as a slave), include crannógs (lake dwellings), promontory forts, ringforts and souterrains, of which there are numerous examples across the county. The Iron Age was a time of tribal warfare and kingships, each king fighting neighbouring kings, vying for control of territories and taking slaves. Territories were marked by tall stone markers, Ogham stones, bearing the earliest written form of the Irish language in the Ogham alphabet. The Iron Age is the period in which the mythological tales of the Ulster Cycle and sagas took place, as well as that of the Táin Bó Flidhais, whose narrative is set mainly in Erris.
Christianity came to Ireland around the start of the 5th century. It brought many changes, including the introduction of the Latin alphabet. The tribal 'tuatha' and new Christian religious settlements existed side by side. Sometimes it suited the chieftains to become part of the early Churches; other times they remained as separate entities. St. Patrick (5th century) may have spent time in County Mayo, and it is believed that he spent forty days and forty nights on Croagh Patrick praying for the people of Ireland. From the middle of the 6th century, hundreds of small monastic settlements were established around the county. Some examples of well-known early monastic sites in Mayo include Mayo Abbey, Aughagower, Ballintubber, Errew Abbey, Cong Abbey, Killala, Turlough on the outskirts of Castlebar, and island settlements off the Mullet Peninsula like the Inishkea Islands, Inishglora and Duvillaun.
In 795 the first of the Viking raids took place. The Vikings came from Scandinavia to raid the monasteries, as they were places of wealth where precious metalworking took place. Some of the larger ecclesiastical settlements erected round towers to prevent their precious items being plundered and also to show their status and strength against these pagan raiders from the north. There are round towers at Aughagower, Balla, Killala, Turlough and Meelick. The Vikings established settlements which later developed into towns (Dublin, Cork, Wexford, Waterford, etc.), but none were in County Mayo. Between the reigns of Kings of Connacht Cathal mac Conchobar mac Taidg (973–1010) and Tairrdelbach Ua Conchobair (1106–1156), various tribal territories were incorporated into the kingdom of Connacht and ruled by the Siol Muirdaig dynasty, based initially at Rathcroghan in County Roscommon, and from 1050 at Tuam. The families of O'Malley and O'Dowd of Mayo served as admirals of the fleet of Connacht, while families such as O'Lachtnan, Mac Fhirbhisigh, and O'Cleary were ecclesiastical and bardic clans.
In AD 1169, when one of the warring kings in the east of Ireland, Dermot MacMurrough, appealed to the King of England for help in his fight with a neighbouring king, the response resulted in the Anglo-Norman colonisation of Ireland.
County Mayo came under Norman control in AD 1235. Norman control meant the eclipse of many Gaelic lords and chieftains, chiefly the O'Connors of Connacht. During the 1230s, the Anglo-Normans and Welsh under Richard Mór de Burgh (c. 1194 – 1242) invaded and settled in the county, introducing new families such as Burke, Gibbons, Staunton, Prendergast, Morris, Joyce, Walsh, Barrett, Lynott, Costello, Padden and Price; these Norman names are still common in County Mayo. Following the collapse of the lordship in the 1330s, all these families became estranged from the Anglo-Irish administration based in Dublin and assimilated with the Gaelic-Irish, adopting their language, religion, dress, laws, customs and culture and marrying into Irish families. They became "more Irish than the Irish themselves".
The most powerful clan to emerge during this era were the Mac William Burkes, also known as the Mac William Iochtar (see Burke Civil War 1333–1338), descended from Sir William Liath de Burgh, who defeated the Gaelic-Irish at the Second Battle of Athenry in August 1316. They were frequently at war with their cousins, Clanricarde of Galway, and in alliance with or against various factions of the O'Conors of Siol Muiredaig and the O'Kellys of Uí Maine. The O'Donnells of Tyrconnell regularly invaded in an attempt to secure their right to rule.
The Anglo-Normans encouraged and established many religious orders from continental Europe to settle in Ireland. Mendicant orders—Augustinians, Carmelites, Dominicans and Franciscans—began new settlements across Ireland and built large churches, many under the patronage of prominent Gaelic families. Some of these sites include Cong, Strade, Ballintubber, Errew Abbey, Burrishoole Abbey and Mayo Abbey. During the 15th and 16th centuries, despite regular conflicts between them as England shifted between religious allegiances, the Irish usually regarded the King of England as their King. When Elizabeth I came to the throne in the mid-16th century, the English people, as was customary at that time, followed the religious practices of the reigning monarch and became Protestant. Many Irish people, such as Gráinne O'Malley, the famous pirate queen, had close relationships with the English monarchy, and the English kings and queens were welcome visitors to Irish shores. The Irish, however, generally held onto their Catholic religious practices and beliefs. The early plantations of settlers in Ireland began during the reign of Queen Mary in the mid-16th century and continued throughout the long reign of Queen Elizabeth I until 1603. By then the term "County Mayo" had come into use. In the summer of 1588, the galleons of the Spanish Armada were wrecked by storms along the west coast of Ireland. Some of the hapless Spaniards came ashore in Mayo, only to be robbed and imprisoned, and in many cases slaughtered.
Almost all the religious foundations set up by the Anglo-Normans were suppressed in the wake of the Reformation in the 16th century.
Protestant settlers from Scotland, England, and elsewhere in Ireland settled in the county in the early 17th century. Many would be killed or forced to flee because of the 1641 Rebellion, during which a number of massacres were committed by the Catholic Gaelic Irish, most notably at Shrule in 1642. A third of the overall population was reported to have perished due to warfare, famine and plague between 1641 and 1653, with several areas remaining disturbed and frequented by Rapparees into the 1670s.
Pirate Queen Gráinne O'Malley is probably the best-known person from County Mayo between the mid-16th century and the turn of the 17th century. In the 1640s, when Oliver Cromwell overthrew the English monarchy and set up a parliamentarian government, Ireland suffered severely. With a stern regime in absolute control needing to pay its armies and allies, the need to pay them with grants of land in Ireland led to the 'to hell or to Connaught' policies. Displaced native Irish families from other (eastern and southern mostly) parts of the country were either forced to leave the country or were awarded grants of land 'west of the Shannon' and put off their own lands in the east. The land in the west was divided and sub-divided between more and more people as huge estates were granted on the best land in the east to those who best pleased the English. Mayo does not seem to have been affected much during the Williamite War in Ireland, though many natives were outlawed and exiled.
For the vast majority of people in County Mayo, the 18th century was a period of unrelieved misery. Because of the penal laws, Catholics had no hope of social advancement while they remained in their native land. Some found success abroad: William Brown (1777–1857) left Foxford with his family at the age of nine and, thirty years later, was an admiral in the fledgling Argentine Navy. Today he is a national hero in that country.
The general unrest in Ireland was felt just as keenly across Mayo, and as the end of the 18th century approached and news reached Ireland about the American War of Independence and the French Revolution, the downtrodden Irish, constantly suppressed by Government policies and decisions from Dublin and London, began to rally themselves for their own stand against British rule in their country. 1798 saw Mayo become a central part of the United Irishmen Rebellion when General Humbert from France landed in Killala with over 1,000 soldiers, planning to support the main uprising. They marched across the county towards the administrative centre of Castlebar, leading to the Battle of Castlebar. Taking the garrison by surprise, Humbert's army was victorious. He established a 'Republic of Connacht' with John Moore of the Moore family from Moore Hall near Partry as its head. Humbert's army marched on towards Sligo, Leitrim and Longford, where it was suddenly faced with a massive British army and forced to surrender in less than half an hour. The French soldiers were treated honourably, but for the Irish the surrender meant slaughter. Many died on the scaffold in towns like Castlebar and Claremorris, where the high sheriff for County Mayo, the Honourable Denis Browne, M.P., brother of Lord Altamont, wreaked a terrible vengeance – thus earning for himself the nickname which has survived in folk memory to the present day, 'Donnchadh an Rópa' (Denis of the Rope).
In the 18th century and early 19th century, sectarian tensions arose as evangelical Protestant missionaries sought to 'redeem the Irish poor from the errors of Popery'. One of the best known was the Rev. Edward Nangle's mission at Dugort in Achill. These too were the years of the campaign for Catholic Emancipation and, later, for the abolition of the tithes, which a predominantly Catholic population was forced to pay for the upkeep of the clergy of the Established (Protestant) Church.
During the early years of the 19th century, famine was a common occurrence, particularly where population pressure was a problem. The population of Ireland grew to over eight million people prior to the Irish Famine (or Great Famine) of 1845–47. The Irish people depended on the potato crop for their sustenance. Disaster struck in August 1845, when a killer blight (later identified as "Phytophthora infestans") started to destroy the potato crop. When widespread famine struck, about a million people died and a further million left the country. People died in the fields of starvation and disease. The catastrophe was particularly bad in County Mayo, where nearly ninety per cent of the population depended on the potato as their staple food. By 1848, Mayo was a county of total misery and despair, with any attempts at alleviating measures in complete disarray.
There are numerous reminders of the Great Famine to be seen on the Mayo landscape: workhouse sites, famine graves, sites of soup kitchens, deserted homes and villages and even traces of undug 'lazy-beds' in fields on the sides of hills. Many roads and lanes were built as famine relief measures. There were nine workhouses in the county: Ballina, Ballinrobe, Belmullet, Castlebar, Claremorris, Killala, Newport, Swinford and Westport.
A small poverty-stricken place called Knock, County Mayo, made headlines when it was announced that an apparition of the Blessed Virgin Mary, St. Joseph and St. John had taken place there on 21 August 1879, witnessed by fifteen local people.
A national movement was initiated in County Mayo during 1879 by Michael Davitt, James Daly, and others, which brought about a major social change in Ireland. Michael Davitt, a labourer whose family had moved to England, joined forces with Charles Stewart Parnell to win back the land for the people from the landlords and stop evictions for non-payment of rents. The organisation became known as the Irish National Land League, and its struggle to win rights for poor farmers in Ireland was known as the Land War.
It was in this era of agrarian unrest that Mayo introduced a new verb to the English language: "to boycott". Charles Boycott was an English landlord deeply unpopular with his tenants. When Charles Stewart Parnell made a speech in Ennis, County Clare urging nonviolent resistance against landlords, his tactics were enthusiastically taken up in Mayo against Boycott. The entire Catholic community around Lough Mask in South Mayo, where Boycott had his estate, began a campaign of total social ostracisation against him, a tactic that would one day come to bear his name. The campaign became a cause célèbre in the British press after Boycott wrote a letter to "The Times". The British elite rallied to his cause, and fifty Orangemen from County Cavan and County Monaghan travelled to his estate to harvest the crops, while a regiment of the 19th Royal Hussars and more than 1,000 men of the Royal Irish Constabulary were deployed to protect the harvesters. However, the cost of doing this was completely uneconomic: it cost the British government somewhere in the region of £10,000 to harvest £500 worth of crops. Boycott sold off the estate, and the British government's resolve to break boycotts in this way dissolved, resulting in victory for the tenants.
The "Land Question" was gradually resolved by a series of state-aided land purchase schemes. The tenants became the owners of their lands under the newly established Land Commission.
A Mayo nun, Mother Agnes Morrogh-Bernard, set up the Foxford Woollen Mill in 1892. She made Foxford synonymous throughout the world with high quality tweeds, rugs and blankets.
Mayo, like all parts of what became the Republic of Ireland, was affected by the events of the Irish revolutionary period, including the Irish War of Independence and the subsequent Irish Civil War. Major John MacBride of Westport was amongst those who took part in the 1916 Easter Rising and was subsequently executed by the British for his participation. His death served as a rallying call for Republicans in Mayo and led Mayo men such as P. J. Ruttledge, Ernie O'Malley, Michael Kilroy and Thomas Derrig to rise up during the War of Independence. In the ensuing Civil War, many of these leading figures chose the anti-treaty side and fought in bitter battles such as those in Ballina, which changed hands between pro- and anti-treaty forces a number of times.
In the aftermath of the Civil War, there was a consolidation of many of those with anti-treaty feelings into the new political party Fianna Fáil. PJ Ruttledge and Thomas Derrig would become founding members of the party and served in Eamon de Valera's first-ever Fianna Fáil government as ministers. Mayo politicians would continue to contribute to the national political scene over the decades. In 1990 Mary Robinson became the first-ever female President of Ireland, and is widely credited with revitalising the position with an importance and focus it had never possessed before. In 2011 Enda Kenny became the first politician from Mayo to serve as Taoiseach, the head of government for the Republic of Ireland. Kenny went on to become the longest-serving Fine Gael Taoiseach in Irish history.
In the early historic period, what is now County Mayo consisted of a number of large kingdoms, minor lordships and tribes of obscure origins. They included:
Mayo County Council is the authority responsible for local government. As a county council, it is governed by the Local Government Act 2001. The county is divided into four municipal areas: Castlebar, Ballina, Claremorris and West (an area which stretches from Westport to Belmullet), each with a population of roughly 32,000 to 34,000 people. The council is responsible for housing and community, roads and transportation, urban planning and development, amenity and culture, and the environment.
For the purpose of local elections, the county is divided into six local election areas (LEAs), each centred around a major town. Each LEA elects a number of councillors, who then represent the area on the County Council for a five-year term. The number of councillors allotted to an LEA is based on its population.
The council's headquarters, "Áras an Contae", are in Castlebar, the main population centre, located in the centre of the county. For national elections, half of the Claremorris Municipal District is in Galway West, an area stretching from Ashford Castle to Ireland West Airport Knock.
Since 2016, Mayo has been represented at national political level by four Teachtaí Dála, who represent the constituency of Mayo in Dáil Éireann. Prior to 2016 the constituency had five TDs, but this was reduced to reflect the county's population, in line with proportional representation.
Historically, Mayo has tended to vote Fianna Fáil, as Fianna Fáil managed to position itself in the 20th century as the party best fit to represent farmers with small holdings, who were plentiful in Mayo. With so many of Mayo's electorate being small farmers, the county became a base for the emergence of Clann na Talmhan, an agrarian party, in the 1940s and 1950s. Clann na Talmhan's second leader, Joseph Blowick, came from South Mayo, where he held his seat. The party did not last in the long run, however, as it was unable to hold together its voting bloc of both small farmers in the west of Ireland and large farmers in the east.
Towards the start of the 21st century, the balance of power in Mayo began to shift towards Fine Gael, thanks in part to the emergence of Enda Kenny and Michael Ring. Kenny, who became Taoiseach in 2011, was able to lead Fine Gael to a historic victory in the 2011 Irish general election which included securing four out of five available seats for his party in Mayo.
In 2020, Rose Conway-Walsh came within 200 votes of topping the poll and became the first Sinn Féin TD for Mayo since 1927, riding a nationwide surge Sinn Féin experienced that year.
Despite being historically the third-largest party in the Republic of Ireland, Labour has struggled to make inroads into Mayo. The party has only ever had one TD for Mayo, former party leader Thomas J. O'Connell, who represented South Mayo between 1927 and 1932. While Labour has not proven itself electorally successful in Mayo, the county has provided important members to the Labour Party. Mary Robinson from Ballina became the first-ever female President of Ireland as a Labour candidate, while Pat Rabbitte, originally from Claremorris, served as leader of the Labour Party from 2002 to 2007. Serving alongside Rabbitte was Emmet Stagg, one of the longest-serving Labour TDs of the modern era, himself from Hollymount, not far from Claremorris.
The county has experienced perhaps the highest rate of emigration of any county in Ireland. From the 1840s to the 1880s, waves of emigrants left the rural townlands of the county, initially triggered by the Great Famine and then by the search for work in the newly industrialising United Kingdom and the United States, and the population fell considerably: from 388,887 in 1841 to 199,166 in 1901. The population reached a low of 109,525 in 1971 as emigration continued. Emigration slowed as the Irish economy began to expand in the 1990s and early 2000s, and the population of Mayo increased from 110,713 in 1991 to 130,638 in 2011.
According to figures in the 2006 National Census the religious demographic breakdown for County Mayo was 114,215 Roman Catholics, 2,476 Church of Ireland, 733 Muslims, 409 other Christians, 280 Presbyterians, 250 Orthodox Christians, 204 Methodists, 853 other stated religions, 3,267 no religion and 1,152 no stated religion.
9% of the population of County Mayo live in the Gaeltacht. The Gaeltacht Irish-speaking region in County Mayo is the third-largest in Ireland with 10,886 inhabitants. Tourmakeady is the largest village in this area. All schools in the area use Irish as the language of instruction. Mayo has four gaelscoileanna in its four major towns, providing primary education to students through Irish.
Mayo is well served by rail. Westport railway station is the terminus of the Dublin to Westport rail service. Railway stations are also located at Ballyhaunis, Claremorris, Castlebar, Manulla, Ballina and Foxford. All railway stations are located on the same railway line, with the exception of Ballina and Foxford, which require passengers to change at Manulla Junction. There are currently four services each way every day on the line.
There are also proposals to reopen the currently disused Western Railway Corridor connecting Limerick with Sligo.
There are a number of national primary roads in the county, including the N5 road connecting Westport with Dublin, the N17 road connecting the county with Galway and Sligo, and the N26 road connecting Ballina with Dublin via the N5. There are also a number of national secondary roads in the county, including the N58, N59, N60, N83 and N84 roads. There are plans in place for a new road running from northwest of Westport to east of Castlebar. The proposed road is a type 2 dual carriageway with junctions at the N59, N84 and N60.
Ireland West Airport Knock is an international airport located in the county. The name is derived from the nearby village of Knock. Recent years have seen the airport's passenger numbers grow to over 650,000 yearly with a number of UK and European destinations. August 2014 saw the airport have its busiest month on record with 102,774 passengers using the airport.
Newspapers in County Mayo include "The Mayo News", the "Connaught Telegraph", the "Connacht Tribune", "Western People", and "Mayo Advertiser", which is Mayo's only free newspaper. Mayo has its own online TV channel "Mayo TV" which was launched in 2011. It covers news and events from around the county and regularly broadcasts live to a worldwide audience. Local radio stations include Erris FM, Community Radio Castlebar, Westport Community Radio, BCR FM (Ballina Community Radio) and M.W.R. (Mid West Radio).
The documentary "Pipe Down", which won best feature documentary at the 2009 Waterford Film Festival, was made in Mayo.
There is local resistance to Shell's decision to process raw gas from the Corrib gas field at an onshore terminal. In 2005, five local men were jailed for contempt of court after refusing to follow an Irish court order. Subsequent protests against the project led to the Shell to Sea and related campaigns.
The Mayo Energy Audit 2009–2020 is an investigation into the implications of peak oil and subsequent fossil fuel depletion for a rural county in the west of Ireland. The study draws together many different strands to examine current energy supply and demand within the area of study, and assesses these demands in the face of the challenges posed by the declining production of fossil fuels, expected disruptions to supply chains, and long-term economic recession.
The Mayo GAA senior team last won the Sam Maguire Cup in 1951, when the team was captained by Seán Flanagan. It was the team's third title, following victories in 1936 and the previous year, 1950. Since 1951, the team have made numerous All-Ireland Final appearances (in 1989, twice in 1996, 1997, 2004, 2006, 2012, 2013, twice again in 2016 against Dublin, and most recently in 2017, again against Dublin), though they have failed on all occasions to achieve victory over their opponents. Mayo are the current Allianz National League football champions, having beaten Kerry in the league final at Croke Park in April 2019.
The team's unofficial supporters club is Mayo Club '51, named after the last Mayo team to win the Sam Maguire. The county colours of Mayo GAA are traditionally green and red.
The county's most popular association football teams are Westport United and Castlebar Celtic.
Although Gaelic football and association football are the most popular sports in the county, other sports are played as well, including rugby, basketball, hurling, swimming, tennis, badminton, athletics, handball and racquetball.